id | text | source | created | added | metadata |
---|---|---|---|---|---|
84538857 | dartanalyzer is too slow.
$ time dartanalyzer <redacted>.dart
Analyzing <redacted>.dart...
<redacted>
1 error and 1 warning found.
real 0m4.150s
user 0m10.053s
sys 0m0.248s
This is on a current-model ThinkPad, analyzing a 71 line source file.
IMO, an acceptable run time for this class of tool is <500ms.
I use dartanalyzer with the vim plugin syntastic, and it blocks vim for about 2~3 seconds when I run :SyntasticCheck. Here is my time result:
$ time dartanalyzer foo.dart
Analyzing [foo.dart]...
No issues found
real 0m2.469s
user 0m2.404s
sys 0m0.156s
$ dart --version
Dart VM version: 1.12.1 (Tue Sep 8 11:14:08 2015) on "linux_x64"
| gharchive/issue | 2013-05-30T17:22:41 | 2025-04-01T06:38:20.093704 | {
"authors": [
"jbdeboer",
"uralbash"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/10981",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
84549317 | Please review new void used as type message
I have looked at these error messages:
foo.dart:1:1: expected identifier, but got 'void'
void x;
^^^^
foo.dart:1:5: Error: Type "void" is only allowed in a return type.
foo(void x) {}
^^^^
It has been changed to:
foo.dart:1:1: Error: Type 'void' can't be used here because it isn't a return type.
Try removing 'void' keyword or replace it with 'var', 'final', or a type.
void x;
^^^^
foo.dart:1:5: Error: Type 'void' can't be used here because it isn't a return type.
Try removing 'void' keyword or replace it with 'var', 'final', or a type.
foo(void x) {}
^^^^
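For concreteness, a hedged illustration of the kind of fix the new message steers users toward (not part of the message text itself):
// Illustrative only: 'void' is only allowed as a return type here, so the
// suggested fix is to drop it or use 'var', 'final', or a concrete type.
var x;            // instead of: void x;
foo(dynamic x) {} // instead of: foo(void x) {}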
Let me know what you think.
I'm guessing this has been resolved and can be closed, right @peter-ahe-google?
| gharchive/issue | 2013-09-01T12:40:45 | 2025-04-01T06:38:20.096835 | {
"authors": [
"bkonyi",
"peter-ahe-google"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/12969",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
84559803 | Extend OAuth2 package to support OpenID connect
This issue was originally filed by warren.strange...@gmail.com
After thinking this through, I realize this is a fairly major feature request.
The current OAuth2 package (http://pub.dartlang.org/packages/oauth2)
does not support OpenID Connect. For example, OpenID Connect returns an id_token as part of the authorization flow.
A small enhancement would be to extend Credentials.dart to provide the raw value of id_token if it is present.
Ideally, support would be provided for JSON web tokens, signature verification, etc.
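A minimal sketch of the requested enhancement (hypothetical field and constructor names; this is not the actual package:oauth2 API):
// Hypothetical sketch only -- names are illustrative, not the real oauth2 Credentials class.
class Credentials {
  final String accessToken;

  /// Raw OpenID Connect id_token from the authorization response, if present.
  final String? idToken;

  Credentials(this.accessToken, {this.idToken});

  factory Credentials.fromResponse(Map<String, dynamic> json) => Credentials(
        json['access_token'] as String,
        idToken: json['id_token'] as String?,
      );
}
JSON Web Token parsing and signature verification would still be a separate, larger piece of work, as noted above.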
This issue has been moved to dart-lang/oauth2#8.
| gharchive/issue | 2013-11-21T20:38:38 | 2025-04-01T06:38:20.099177 | {
"authors": [
"DartBot"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/15248",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
84564741 | dart:io WebSocket needs OnBadCertificate() callback for wss:// socket connections
This issue was originally filed by j.m.sloc...@gmail.com
Using Dart VM version: 1.1.1 (Wed Jan 15 04:11:49 2014) on "linux_x64"
The following Dart program starts a secure HTTP server and waits for a WebSocket connection.
import 'dart:io';

void main(List<String> args){
  String password = new File('pwdfile').readAsStringSync().trim();
  SecureSocket.initialize(database: "./",
                          password: password);
  HttpServer.bindSecure(InternetAddress.ANY_IP_V4, 4443,
                        certificateName: "CN=devcert")
      .then((HttpServer server) {
    print("Secure server listening on 4443...");
    server.serverHeader = "Secure WebSocket server";
    server.listen((HttpRequest request) {
      if (request.headers.value(HttpHeaders.UPGRADE) == "websocket"){
        WebSocketTransformer.upgrade(request).then(handleWebSocket);
      }
      else {
        request.response.statusCode = HttpStatus.FORBIDDEN;
        request.response.reasonPhrase = "WebSocket connections only";
        request.response.close();
      }
    });
  });
}

void handleWebSocket(WebSocket socket){
  print("Secure client connected!");
  socket.listen((String s) {
    print('Client sent: $s');
    socket.add('echo: $s');
  },
  onDone: () {
    print('Client disconnected');
  });
}
The following program is a client that can connect to websockets.
import 'dart:io';

WebSocket ws;

void main(List<String> args){
  if (args.length < 1){
    print('Please specify a server URI. ex ws://example.org');
    exit(1);
  }
  String server = args[0];

  //Open the websocket and attach the callbacks
  WebSocket.connect(server).then((WebSocket socket) {
    ws = socket;
    ws.listen(onMessage, onDone: connectionClosed);
  });

  //Attach to stdin to read from the keyboard
  stdin.listen(onInput);
}

void onMessage(String message){
  print(message);
}

void connectionClosed() {
  print('Connection to server closed');
}

void onInput(List<int> input){
  String message = new String.fromCharCodes(input).trim();

  //Exit gracefully if the user types 'quit'
  if (message == 'quit'){
    ws.close();
    exit(0);
  }
  ws.add(message);
}
What is the expected output? What do you see instead?
When I run this server using a self-signed cert and try to connect with a client, I get the following exception
$ dart secureWebSocketClient.dart wss://localhost:4443
Uncaught Error: HandshakeException: Handshake error in client (OS Error: Issuer certificate is invalid., errno = -8156)
Unhandled exception:
HandshakeException: Handshake error in client (OS Error: Issuer certificate is invalid., errno = -8156)
0 _rootHandleUncaughtError.<anonymous closure>.<anonymous closure> (dart:async/zone.dart:677)
1 _asyncRunCallback (dart:async/schedule_microtask.dart:18)
2 _asyncRunCallback (dart:async/schedule_microtask.dart:21)
3 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:119)
However there is no way of indicating to the WebSocket class to ignore certificate errors.
The server works if I use a "plain" HTTP server, but not a secure server. It would appear that the WebSocket class should have an onBadCertificate(X509Certificate) callback like the SecureSocket classes.
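For comparison, the hook being referred to already exists for raw TLS sockets; here is a minimal sketch using the existing dart:io API (accepting every certificate below purely for illustration):
import 'dart:io';

// Existing dart:io API: SecureSocket.connect lets the caller decide whether to
// accept a certificate that fails validation. The request in this issue is for
// an equivalent onBadCertificate hook on WebSocket.connect.
Future<SecureSocket> connectAcceptingSelfSigned(String host, int port) {
  return SecureSocket.connect(
    host,
    port,
    onBadCertificate: (X509Certificate cert) => true, // demo only: accept everything
  );
}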
Has this been addressed? I am having an issue where I need to accept a self signed cert and this is holding me up.
After 5 years, this is still open. A solution is needed.
At least for the dart:io version of websockets, an onBadCertificate() would be great!
| gharchive/issue | 2014-01-26T00:45:47 | 2025-04-01T06:38:20.115453 | {
"authors": [
"DartBot",
"EPNW",
"linuxjet",
"neaplus"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/16300",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
84590415 | Comparison of String and dynamic (function) fails with uncaught error in core_ajax_dart.dart
This issue was originally filed by frable9...@gmail.com
What steps will reproduce the problem?
Fire a CoreAjax request with go()
What is the expected output? What do you see instead?
I get an uncaught error from the core_ajax_dart.dart.
The essential code snippet from core_ajax_dart.dart is:
if (!hasContentType && this.contentType) {
  headers['Content-Type'] = this.contentType;
}
where hasContentType is a function returning a boolean and this.contentType is a String.
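That mix of types is exactly what the exception below complains about. A hedged sketch (illustrative only, not the actual core_elements fix) of how the JavaScript-style truthiness check has to be spelled out in Dart, with hasContentType passed as an already-evaluated bool for simplicity:
// Illustrative only: Dart has no JavaScript-style truthiness, so the String
// check must be written out explicitly.
void setContentTypeHeader(
    Map<String, String> headers, bool hasContentType, String? contentType) {
  if (!hasContentType && contentType != null && contentType.isNotEmpty) {
    headers['Content-Type'] = contentType;
  }
}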
Exception: Uncaught Error: type 'String' is not a subtype of type 'bool' of 'boolean expression'.
Stack Trace:
#0 CoreAjax.go (package:core_elements/core_ajax_dart.dart:285:33)
#1 AelAjax.go (http://localhost:8080/components/ael-ajax/ael-ajax.dart:39:17)
#2 AelCtrl.getUserProfile (http://localhost:8080/app.dart:157:26)
#3 AelCtrl.AelCtrl (http://localhost:8080/app.dart:77:19)
#4 main.<anonymous closure>.<anonymous closure> (http://localhost:8080/app.dart:27:18)
#5 _RootZone.runUnary (dart:async/zone.dart:1082)
#6 _Future._propagateToListeners.handleValueCallback (dart:async/future_impl.dart:488)
#7 _Future._propagateToListeners (dart:async/future_impl.dart:571)
#8 _Future._completeWithValue (dart:async/future_impl.dart:331)
#9 _Future._asyncComplete.<anonymous closure> (dart:async/future_impl.dart:393)
#10 _asyncRunCallbackLoop (dart:async/schedule_microtask.dart:41)
#11 _asyncRunCallback (dart:async/schedule_microtask.dart:48)
#12 _handleMutation (dart:html:39006)
What version of the product are you using?
core_elements 0.2.1+1
Dart 1.6.0
On what operating system?
Windows 7 64 bit
What browser (if applicable)?
Dartium 37.0.2062.76
Please provide any additional information below.
This issue has been moved to dart-lang/polymer-dart#301.
| gharchive/issue | 2014-09-17T00:20:46 | 2025-04-01T06:38:20.124298 | {
"authors": [
"DartBot"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/20978",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
100823041 | Analyzer: When multiple imports provide the same symbol, one import should be marked as "unused"
e.g. In Angular, the package:angular/angular.dart file exports package:di/di.dart.
In the following file, the Module symbol is coming from di.dart, but also exported through angular.
import 'package:angular/angular.dart';
import 'package:di/di.dart';
class MyModule extends Module { ... }
Currently, the analyzer does not give any hints about unused imports.
However, I would expect angular.dart to be flagged as "unused". angular.dart is not used since Module is also available through di.dart.
Even a subset of this, examining just the names in show clauses, would be useful. I found some code with:
import 'package:a/a.dart';
import 'package:a/src/foo.dart' show foo;
because at one point, a.dart did not export foo. But now it does, so the second import is unnecessary. Not sure if one is easier to implement or faster to run than the other...
I'll close this in favor of the issue I've been referencing when landing changes. https://github.com/dart-lang/sdk/issues/44569
| gharchive/issue | 2015-08-13T17:20:39 | 2025-04-01T06:38:20.127601 | {
"authors": [
"jbdeboer",
"srawlins"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/24073",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
101770654 | dartanalyzer crashes when .packages target is empty
.packages:
foo:
stderr:
Bad state: No element
#0 List.last (dart:core-patch/growable_array.dart:212)
#1 startsWith (package:analyzer/src/generated/utilities_dart.dart:47:20)
#2 SourceFactory._getPackageMapping.<anonymous closure> (package:analyzer/src/generated/source.dart:786:13)
#3 _HashVMBase&MapMixin&&_LinkedHashMapMixin.forEach (dart:collection-patch/compact_hash.dart:340)
#4 MapView.forEach (dart:collection/maps.dart:194)
#5 SourceFactory._getPackageMapping (package:analyzer/src/generated/source.dart:784:23)
#6 SourceFactory.restoreUri (package:analyzer/src/generated/source.dart:762:32)
#7 Driver._computeLibrarySource (package:analyzer_cli/src/driver.dart:341:38)
#8 Driver._analyzeAll (package:analyzer_cli/src/driver.dart:137:23)
#9 Driver.start.<anonymous closure> (package:analyzer_cli/src/driver.dart:99:16)
#10 _BatchRunner.runAsBatch.<anonymous closure> (package:analyzer_cli/src/driver.dart:536:39)
#11 _RootZone.runUnaryGuarded (dart:async/zone.dart:1103)
#12 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#13 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#14 _SinkTransformerStreamSubscription._add (dart:async/stream_transformers.dart:67)
#15 _EventSinkWrapper.add (dart:async/stream_transformers.dart:14)
#16 _StringAdapterSink.add (dart:convert/string_conversion.dart:256)
#17 _LineSplitterSink._addLines (dart:convert/line_splitter.dart:127)
#18 _LineSplitterSink.addSlice (dart:convert/line_splitter.dart:102)
#19 StringConversionSinkMixin.add (dart:convert/string_conversion.dart:180)
#20 _ConverterStreamEventSink.add (dart:convert/chunked_conversion.dart:80)
#21 _SinkTransformerStreamSubscription._handleData (dart:async/stream_transformers.dart:119)
#22 _RootZone.runUnaryGuarded (dart:async/zone.dart:1103)
#23 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#24 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#25 _SinkTransformerStreamSubscription._add (dart:async/stream_transformers.dart:67)
#26 _EventSinkWrapper.add (dart:async/stream_transformers.dart:14)
#27 _StringAdapterSink.add (dart:convert/string_conversion.dart:256)
#28 _StringAdapterSink.addSlice (dart:convert/string_conversion.dart:260)
#29 _Utf8ConversionSink.addSlice (dart:convert/string_conversion.dart:336)
#30 _Utf8ConversionSink.add (dart:convert/string_conversion.dart:329)
#31 _ConverterStreamEventSink.add (dart:convert/chunked_conversion.dart:80)
#32 _SinkTransformerStreamSubscription._handleData (dart:async/stream_transformers.dart:119)
#33 _RootZone.runUnaryGuarded (dart:async/zone.dart:1103)
#34 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#35 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#36 _StreamController&&_SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:744)
#37 _StreamController._add (dart:async/stream_controller.dart:616)
#38 _StreamController.add (dart:async/stream_controller.dart:562)
#39 _Socket._onData (dart:io-patch/socket_patch.dart:1793)
#40 _RootZone.runUnaryGuarded (dart:async/zone.dart:1103)
#41 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:341)
#42 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:270)
#43 _StreamController&&_SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:744)
#44 _StreamController._add (dart:async/stream_controller.dart:616)
#45 _StreamController.add (dart:async/stream_controller.dart:562)
#46 _RawSocket._RawSocket.<anonymous closure> (dart:io-patch/socket_patch.dart:1344)
#47 _NativeSocket.issueReadEvent.issue (dart:io-patch/socket_patch.dart:728)
#48 _microtaskLoop (dart:async/schedule_microtask.dart:43)
#49 _microtaskLoopEntry (dart:async/schedule_microtask.dart:52)
#50 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:96)
#51 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:149)
This looks like an invalid .packages file that should have been caught by the code that reads the .packages file.
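(For reference, each .packages entry is expected to map a package name to the URI of that package's lib/ directory, e.g. foo:file:///home/user/foo/lib/ with an illustrative path; the reproduction above leaves that URI empty.)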
We are asking Lasse what the appropriate thing to do is in this case:
https://codereview.chromium.org/1298323002/
We are asking Lasse what the appropriate thing to do is in this case:
https://codereview.chromium.org/1298323002/
Awesome. Please update this issue when we know where we're headed. It'd be easy enough to guard against on our end but probably better handled in package_config once and for all rather than in all the clients.
This crash looks like a "real" crash, and thus a candidate for fixing for 1.12.
This crash looks like a "real" crash, and thus a candidate for fixing for 1.12.
Agreed.
We should fix it in package_config. Feel free to open a bug there and assign it to me and I'll happily take a look.
We should fix it in package_config. Feel free to open a bug there and assign it to me and I'll happily take a look.
Actually, I'm less sure now. I'm looking into it.
https://codereview.chromium.org/1298393004/
Fixed with e11ce8ba87952ee2efeb7ed8211801f6cb6d9c9d.
Request to merge to dev filed here: https://github.com/dart-lang/sdk/issues/24138.
| gharchive/issue | 2015-08-18T23:14:06 | 2025-04-01T06:38:20.137367 | {
"authors": [
"bwilkerson",
"hterkelsen",
"pq",
"sethladd"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/24126",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
131328320 | FSE in Analyzer
This may or may not be a bug. The occurrence count looks a little high. The elided path name is available on request; it was in Angular.
2016-02-04T11:51:32.918: Common stack: Count: 392
Session: 1454002852255.0693
Caused by FileSystemException:: Cannot open file, path = '...' (OS Error:: No such file or directory, errno = 2)
#0 _File.throwIfError (dart::io/file_impl.dart::562)
#1 _File.openSync (dart::io/file_impl.dart::413)
#2 _File.readAsBytesSync (dart::io/file_impl.dart::473)
#3 _File.readAsStringSync (dart::io/file_impl.dart::507)
#4 JavaFile.readAsStringSync (package::analyzer/src/generated/java_io.dart::100)
#5 FileBasedSource.contentsFromFile (package::analyzer/src/generated/source_io.dart::180)
#6 FileBasedSource.contents.<anonymous closure> (package::analyzer/src/generated/source_io.dart::163)
#7 _PerformanceTagImpl.makeCurrentWhile (package::analyzer/src/generated/utilities_general.dart::170)
#8 FileBasedSource.contents (package::analyzer/src/generated/source_io.dart::162)
#9 AnalysisContextImpl.getContents (package::analyzer/src/context/context.dart::777)
#10 AnalysisContextImpl.parseCompilationUnit (package::analyzer/src/context/context.dart::1070)
#11 DartCompletionManager.computeFast.<anonymous closure> (package::analysis_server/src/services/completion/dart_completion_manager.dart::148)
#12 CompletionPerformance.logElapseTime (package::analysis_server/src/services/completion/completion_manager.dart::165)
#13 DartCompletionManager.computeFast (package::analysis_server/src/services/completion/dart_completion_manager.dart::147)
#14 DartCompletionManager.computeSuggestions.<anonymous closure> (package::analysis_server/src/services/completion/dart_completion_manager.dart::243)
#15 CompletionPerformance.logElapseTime (package::analysis_server/src/services/completion/completion_manager.dart::165)
#16 DartCompletionManager.computeSuggestions (package::analysis_server/src/services/completion/dart_completion_manager.dart::242)
#17 CompletionManager.results.<anonymous closure>.<anonymous closure> (package::analysis_server/src/services/completion/completion_manager.dart::108)
#18 _rootRun (dart::async/zone.dart::903)
#19 _CustomZone.run (dart::async/zone.dart::802)
#20 _CustomZone.runGuarded (dart::async/zone.dart::708)
#21 _CustomZone.bindCallback.<anonymous closure> (dart::async/zone.dart::733)
#22 _rootRun (dart::async/zone.dart::907)
#23 _CustomZone.run (dart::async/zone.dart::802)
#24 _CustomZone.runGuarded (dart::async/zone.dart::708)
#25 _CustomZone.bindCallback.<anonymous closure> (dart::async/zone.dart::733)
#26 _microtaskLoop (dart::async/schedule_microtask.dart::43)
#27 _microtaskLoopEntry (dart::async/schedule_microtask.dart::52)
#28 _runPendingImmediateCallback (dart::isolate-patch/isolate_patch.dart::96)
#29 _RawReceivePortImpl._handleMessage (dart::isolate-patch/isolate_patch.dart::151)
","stackTrace"::"#0 AnalysisContextImpl.parseCompilationUnit (package::analyzer/src/context/context.dart::1072)
#1 DartCompletionManager.computeFast.<anonymous closure> (package::analysis_server/src/services/completion/dart_completion_manager.dart::148)
#2 CompletionPerformance.logElapseTime (package::analysis_server/src/services/completion/completion_manager.dart::165)
#3 DartCompletionManager.computeFast (package::analysis_server/src/services/completion/dart_completion_manager.dart::147)
#4 DartCompletionManager.computeSuggestions.<anonymous closure> (package::analysis_server/src/services/completion/dart_completion_manager.dart::243)
#5 CompletionPerformance.logElapseTime (package::analysis_server/src/services/completion/completion_manager.dart::165)
#6 DartCompletionManager.computeSuggestions (package::analysis_server/src/services/completion/dart_completion_manager.dart::242)
#7 CompletionManager.results.<anonymous closure>.<anonymous closure> (package::analysis_server/src/services/completion/completion_manager.dart::108)
#8 _rootRun (dart::async/zone.dart::903)
#9 _CustomZone.run (dart::async/zone.dart::802)
#10 _CustomZone.runGuarded (dart::async/zone.dart::708)
#11 _CustomZone.bindCallback.<anonymous closure> (dart::async/zone.dart::733)
#12 _rootRun (dart::async/zone.dart::907)
#13 _CustomZone.run (dart::async/zone.dart::802)
#14 _CustomZone.runGuarded (dart::async/zone.dart::708)
#15 _CustomZone.bindCallback.<anonymous closure> (dart::async/zone.dart::733)
#16 _microtaskLoop (dart::async/schedule_microtask.dart::43)
#17 _microtaskLoopEntry (dart::async/schedule_microtask.dart::52)
#18 _runPendingImmediateCallback (dart::isolate-patch/isolate_patch.dart::96)
#19 _RawReceivePortImpl._handleMessage (dart::isolate-patch/isolate_patch.dart
Original message:
{"event"::"server.error","params"::{"isFatal"::false,"message"::"Failed to handle completion domain request:: {clientRequestTime:: 1449509388893, params:: {file:: ..., offset:: 214}, method:: completion.getSuggestions, id:: 384}
AnalysisException:: Could not get contents of ...
Caused by FileSystemException:: Cannot open file, path = '...' (OS Error:: No such file or directory, errno = 2)
#0 _File.throwIfError (dart::io/file_impl.dart::562)
#1 _File.openSync (dart::io/file_impl.dart::413)
#2 _File.readAsBytesSync (dart::io/file_impl.dart::473)
#3 _File.readAsStringSync (dart::io/file_impl.dart::507)
#4 JavaFile.readAsStringSync (package::analyzer/src/generated/java_io.dart::100)
#5 FileBasedSource.contentsFromFile (package::analyzer/src/generated/source_io.dart::180)
#6 FileBasedSource.contents.<anonymous closure> (package::analyzer/src/generated/source_io.dart::163)
#7 _PerformanceTagImpl.makeCurrentWhile (package::analyzer/src/generated/utilities_general.dart::170)
#8 FileBasedSource.contents (package::analyzer/src/generated/source_io.dart::162)
#9 AnalysisContextImpl.getContents (package::analyzer/src/context/context.dart::777)
#10 AnalysisContextImpl.parseCompilationUnit (package::analyzer/src/context/context.dart::1070)
#11 DartCompletionManager.computeFast.<anonymous closure> (package::analysis_server/src/services/completion/dart_completion_manager.dart::148)
#12 CompletionPerformance.logElapseTime (package::analysis_server/src/services/completion/completion_manager.dart::165)
#13 DartCompletionManager.computeFast (package::analysis_server/src/services/completion/dart_completion_manager.dart::147)
#14 DartCompletionManager.computeSuggestions.<anonymous closure> (package::analysis_server/src/services/completion/dart_completion_manager.dart::243)
#15 CompletionPerformance.logElapseTime (package::analysis_server/src/services/completion/completion_manager.dart::165)
#16 DartCompletionManager.computeSuggestions (package::analysis_server/src/services/completion/dart_completion_manager.dart::242)
#17 CompletionManager.results.<anonymous closure>.<anonymous closure> (package::analysis_server/src/services/completion/completion_manager.dart::108)
#18 _rootRun (dart::async/zone.dart::903)
#19 _CustomZone.run (dart::async/zone.dart::802)
#20 _CustomZone.runGuarded (dart::async/zone.dart::708)
#21 _CustomZone.bindCallback.<anonymous closure> (dart::async/zone.dart::733)
#22 _rootRun (dart::async/zone.dart::907)
#23 _CustomZone.run (dart::async/zone.dart::802)
#24 _CustomZone.runGuarded (dart::async/zone.dart::708)
#25 _CustomZone.bindCallback.<anonymous closure> (dart::async/zone.dart::733)
#26 _microtaskLoop (dart::async/schedule_microtask.dart::43)
#27 _microtaskLoopEntry (dart::async/schedule_microtask.dart::52)
#28 _runPendingImmediateCallback (dart::isolate-patch/isolate_patch.dart::96)
#29 _RawReceivePortImpl._handleMessage (dart::isolate-patch/isolate_patch.dart::151)
","stackTrace"::"#0 AnalysisContextImpl.parseCompilationUnit (package::analyzer/src/context/context.dart::1072)
#1 DartCompletionManager.computeFast.<anonymous closure> (package::analysis_server/src/services/completion/dart_completion_manager.dart::148)
#2 CompletionPerformance.logElapseTime (package::analysis_server/src/services/completion/completion_manager.dart::165)
#3 DartCompletionManager.computeFast (package::analysis_server/src/services/completion/dart_completion_manager.dart::147)
#4 DartCompletionManager.computeSuggestions.<anonymous closure> (package::analysis_server/src/services/completion/dart_completion_manager.dart::243)
#5 CompletionPerformance.logElapseTime (package::analysis_server/src/services/completion/completion_manager.dart::165)
#6 DartCompletionManager.computeSuggestions (package::analysis_server/src/services/completion/dart_completion_manager.dart::242)
#7 CompletionManager.results.<anonymous closure>.<anonymous closure> (package::analysis_server/src/services/completion/completion_manager.dart::108)
#8 _rootRun (dart::async/zone.dart::903)
#9 _CustomZone.run (dart::async/zone.dart::802)
#10 _CustomZone.runGuarded (dart::async/zone.dart::708)
#11 _CustomZone.bindCallback.<anonymous closure> (dart::async/zone.dart::733)
#12 _rootRun (dart::async/zone.dart::907)
#13 _CustomZone.run (dart::async/zone.dart::802)
#14 _CustomZone.runGuarded (dart::async/zone.dart::708)
#15 _CustomZone.bindCallback.<anonymous closure> (dart::async/zone.dart::733)
#16 _microtaskLoop (dart::async/schedule_microtask.dart::43)
#17 _microtaskLoopEntry (dart::async/schedule_microtask.dart::52)
#18 _runPendingImmediateCallback (dart::isolate-patch/isolate_patch.dart::96)
#19 _RawReceivePortImpl._handleMessage (dart::isolate-patch/isolate_patch.dart::151)
"}}
v 1.14.0-dev.7.2
Confirmed this is occurring in v 1.14.0-dev.7.2
@danrubel The method DartCompletionManager.computeFast (from line 11 of the stack trace) no longer exists, nor does that file contain any invocations of AnalysisContextImpl.parseCompilationUnit (line 10). I'm guessing this can be closed as stale.
Yes, this is stale.
| gharchive/issue | 2016-02-04T12:14:49 | 2025-04-01T06:38:20.142981 | {
"authors": [
"bwilkerson",
"danrubel",
"lukechurch"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/25676",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
353857000 | BitTestImmediate crashes on Windows 32-bit in Dart 1 mode
This test started failing after landing 2beb05b8. There is a corresponding line in vm.status. Marking as P2 because it only occurs in Dart 1 mode.
https://dart-review.googlesource.com/c/sdk/+/78861 should fix this.
| gharchive/issue | 2018-08-24T16:54:03 | 2025-04-01T06:38:20.145050 | {
"authors": [
"a-siva",
"sjindel-google"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/34252",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
490406191 | error: expected: !is_optimized() || (pc_offset == EntryPoint() - PayloadStart())
Isolate (/b/s/w/itHvzd2Z/dart_fuzzHFHIBB) NO-FFI NO-FP : AOT-ReleaseX64 - KBC-INT-SRC-DebugIA32: !DIVERGENCE! 1.37:1253594753 (0 vs -6)
fail2:
-6
../../runtime/vm/object.cc: 15284: error: expected: !is_optimized() || (pc_offset == EntryPoint() - PayloadStart())
version=2.6.0-edge.2246f0a8a36fbad0cd9c87aacc22ba29086d1c74 (Fri Sep 6 03:29:28 2019 +0000) on "linux_ia32"
thread=15485, isolate=main(0x4910700)
pc 0x01cc698c fp 0xf607ce08 dart::Profiler::DumpStackTrace(void*)
pc 0x021cc6b4 fp 0xf607ce28 Dart_DumpNativeStackTrace
pc 0x0193de8b fp 0xf607ce58 dart::Assert::Fail(char const*, ...)
pc 0x01c6b895 fp 0xf607cea8 dart::Code::GetStackMap(unsigned int, dart::Array*, dart::StackMap*) const
pc 0x01d5a7bd fp 0xf607cf28 dart::StackFrame::VisitObjectPointers(dart::ObjectPointerVisitor*)
pc 0x01d6ec0a fp 0xf607cfd8 dart::Thread::VisitObjectPointers(dart::ObjectPointerVisitor*, dart::ValidationPolicy)
pc 0x01bc1f9d fp 0xf607d018 dart::Isolate::VisitObjectPointers(dart::ObjectPointerVisitor*, dart::ValidationPolicy)
pc 0x01f564a6 fp 0xf607d098 dart::Scavenger::IterateRoots(dart::Isolate*, dart::ScavengerVisitor*)
pc 0x01f57adc fp 0xf607d1b8 dart::Scavenger::Scavenge()
pc 0x01f446a0 fp 0xf607d238 dart::Heap::CollectNewSpaceGarbage(dart::Thread*, dart::Heap::GCReason)
pc 0x01f426bd fp 0xf607d278 dart::Heap::AllocateNew(int)
pc 0x01bff03b fp 0xf607d2a8 /b/s/w/ir/out/DebugIA32/dart+0x17ff03b
pc 0x01c02dee fp 0xf607d308 dart::Object::Allocate(int, int, dart::Heap::Space)
pc 0x01c804b9 fp 0xf607d358 dart::OneByteString::New(int, dart::Heap::Space)
pc 0x01c80262 fp 0xf607d3b8 dart::String::FromUTF8(unsigned char const*, int, dart::Heap::Space)
pc 0x01efa0f2 fp 0xf607d418 dart::kernel::TranslationHelper::DartString(dart::kernel::StringIndex, dart::Heap::Space)
pc 0x01f01dd7 fp 0xf607d468 dart::kernel::KernelReaderHelper::ReadNameAsMethodName()
pc 0x01edbfef fp 0xf607d598 dart::kernel::StreamingFlowGraphBuilder::BuildMethodInvocation(dart::TokenPosition*)
pc 0x01ed46a7 fp 0xf607d5e8 dart::kernel::StreamingFlowGraphBuilder::BuildExpression(dart::TokenPosition*)
pc 0x01ed6257 fp 0xf607d618 dart::kernel::StreamingFlowGraphBuilder::BuildStatement()
pc 0x01edfcda fp 0xf607d668 dart::kernel::StreamingFlowGraphBuilder::BuildBlockExpression()
pc 0x01ed46b6 fp 0xf607d6b8 dart::kernel::StreamingFlowGraphBuilder::BuildExpression(dart::TokenPosition*)
pc 0x01ed4373 fp 0xf607d738 dart::kernel::StreamingFlowGraphBuilder::BuildGraphOfFieldInitializer()
pc 0x01ed8e8b fp 0xf607d858 dart::kernel::StreamingFlowGraphBuilder::BuildGraph()
pc 0x01eeddc4 fp 0xf607d9b8 dart::kernel::FlowGraphBuilder::BuildGraph()
pc 0x01f1814c fp 0xf607dac8 dart::DartCompilationPipeline::BuildFlowGraph(dart::Zone*, dart::ParsedFunction*, dart::ZoneGrowableArray<dart::ICData const*>*, int, bool)
pc 0x01f199b0 fp 0xf607de48 dart::CompileParsedFunctionHelper::Compile(dart::CompilationPipeline*)
pc 0x01f1ab55 fp 0xf607dfd8 /b/s/w/ir/out/DebugIA32/dart+0x1b1ab55
pc 0x01f1b6bf fp 0xf607e058 dart::Compiler::CompileOptimizedFunction(dart::Thread*, dart::Function const&, int)
pc 0x01f1c70d fp 0xf607e0d8 dart::BackgroundCompiler::Run()
pc 0x01f1d7a7 fp 0xf607e0f8 /b/s/w/ir/out/DebugIA32/dart+0x1b1d7a7
pc 0x01d7372f fp 0xf607e148 dart::ThreadPool::Worker::Loop()
pc 0x01d732ca fp 0xf607e198 dart::ThreadPool::Worker::Main(unsigned int)
pc 0x01cc0bbf fp 0xf607e2e8 /b/s/w/ir/out/DebugIA32/dart+0x18c0bbf
pc 0xf7ce6295 fp 0xf607e3a8 /lib/i386-linux-gnu/libpthread.so.0+0x6295
-- End of DumpStackTrace
To reproduce:
dart fuzz.dart.txt
fuzz.dart.txt
Fixes pending at:
#38231
#38248
| gharchive/issue | 2019-09-06T16:04:04 | 2025-04-01T06:38:20.148101 | {
"authors": [
"aartbik",
"feli-citas"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/38244",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
529763093 | Dart Analyzer Error
Analyzer Feedback from IntelliJ
Version information
IDEA AI-191.8026.42.35.5977832
2.5.0-edge.1ef83b86ae637ffe7359173804cbc6d3fa25e6db
AI-191.8026.42.35.5977832, JRE 1.8.0_202-release-1483-b03x64 JetBrains s.r.o, OS Windows 10(amd64) v10.0 , screens 1536x864, 2560x1440
Exception
Dart analysis server, SDK version 2.5.0-edge.1ef83b86ae637ffe7359173804cbc6d3fa25e6db, server version 1.27.2, error: Captured exception
RangeError: Value not in range: 1
#0 _StringBase.substring (dart:core-patch/string_patch.dart:392:7)
#1 unescapeString (package:front_end/src/fasta/quote.dart:140:14)
#2 AstBuilder.endLiteralString (package:analyzer/src/fasta/ast_builder.dart:1415:22)
#3 Parser.parseSingleLiteralString (package:front_end/src/fasta/parser/parser.dart:4892:14)
#4 Parser.parseLiteralString (package:front_end/src/fasta/parser/parser.dart:4817:13)
#5 Parser.parsePrimary (package:front_end/src/fasta/parser/parser.dart:4251:14)
#6 Parser.parseUnaryExpression (package:front_end/src/fasta/parser/parser.dart:4184:12)
#7 Parser.parsePrecedenceExpression (package:front_end/src/fasta/parser/parser.dart:3972:13)
#8 Parser.parseExpression (package:front_end/src/fasta/parser/parser.dart:3944:13)
#9 Parser.parseLiteralListSuffix (package:front_end/src/fasta/parser/parser.dart:4437:19)
#10 Parser.parsePrimary (package:front_end/src/fasta/parser/parser.dart:4290:14)
#11 Parser.parseUnaryExpression (package:front_end/src/fasta/parser/parser.dart:4184:12)
#12 Parser.parsePrecedenceExpression (package:front_end/src/fasta/parser/parser.dart:3972:13)
#13 Parser.parseExpression (package:front_end/src/fasta/parser/parser.dart:3944:13)
#14 Parser.parseVariableInitializerOpt (package:front_end/src/fasta/parser/parser.dart:2513:15)
#15 Parser.parseOptionallyInitializedIdentifier (package:front_end/src/fasta/parser/parser.dart:5389:13)
#16 Parser.parseVariablesDeclarationRest (package:front_end/src/fasta/parser/parser.dart:5370:13)
#17 Parser.parseExpressionStatementOrDeclarationAfterModifiers (package:front_end/src/fasta/parser/parser.dart:5363:15)
#18 Parser.parseStatementX (package:front_end/src/fasta/parser/parser.dart:3738:16)
#19 Parser.parseStatement (package:front_end/src/fasta/parser/parser.dart:3716:20)
#20 Parser.parseFunctionBody (package:front_end/src/fasta/parser/parser.dart:3621:15)
#21 Parser.parseTopLevelMethod (package:front_end/src/fasta/parser/parser.dart:2460:13)
#22 Parser.parseTopLevelMemberImpl (package:front_end/src/fasta/parser/parser.dart:2340:14)
#23 Parser.parseTopLevelDeclarationImpl (package:front_end/src/fasta/parser/parser.dart:493:14)
#24 Parser.parseUnit (package:front_end/src/fasta/parser/parser.dart:350:15)
#25 ParserAdapter.parseCompilationUnit2 (package:analyzer/src/generated/parser_fasta.dart:157:32)
#26 ParserAdapter.parseCompilationUnit (package:analyzer/src/generated/parser_fasta.dart:152:12)
#27 _File._parse (package:analyzer/src/services/available_declarations.dart:1839:23)
#28 _File.refresh (package:analyzer/src/services/available_declarations.dart:1160:30)
#29 DeclarationsTracker._performChangeFile (package:analyzer/src/services/available_declarations.dart:701:10)
#30 DeclarationsTracker.doWork (package:analyzer/src/services/available_declarations.dart:567:7)
#31 CompletionLibrariesWorker.performWork (package:analysis_server/src/domains/completion/available_suggestions.dart:283:13)
<asynchronous suspension>
#32 AnalysisDriverScheduler._run (package:analyzer/src/dart/analysis/driver.dart:1901:35)
<asynchronous suspension>
#33 AnalysisDriverScheduler.start (package:analyzer/src/dart/analysis/driver.dart:1855:5)
#34 new AnalysisServer (package:analysis_server/src/analysis_server.dart:212:29)
#35 SocketServer.createAnalysisServer (package:analysis_server/src/socket_server.dart:86:26)
#36 StdioAnalysisServer.serveStdio (package:analysis_server/src/server/stdio_server.dart:37:18)
#37 Driver.startAnalysisServer.<anonymous closure> (package:analysis_server/src/server/driver.dart:572:21)
#38 _rootRun (dart:async/zone.dart:1124:13)
#39 _CustomZone.run (dart:async/zone.dart:1021:19)
#40 _runZoned (dart:async/zone.dart:1516:10)
#41 runZoned (dart:async/zone.dart:1463:12)
#42 Driver._captureExceptions (package:analysis_server/src/server/driver.dart:689:12)
#43 Driver.startAnalysisServer (package:analysis_server/src/server/driver.dart:570:7)
#44 Driver.start.<anonymous closure> (package:analysis_server/src/server/driver.dart:474:9)
#45 _AsyncAwaitCompleter.start (dart:async-patch/async_patch.dart:43:6)
#46 Driver.start.<anonymous closure> (package:analysis_server/src/server/driver.dart:469:43)
#47 CompilerContext.runInContext.<anonymous closure>.<anonymous closure> (package:front_end/src/fasta/compiler_context.dart:122:46)
#48 new Future.sync (dart:async/future.dart:224:31)
#49 CompilerContext.runInContext.<anonymous closure> (package:front_end/src/fasta/compiler_context.dart:122:19)
#50 _rootRun (dart:async/zone.dart:1124:13)
#51 _CustomZone.run (dart:async/zone.dart:1021:19)
#52 _runZoned (dart:async/zone.dart:1516:10)
#53 runZoned (dart:async/zone.dart:1463:12)
#54 CompilerContext.runInContext (package:front_end/src/fasta/compiler_context.dart:121:12)
#55 CompilerConte...
For additional log information, please append the contents of
file://C:\Users\phoenix\AppData\Local\Temp\report.txt.
Thanks for the report! This was previously reported in #39092 and has been fixed. Please upgrade to the latest version, reopen if you still see the failure.
Duplicate of #39092
| gharchive/issue | 2019-11-28T08:29:42 | 2025-04-01T06:38:20.152656 | {
"authors": [
"double-headed-eagle",
"srawlins"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/39559",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1178276743 | documentation for strong-mode rules
Describe the issue
There seems to be no documentation for the analyzer.strong-mode options. Though the implicit-casts option is mentioned briefly here, I can't seem to find any documentation for the other options.
I think there are two more (implicit-dynamic and declaration-casts).
raised #48651
@srawlins
The strong-mode rules are soft deprecated; soon to be for reals deprecated, so we will not be writing documentation for them.
I think this can be closed since the strong-mode rules were removed in Dart 3 and the strict language modes are documented in https://dart.dev/guides/language/analysis-options#enabling-additional-type-checks :D
I'm going to close this as the replacement strict language modes are documented at https://dart.dev/tools/analysis#enabling-additional-type-checks and https://github.com/dart-lang/sdk/issues/50679 is tracking removing the old strong-mode options.
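For orientation, a hedged sketch of both forms in analysis_options.yaml (the strict modes below are the ones documented at the link above; the commented-out block is the removed Dart 2-era form):
# Removed strong-mode form, shown for reference only:
#
# analyzer:
#   strong-mode:
#     implicit-casts: false
#     implicit-dynamic: false

# Current replacement: strict language modes.
analyzer:
  language:
    strict-casts: true
    strict-inference: true
    strict-raw-types: true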
Please open an issue on site-www if you'd like to see any further improvements to the docs. Thanks!
| gharchive/issue | 2022-03-23T15:07:51 | 2025-04-01T06:38:20.158394 | {
"authors": [
"DetachHead",
"parlough",
"scheglov",
"srawlins"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/48650",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2242287343 | gzip decodes only first line of the file
GZipCodec can't decode the attached file. It decodes only the first line of the file. I could decode it with the gzip shell command. I have also tried other libraries from pub.dev, but with no luck.
void main(List<String> arguments) async {
  final file = File('test.csv.gz');
  final bytes = file.readAsBytesSync();
  print(bytes.length); // prints 3796
  print(gzip.decode(bytes).length); // prints 65
}
test.csv.gz
I suspect the archive itself is corrupt or somehow non-standard. On macOS, I see:
Closing, as I think the issue is with the archive and not the GZipCodec class. If on further investigation of the archive you believe it's well-formed / something GZipCodec should parse, please re-open.
@devoncarew I updated the file. Now it opens with the standard macos archive utility. Could you check it again?
//cc @brianquinlan
I can repro. Python is able to decompress this file:
>>> s = open('test.csv.gz', 'rb').read()
>>> import gzip
>>> t = gzip.decompress(s)
>>> len(t)
68541
The bytes that Dart actually decodes are:
>>> x = [105, 100, 83, 117, 98, 67, 97, 109, 112, 97, 105, 103, 110, 84, 105, 116, 108, 101, 44, 105, 100, 83, 117, 98, 65, 100, 83, 101, 116, 84, 105, 116, 108, 101, 44, 105, 100, 83, 117, 98, 67, 97, 109, 112, 97, 105, 103, 110, 44, 105, 100, 67, 97, 109, 112, 97, 105, 103, 110, 84, 105, 116, 108, 101, 10]
>>> bytes(x)
b'idSubCampaignTitle,idSubAdSetTitle,idSubCampaign,idCampaignTitle\n'
Which is the first line of the file. If I understand correctly, the GZIP file format consists of concatenated compressed data sets. So maybe we are only decoding the first data set?
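A minimal Dart sketch of that hypothesis (assuming the attachment really is a multi-member file): a concatenated gzip file is just complete gzip streams laid back to back, and a decoder that stops after the first member reproduces the reported behavior.
import 'dart:convert' show utf8;
import 'dart:io' show gzip;

void main() {
  // Two complete gzip members, laid back to back.
  final member1 = gzip.encode(utf8.encode('first line\n'));
  final member2 = gzip.encode(utf8.encode('rest of the file\n'));
  final concatenated = [...member1, ...member2];

  // With the behavior reported above, this prints only 'first line';
  // a decoder that walks every member would also emit 'rest of the file'.
  print(utf8.decode(gzip.decode(concatenated)));
}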
I get the same output as Dart when I use the zpipe example after changing:
- ret = inflateInit(&strm);
+ ret = inflateInit2(&strm, 32 + 15);
The Python implementation looks very similar to ours.
If I extract the file and recompress it with gzip, both Dart and zpipe can decompress the file. How did you generate this archive?
@brianquinlan This archive is from raw data export API response of https://docs.tracker.my.com/api/export-api/raw/about
Seems related to https://github.com/dart-lang/sdk/issues/47244
Yep. OK, I missed that Python deals with gzip data starting in Python code:
https://github.com/python/cpython/blob/fc21c7f7a731d64f7e4f0e82469f78fa9c104bbd/Lib/gzip.py#L622
I also found an example on how to handle concatenated gzip streams in C from Mark Adler himself:
https://stackoverflow.com/questions/17820664/is-this-a-bug-in-this-gzip-inflate-method/17822217#17822217
I have a straightforward fix for this but it will take a while for me to convince myself that it always works.
| gharchive/issue | 2024-04-14T17:55:09 | 2025-04-01T06:38:20.166513 | {
"authors": [
"a-siva",
"brianquinlan",
"devoncarew",
"lrhn",
"meowofficial"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/issues/55469",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1561140395 | issue 4433: updated web.dart & create_test.dart
added --webdev serve to add a small caveat
Spam.
| gharchive/pull-request | 2023-01-29T06:01:27 | 2025-04-01T06:38:20.167766 | {
"authors": [
"anujcontractor",
"mraleph"
],
"repo": "dart-lang/sdk",
"url": "https://github.com/dart-lang/sdk/pull/51155",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
920435146 | Setting up dart on a self hosted runner using tool cache not working as expected
We are in the process of migrating from GitHub's macos-latest runners to self hosted runners running on Mac Minis. When we use the dart-lang/setup-dart action now, the first run is fine, but from the second run onwards we run into the following issue.
Installing Dart SDK version "2.13.3" from the stable channel on macos-x64
Downloading https://storage.googleapis.com/dart-archive/channels/stable/release/2.13.3/sdk/dartsdk-macos-x64-release.zip...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed

0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 183M 0 20786 0 0 74501 0 0:42:56 --:--:-- 0:42:56 74235
26 183M 26 48.2M 0 0 34.8M 0 0:00:05 0:00:01 0:00:04 34.8M
64 183M 64 118M 0 0 52.9M 0 0:00:03 0:00:02 0:00:01 52.8M
100 183M 100 183M 0 0 58.6M 0 0:00:03 0:00:03 --:--:-- 58.6M
replace /Users/runner/actions-runner/_work/_tool/dart-sdk/bin/dart? [y]es, [n]o, [A]ll, [N]one, [r]ename: NULL
(EOF or read error, treating as "[N]one" ...)
Error: Download failed! Please check passed arguments.
Error: Process completed with exit code 1.
TL;DR: How do we use the dart-lang/setup-dart action properly, in combination with the tool cache and caching multiple Dart versions? Maybe as a workaround we can just answer yes instead of NULL to the replace /Users/runner/actions-runner/_work/_tool/dart-sdk/bin/dart? question?
More detail
I see this in setup.sh
Unzipping dartsdk.zip into the RUNNER_TOOL_CACHE directory
# Download installation zip.
curl --connect-timeout 15 --retry 5 "$URL" > "${HOME}/dartsdk.zip"
unzip "${HOME}/dartsdk.zip" -d "${RUNNER_TOOL_CACHE}" > /dev/null
Then appending to the GITHUB_PATH
# Update paths.
echo "${HOME}/.pub-cache/bin" >> $GITHUB_PATH
echo "${RUNNER_TOOL_CACHE}/dart-sdk/bin" >> $GITHUB_PATH
So this action is not doing anything with versioning or checking whether the requested version is already installed, like we see done in the flutter action, for example:
/Users/runner/actions-runner/_work/_tool runner$ ls -l flutter/
total 0
drwxr-xr-x 4 runner staff 128B Jun 14 12:52 ./
drwxr-xr-x 6 runner staff 192B Jun 14 14:26 ../
drwxr-xr-x 4 runner staff 128B Jun 14 12:52 2.0.3-stable/
drwxr-xr-x 4 runner staff 128B Jun 14 09:54 2.2.1-stable/
/Users/runner/actions-runner/_work/_tool runner$ ls -l dart-sdk
total 40
drwx------ 10 runner staff 320B Jun 9 13:02 ./
drwxr-xr-x 6 runner staff 192B Jun 14 14:26 ../
-rw-r--r-- 1 runner staff 1.5K Jun 7 13:14 LICENSE
-rw-r--r-- 1 runner staff 981B Jun 7 13:14 README
drwx------ 14 runner staff 448B Jun 10 10:05 bin/
-rw-r--r-- 1 runner staff 189B Jun 9 13:02 dartdoc_options.yaml
drwxr-xr-x 9 runner staff 288B Jun 9 13:02 include/
drwxr-xr-x 28 runner staff 896B Jun 9 13:19 lib/
-rw-r--r-- 1 runner staff 41B Jun 9 13:02 revision
-rw-r--r-- 1 runner staff 7B Jun 9 13:02 version
Anybody else running into this?
Update: The https://github.com/cedx/setup-dart action doesn't have this issue, so reverting to that for now.
The issue seems to be related to the unzip command in this line:
https://github.com/dart-lang/setup-dart/blob/ade92c2f32c026078e6297a030ec6b7933f71950/setup.sh#L80
A possible solution would be to pass -o to suppress the prompt and force overwriting of files, as described in the man page here: https://linux.die.net/man/1/unzip.
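That is, roughly this change to the unzip invocation in setup.sh (a sketch of the suggestion above, not a merged patch):
# Overwrite any previously cached SDK without prompting; the interactive
# prompt is what fails on a non-interactive self-hosted runner.
unzip -o "${HOME}/dartsdk.zip" -d "${RUNNER_TOOL_CACHE}" > /dev/null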
@hauketoenjes did you confirm that the -o option fixes the issue? If so, are you interested in sending a PR for that?
| gharchive/issue | 2021-06-14T13:51:08 | 2025-04-01T06:38:20.174474 | {
"authors": [
"hauketoenjes",
"jpelgrim",
"mit-mit"
],
"repo": "dart-lang/setup-dart",
"url": "https://github.com/dart-lang/setup-dart/issues/35",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
459620389 | Wallet node stop syncing
Hello,
After 22.06.2019 the wallet node stopped syncing.
Dashd version: v0.14.0.1 (from releases binary)
Machine specs:
OS: Ubuntu 18.04.2 LTS (Bionic Beaver)
CPU: Intel(R) Celeron(R) CPU J3355 @ 2.00GHz
RAM: 8Gb
Disk size: 500Gb
Disk Type (HD/SDD): HDD
debug.log attached: debug.log
Possible duplicate of #2995; please try the solution mentioned there, i.e. reconsiderblock 00000000000000112e41e4b3afda8b233b8cc07c532d2eac5de097b68358c43e
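(For anyone else hitting this: reconsiderblock is issued over the RPC interface, e.g. dash-cli reconsiderblock 00000000000000112e41e4b3afda8b233b8cc07c532d2eac5de097b68358c43e, assuming dash-cli can reach your running node.)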
Thank you, it seems that helped (I needed to wait about 15-60 minutes for the node to start syncing).
| gharchive/issue | 2019-06-23T21:34:19 | 2025-04-01T06:38:20.241322 | {
"authors": [
"UdjinM6",
"bitfex"
],
"repo": "dashpay/dash",
"url": "https://github.com/dashpay/dash/issues/2996",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2147055106 | feat: stage getitem calls
@lgray , this doesn't actually work, but I thought you would appreciate a glimpse of what I had in mind.
Yeah this is more or less what I did for the histograms in the end, so that makes sense. I guess I just don't see how to pull it to the end in the case of getitems.
don't see how to pull it to the end
What do you mean?
I am thinking that the with_field case is essentially identical, and instead of queueing a specific set of things (items to get) like this, we can have a small structure of stuff to do, where there can be a couple of specific sorts, and for each a single method says how to execute. Execution happens as soon as we see an operation that doesn't map into the queue (or ._dask/._meta get accessed).
Oh - as in - starting from that entry point I don't see how to get it to a functioning implementation because my brain is occupied with other tasks. :-)
I'm sure I could see the whole way through in a more quiet moment. The initial direction makes a lot of sense though.
A couple of failures here to wrap my head around, perhaps because of mutation somewhere; but here are the timings
Post
In [1]: import dask_awkward as dak
In [2]: arr = dak.from_lists([{"a": {"b": [1, 2, 3]}}]*5)
In [3]: arr2 = arr.a.b
In [4]: %timeit arr2 = arr.a.b
85.9 µs ± 280 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
Pre
In [1]: import dask_awkward as dak
In [2]: arr = dak.from_lists([{"a": {"b": [1, 2, 3]}}]*5)
In [3]: arr2 = arr.a.b
In [4]: %timeit arr2 = arr.a.b
215 µs ± 3.12 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Notes:
a typetracer does not have a deterministic token in cache; am using meta.typestr
small optimization in .fields that I'm pretty sure is harmless.
Yeah I went ahead and tried it - definitely a noticeable improvement!
It leaves fancy indexing as the last thing that's taking significant time.
However, this PR has some annoying rebasing issues with #477 so I can't compose it all together pleasantly. Can't quite yet see the full picture of what is left.
@lgray , merge done, back to the same three failures as before
OK, so the problem is that the cache also contains the output divisions, which depend on the input divisions at the time of the first call. If those divisions become known, the result would be different. Interestingly, one of the couple of failing tests has this at the start:
def test_single_int(daa: dak.Array, caa: ak.Array) -> None:
daa = dak.copy(daa)
daa.eager_compute_divisions()
because it wants known divisions, but doesn't want to mutate the object held by the fixture.
Shouldn't be too hard to keep track of the state of divisions somehow as well?
I should have translated: I know what's wrong, I can fix it.
Ah, notes rather than discussion, gotcha. No problem, and cool!
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 93.16%. Comparing base (8cb8994) to head (9e39611).
Report is 29 commits behind head on main.
Additional details and impacted files
@@ Coverage Diff @@
## main #475 +/- ##
==========================================
+ Coverage 93.06% 93.16% +0.09%
==========================================
Files 23 23
Lines 3290 3322 +32
==========================================
+ Hits 3062 3095 +33
+ Misses 228 227 -1
:umbrella: View full report in Codecov by Sentry.
OK, that fixes it. @lgray , maybe a speed test would be nice - I got around the former problem by simply not caching the divisions, as I don't think this part was contributing a lot compared to making meta and layers. I could instead make the cache more complex, if necessary.
I perhaps ought to write a test looking at the contents of the cache? I'm not sure we need that if all passes and times are clearly faster.
Yes with the awkward _new patch + multifill + this we're getting all the speedup we've seen thus far.
one nitpick here:
tokenize(fn, *args, meta is not None and meta.typestr, **kwargs)
appears to be noticeable only due to the meta.typestr call (ok it's half a second but that's not small when we are down to 4 seconds). Particular str(self.type) over in awkward that this calls is costly when spammed.
Would be good savings if we can get around that.
The remaining place that may give us some time back after all these improvements appears to be:
I thought meta.typestr was faster than str(meta), which is what the previous version would do. It sounds like it doesn't matter. So question: should map_partitions be expected to produce the identical result whether or not meta is provided? If yes, it doesn't need to be in this tokenize call at all. If no, then it does, and I don't know of a faster way to get a unique identifier of it.
Also, I reckon output_divisions should probably have been in the tokenize, since that does change the nature of the layer produced.
For building the layers themselves it doesn't matter. But I'd like to ruminate on it for a bit.
Yeah I think my position is as follows:
the _meta only alters the outcome of evaluating typetracers, not graph structure
the from-uproot io layer, as an example, does not change its output keys when its columns are projected/optimized
this applies to any AwkwardInputLayer
likewise when mocking / optimizing we don't change the keys of layers based on the meta
but we do generate the key based on the meta which is inconsistent with typical meaning
similarly, in dask.array the meta is not tokenized, after checking a few expensive algorithms as well as dask.array.Array itself
Therefore I agree with not tokenizing the meta.
Done. This is final once green, unless we can think of some test that might help.
Furthermore if a user is trying to manually overwrite keys they'll probably have found the cache in the first place and can manipulate it as they need to.
Agreed, I think the mapping of collection name to graph/meta is natural. I don't even think there's any particular documentation that should go with this, except that maybe the cache size should be configurable? That doesn't need to happen yet.
I'd motion for going ahead and merging this today and getting a release out, then a bunch of wheels can turn on the coffea side of things.
@martindurant can I go ahead and merge/release? You tend to do the honors on these PRs, but I'm happy to turn the cranks.
Go ahead
| gharchive/pull-request | 2024-02-21T15:33:35 | 2025-04-01T06:38:20.258846 | {
"authors": [
"codecov-commenter",
"lgray",
"martindurant"
],
"repo": "dask-contrib/dask-awkward",
"url": "https://github.com/dask-contrib/dask-awkward/pull/475",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
[BUG][GPU Logic Bug] "SELECT (<string>)||(<column(decimal)>) FROM <table>" brings Error
What happened:
"SELECT (<string>)||(<column(decimal)>) FROM <table>" brings different results, when using CPU and GPU.
What you expected to happen:
The result is the same when using CPU and GPU.
Minimal Complete Verifiable Example:
import pandas as pd
import dask.dataframe as dd
from dask_sql import Context
c = Context()
df = pd.DataFrame({
    'c0': [0.5113391810437729]
})
t1 = dd.from_pandas(df, npartitions=1)
c.create_table('t1', t1, gpu=False)
c.create_table('t1_gpu', t1, gpu=True)
print('CPU Result:')
result1= c.sql("SELECT ('A')||(t1.c0) FROM t1").compute()
print(result1)
print('GPU Result:')
result2= c.sql("SELECT ('A')||(t1_gpu.c0) FROM t1_gpu").compute()
print(result2)
Result:
CPU Result:
Utf8("A") || t1.c0
0 A0.5113391810437729
GPU Result:
Utf8("A") || t1_gpu.c0
0 A0.511339181
Anything else we need to know?:
Environment:
dask-sql version: 2023.6.0
Python version: Python 3.10.11
Operating System: Ubuntu22.04
Install method (conda, pip, source): Docker deploy by https://hub.docker.com/layers/rapidsai/rapidsai-dev/23.06-cuda11.8-devel-ubuntu22.04-py3.10/images/sha256-cfbb61fdf7227b090a435a2e758114f3f1c31872ed8dbd96e5e564bb5fd184a7?context=explore
Trying out your reproducer with latest main gives me an error 😕 looks like at some point between now and 2023.6.0 our logical plan has changed such that we skip the casting of the non-string column:
# 2023.6.0
Projection: Utf8("A") || CAST(t1.c0 AS Utf8)
TableScan: t1 projection=[c0]
# main
Projection: Utf8("A") || t1.c0
TableScan: t1 projection=[c0]
Leading to errors in the binary operation; cc @jdye64 if you have any capacity to look into this. As for the original issue, it seems like that generally comes down to a difference in the behavior of cast operations on CPU/GPU, as the following shows the same issue:
print('CPU Result:')
result1= c.sql("SELECT CAST(c0 AS STRING) FROM t1").compute()
print(result1)
print('GPU Result:')
result2= c.sql("SELECT CAST(c0 AS STRING) FROM t1_gpu").compute()
print(result2)
Can look into that, would you mind modifying your issue description / title to reflect this?
Dask-sql version 2024.3.0 has fixed it.
| gharchive/issue | 2023-09-19T14:30:34 | 2025-04-01T06:38:20.266550 | {
"authors": [
"charlesbluca",
"griffith-maker",
"qwebug"
],
"repo": "dask-contrib/dask-sql",
"url": "https://github.com/dask-contrib/dask-sql/issues/1226",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
682142800 | Add a 'decision_function()' method to the 'LogisticRegression' class.
I noticed that the dask_ml.linear_model.LogisticRegression class lacked a decision_function() method like the one implemented in the corresponding scikit-learn API.
This PR adds a decision_function() method and updates one corresponding test.
Thanks. The CI failures are known and unrelated.
| gharchive/pull-request | 2020-08-19T20:04:32 | 2025-04-01T06:38:20.270993 | {
"authors": [
"TomAugspurger",
"wfondrie"
],
"repo": "dask/dask-ml",
"url": "https://github.com/dask/dask-ml/pull/728",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
439718980 | dask-mpi not working
I have this script dask_mpi_test.py:
from dask_mpi import initialize
initialize()
from distributed import Client
import dask
client = Client()
df = dask.datasets.timeseries()
print(df.groupby(['time', 'name']).mean().compute())
print(client)
When I try to run this script with:
mpirun -np 4 python dask_mpi_test.py
I get these errors:
~/workdir $ mpirun -np 4 python dask_mpi_test.py
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO - Scheduler at: tcp://xxxxxx:8786
distributed.scheduler - INFO - bokeh at: :8787
distributed.worker - INFO - Start worker at: tcp://xxxxx:44712
/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/bokeh/core.py:57: UserWarning:
Port 8789 is already in use.
Perhaps you already have a cluster running?
Hosting the diagnostics dashboard on a random port instead.
warnings.warn('\n' + msg)
distributed.worker - INFO - Start worker at: tcp://xxxxxx:36782
distributed.worker - INFO - Listening to: tcp://:44712
distributed.worker - INFO - bokeh at: :8789
distributed.worker - INFO - Listening to: tcp://:36782
distributed.worker - INFO - Waiting to connect to: tcp://xxxxxx:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - bokeh at: :43876
distributed.worker - INFO - Waiting to connect to: tcp://xxxxx:8786
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Threads: 1
distributed.worker - INFO - Memory: 3.76 GB
distributed.worker - INFO - Memory: 3.76 GB
distributed.worker - INFO - Local Directory: /gpfs/fs1/scratch/abanihi/worker-uoz0vtci
distributed.worker - INFO - Local Directory: /gpfs/fs1/scratch/abanihi/worker-bb0u_737
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
Traceback (most recent call last):
File "dask_mpi_test.py", line 6, in <module>
client = Client()
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/client.py", line 640, in __init__
self.start(timeout=timeout)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/client.py", line 763, in start
sync(self.loop, self._start, **kwargs)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/utils.py", line 321, in sync
six.reraise(*error[0])
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/six.py", line 693, in reraise
raise value
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/utils.py", line 306, in f
result[0] = yield future
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1141, in run
yielded = self.gen.throw(*exc_info)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/client.py", line 851, in _start
yield self._ensure_connected(timeout=timeout)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1141, in run
yielded = self.gen.throw(*exc_info)
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/distributed/client.py", line 892, in _ensure_connected
self._update_scheduler_info())
File "/glade/work/abanihi/softwares/miniconda3/envs/analysis/lib/python3.7/site-packages/tornado/gen.py", line 1133, in run
value = future.result()
tornado.util.Timeout
$ conda list dask
# packages in environment at /glade/work/abanihi/softwares/miniconda3/envs/analysis:
#
# Name Version Build Channel
dask 1.2.0 py_0 conda-forge
dask-core 1.2.0 py_0 conda-forge
dask-jobqueue 0.4.1+28.g5826abe pypi_0 pypi
dask-labextension 0.3.3 pypi_0 pypi
dask-mpi 1.0.2 py37_0 conda-forge
$ conda list tornado
# packages in environment at /glade/work/abanihi/softwares/miniconda3/envs/analysis:
#
# Name Version Build Channel
tornado 5.1.1 py37h14c3975_1000 conda-forge
$ conda list distributed
# packages in environment at /glade/work/abanihi/softwares/miniconda3/envs/analysis:
#
# Name Version Build Channel
distributed 1.27.0 py37_0 conda-forge
Is anyone aware of anything that might have changed in an update to dask or distributed that would cause dask-mpi to break?
Ccing @kmpaul
The last CircleCI tests ran with dask=1.1.0 and distributed=1.25.2. However, I've tried to reproduce the same environment as was run in the last CircleCI test, and it fails on my laptop. ...Yet, rerunning the CircleCI test worked fine.
I can reproduce this with the following environment on macOS.
I am running
mpirun dask-mpi --scheduler-file my_scheduler.json --nthreads 1
python -c "from distributed import Client; c = Client(scheduler_file='my_scheduler.json')"
I see this issue with:
dask 1.2.0 py_0 conda-forge
dask-core 1.2.0 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.28.1 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
and (downgraded dask)
dask 1.1.5 py_0 conda-forge
dask-core 1.1.5 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.28.1 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
and (downgraded distributed to 1.27.1)
dask 1.2.0 py_0 conda-forge
dask-core 1.2.0 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.27.1 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
and
dask 1.1.5 py_0 conda-forge
dask-core 1.1.5 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.26.1 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
and
dask 1.1.1 py_0 conda-forge
dask-core 1.1.1 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.25.3 py36_0 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
however, the following works!
dask 0.20.2 py_0 conda-forge
dask-core 0.20.2 py_0 conda-forge
dask-mpi 1.0.2 py36_0 conda-forge
distributed 1.24.2 py36_1000 conda-forge
tornado 6.0.2 py36h01d97ff_0 conda-forge
Downgrading distributed below 1.25 to 1.24 and dask to 0.20 (below 1.0) seems to work. Since they are coupled, I'm not sure where the issue is, but it's clearly upstream of dask-mpi.
I had the same timeout problem.
I was able to run my job while using dask-scheduler instead of dask-mpi to create the scheduler.
After some searching, it appears that the main difference in the dask-scheduler CLI is that it uses the current tornado IOLoop: https://github.com/dask/distributed/blob/1.28.1/distributed/cli/dask_scheduler.py#L197
Using the current loop instead of a new instance here: https://github.com/dask/dask-mpi/blob/master/dask_mpi/cli.py#L52 makes it run.
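For illustration, a minimal sketch of the kind of one-line change described above (the "before" code is paraphrased from memory, not quoted from the actual dask-mpi file):
from tornado.ioloop import IOLoop
# Before (paraphrased): a brand-new event loop was constructed, so the
# scheduler did not run on the loop the rest of the process was using.
# loop = IOLoop()
# After: reuse the loop of the current thread, matching the dask-scheduler CLI.
loop = IOLoop.current()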
@Timshel, @bocklund, thank you for chiming in. I am going to take a stab at a fix.
Moving forward, we may need to extend our testing environment to test different combinations of dask and distributed versions (or at least make sure that everything works with the latest versions).
I am getting the same problem with the latest versions of dask and distributed and running the example from the docs.
This is blocking https://github.com/basnijholt/adaptive-scheduler/pull/11.
Fixed with #33. Thank you @Timshel for the tip.
| gharchive/issue | 2019-05-02T18:08:42 | 2025-04-01T06:38:20.282575 | {
"authors": [
"Timshel",
"andersy005",
"basnijholt",
"bocklund",
"kmpaul"
],
"repo": "dask/dask-mpi",
"url": "https://github.com/dask/dask-mpi/issues/30",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
627850616 | Accelerate intra-node IPC with shared memory
When implementing e.g. a data loading pipeline for machine learning with Dask, I can choose either:
threaded scheduler: only fast when the GIL is released
forking scheduler: only fast when the data calculation is very CPU-intensive compared to the result size.
I often face the issue that the threaded scheduler effectively uses only 150% CPU, no matter how many cores it gets, because of python code that does not parallelize.
The forking scheduler sometimes works better, but only if the data loading is very CPU-intensive.
Recently, I tried Ray and it could speed up some of my prediction models 5-fold for some reason.
I'm not 100% up to date with the latest development in Dask, but AFAIK Dask serializes all data when sending it between workers. That's why I assume the huge speed difference is due to the shared-memory object store Plasma, which allows zero-copy transfers of Arrow arrays from the worker to TensorFlow.
=> I'd like to share two ideas on how Plasma or Ray could be helpful for Dask:
Have a shared object cache between all threads/forks in dask/cachey
Shared memory communication:
Allow producer to calculate data and consumer to read it without (de)serialization or copying
Related issues:
Investigate using plasma
Investigate UNIX domain sockets
What is the workload?
Have you tried the dask.distributed scheduler? You can set up a system with sensible defaults by running the following:
from dask.distributed import Client
client = Client()
# then run your normal Dask code
https://docs.dask.org/en/latest/scheduling.html#dask-distributed-local
In general a system like Plasma will be useful when you want to do a lot of random access changes to a large data structure and you have to use many processes for some reason.
In my experience, the number of cases where this is true is very low. Unless you're doing something like a deep learning parameter server on one machine and can't use threads for some reason there is almost always a simpler solution.
When implementing e.g. a data loading pipeline for machine learning with Dask, I can choose either:
A data loading pipeline shouldn't really require any communication, and certainly not high speed random access modifications to a large data structure. It sounds like you just want a bunch of processes (because you have code that holds the GIL) and want to minimize data movement between those processes. The dask.distributed scheduler should have you covered there, you might want to add the threads_per_worker=1 (or 2) if you have a high core machine.
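For illustration, a minimal sketch of that configuration (the worker and thread counts below are placeholders, not recommendations):
from dask.distributed import Client
# One thread per worker process sidesteps GIL contention for pure-Python
# tasks; choose n_workers to match your core count.
client = Client(n_workers=16, threads_per_worker=1)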
In addition to what Matt said, we have tended to keep Dask's dependencies pretty lightweight when possible. My guess is if we were to implement shared memory it would either involve multiprocessing.shared_memory (added in Python 3.8 with a backport package) or using UNIX domain sockets ( https://github.com/dask/distributed/issues/3630 ) (as noted above).
That said, if serialization is really a bottleneck for you, would suggest you take a closer look at what is being serialized. If it's not something that Dask serializes efficiently (like NumPy arrays), then it might just be you need to implement Dask serialization. If you have some simple Python classes consisting of things Dask already knows how to serialize efficiently, you might be able to just register those classes with Dask. It will then recurse through them and serialize them efficiently.
Additionally if you are Python with pickle protocol 5 support and a recent version of Dask, you can get efficient serialization with plain pickle thanks to out-of-band pickling ( https://github.com/dask/distributed/pull/3784 ). Though you would have to check and make sure you are meeting those requirements. This may also require some work on your end to ensure your objects use things that can be handled out-of-band by either wrapping them in PickleBuffers (like in the docs) or using NumPy arrays, which have builtin support.
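As a rough sketch of what registering a custom class mentioned above can look like (the class and its fields are invented for illustration; check the distributed serialization docs for the exact API before relying on it):
import numpy as np
from distributed.protocol import dask_serialize, dask_deserialize

class Sample:                        # hypothetical user-defined class
    def __init__(self, label, values):
        self.label = label           # small Python metadata
        self.values = values         # large NumPy buffer

@dask_serialize.register(Sample)
def _serialize_sample(s):
    header = {"label": s.label, "dtype": str(s.values.dtype)}
    frames = [s.values.tobytes()]    # large payload shipped as a raw frame
    return header, frames

@dask_deserialize.register(Sample)
def _deserialize_sample(header, frames):
    values = np.frombuffer(frames[0], dtype=header["dtype"])
    return Sample(header["label"], values)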
plasma might be ideally suited for e.g. shuffling operations, https://github.com/dask/dask/issues/6164
Maybe. We're not really bound by bandwidth there yet. Even if we were, the people who are concerned about performance for dataframe shuffle operations are only really concerned when we start talking about very large datasets, for which single-node systems wouldn't be appropriate.
plasma might be ideally suited for e.g. shuffling operations, #6164
Though if you have thoughts on how plasma would help in that issue, please feel free to suggest over there. I'm sure people would be interested to hear 😉
In the context of distributed you could have a plasma store per node and instead of having workers communicating data directly, have them send the data to the plasma store on the receiving node and only send the guid / unique reference directly to the worker. All workers on that node would then have access to that data (by passing around the guid) without having to copy or deserialize the data.
I think that could have pretty big performance benefits for a number of workloads. IIUC that's basically what ray does.
To illustrate the benefits of Plasma, we demonstrate an 11x speedup (on a machine with 20 physical cores) for sorting a large pandas DataFrame (one billion entries). The baseline is the built-in pandas sort function, which sorts the DataFrame in 477 seconds. To leverage multiple cores, we implement the following standard distributed sorting scheme...
Anyway, it would would be very big piece of work so, not something I could invest time in.I thought I'd mention it as an option if people are considering big changes to improve performance.
Yeah, I think that having some sort of shuffling service makes sense (this is also what Spark does). I'm not sure that we need all of the machinery that comes along with Plasma though, which is a bit of a bear. My guess is that a system that just stores data in normal vanilla RAM on each process would do the trick.
I could totally be wrong though. It would be great if people wanted to run experiments here and report back.
cc @rjzamora @madsbk (in case this is of interest)
Has there been any further discussion on the multiprocessing shared memory implementation? I also run dask on single machines with high core counts and have read-only data structures that I want shared.
@alexis-intellegens the Ray developers created a Dask scheduler for this called dask-on-ray.
I'd recommend trying it; it magically dropped my memory usage by an order of magnitude.
Note that you may need to use something like this:
# don't do this:
dask.compute(dask_fn(large_object))
# instead do this:
large_object_ref = ray.put(large_object)
dask.compute(dask_fn(large_object_ref))
ray will automatically de-reference the object for you.
Very interesting! I'll give it a go. Thanks @Hoeze
Out of curiosity, what would happen if I made a shared memory object (via Python 3.8 multiprocessing) and tried to access it in dask workers? I'll try it later today.
Out of curiosity, what would happen if I made a shared memory object (via Python 3.8 multiprocessing) and tried to access it in dask workers? I'll try it later today.
That should work, they'd pickle as references to the shared memory buffer and be remapped in the receiving process (provided all your workers are running on the same machine, otherwise you'd get an error). In general I think we're unlikely to add direct shared memory support in dask itself, but users are free to make use of it in custom workloads using e.g. dask.delayed. So if you have an object you want to share between workers, you can explicitly build this into your dask computations yourself (using either multiprocessing shared_memory or something more complicated like plasma).
As stated above, shared memory would make the most sense if you have objects that can be mapped to shared memory without copying (meaning they contain large buffers, like a numpy array) but also still hold the GIL. In practice this is rare - if you're using large buffers you also probably are doing something numeric (like numpy) in which case you release the GIL and threads work fine.
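To make that concrete, here is a minimal sketch of such a custom workload built from the standard library's shared memory plus dask.delayed; it assumes all workers run on the same machine, and the array sizes and names are purely illustrative:
import numpy as np
import dask
from multiprocessing import shared_memory

# Publish a large read-only array into shared memory once.
src = np.random.random(10_000_000)
shm = shared_memory.SharedMemory(create=True, size=src.nbytes)
np.ndarray(src.shape, dtype=src.dtype, buffer=shm.buf)[:] = src

@dask.delayed
def chunk_sum(name, shape, dtype, start, stop):
    seg = shared_memory.SharedMemory(name=name)   # re-attach by name, no copy
    arr = np.ndarray(shape, dtype=dtype, buffer=seg.buf)
    out = float(arr[start:stop].sum())
    del arr                                       # drop the view before closing
    seg.close()
    return out

parts = [chunk_sum(shm.name, src.shape, src.dtype, i, i + 1_000_000)
         for i in range(0, src.size, 1_000_000)]
totals = dask.compute(*parts)

shm.close()
shm.unlink()   # release the segment when finished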
Closing.
| gharchive/issue | 2020-05-30T23:53:07 | 2025-04-01T06:38:20.314905 | {
"authors": [
"Hoeze",
"alexis-intellegens",
"dhirschfeld",
"jakirkham",
"jcrist",
"mrocklin"
],
"repo": "dask/dask",
"url": "https://github.com/dask/dask/issues/6267",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
734773731 | Some methods of the DataFrame API Documentation are not in the summary table
What happened: In the summary table of the Dataframe API, https://docs.dask.org/en/latest/dataframe-api.html some methods are not present.
What you expected to happen: I would expect to have the following methods in the summary table:
[ ] DataFrame.abs
[ ] DataFrame.align
[ ] DataFrame.all
[ ] DataFrame.any
[ ] DataFrame.applymap
[ ] DataFrame.bfill
[ ] DataFrame.copy
[ ] DataFrame.diff
[ ] DataFrame.divide
[ ] DataFrame.eq
[ ] DataFrame.eval
[ ] DataFrame.ffill
[ ] DataFrame.first
[ ] DataFrame.ge
[ ] DataFrame.gt
[ ] DataFrame.idxmax
[ ] DataFrame.idxmin
[ ] DataFrame.info
[ ] DataFrame.isin
[ ] DataFrame.items
[ ] DataFrame.iteritems
[ ] DataFrame.last
[ ] DataFrame.le
[ ] DataFrame.lt
[ ] DataFrame.melt
[ ] DataFrame.mode
[ ] DataFrame.ne
[ ] DataFrame.nsmallest
[ ] DataFrame.pivot_table
[ ] DataFrame.resample
[ ] DataFrame.round
[ ] DataFrame.select_dtypes
[ ] DataFrame.sem
[ ] DataFrame.size
[ ] DataFrame.squeeze
[ ] DataFrame.to_html
[ ] DataFrame.to_string
[ ] DataFrame.to_timestamp
Minimal Complete Verifiable Example: For example, DataFrame.abs is not present in the summary table,
Anything else we need to know?: If I receive instructions on how I can help and add these methods to the documentation, I would like to open the PR :)
Thank you @steff456 for the report! I think those names can be added to the RST file here:
https://github.com/dask/dask/blob/master/docs/source/dataframe-api.rst
A PR would be most welcome!
Thanks for the quick response! I'll create the PR shortly 👍
| gharchive/issue | 2020-11-02T19:48:02 | 2025-04-01T06:38:20.327593 | {
"authors": [
"quasiben",
"steff456"
],
"repo": "dask/dask",
"url": "https://github.com/dask/dask/issues/6788",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
774874859 | Dask repartition / trouble with keeping bounds as returned by divisions
Hello,
Trying out and completing this example provided in the doc works.
import pandas as pd
from dask import dataframe as dd
df = pd.DataFrame(dict(a=list('aabbcc'), b=list(range(6))),index = pd.date_range(start='20100101', periods=6))
ddf = dd.from_pandas(df, npartitions=3)
ddf.divisions
ddf = ddf.repartition(partition_size="10MB")
ddf.divisions
First divisions returns
(Timestamp('2010-01-01 00:00:00', freq='D'),
Timestamp('2010-01-03 00:00:00', freq='D'),
Timestamp('2010-01-05 00:00:00', freq='D'),
Timestamp('2010-01-06 00:00:00', freq='D'))
Second one returns
(Timestamp('2010-01-01 00:00:00', freq='D'),
Timestamp('2010-01-06 00:00:00', freq='D'))
Now, trying on another example, the 2nd divisions fails this time.
from dask import dataframe as dd
import pandas as pd
import numpy as np
dti = pd.date_range(start='1/1/2018', end='1/08/2018', periods=100000)
df = pd.DataFrame(np.random.randint(100,size=(100000, 20)),columns=['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T'], index=dti)
ddf = dd.from_pandas(df, npartitions=1)
ddf.divisions
ddf = ddf.repartition(partition_size="10MB")
ddf.divisions
First divisions returns
(Timestamp('2018-01-01 00:00:00'), Timestamp('2018-01-08 00:00:00'))
Second one returns
(None, None, None)
Please, why is that so? Is there a bug somewhere?
Simply displaying ddf shows that the index appears to have been lost after the repartition.
Before.
ddf
Dask DataFrame Structure:
A B C D E F G H I J K L M N O P Q R S T
npartitions=1
2018-01-01 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64
2018-01-08 ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
Dask Name: from_pandas, 1 tasks
After
Dask DataFrame Structure:
A B C D E F G H I J K L M N O P Q R S T
npartitions=2
int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64 int64
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
Dask Name: repartition, 6 tasks
Thanks for your help and support,
Bests,
Further comment.
Using npartitions instead of partition_size produces expected results.
from dask import dataframe as dd
import pandas as pd
import numpy as np
n_per=20*5000
dti = pd.date_range(start='1/1/2018', end='2/08/2019', periods=n_per)
col = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T']
df = pd.DataFrame(np.random.randint(100,size=(n_per, len(col))),columns=col, index=dti)
ddf = dd.from_pandas(df, npartitions=1)
#ddf = ddf.repartition(partition_size="10MB")
ddf = ddf.repartition(npartitions=2)
ddf.divisions
Produces
(Timestamp('2018-01-01 00:00:00'),
Timestamp('2018-07-21 12:00:00'),
Timestamp('2019-02-08 00:00:00'))
Seems related to issue #6362
I think https://github.com/dask/dask/issues/6362#issuecomment-652507357 describes the behavior that you are seeing. Note that divisions does not have to be set. It is perfectly fine to have unknown divisions. If you would like them to be set you can use ddf.reset_index().set_index("index")
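Applied to the example above, that looks roughly like this (the reset column is called "index" here because the original DatetimeIndex had no name; adjust the name if yours differs):
ddf = ddf.repartition(partition_size="10MB")
ddf = ddf.reset_index().set_index("index", sorted=True)
ddf.divisions   # known timestamp divisions again, at the cost of an extra pass over the data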
| gharchive/issue | 2020-12-26T13:28:07 | 2025-04-01T06:38:20.335217 | {
"authors": [
"jsignell",
"yohplala"
],
"repo": "dask/dask",
"url": "https://github.com/dask/dask/issues/7009",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
215547891 | Check and compare shapes in assert_eq
Add some checks for shapes in assert_eq. Particularly compare shapes before computing results and compare shapes between dask arrays and computed results.
Merging this soon if there are no further comments
LGTM
| gharchive/pull-request | 2017-03-20T20:37:48 | 2025-04-01T06:38:20.337099 | {
"authors": [
"jakirkham",
"mrocklin"
],
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/2101",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
231114790 | Add token kwarg to da.map_blocks
Add the token kwarg to map_blocks, mirroring the token kwarg of atop. If provided, this is the prefix of the output key, but not the key itself.
Fixes #2380.
We may want to rethink these keyword names at some point. It'd be a bit of a pain to deprecate since this is public api, but the current keywords aren't the clearest (existing for historical reasons).
If I was to redo them I'd probably have key_name be for specifying the full key (name currently), and key_prefix for just the prefix (token currently). If we were to change them we'd probably want to mirror this convention in dask.dataframe and dask.bag as well.
LGTM. Thanks @jcrist.
| gharchive/pull-request | 2017-05-24T17:28:19 | 2025-04-01T06:38:20.339640 | {
"authors": [
"jakirkham",
"jcrist"
],
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/2383",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
282116827 | COMPAT: Pandas 0.22.0 astype for categorical dtypes
Change in https://github.com/pandas-dev/pandas/pull/18710 caused a dask failure
when reading CSV files, as our .astype relied on the old (broken) behavior.
Closes https://github.com/dask/dask/issues/2996
All green. Since master is currently failing on this I'll merge this later today, but I'd appreciate it if someone could take a look.
Whoops, thanks.
Thanks @TomAugspurger.
| gharchive/pull-request | 2017-12-14T14:21:42 | 2025-04-01T06:38:20.342040 | {
"authors": [
"TomAugspurger",
"jakirkham"
],
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/2997",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1141715202 | Added compute method to raise error on use
[ ] Closes #8695
[ ] Tests added / passed
[ ] Passes pre-commit run --all-files
First (very) naive implementation of compute method
Can one of the admins verify this patch?
ok to test
It would also be great if you could add a test for this. Probably you can just add a few lines to this test: https://github.com/dask/dask/blob/2ed45454bde5a3406a0df9f492bf2917e3d15b37/dask/dataframe/tests/test_groupby.py#L103-L123
Thanks for taking this on @Dranaxel! I think this will really help people :)
| gharchive/pull-request | 2022-02-17T18:40:54 | 2025-04-01T06:38:20.344959 | {
"authors": [
"Dranaxel",
"GPUtester",
"jsignell"
],
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/8734",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
1573986398 | Do not ravel array when slicing with same shape
I'm not very familiar with the array API so I might be missing something obvious. However, I encountered a troublingly complex graph when doing an operation like array[array > 100]
import dask.array as da
arr = da.random.random((200,200,200), chunks=(20, 20, 20))
arr[arr > 1].dask
Under the hood this ravels the arrays before indexing. Raveling is effectively a rechunking operation, which is relatively expensive, especially for an operation that should be elemwise.
I'm wondering if I'm missing anything here, or whether there is a reason for this complexity.
Three test cases failed. I assume I'm missing something. I'd be happy to learn more about the topic. Maybe there is a way to get this working with minor modifications.
I ran into this a bit ago, and I think I tried something like what you're doing here, but also found it didn't work.
I vaguely recall it had to do with the order of the elements not matching with NumPy if you just do it blockwise on multidimensional arrays. Kind of like this warning mentions: https://github.com/dask/dask/blob/834a19eaeb6a5d756ca4ea90b56ca9ac943cb051/dask/array/slicing.py#L1149-L1152
Because x[x > 100] produces a 1D array when x is N-D, if you do the operation elemwise, each chunk will be flattened. But if you just concatenate all those 1D arrays, the elements will not be overall row-major order like you'd get from NumPy. If the chunks are squares, say, chunk 0 will contain elements from multiple rows, then chunk 1 will contain elements from multiple rows. You'd expect all the elements from row 0 to come before elements in row 1.
So it kind of makes sense that rechunking is involved, since there isn't a 1:1 mapping between chunks in the input and chunks in the output.
| gharchive/pull-request | 2023-02-07T09:32:32 | 2025-04-01T06:38:20.348912 | {
"authors": [
"fjetter",
"gjoseph92"
],
"repo": "dask/dask",
"url": "https://github.com/dask/dask/pull/9925",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
413246991 | Implementing Gofer suite of components as a third-party
Hi all,
I am a graduate student at RPI who is evaluating the gofer suite of auto-grading components as a possible architecture for implementing grading of Jupyter notebooks within courses at RPI.
I see this is a new project that hasn't yet been matured for use by third parties, and I wanted to reach out to see if you can help me get this working as a third party; in return I am willing to help mature the project (documentation, refactoring for generalization, etc.)
In looking through the code I have been able to update the gofer_nb.py script to work on my system, but I hit a wall when trying to figure out how to construct the docker container invoked by grade_lab.py.
Is there any existing documentation on how this docker image is created and how it can be customized for different courses and/or labs?
Thanks,
Stephan
@yuvipanda
you probably want to ask @vipasu, who has been doing most of the work on the gofer grader + service!
@zednis I created a sample directory with a sample Dockerfile and 3 notebooks with various levels of correctness. Check it out and maybe it will be able to clarify some things. The binder directory has the docker file. https://github.com/RPI-DATA/submitty
Hi! Sorry for joining the party late here. We actually have a public dockerfile here: https://github.com/data-8/materials-x18/blob/master/Dockerfile
As you can see, it's quite minimal. Apart from listing your packages, there is also a line that copies the tests/etc. for the course (contained in the repo) into the docker image. Because we have all of the assignments in the directory, we only need a single image rather than one per assignment (though this would also be a reasonable approach). Building currently happens manually since it shouldn't need to be rebuilt, but if assignments are changing frequently, then it might be worth automating the rebuild procedure.
Let me know if this helps and if you have additional questions!
Hi @zednis, hi everyone,
Did you get to implement the service for your courses at RPI?
We at Leuphana are trying to achieve something in the same direction. I found gofer_service and gofer_submit and thought they sounded perfect for integrating into our JupyterHub deployment.
... and in return I am willing to help mature the project (documentation, refactoring for generalization, etc.)
I would also be happy to contribute in this regard if there are intentions to further develop this extension
| gharchive/issue | 2019-02-22T05:15:12 | 2025-04-01T06:38:20.408336 | {
"authors": [
"choldgraf",
"franasa",
"jkuruzovich",
"vipasu",
"zednis"
],
"repo": "data-8/gofer_service",
"url": "https://github.com/data-8/gofer_service/issues/2",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
981902795 | Downloader nodes
Updates
Downloader nodes
It’s now possible to create nodes which will download resulting data in the browser.
Implementation details
DownloaderNode
Simple class that extends default Node type adding one extra property — downloadData, which is supposed to be filled by Node on run execution.
DownloadData
Class that holds:
data
mimeType of that data
fileName
fileExtension
This class has a pretty generic download method that can be reused for most common cases, so all the work required for each specific downloader node is to specify the right data, mimeType and fileName; everything about the downloading itself will be handled by DownloadData.
The class also supports generics, so it's possible to specify the type of the data downloaded; for now it's just any
this.downloadData = new DownloadData<any>({
data: [],
mimeType: 'application/json',
fileName: fileName,
fileExtension: 'json',
});
Diagram run
To check which nodes are supposed to run a download with the Diagram run method, we simply do a couple of checks
if (
// Check of whether node is downloader
node instanceof DownloaderNode &&
// Check of whether code runs in browser
// environment
(isBrowser || isJsDom)
) {
await node.downloadData.download();
}
DownloadJSON node
Can be run with dot-notated paths, so if we for example specify title as an attribute to download, we will get only those attributes from all features. The filename will be in this format
[node_name] [date_downloaded].json
Example of usage
Things that need to be added/fixed/considered
Tests
Find a good way of testing downloader nodes. Possibly this could be tests of correct DownloadData creation for the nodes in the core, plus e2e tests for the download functionality in the gui
Downloading in headless mode
For now the downloading functionality works only when the code runs in a browser environment; it may be a good feature to add separate downloading cases for both browser and Node environments, so that in a Node environment download will save data to some cross-platform data folder like dataDir/data-story/filename_date.json
Possibly, the DownloadData.download method could be implemented in gui as a callback or similar instead, since it is so heavily involved with the browser (document, createElement etc.)? I see we have a guard clause to check whether it is running in a browser environment, but it might anyway be a good separation to make
That's how I started implementing it at first, but that approach would require extra steps looping through the node list and applying a callback on the gui side, so I decided to do everything in one place with just environment checking.
gui/"file-saver": "^2.0.5", - this can be removed. Nice to implement this without a package
Yes, the current solution works pretty nicely, though it may be a good idea to take a look at StreamSaver, which can handle creating bigger files asynchronously by streaming directly to the file system
Add pretty print option on json
I think this should be an option in the form of a select parameter on the DownloadJSON node, so the user can choose whether they want to format the downloaded data or not.
When handling dot notation, we can make use of Obj.get helper
Yeah, it handles just what I've done manually; feels like a nice opportunity to decrease code verbosity
Might be another reason to move the actual downloading part to gui?
The problem here is that by moving the download function to the gui we'd still have to create the same testing workflow as with download isolated in the DataStory class in the core: still e2e tests of the downloading functionality in gui and tests of correct data creation in core.
Updates
It's now possible to specify multiple attributes which will then be downloaded; for example, if we had a config like this for a DownloadJSON node
we would get all the attributes we specified downloaded
That's how I started implementing it at first, but that approach would require extra steps looping through the node list and applying a callback on the gui side, so I decided to do everything in one place with just environment checking.
Ok that makes sense, lets keep it in core 👍
| gharchive/pull-request | 2021-08-28T18:45:06 | 2025-04-01T06:38:20.429862 | {
"authors": [
"Lenivaya",
"ajthinking"
],
"repo": "data-story-org/core",
"url": "https://github.com/data-story-org/core/pull/74",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
715625249 | ModuleNotFoundError: No module named 'databricks' when using apply_batch or apply_transform
Hi,
I'm using Spark in client mode and I've gotten Koalas working, but the apply_batch method seems to indicate that koalas is missing from the executor nodes. Is it really so that koalas must be explicitly installed on worker nodes? Or is it another issue / something simple I'm missing? Spark version: 2.4.3, Koalas version: 1.2.0.
Example:
kdf = ks.DataFrame({'a': range(0, 20000), 'i': range(0, 20000)}).set_index('i')
# --> works
kdf.head(10)
# --> also works
def test_apply(df):
return df
kdf.koalas.apply_batch(test_apply)
# --> fails, see error below
Error:
...
File "/var/lib/mesos/slaves/ad6bc800-ab3b-486e-bfa2-cf24ca7aebae-S1/frameworks/7461c35c-4cf7-47a5-ae69-3ba9362cee61-71216/executors/1/runs/71cf1309-75b9-4ac2-b14e-3abc04506810/spark-2.4.3-bin-datalake-hadoop-2.9.2-1/python/lib/pyspark.zip/pyspark/serializers.py", line 580, in loads
return pickle.loads(obj, encoding=encoding)
ModuleNotFoundError: No module named 'databricks'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:172)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Hi @maxpagels, yes, Koalas must be explicitly installed to worker nodes, as well as pandas, PyArrow, and numpy.
| gharchive/issue | 2020-10-06T12:28:06 | 2025-04-01T06:38:20.449181 | {
"authors": [
"maxpagels",
"ueshin"
],
"repo": "databricks/koalas",
"url": "https://github.com/databricks/koalas/issues/1826",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
129410044 | Add CSV option
We've got to the stage where our jobs are big enough that Redshift loads are becoming a major bottleneck with its AVRO performance problems. We're hoping Amazon will fix them, but in the meantime we've started running with a modified spark-redshift to allow us to import via CSV (Redshift COPYs are running at least 5 times faster with this for us).
It's probably not a good idea to merge this in at the moment - I'm not sure how robust it is to different data types / unusual characters in strings etc., and it would at least need some tests and documentation. And hopefully Amazon will soon fix AVRO import making it unnecessary anyway. But I thought I'd share my changes in case they are useful to anyone else.
Current coverage is 75.07%
Merging #165 into master will decrease coverage by -13.99% as of ed40281
@@ master #165 diff @@
======================================
Files 13 13
Stmts 649 662 +13
Branches 144 146 +2
Methods 0 0
======================================
- Hit 578 497 -81
Partial 0 0
- Missed 71 165 +94
Thanks for sharing. I'm glad to see that this was a relatively small change.
I agree that it's probably best to wait and see if Amazon speeds up Avro ingest; let's wait a couple of months and re-assess this feature later if there's significant interest / demand.
| gharchive/pull-request | 2016-01-28T10:27:29 | 2025-04-01T06:38:20.454377 | {
"authors": [
"JoshRosen",
"codecov-io",
"emlyn"
],
"repo": "databricks/spark-redshift",
"url": "https://github.com/databricks/spark-redshift/pull/165",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
251174036 | SQL Error: ОШИБКА: нет прав для изменения параметра "session_replication_role"
Hello, Maxim!
There is problem (in subject) that I can't understand.
Also I can't find default mechanism of authentication without password (under 'postgres' user)
Here is listing:
root@someserver:~# perl pgcompacttable.pl -U someuser -W somepass -d somedb -t history -v
[Fri Aug 18 09:39:02 2017] (somedb) Connecting to database
[Fri Aug 18 09:39:02 2017] (somedb) Postgress backend pid: 8163
Wide character in print at pgcompacttable.pl line 187.
[Fri Aug 18 09:39:02 2017] (somedb) SQL Error: ОШИБКА: нет прав для изменения параметра "session_replication_role"
[Fri Aug 18 09:39:02 2017] (somedb) Database handling interrupt.
[Fri Aug 18 09:39:02 2017] (somedb) Disconnecting from database
[Fri Aug 18 09:39:02 2017] Processing incomplete: 1 databases left.
Best regards,
Vladimir
There is a method to authenticate without a password using -h /path/to/unix/socket/dir (usually /tmp or something like /var/run/postgresql) under the postgres user. Not very convenient, I agree.
For changing session_replication_role you have to be superuser - this setting is used to disable all triggers in session so DB won't have to do additional (and potentially dangerous) work when pgcompacttable performing fake updates.
Please add the following to your code, to prevent the 'Wide character in print at' error message.
#!/usr/bin/perl
use strict;
use utf8;
binmode(STDOUT,':utf8');
Alexius2,
Please explain what you mean here:
For changing session_replication_role you have to be superuser
Because I run my 'perl pgcompacttable.pl.....' with root. :)
My first message was "....root@someserver:~#...."
The default superuser in the DB is postgres. So to connect as superuser you need to switch to the postgres OS user and connect via the unix socket, or set a password for the postgres user and connect with -U postgres, or set the authentication method to trust in pg_hba if it's a local/test machine.
Dear Alexius2,
Thanks a lot, it works for me!
IMHO, this hint should be written in --man. (about -h parameter and socket path)
also I was needed to enable pgstattuple, and it was first time I faced with, so please AUTHOR, if you're reading this - add some more info about how to use.
Some kind like this:
If you're not sure about pgstattuple was installed do this:
su - postgres
psql
\c
create extension pgstattuple;
Best regards!
Vladimir
| gharchive/issue | 2017-08-18T08:23:04 | 2025-04-01T06:38:20.510863 | {
"authors": [
"Rodgelius",
"alexius2"
],
"repo": "dataegret/pgcompacttable",
"url": "https://github.com/dataegret/pgcompacttable/issues/13",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
2392636292 | Correction for raw affiliation Univ Rennes, École Nationale Supérieure de Chimie de Rennes, CNRS, ISCR – UMR6226, F-35000 Rennes, France
Correction needed for raw affiliation Univ Rennes, École Nationale Supérieure de Chimie de Rennes, CNRS, ISCR – UMR6226, F-35000 Rennes, France
raw_affiliation_name: Univ Rennes, École Nationale Supérieure de Chimie de Rennes, CNRS, ISCR – UMR6226, F-35000 Rennes, France
new_rors: 01h0ffh48;015m7wh34;02feahw73;00adwkx90
previous_rors: 01h0ffh48;015m7wh34;02feahw73
works_examples: W4376126242;W4311302651;W4319311796;W4384153568
contact: 36a74fcd2bbf761168ab07c929a31b5b:326f7a3655c754b0818cd9cbacbcb493 @ univ-rennes.fr
This issue was accepted and ingested by the OpenAlex team on 2024-10-10. The new affiliations should be visible within the next 7 days.
| gharchive/issue | 2024-07-05T13:41:36 | 2025-04-01T06:38:20.513584 | {
"authors": [
"dataesri"
],
"repo": "dataesr/openalex-affiliations",
"url": "https://github.com/dataesr/openalex-affiliations/issues/2250",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2551170192 | Correction for raw affiliation Canadian Rivers Institute, Fredericton, NB, Canada, E3B 5A3; School of Environment and Sustainability, University of Saskatchewan, Saskatoon, SK, Canada, S7N 5C8
Correction needed for raw affiliation Canadian Rivers Institute, Fredericton, NB, Canada, E3B 5A3; School of Environment and Sustainability, University of Saskatchewan, Saskatoon, SK, Canada, S7N 5C8
raw_affiliation_name: Canadian Rivers Institute, Fredericton, NB, Canada, E3B 5A3; School of Environment and Sustainability, University of Saskatchewan, Saskatoon, SK, Canada, S7N 5C8
new_rors: 010x8gc63;05nkf0n29
previous_rors: 010x8gc63
works_examples: W3035374088
contact: 96f5c8d7bcc1169187bc3130133af506:08c5533f @ ourresearch.org
This issue was accepted and ingested by the OpenAlex team on 2024-10-10. The new affiliations should be visible within the next 7 days.
| gharchive/issue | 2024-09-26T17:32:02 | 2025-04-01T06:38:20.516131 | {
"authors": [
"dataesri"
],
"repo": "dataesr/openalex-affiliations",
"url": "https://github.com/dataesr/openalex-affiliations/issues/4688",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2679680763 | Correction for raw affiliation CNRS, ENSL, INRI A, UCBL, Univ. Lyon, Lab. LIP (UMR 5668), École Normale Supérieure de Lyon, LIP, 46 allée d'Italie, 69364 Lyon Cedex 07, France
Correction needed for raw affiliation CNRS, ENSL, INRI A, UCBL, Univ. Lyon, Lab. LIP (UMR 5668), École Normale Supérieure de Lyon, LIP, 46 allée d'Italie, 69364 Lyon Cedex 07, France
raw_affiliation_name: CNRS, ENSL, INRI A, UCBL, Univ. Lyon, Lab. LIP (UMR 5668), École Normale Supérieure de Lyon, LIP, 46 allée d'Italie, 69364 Lyon Cedex 07, France
new_rors: 02feahw73;04zmssz18;029brtt94;04msnz457
previous_rors: 02feahw73;04zmssz18;029brtt94
works_examples: W2118926140
contact: 3a668eebbfc087bfccde57b6535e3cf8:57108dd118b95fe9668c0d5e860a @ ens-lyon.fr
This issue was accepted and ingested by the OpenAlex team on 2024-12-19. The new affiliations should be visible within the next 7 days.
| gharchive/issue | 2024-11-21T14:31:53 | 2025-04-01T06:38:20.518797 | {
"authors": [
"dataesri"
],
"repo": "dataesr/openalex-affiliations",
"url": "https://github.com/dataesr/openalex-affiliations/issues/8448",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2482729728 | Migrate SQLFederationOptimizerRule to OptimizerRule::rewrite
DataFusion 40 changed the OptimizerRule format, ref apache/datafusion#9954. We'll need to migrate over.
We did the migration in this PR #64
| gharchive/issue | 2024-08-23T09:31:46 | 2025-04-01T06:38:20.549711 | {
"authors": [
"backkem",
"hozan23"
],
"repo": "datafusion-contrib/datafusion-federation",
"url": "https://github.com/datafusion-contrib/datafusion-federation/issues/46",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1315714292 | HitTriage
Workflow:
[ ] Ingestion
[ ] Enrichment
[ ] Filtering
[ ] Submission
This issue has been mirrored in Jira: https://reddata.atlassian.net/browse/GROK-14812
This issue has been mirrored in Jira: https://reddata.atlassian.net/browse/GROK-16071
| gharchive/issue | 2022-07-23T18:02:38 | 2025-04-01T06:38:20.551982 | {
"authors": [
"dnillovna",
"skalkin"
],
"repo": "datagrok-ai/public",
"url": "https://github.com/datagrok-ai/public/issues/831",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
938556502 | The sqlalchemy recipe for DB2 source is not respecting schema/table allow / deny pattern
Describe the bug
When using a sqlalchemy-based recipe to fetch metadata from a DB2 source, the schema/table pattern information provided is not being used. All of the tables' metadata is being fetched.
To Reproduce
source:
type: sqlalchemy
config:
connect_uri: "db2+ibm_db://:<my-password@host:port/"
platform: "DB2-ZOS"
table_pattern:
allow:
- "schema".<table_name>"
sink:
type: "console"
Additional context
I am using python:3.8.10 docker image with ibm_db and ibm_db_sa python libraries installed.
The allow/deny patterns use regexes - could that possibly be the issue?
It's hard to diagnose given that the recipe above has been anonymized, but logs when run with datahub --debug ingest ... would be helpful.
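For illustration, this is roughly how a regex-style allow pattern behaves; a plain-Python approximation of the matching, not DataHub's actual code, and the schema/table names below are made up:
import re

allow = r"MYSCHEMA\.MYTABLE.*"    # escape the dot, since patterns are regexes
for name in ["MYSCHEMA.MYTABLE_A", "OTHERSCHEMA.MYTABLE_A"]:
    print(name, bool(re.match(allow, name)))
# MYSCHEMA.MYTABLE_A True
# OTHERSCHEMA.MYTABLE_A False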
Hey @SwadX - did you figure this one out?
Closing due to inactivity. Please open new issue if issue persists with latest releases
| gharchive/issue | 2021-07-07T06:59:28 | 2025-04-01T06:38:20.561746 | {
"authors": [
"SwadX",
"anshbansal",
"hsheth2"
],
"repo": "datahub-project/datahub",
"url": "https://github.com/datahub-project/datahub/issues/2839",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1351947629 | Change documentation for curl command that creates a user group
Describe the bug
When using the curl command to create a user group taken from the datahub documentation, it fails as raised in #5161.
The issue was identified (corpUser -> corpuser) but the documentation was not updated.
The correct command should be
curl 'http://localhost:8080/entities?action=ingest' -X POST --data '{
"entity":{
"value":{
"com.linkedin.metadata.snapshot.CorpGroupSnapshot":{
"urn":"urn:li:corpGroup:dev",
"aspects":[
{
"com.linkedin.identity.CorpGroupInfo":{
"email":"dev@linkedin.com",
"admins":[
"urn:li:corpuser:jdoe"
],
"members":[
"urn:li:corpuser:datahub",
"urn:li:corpuser:jdoe"
],
"groups":[
]
}
}
]
}
}
}
}'
I can probably change this myself; however, I'm adding it here in case I forget
| gharchive/issue | 2022-08-26T08:39:23 | 2025-04-01T06:38:20.564239 | {
"authors": [
"hugwi"
],
"repo": "datahub-project/datahub",
"url": "https://github.com/datahub-project/datahub/issues/5735",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2711387391 | feat(ingestion/sql-common): add column level lineage for external tables
Checklist
[ ] The PR conforms to DataHub's Contributing Guideline (particularly Commit Message Format)
[ ] Links to related issues (if applicable)
[ ] Tests for the changes have been added/updated (if applicable)
[ ] Docs related to the changes have been added/updated (if applicable). If a new feature has been added a Usage Guide has been added for the same.
[ ] For any breaking change/potential downtime/deprecation/big changes an entry has been made in Updating DataHub
The SqlParsingAggregator has a add_known_lineage_mapping that generates CLL based on the schema of the downstream. Ideally we'd centralize on using that as the external lineage mechanism.
Long term, I want to move sql_common.py to use the SqlParsingAggregator instead of the older SqlParsingBuilder. Internal ticket tracking that - https://linear.app/acryl-data/issue/ING-779/refactor-move-sql-common-to-use-sqlparsingaggregator
In the short term, I'm ok with having this CLL generation logic, although all the complexity of the simplify_field_path logic worries me a bit on this PR.
Now that https://github.com/datahub-project/datahub/pull/12220 has been merged, we can make this implementation be a bit cleaner.
| gharchive/pull-request | 2024-12-02T10:29:48 | 2025-04-01T06:38:20.569169 | {
"authors": [
"acrylJonny",
"hsheth2"
],
"repo": "datahub-project/datahub",
"url": "https://github.com/datahub-project/datahub/pull/11997",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1577739293 | feat: add chart entities to similar browsepath as dashboards
If the ingested workspaces have multiple reports in them, usually the result is that there are a ton of ingested chart entities. Although the common use case might not be to find a report or a report's datasource by finding the chart first, I think it makes sense to extend the browsepath behavior that the current implementation has for dashboards to charts as well.
Checklist
[ ] The PR conforms to DataHub's Contributing Guideline (particularly Commit Message Format)
[ ] Links to related issues (if applicable)
[ ] Tests for the changes have been added/updated (if applicable)
[ ] Docs related to the changes have been added/updated (if applicable). If a new feature has been added a Usage Guide has been added for the same.
[ ] For any breaking change/potential downtime/deprecation/big changes an entry has been made in Updating DataHub
Thanks for the PR @looppi ! We are reviewing on our side. On the surface looking good
| gharchive/pull-request | 2023-02-09T11:44:59 | 2025-04-01T06:38:20.572460 | {
"authors": [
"jjoyce0510",
"looppi"
],
"repo": "datahub-project/datahub",
"url": "https://github.com/datahub-project/datahub/pull/7293",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1550069676 | Adjusting pyopenephys requirement for pypi publication
In this PR, I update the pyopenephys requirement to reflect our merged PR and PyPI publication.
Thanks @CBroz1! Now that the requirements.txt are fixed, please make a release of version 0.2.3 so that we can get an updated version of element-array-ephys published to PyPI.
| gharchive/pull-request | 2023-01-19T22:56:50 | 2025-04-01T06:38:20.574728 | {
"authors": [
"CBroz1",
"kabilar"
],
"repo": "datajoint/element-array-ephys",
"url": "https://github.com/datajoint/element-array-ephys/pull/125",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2353958792 | Test release
test
datalens-ui 0.1741.0
datalens-us 0.209.0
First e2e check, failed:
✘ 311 [basic] › suites/wizard/export/pivotTable.test.ts:36:17 › Wizard - export. Summary table › CSV (retry #1) (15.3s)
✘ 325 [basic] › suites/wizard/geolayer/geopoints/addTooltip.test.ts:10:17 › Wizard - Geo Points › Tooltip appearance (retry #1) (13.7s)
| gharchive/pull-request | 2024-06-14T19:27:36 | 2025-04-01T06:38:20.623351 | {
"authors": [
"Marginy605"
],
"repo": "datalens-tech/datalens-ui",
"url": "https://github.com/datalens-tech/datalens-ui/pull/1127",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1030305270 | "Error: {equatiomatic} only supports models where each random effect has a corresponding fixed effect."
Hi, I am having an issue trying to use equatiomatic with lme4, please see below:
d =
dplyr::tribble(
~study, ~treat, ~n, ~event, ~control,
1, 0, 377, 113, 1,
1, 1, 377, 128, 0,
2, 0, 40, 4, 1,
2, 1, 41, 6, 0,
3, 0, 100, 20, 1,
3, 1, 101, 22, 0,
4, 0, 1010, 201, 1,
4, 1, 1001, 241, 0
)
m1 =
lme4::glmer(
cbind(event, n - event) ~ 1 + factor(treat) + (control + treat - 1|study),
data=d,
family=binomial(link="logit"))
summary(m1)
equatiomatic::extract_eq(m1)
I get this error message: Error: {equatiomatic} only supports models where each random effect has a corresponding fixed effect. You specified the following variables as randomly varying without including the corresponding fixed effect: control, treat
Would it be possible to add support for this type of model?
Thanks!
Hi! I see. The model was extracted from this article, section: "Model 6: the “Van Houwelingen bivariate” model".
Honestly, I am an inexperienced medical student, so I am not sure I can be of any help, but the article above describes the model.
Thanks, I'll take a look and see if I can figure it out.
| gharchive/issue | 2021-10-19T13:23:26 | 2025-04-01T06:38:20.635891 | {
"authors": [
"arthur-albuquerque",
"datalorax"
],
"repo": "datalorax/equatiomatic",
"url": "https://github.com/datalorax/equatiomatic/issues/204",
"license": "CC-BY-4.0",
"license_type": "permissive",
"license_source": "github-api"
} |
216705598 | influxdb duplicate field stored
I wrote a KCQL statement like insert into record select * from record WITHTAG (ptype, pid).
When I run this and look at the data in InfluxDB, there are two fields for each: ptype and ptype_1, pid and pid_1. ptype and pid are tags, while ptype_1 and pid_1 are not. Why are the duplicate ptype_1 and pid_1 stored? Is there any way to avoid this?
We don't duplicate the field names; we will investigate the issue.
Thank you. I use the stream-reactor-0.2.4-3.1.1.tar.gz package with the following settings in the Confluent Connect config.
key.converter=org.apache.kafka.connect.json.JsonConverter value.converter=org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable=false value.converter.schemas.enable=false
kafka-version: kafka_2.11-0.10.2.0
jre:1.8.0
influxdb 1.2.0
@iyuq
i can confirm we don't actually duplicate the item. i have been trying to get to the influxdb code where they would do this but i can't find it. But let's go over what you want to do, because i have a pretty good idea what's happening.
You have a row with columns ptype, pid and then you want to add the same names as tags. Think of a database query joining two tables and returning the same field twice, table1.A and table2.A. If you run that in an RDBMS you will always see the second one returned as A_1. So here they would most likely do the same thing. How about you check 'show tag values ...' to only look at the tags?
To be honest, I don't see the value of adding tags which are a copy of fields that are already there. Why would you duplicate data, plus in this case it is not a tag. If you want to have those fields only as tags you do: SELECT * FROM record IGNORE ptype, pid WITHTAG(ptype, pid).
Hope this helps.
I have tried this, but got the following error instead.
[2017-03-27 13:10:56,146] INFO InfluxSinkConfig values:
connect.influx.connection.database = mydb
connect.influx.connection.password = [hidden]
connect.influx.connection.url = http://localhost:8086
connect.influx.connection.user = root
connect.influx.consistency.level = ALL
connect.influx.error.policy = THROW
connect.influx.max.retires = 20
connect.influx.retention.policy = autogen
connect.influx.retry.interval = 60000
connect.influx.sink.kcql = INSERT INTO record SELECT * FROM record IGNORE ptype, pid WITHTAG (ptype, pid) WITHTIMESTAMP time
(com.datamountaineer.streamreactor.connect.influx.config.InfluxSinkConfig:180)
[2017-03-27 13:10:56,146] INFO Sink task WorkerSinkTask{id=influx-record-sink-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:222)
[2017-03-27 13:10:56,251] INFO Discovered coordinator localhost:9092 (id: 2147483647 rack: null) for group connect-influx-record-sink. (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:555)
[2017-03-27 13:10:56,251] INFO Revoking previously assigned partitions [] for group connect-influx-record-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:333)
[2017-03-27 13:10:56,252] INFO (Re-)joining group connect-influx-record-sink (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:381)
[2017-03-27 13:10:56,254] INFO Successfully joined group connect-influx-record-sink with generation 363 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:349)
[2017-03-27 13:10:56,255] INFO Setting newly assigned partitions [record-0] for group connect-influx-record-sink (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:225)
[2017-03-27 13:10:56,259] ERROR Task influx-record-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerSinkTask:404)
java.lang.IllegalArgumentException: ptype can't be found on the values list:ip,bit,ua,browser,os,query,deviceType,agent,resolution,origin,cookieEnabled,region,time,title,lang,sid,device
at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$$anonfun$fromMap$1$$anonfun$apply$4.apply(TagsExtractor.scala:52)
at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$$anonfun$fromMap$1$$anonfun$apply$4.apply(TagsExtractor.scala:52)
at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
at scala.collection.AbstractMap.getOrElse(Map.scala:59)
at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$$anonfun$fromMap$1.apply(TagsExtractor.scala:52)
at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$$anonfun$fromMap$1.apply(TagsExtractor.scala:49)
at scala.collection.immutable.Stream.foldLeft(Stream.scala:610)
at com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$.fromMap(TagsExtractor.scala:49)
at com.datamountaineer.streamreactor.connect.influx.writers.InfluxBatchPointsBuilderFn$$anonfun$6.apply(InfluxBatchPointsBuilderFn.scala:126)
at com.datamountaineer.streamreactor.connect.influx.writers.InfluxBatchPointsBuilderFn$$anonfun$6.apply(InfluxBatchPointsBuilderFn.scala:117)
at scala.Option.map(Option.scala:146)
@iyuq ptype is not a field in the message schema: ptype can't be found on the values list: ip,bit,ua,browser,os,query,deviceType,agent,resolution,origin,cookieEnabled,region,time,title,lang,sid,device
@andrewstevenson @stheppi I am not using schema messages, I am using JSON messages without a schema. I'm sure it's a bug now; I ran the following test using the influx-java-client:
import org.influxdb.dto.Point;
import java.util.concurrent.TimeUnit;

// Same name ("atag") used as both a tag and a field on the same point:
Point point1 = Point.measurement("cpu")
        .time(System.currentTimeMillis(), TimeUnit.MILLISECONDS)
        .tag("atag", "a")
        .addField("atag", "a")
        .addField("idle", 90L)
        .addField("user", 9L)
        .addField("system", 1L)
        .build();
and I got both atag and atag_1 in the cpu measurement, so you must be adding the tag field via both .tag and .addField.
We are doing as you instructed via KCQL, nothing more, nothing less.
First: "Select *" => this picks up all the fields, and each one will result in an addField.
Then you say "withtag(ptype)" => this translates into an addTag. So the code does what it was instructed to do in the KCQL; therefore it is not a bug. What I suggested with ignoring fields should be the way to go in your case, but it seems we have to relax the validation rules (which I thought we did already).
If you know all your fields you can configure KCQL like: Insert into record select field1,field2,.. from record withtag(ptype,pid)
But the code throws an error when the tag fields are not in the select field list; that's the problem, as I said before. So when I use select * from record withtag(ptype, pid), I get two duplicate fields. When I use select * from record IGNORE ptype, pid withtag(ptype, pid), just the same as Insert into record select field1,field2,.. from record withtag(ptype,pid), I get the error java.lang.IllegalArgumentException: ptype can't be found on the values list:ip,bit,ua,browser,os,query,deviceType,agent,resolution,origin,cookieEnabled,region,time,title,lang,sid,device. Either way I can't get what I actually want. So if this is a bug, then the field check that throws the error must be the bug. @stheppi
@iyuq Are you sure that you have ptype, consistently in your json for every message?
@iyuq We'll certainly relax the check that ignored columns are present in the message; it shouldn't be an error but a warning.
@andrewstevenson yeah, I'm pretty sure about that, because when I add ptype to the select fields the error disappears. I will have someone who knows Scala take a look at the code. Thank you all for your help.
@andrewstevenson you were right all along: the stack trace and error clearly show that ptype is not present in the JSON payload!!
I am updating the code to avoid the error and just add a big warning.
@iyuq : you would need to take the latest and build it yourself before we release the next version. the code has been changed to avoid throwing exceptions if the tag is not present (like in your case).
@stheppi @andrewstevenson thank you!
@stheppi I built the new version and found out it is not what I need either. An InfluxDB tag is like an indexed field; the new version no longer throws an error, but it also can't insert the tag into InfluxDB. I got the following warning messages:
[2017-03-28 17:26:17,236] WARN Tag can't be set because field:ptype can't be found or is null on the incoming value. topic=record;partition=0;offset=38 (com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$:79)
[2017-03-28 17:26:17,237] WARN Tag can't be set because field:pid can't be found or is null on the incoming value. topic=record;partition=0;offset=38 (com.datamountaineer.streamreactor.connect.influx.writers.TagsExtractor$:79)
I think the correct way is to extract both the selected fields and the tag fields from the message, then add the selected fields (excluding the tag fields) to the builder as fields, and add the tag fields to the builder as tags.
I am not following what you said at all.
Let me explain kcql because i think it adds value.
'Withtag field1' => means the code looks at the payload for field1 and adds it to the influxdb point as a tag.
From your message field1 doesn't exist in the kafka message value.
Are those two fields:ptype and pid part of the kafka message key ? If so, we have no support for such extraction at the moment.
@stheppi ptype and pid are just the same as the other fields; they are keys in the JSON value of the Kafka message, not the Kafka message key.
Well i can tell you we pick them up if they are present in the json message. It looks like they are not. Look at the 38th message on your topic and you see they are not there.
What I found out is that only the fields in the select list are extracted from the message, which I think is a bug; the tag fields should also be picked up.
@stheppi connect.influx.sink.kcql = INSERT INTO record SELECT * FROM record IGNORE ptype, pid WITHTAG (ptype, pid) WITHTIMESTAMP time
@iyuq Can you dump the 38th message from Kafka via the console consumer and post the message, because as @stheppi said, ptype and pid are not in the payload.
Okay, here are all my messages.
{"time":1490340519980,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490346078493,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490346569624,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490423927960,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424086196,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424259157,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424604375,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490425087586,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490339822519,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490340519980,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490346078493,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490346569624,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490423927960,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424086196,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424259157,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424604375,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490425087586,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490591217743,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490591734702,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490339822519,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490340519980,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490346078493,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490346569624,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490423927960,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424086196,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424259157,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490424604375,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490425087586,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490591217743,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490591734702,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490689409863,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490689541088,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490690792524,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490692116790,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490692147188,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490692269944,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490692363992,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490692622596,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
{"time":1490693176034,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
the 38th message is
{"time":1490692622596,"sid":"SymvD4Ghg","ptype":"lp","pid":"B1xHp7f3e","origin":"https://p.hecaila.com/l/B1xHp7f3e","resolution":"1440x900","bit":"24-bit","lang":"zh-CN","cookieEnabled":1,"title":"未项目","ua":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.110 Safari/537.36","ip":"0.0.0.0","query":"ck\u003d1\u0026ln\u003dzh-CN\u0026cl\u003d24-bit\u0026ds\u003d1440x900\u0026tt\u003d%E6%9C%AA%E9%A1%B9%E7%9B%AE\u0026u\u003dhttps%3A%2F%2Fp.hecaila.com%2Fl%2FB1xHp7f3e\u0026tp\u003dlp\u0026id\u003dB1xHp7f3e\u0026rnd\u003d620884\u0026p\u003d0\u0026t\u003d0","region":"上海","browser":"Chrome","device":"Apple Macintosh","os":"Mac OS X 10.12.3","agent":"Chrome","deviceType":"Desktop"}
@stheppi The problem is fixed after today's commit, thank you very much for helping me. Sorry to say I've hit another problem: WITHTIMESTAMP doesn't take effect and the system time is used instead.
it is the order in KCQL:
SELECT * FROM $topic IGNORE ptype, pid WITHTIMESTAMP time WITHTAG (ptype, pid)
@stheppi Thank you!
| gharchive/issue | 2017-03-24T08:56:15 | 2025-04-01T06:38:20.682962 | {
"authors": [
"andrewstevenson",
"iyuq",
"stheppi"
],
"repo": "datamountaineer/stream-reactor",
"url": "https://github.com/datamountaineer/stream-reactor/issues/150",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
1415228039 | Problem with copying generated columns
Anonymizer throws error when dealing with table with generated columns
Error: db error: ERROR: column "tsv" is a generated column
DETAIL: Generated columns cannot be used in COPY.
pg_dump on its own works fine.
postgresql and pg_dump version is 12.12
pg_datanymizer version is 0.6.0
@ruslan-kurbanov-jr Thanks, it's an interesting problem... We'll research it as soon as possible.
The main approach we use is to replace the COPY stage in the dump... We'll see how pg_dump handles it when working with similar columns.
I just ran into this problem as well. Is there a solution?
pg_dump (PostgreSQL) 16.0
pg_datanymizer version is 0.6.0
A workaround approach for this could be to create a view of the table and dump that instead (using table filters). Do a find and replace on the dump file before importing to rename the view to the table name.
It seems like PG is supposed to work if the dump value for the generated column is default: https://stackoverflow.com/questions/64600614/restoring-pg-database-from-dump-fails-due-to-generated-columns
Tried adding the --inserts pg_dump argument but it still used COPY. I was hoping that since the error specifically mentioned COPY, then using --inserts would allow it to work.
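For illustration only, the rename step in the view workaround above could be scripted along these lines (a rough sketch; the view name, table name and file paths are made up):
# Rewrite the dumped view name back to the real table name before importing.
with open("dump.sql", "r", encoding="utf-8") as f:
    dump = f.read()

dump = dump.replace("users_anon_view", "users")

with open("dump_renamed.sql", "w", encoding="utf-8") as f:
    f.write(dump)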
Thanks a lot! Is a new release going to be published?
| gharchive/issue | 2022-10-19T15:54:37 | 2025-04-01T06:38:20.689207 | {
"authors": [
"BillBuilt",
"akirill0v",
"gregwebs",
"ruslan-kurbanov-jr"
],
"repo": "datanymizer/datanymizer",
"url": "https://github.com/datanymizer/datanymizer/issues/189",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
477909763 | [Fix #85] use leading underscores in cached instance variable name
Before you submit a pull request, please make sure you have done the following:
[x] read and know items from the Contributing Guide
[x] add a description of the problem you're trying to solve (short summary from related issue)
[x] verified that cops are ordered by alphabet
[x] add a note to the style guide docs (if it needs)
[x] add a note to the changelog file
[x] the commit message contains the number of the related issue (if present)
and the word Fix if this PR closes the related issue
[x] squash all commits before submitting to review
@lazycoder9 please rebase your changes
| gharchive/pull-request | 2019-08-07T12:51:08 | 2025-04-01T06:38:20.697924 | {
"authors": [
"lazycoder9",
"roman-dubrovsky"
],
"repo": "datarockets/datarockets-style",
"url": "https://github.com/datarockets/datarockets-style/pull/86",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2501470385 | Differentiate display from user ID in autocomplete
The autocomplete from this issue:
#18
Isn't fit for purpose on Datasette Cloud, where user IDs are integers.
Need to be able to display their "display names" in lists of e.g. members of a group, and also autocomplete against those when adding users to groups or to table permissions.
Also: the <datalist> autocomplete really isn't very good - it still allows freeform text input and, at least on Firefox, shows a whole bunch of irrelevant suggestions mixed in with the "valid" options:
I can use the actors_from_ids mechanism to show better actors, and I can update the design of the datasette_acl_actor_ids plugin hook to return whole actors, not just IDs, and implement one of the JavaScript autocomplete things I considered in https://github.com/datasette/datasette-acl/issues/18#issuecomment-2323460110
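As a rough sketch (not the actual plugin code), resolving display information for a list of group member IDs with that mechanism might look something like this; the helper name and the fallback keys are assumptions, and it assumes actors_from_ids returns a mapping of ID to actor dictionary:
async def member_display_names(datasette, member_ids):
    # Ask Datasette (and any actor-providing plugins) for full actor details.
    actors = await datasette.actors_from_ids(member_ids)
    names = {}
    for member_id in member_ids:
        actor = actors.get(member_id) or {"id": member_id}
        # Fall back to the raw ID when no display-style key is available.
        names[member_id] = actor.get("name") or actor.get("username") or actor["id"]
    return names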
From https://alphagov.github.io/accessible-autocomplete/#progressive-enhancement
If your autocomplete is meant to select from a small list of options (a few hundred), we strongly suggest that you render a <select> menu on the server, and use progressive enhancement.
Instances with more than a few hundred users will be rare, I'm going to do that.
I tried getting the alphagov one working - it's pretty glitchy for me:
I had to hack the source code and add the selected text here to disable the 1Password icon on it too:
Got this working with https://projects.verou.me/awesomplete/
I'd prefer it if selecting an item submitted the form; you have to press Enter twice right now.
Here's that prototype so far (the custom event thing doesn't work though):
diff --git a/datasette_acl/templates/manage_acl_group.html b/datasette_acl/templates/manage_acl_group.html
index b293f42..9bbf7f3 100644
--- a/datasette_acl/templates/manage_acl_group.html
+++ b/datasette_acl/templates/manage_acl_group.html
@@ -3,6 +3,8 @@
{% block title %}{{ name }}{% endblock %}
{% block extra_head %}
+<script src="{{ urls.static_plugins("datasette-acl", "awesomplete.min.js") }}"></script>
+<link rel="stylesheet" href="{{ urls.static_plugins("datasette-acl", "awesomplete.css") }}">
<style>
.remove-button {
background-color: #fff;
@@ -75,7 +77,7 @@
<form action="{{ request.path }}" method="post">
<input type="hidden" name="csrftoken" value="{{ csrftoken() }}">
- <p><label>User ID <input type="text" data-1p-ignore name="add"{% if valid_actor_ids %} list="actor-ids"{% endif %}></label> <input type="submit" value="Add member"></p>
+ <p><label>User ID <input id="id_add" type="text" data-minchars="1" data-1p-ignore name="add"{% if valid_actor_ids %} list="actor-ids"{% endif %}></label> <input type="submit" value="Add member"></p>
{% if valid_actor_ids %}
<datalist id="actor-ids">{% for actor_id in valid_actor_ids %}
<option value="{{ actor_id }}"></option>
@@ -118,10 +120,16 @@
{% endif %}
<script>
-// Focus on add input if we just added a member
-if (window.location.hash === '#focus-add') {
- document.querySelector('input[name="add"]').focus();
-}
+document.addEventListener('DOMContentLoaded', function() {
+ document.querySelector('#id_add').addEventListener('awesomplete-select', (ev) => {
+ console.log(ev);
+ // this.closest('form').submit();
+ });
+ // Focus on add input if we just added a member
+ if (window.location.hash === '#focus-add') {
+ document.querySelector('input[name="add"]').focus();
+ }
+});
</script>
{% endblock %}
I'm going to try https://choices-js.github.io/Choices/
I like Choices best:
That prototype so far:
diff --git a/datasette_acl/templates/manage_acl_group.html b/datasette_acl/templates/manage_acl_group.html
index b293f42..12b1c0b 100644
--- a/datasette_acl/templates/manage_acl_group.html
+++ b/datasette_acl/templates/manage_acl_group.html
@@ -3,6 +3,8 @@
{% block title %}{{ name }}{% endblock %}
{% block extra_head %}
+<script src="{{ urls.static_plugins("datasette-acl", "choices-9.0.1.min.js") }}"></script>
+<link rel="stylesheet" href="{{ urls.static_plugins("datasette-acl", "choices-9.0.1.min.css") }}">
<style>
.remove-button {
background-color: #fff;
@@ -75,13 +77,17 @@
<form action="{{ request.path }}" method="post">
<input type="hidden" name="csrftoken" value="{{ csrftoken() }}">
- <p><label>User ID <input type="text" data-1p-ignore name="add"{% if valid_actor_ids %} list="actor-ids"{% endif %}></label> <input type="submit" value="Add member"></p>
- {% if valid_actor_ids %}
- <datalist id="actor-ids">{% for actor_id in valid_actor_ids %}
- <option value="{{ actor_id }}"></option>
- {% endfor %}
- </datalist>
- {% endif %}
+ <div style="display: flex; align-items: center; gap: 10px; max-width: 500px">
+ <label for="id_add" style="flex-shrink: 0;">User ID</label>
+ <div class="choices" data-type="select-one" tabindex="0" style="flex-grow: 1;">
+ <select id="id_add" name="add">
+ <option></option>
+ {% for actor_id in valid_actor_ids %}
+ <option>{{ actor_id }}</option>
+ {% endfor %}
+ </select>
+ </div>
+ </div>
</form>
{% endif %}
{% endif %}
@@ -118,10 +124,17 @@
{% endif %}
<script>
-// Focus on add input if we just added a member
-if (window.location.hash === '#focus-add') {
- document.querySelector('input[name="add"]').focus();
-}
+document.addEventListener('DOMContentLoaded', function() {
+ const select = document.querySelector('#id_add');
+ const choices = new Choices(select);
+ select.addEventListener('addItem', (ev) => {
+ ev.target.closest('form').submit()
+ });
+ // Focus on add input if we just added a member
+ if (window.location.hash === '#focus-add') {
+ choices.showDropdown();
+ }
+});
</script>
{% endblock %}
Claude artifact showing what it could look like if I use this rather than the table of checkboxes:
https://claude.site/artifacts/3b83782b-74d3-4759-ac68-523fe2a905eb
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Group Permissions UI</title>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/choices.js/10.2.0/choices.min.css">
<style>
body {
font-family: Arial, sans-serif;
margin: 20px;
background-color: #f0f0f0;
}
.container {
background-color: white;
padding: 20px;
border-radius: 8px;
box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
}
h1 {
color: #333;
}
.group-row {
display: flex;
align-items: center;
margin-bottom: 10px;
padding: 10px;
background-color: #f9f9f9;
border-radius: 4px;
}
.group-name {
width: 150px;
font-weight: bold;
color: #4a4a4a;
}
select[multiple] {
min-width: 200px;
}
/* Choices.js custom styles */
.choices__inner {
min-height: 30px;
padding: 4px 7.5px 4px 3.75px;
}
.choices__list--multiple .choices__item {
font-size: 12px;
padding: 2px 5px;
margin-bottom: 0;
}
</style>
</head>
<body>
<div class="container">
<h1>Group Permissions</h1>
<div id="groups-container">
<div class="group-row">
<span class="group-name">staff (1)</span>
<select multiple id="select-staff">
<option value="insert-row">insert-row</option>
<option value="delete-row">delete-row</option>
<option value="update-row">update-row</option>
<option value="alter-table" selected>alter-table</option>
<option value="drop-table">drop-table</option>
</select>
</div>
<div class="group-row">
<span class="group-name">devs (5)</span>
<select multiple id="select-devs">
<option value="insert-row" selected>insert-row</option>
<option value="delete-row" selected>delete-row</option>
<option value="update-row" selected>update-row</option>
<option value="alter-table" selected>alter-table</option>
<option value="drop-table">drop-table</option>
</select>
</div>
<div class="group-row">
<span class="group-name">newgroup (0)</span>
<select multiple id="select-newgroup">
<option value="insert-row">insert-row</option>
<option value="delete-row">delete-row</option>
<option value="update-row">update-row</option>
<option value="alter-table" selected>alter-table</option>
<option value="drop-table">drop-table</option>
</select>
</div>
<div class="group-row">
<span class="group-name">muppets (5)</span>
<select multiple id="select-muppets">
<option value="insert-row">insert-row</option>
<option value="delete-row">delete-row</option>
<option value="update-row">update-row</option>
<option value="alter-table">alter-table</option>
<option value="drop-table">drop-table</option>
</select>
</div>
</div>
</div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/choices.js/10.2.0/choices.min.js"></script>
<script>
document.addEventListener('DOMContentLoaded', function() {
const selects = document.querySelectorAll('select[multiple]');
selects.forEach(select => {
new Choices(select, {
removeItemButton: true,
classNames: {
containerOuter: 'choices custom-choices',
}
});
});
});
</script>
</body>
</html>
I implemented Choices for permission selection and user selection here:
#24
Still need to differentiate user ID from user display though.
New hook design: it can return a list of IDs, or it can return a list of dicts with "id" and "display" keys.
I'm going to rename datasette_acl_actor_ids to datasette_acl_valid_actors.
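A sketch of what an implementation of the renamed hook could look like under that design; the exact signature is an assumption and the returned names are placeholders:
from datasette import hookimpl

@hookimpl
def datasette_acl_valid_actors(datasette):
    # Dicts carrying a display name alongside the ID...
    return [
        {"id": "1", "display": "Alice Adams"},
        {"id": "2", "display": "Bob Brown"},
    ]
    # ...though returning plain IDs would stay valid too:
    # return ["1", "2"]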
| gharchive/issue | 2024-09-02T18:57:31 | 2025-04-01T06:38:20.713967 | {
"authors": [
"simonw"
],
"repo": "datasette/datasette-acl",
"url": "https://github.com/datasette/datasette-acl/issues/23",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
942588966 | S3 bucket creation command is missing for archive handler
In newer versions of the aws command and the S3 server, the S3 bucket needs to be created with a separate S3 API call, since the aws s3 cp command no longer creates a new bucket. Below are the quick fixes to make it work.
Ideally we should make these commands configurable via a ConfigMap, so that when there are new changes to the S3 API, we can update the ConfigMap rather than building a new image every time.
@YiannisGkoufas can you review this? Thanks.
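Purely for illustration, the two-step flow described above (create the bucket first, then copy the archive) sketched with boto3; the actual handler shells out to the aws CLI, and the endpoint, bucket and file names below are placeholders:
import boto3

s3 = boto3.client("s3", endpoint_url="http://minio-service:9000")

# 1. Create the bucket explicitly, since "aws s3 cp" no longer does it implicitly.
s3.create_bucket(Bucket="archive-bucket")

# 2. Then copy the archive into the freshly created bucket.
s3.upload_file("snapshot.tar.gz", "archive-bucket", "snapshot.tar.gz")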
thanks @Tomcli looks good!
| gharchive/pull-request | 2021-07-13T01:18:47 | 2025-04-01T06:38:20.716126 | {
"authors": [
"Tomcli",
"YiannisGkoufas"
],
"repo": "datashim-io/datashim",
"url": "https://github.com/datashim-io/datashim/pull/112",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2021794529 | fix OPS_API_RESPONSE union type to include lists
I inadvertently gave a very silly definition of the OPS_API_RESPONSE, which did not include List[Any] as one of the types (the DevOps API as a matter of fact does return top-level lists for some calls, such as get_databases).
Now I fixed it and to satisfy the type checker the whole of the ops methods are, correctly, moved to the OPS_API_RESPONSE return type.
LGTM! :)
| gharchive/pull-request | 2023-12-02T01:32:22 | 2025-04-01T06:38:20.717696 | {
"authors": [
"erichare",
"hemidactylus"
],
"repo": "datastax/astrapy",
"url": "https://github.com/datastax/astrapy/pull/137",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
967122918 | [HW] Muhammad Atif Ahsan
Name: Muhammad Atif Ahsan
Email: atif.rfo@gmail.com
Linkedin Profile: https://www.linkedin.com/in/atifahsan/
Attach the homework screenshots below for both step II and step III:
Hey Muhammad Atif Ahsan, great job! Congrats! Here is your badge! https://api.badgr.io/public/assertions/SlhV6khfTzKnRot2Oexd8Q
| gharchive/issue | 2021-08-11T17:25:37 | 2025-04-01T06:38:20.721160 | {
"authors": [
"HadesArchitect",
"atifahsan"
],
"repo": "datastaxdevs/workshop-intro-to-cassandra",
"url": "https://github.com/datastaxdevs/workshop-intro-to-cassandra/issues/282",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
239980202 | pull/pull cache for all branches
dvc sync [file] uploads/downloads a data snapshot for only the current branch.
dvc sync -all [file] should upload/download all versions of the file(s).
I am not sure it is worth implementing, as there are usually many branches that don't even belong to you (i.e. various branches from origin, upstream, etc.), which makes pushing/pulling all branches counter-productive; in those rare cases when you do need to sync a few branches, you can easily do so by checking out those branches with git yourself. Actually, the same logic applies to dvc metrics, but there the cost of it was pretty low, so it was an easy choice to just implement it. Let's discuss this one later. Moving to 0.9.8 for consideration.
| gharchive/issue | 2017-07-02T00:20:38 | 2025-04-01T06:38:20.724785 | {
"authors": [
"dmpetrov",
"efiop"
],
"repo": "dataversioncontrol/dvc",
"url": "https://github.com/dataversioncontrol/dvc/issues/103",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
426822225 | Throttle/group Ops
Currently each keystroke results in a fresh op. It would be more efficient if ops were batched by some time interval, so that quickly typing multiple characters in a single word results in a single op. The time interval in ms could be passed into the constructor, making this behavior opt-in.
Here's a working implementation of this feature https://github.com/datavis-tech/codemirror-6-experiments/blob/master/packages/experiments/src/client/codeMirrorShareDBBinding.js#L31
Closing as ShareDB batches like this internally.
| gharchive/issue | 2019-03-29T05:23:48 | 2025-04-01T06:38:20.726656 | {
"authors": [
"curran"
],
"repo": "datavis-tech/codemirror-ot",
"url": "https://github.com/datavis-tech/codemirror-ot/issues/28",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
247452410 | Replace screenshots
The screenshots aren't too great. They should look good.
I've uploaded new ones to the GDrive folder 'Platform/Ordino Screenshots'
| gharchive/issue | 2017-08-02T16:41:57 | 2025-04-01T06:38:20.727501 | {
"authors": [
"alexsb",
"mstreit"
],
"repo": "datavisyn/datavisyn.github.io",
"url": "https://github.com/datavisyn/datavisyn.github.io/issues/13",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
344850506 | Automate upgrading envoy
There is some build+test+push awesomeness at https://github.com/datawire/envoy/tree/datawire/extauth-build-automation/DATAWIRE that would be awesome to have automated.
Let's do it :tm:
Closing since I think @LukeShu managed to do this...
| gharchive/issue | 2018-07-26T13:44:35 | 2025-04-01T06:38:20.732205 | {
"authors": [
"containscafeine",
"kflynn"
],
"repo": "datawire/ambassador",
"url": "https://github.com/datawire/ambassador/issues/663",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
477548492 | Adding max_request_headers_kb configuration support.
Description
Exposing the max_request_headers_kb Envoy parameter through Ambassador.
Related Issues
None
Testing
max_request_headers_kb is configured in the Ambassador global configuration, e.g.:
---
apiVersion: ambassador/v1
kind: Module
name: ambassador
config:
  service_port: 4567
  max_request_headers_kb: 90
Todos
[X] Tests
[X] Documentation
Hey, thanks for this! A couple of questions:
First, this looks like it'll enforce a 60KB limit on headers globally for all Ambassador installations, which is a behavioral change that's probably not desirable. If the user doesn't specify a size, I think we shouldn't apply any limit.
Second, this is definitely one that needs a test -- maybe set the limit to 1KB, then send a request through with longer headers than that. Should the request fail? or do the headers get truncated? or... what?
Thanks again!
| gharchive/pull-request | 2019-08-06T19:12:34 | 2025-04-01T06:38:20.735712 | {
"authors": [
"ankurpshah",
"kflynn"
],
"repo": "datawire/ambassador",
"url": "https://github.com/datawire/ambassador/pull/1740",
"license": "apache-2.0",
"license_type": "permissive",
"license_source": "bigquery"
} |
2760137472 | Dockerhub deprecation warning
Related to: https://github.com/datopian/giftless/issues/119
Changes
Added a "docker-prerun.sh" script to serve as the entry point of the docker file, that right now only echoes a deprecation warning in case the image is being pulled from Dockerhub, but also that can be extended in the future for different deprecation warnings
If the image is build with --build-arg IS_DOCKERHUB=true, upon running the image, the following warning can be seen:
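As a rough illustration of the idea only (the PR itself uses a shell script, and the warning text and the IS_DOCKERHUB handling below are just a sketch), a pre-run wrapper checks the flag baked in at build time, prints the notice, and then hands control to the real command:
#!/usr/bin/env python3
# Hypothetical sketch: warn if the image was built for Dockerhub, then replace
# this process with the real command passed as arguments.
import os
import sys

def main() -> None:
    if os.environ.get("IS_DOCKERHUB", "").lower() == "true":
        print("WARNING: this Dockerhub image is deprecated; please switch to the "
              "currently supported registry.", file=sys.stderr)
    if len(sys.argv) > 1:
        os.execvp(sys.argv[1], sys.argv[1:])

if __name__ == "__main__":
    main()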
I have built the docker image at the point of this PR and pushed that to Dockerhub, please try it out if you can https://hub.docker.com/repository/docker/datopian/giftless/tags/0.6.2/sha256-a7f53727881796de19c169899e7f9cb4d9e701803958855f52f8150c4d10f9b5
Future
If this PR is approved, I'll also push the same image with the "latest" tag and tweak the descriptions at the Dockerhub repo to flag that it's deprecated.
At first, I didn't want to have an "IS_DOCKERHUB" env var. Wanted something like "PRERUN_ARGS" so that the prerun script could be extended more easily, but had some difficulties with that, perhaps something to revisit later on
@athornton @vit-zikmund how does this look?
The warning seems legit, but the handling of tini is not right. My OCD also tells me not to introduce any runtime ENV var unless it's being used by the main code. Also my "overengineering gate" (which is a thing I'm starting to embrace fairly recently) tells me not to introduce new generic features unless it's apparent the extensibility is worth the loss of code simplicity. Working on a followup commit that would adhere to what I described ;)
Here's the update, please @demenech have a look.
I took the liberty of using the same branch; we can revert/force-push that commit if you don't like my solution. In retrospect, this move was rather presumptuous and I don't want to shadow anyone. Sorry. Next time, I'll rather use my own branch.
FYI the Dockerfile as a whole is pretty suboptimal for build re-runs and it's unnecessarily bloated, containing all the project files, where only a fraction is actually needed. I'd try at some improvements, but not before this PR is done.
| gharchive/pull-request | 2024-12-26T20:54:37 | 2025-04-01T06:38:20.753005 | {
"authors": [
"demenech",
"rufuspollock",
"vit-zikmund"
],
"repo": "datopian/giftless",
"url": "https://github.com/datopian/giftless/pull/181",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1989926194 | Refactor code and simplify file processing
Pull Request: Main Refactoring and Conversion
Summary of Changes
This pull request introduces significant improvements and refactoring to enhance the MarkdownDB functionality. The key modifications include:
Class Breakdown:
The MarkdownDB class has been refactored into two distinct classes, separating concerns for indexing and querying.
Conversion to TypeScript Objects and SQL:
The Markdown file processing has been optimized by converting MD files into TypeScript objects before transforming them into SQL data.
Function Refactoring:
Few smaller functions have undergone refactoring.
Also, not sure if parse.ts and markdownToObject need to be separate. They seem to have overlapping responsibilities. I'd merge these two.
@mohamedsalem401 What do you think? I'm not sure about it
The reason why I think they should be independent is that parseFile.ts parses the links and tags from a string source, irrespective of whether it's in a local file or not, while markdownToObject.ts is responsible for loading files from the local file system.
Yes, I agree, let's leave them separate
@mohamedsalem401 Where are the tests? 😄
I believe this pull request (PR) includes numerous changes at the moment. Therefore, I plan to open a new PR specifically for the latest changes, including tests.
I think it would be better if we add tests to the same PR...
Also, I wouldn't merge it into main. We don't want to publish a new version of the package until the whole refactoring is ready. (Note we have an auto-publish workflow in place.) Let's create another branch, e.g. v2, off of main and reopen this PR against that branch.
OK, I will add them in this pull request and will switch to the new branch v2
This was for #47 and we did this (for now) in a simpler way where we don't refactor existing code - see resolution details in #47. We reused some of this and will probably reuse more in future.
| gharchive/pull-request | 2023-11-13T05:37:22 | 2025-04-01T06:38:20.761449 | {
"authors": [
"mohamedsalem401",
"olayway",
"rufuspollock"
],
"repo": "datopian/markdowndb",
"url": "https://github.com/datopian/markdowndb/pull/48",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
58640289 | Cite and discuss Hardekopf
His paper introduces a time stamp feature to AAM for which properties can be proven and re-used. Thus it's relevant to discuss as an instance of re-usable metatheory for static analysis.
This still hasn't happened and needs to.
Done.
| gharchive/issue | 2015-02-23T20:05:47 | 2025-04-01T06:38:20.775906 | {
"authors": [
"davdar",
"dvanhorn"
],
"repo": "davdar/maam",
"url": "https://github.com/davdar/maam/issues/11",
"license": "bsd-3-clause",
"license_type": "permissive",
"license_source": "bigquery"
} |
427382220 | change msbuild discovery path logic
Change to logic for msbuild.exe discovery
Thanks for pulling this out separate from the WIP solution loading - makes it easier to merge while figuring out the solution part.
| gharchive/pull-request | 2019-03-31T10:35:43 | 2025-04-01T06:38:20.792669 | {
"authors": [
"colombod",
"daveaglick"
],
"repo": "daveaglick/Buildalyzer",
"url": "https://github.com/daveaglick/Buildalyzer/pull/106",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1627481627 | Refactor advanced options page
This should help with #60
Good luck reviewing this :D
Changes:
Facefixer strength slider now only shows when facefixer is active
Made the giant advanced options file 500 lines less giant :smirk:
| gharchive/pull-request | 2023-03-16T13:30:44 | 2025-04-01T06:38:20.796017 | {
"authors": [
"evguu"
],
"repo": "daveschumaker/artbot-for-stable-diffusion",
"url": "https://github.com/daveschumaker/artbot-for-stable-diffusion/pull/62",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
414050192 | Deluge
Add deluge application to ansible-nas repository.
Thank you for submitting this PR, getting Deluge merged will be awesome. This has however, reminded me why I didn't do it myself in the first place :smile:
Few things that will need fixing;
You've supplied a default username/password, but not mentioned in the docs what it is or how to change it (I think mentioning the default for the container, and how to change it is fine)
The docs need to mention that having Transmission and Deluge pointing at a watch directory will cause problems
The paths in the supplied config don't look right (ie /root/Downloads)
The added variables need adding to tests/test.yml
You've specified a local network range that's different to the one specified in the example config and transmission. It's probably worth pulling this (and the transmission one) out to a central variable in the config shared by both containers.
Also I'm not clear on;
Why are config files supplied? Is there something you're changing that's required that isn't possible to do via environment variables? (if so, is there a better image floating around that could be configured with environment variables?)
Please tell me: why didn't you do this yourself?
After installation, Deluge is not configured properly: it is missing configuration for the watch folder and for where to download files, which is why I added configuration files to the repository. But I can delete these files and add a note to the Deluge docs that, after the first login, the user needs to configure the watch and download directories manually via the web UI.
Have a read of this: https://github.com/davestephens/ansible-nas/blob/master/docs/contributing.md
It's generally considered good practise to read contribution guidelines before contributing to a project on GitHub. If you don't and still submit a PR, you should expect some sort of feedback along the lines of what the guidelines say, which is what I did.
I'm not at a computer right now but I want to test what you say about directories properly, if possible I don't want to supply config files. Reason being, if you run the playbook, change the config, run the playbook again then you're going to break people's config for them, which is not great.
| gharchive/pull-request | 2019-02-25T11:05:04 | 2025-04-01T06:38:20.801200 | {
"authors": [
"davestephens",
"tcharewicz"
],
"repo": "davestephens/ansible-nas",
"url": "https://github.com/davestephens/ansible-nas/pull/56",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1157470053 | Added twitch.tv/alejandromp4
I saw there is now a section for twitch 🎉
CI issue seems to be on ruby setup not on validate.
| gharchive/pull-request | 2022-03-02T17:11:05 | 2025-04-01T06:38:20.806380 | {
"authors": [
"alexito4"
],
"repo": "daveverwer/iOSDevDirectory",
"url": "https://github.com/daveverwer/iOSDevDirectory/pull/610",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
1081761631 | Add pretty-print functionality to the rule classes
At request of @vjawahar , let us know if this is good
Will do this better and reopen a PR in a sec
| gharchive/pull-request | 2021-12-16T04:34:07 | 2025-04-01T06:38:20.818912 | {
"authors": [
"imnnos"
],
"repo": "david-fisher/320-F21-Track-1",
"url": "https://github.com/david-fisher/320-F21-Track-1/pull/114",
"license": "BSD-3-Clause",
"license_type": "permissive",
"license_source": "github-api"
} |
481418642 | can events be added to the calendar?
I see that there are events: boolean in a few views, but how do I pass the actual events into the component?
Hi @williamli, events should be controlled from outside of the component, so you need to hold a state outside, and just pass those in as an array of events into the Organizer component. Here is an example repo - https://github.com/davidalekna/organizer-examples/blob/master/src/index.js
let me know if you have more questions 👍🏻
I was looking for the definition of the event object.
I found it in https://github.com/davidalekna/organizer-examples/blob/master/src/helpers/index.js
Thanks.
No worries mate. This package is being moved to a monorepo and is in the process of being rewritten in TypeScript.
https://github.com/davidalekna/react-components/tree/master/packages/alekna-organizer
| gharchive/issue | 2019-08-16T02:51:11 | 2025-04-01T06:38:20.830813 | {
"authors": [
"davidalekna",
"williamli"
],
"repo": "davidalekna/react-organizer",
"url": "https://github.com/davidalekna/react-organizer/issues/1",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
585695589 | Introduce request: Use one subdomain for multiple projects
Idea:
Currently I have to generate certificates for each project and use something like app.project.test as the domain.
It would be good to use domains like project.magento2.test, project.magento1.test, project.laravel.test
In this case I would only need to generate one wildcard certificate for all projects, and the URLs would look cleaner.
@Den4ik I'm not sure this is somewhere I'd like to go with this project. Using domains in this way would have confusing semantics, and would result in conflicts for the domains generated for auxiliary services for things like Mailhog and RabbitMQ which both run on a per-project basis at mailhog.project.test and rabbitmq.project.test currently.
The use of a root CA to sign SSL certificates was done because by-design each project should have a separate domain name. If you're merely concerned about the manual step during setup for other devs working on the project, perhaps adopting an init script similar to this one would be a good idea (the repo this is in mirrors how I and my colleagues at Mediotype setup each Magento project to get started):
https://github.com/davidalger/warden-env-magento2/blob/develop/tools/init.sh
Appreciate the suggestion, but I'm going to go ahead and close this one out.
| gharchive/issue | 2020-03-22T11:18:41 | 2025-04-01T06:38:20.834367 | {
"authors": [
"Den4ik",
"davidalger"
],
"repo": "davidalger/warden",
"url": "https://github.com/davidalger/warden/issues/122",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
91364229 | Syntax for inlining of function calls
Currently we support inlining in two forms.
rotate 1, 2 fill red box
rotate 1, 2 >> fill red >> box
Could we standardise on the latter of these? It still gives us a useful inlining short cut, but it's (imho) a bit easier to see what's an argument and what's a function.
Also it's a damn sight easier to parse, and would mean that I'll be able to extend it to work with arbitrary functions more easily.
That separation is not a difficult part to do - I do that separation part in 134 badly-written lines in two functions called in sequence (findQualifiers and fleshOutQualifiers) here https://github.com/davidedc/livecodelab/blob/master/coffee/languages/livelangv1/code-preprocessor.coffee#L1080-L1214
before those two functions the input is
rotate 1, 2 fill red box
and after those two functions becomes
rotate 1, 2, -> fill red, -> box;;
(I then need more transformations, but that's where all the "chevrons" positioning is done).
And I made no attempt to be short about it and I'm sure that there is redundant code in there, it's probably just 1 screen of clean code rather than my 3... and I have no clever tokenisation in place which I think you have in place, so really the matching/transformation in your situation would be shorter and cleaner...
It's just unnecessary symbols, and it's so much easier not to use the chevrons; in fact I never used them, so no, I don't think it's good to mandate them.
heh fair enough if you want to keep it working without the >> operator. Was asking in case I could make my life easier :)
I'm realising that I'm going to have to have a more extensive rewriter/preprocessor anyway. I'm adding in support for using closures as arbitrary expressions, so we can more easily pass them into functions without having to assign them to variables beforehand. Turns out that because we don't have a prefix for closures, it's really difficult to parse.
Ah well :p
| gharchive/issue | 2015-06-26T21:14:39 | 2025-04-01T06:38:20.841122 | {
"authors": [
"davidedc",
"rumblesan"
],
"repo": "davidedc/livecodelab",
"url": "https://github.com/davidedc/livecodelab/issues/262",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
363762753 | improve physics drawing
I've been drawing each physics object manually but just saw this tutorial for doing it all together
https://love2d.org/wiki/Tutorial:PhysicsDrawing
Should go through and try and apply. Need to look into if I can expand it to change colour for each shape or texture each object. Would be really nice to centralize object drawing if poss.
I'm not going to do this.
| gharchive/issue | 2018-09-25T20:56:08 | 2025-04-01T06:38:20.882212 | {
"authors": [
"davidjtferguson"
],
"repo": "davidjtferguson/silly-sam",
"url": "https://github.com/davidjtferguson/silly-sam/issues/19",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
263479751 | Pulling react-redux-forms from npm doesn't have 1.14.2 built code
After updating to 1.14.2 and using isValid an undefined error is thrown. It appears the changes are in the /src folder but not in /lib. I built it manually and it works fine.
@davidkpiano would you be able to push the changes to npm please?
Same here... exported isValid is found in the /src but not in the /lib
1.14.4 was just published!
Thanks @davidkpiano !
| gharchive/issue | 2017-10-06T15:12:04 | 2025-04-01T06:38:20.884859 | {
"authors": [
"davidkpiano",
"mewben",
"stevenmason"
],
"repo": "davidkpiano/react-redux-form",
"url": "https://github.com/davidkpiano/react-redux-form/issues/964",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
2653356410 | chore: remove unneeded entries in CI's Makefile
Issue or need
Some entries in CI/CD's .ci/Makefile are just redirecting to run scripts. The purpose of that file is to add commands that are only different when running in CI/CD
Proposed changes
Remove CI's Makefile targets that just run regular run scripts
Quick reminders
🤝 I will follow Code of Conduct
✅ No existing pull request already does almost same changes
👁️ Contributing docs are something I've taken a look at
📝 Commit messages convention has been followed
💬 TSDoc comments have been added or updated indicating API visibility if API surface has changed.
🧪 Tests have been added if needed. For instance, if adding new features or fixing a bug. Or removed if removing features.
⚙️ API Report has been updated if API surface is altered.
#1034 👈
main
| gharchive/pull-request | 2024-11-12T21:23:36 | 2025-04-01T06:38:20.892080 | {
"authors": [
"davidlj95"
],
"repo": "davidlj95/ngx",
"url": "https://github.com/davidlj95/ngx/pull/1034",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
563426958 | Implement task configuration avoidance for avro plugin
This PR supports https://github.com/davidmc24/gradle-avro-plugin/issues/97
@davidmc24 - not quite ready yet. This needs a bit of work to make sure the tests are passing. Some tests will need to be adjusted, but I think there is an issue with the code. I will look into it as soon as I find a bit of time to do so.
Sounds good. Let me know when it’s ready
Went with a different implementation for #97. Thanks for the contribution. It was a helpful example of some of the techniques needed.
| gharchive/pull-request | 2020-02-11T19:31:04 | 2025-04-01T06:38:20.897511 | {
"authors": [
"davidmc24",
"dcabasson"
],
"repo": "davidmc24/gradle-avro-plugin",
"url": "https://github.com/davidmc24/gradle-avro-plugin/pull/102",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
470711845 | Remove BorderlessEntryRenderer that was left after cleanup.
Fixed iOS build.
With the cleanup a few days ago, BorderlessEntry was removed but the renderer was still in the iOS project and broke the build.
iOS builds now, but Xappy stops immediately after displaying its blue launch screen.
I have dealt with the usual suspects ("xapping" the .vs, bin and obj directories) without success.
I haven't changed anything and this is the repository cloned directly from GitHub to my Mac.
I have attached a file with the 16 warnings reported during the build.
The head and tail of the installation, In case this helps:
/Library/Frameworks/Xamarin.iOS.framework/Versions/Current/bin/mlaunch -sdkroot "/Applications/Xcode.app/Contents/Developer" --installdev "~/Documents/Xamarin_solutions/Xappy/Xappy/Xappy.iOS/bin/iPhone/Debug/device-builds/iphone8.1-12.3.1/Xappy.iOS.app" --device ios "--devname=FAR iP6s" --install-progress
Installing application bundle 'com.companyname.Xappy' on 'FAR iP6s'
Installing application bundle 'com.companyname.Xappy' on 'FAR iP6s'
TransferringPackage - PercentComplete: 10%
CopyingFile - Path: ~/Documents/Xamarin_solutions/Xappy/Xappy/Xappy.iOS/bin/iPhone/Debug/device-builds/iphone8.1-12.3.1/Xappy.iOS.app/META-INF/
CopyingFile - PercentComplete: 10%
. . .
CreatingStagingDirectory - PercentComplete: 5%
ExtractingPackage - PercentComplete: 15%
InspectingPackage - PercentComplete: 20%
TakingInstallLock - PercentComplete: 20%
PreflightingApplication - PercentComplete: 30%
InstallingEmbeddedProfile - PercentComplete: 30%
VerifyingApplication - PercentComplete: 40%
CreatingContainer - PercentComplete: 50%
InstallingApplication - PercentComplete: 60%
PostflightingApplication - PercentComplete: 70%
SandboxingApplication - PercentComplete: 80%
GeneratingApplicationMap - PercentComplete: 90%
Application bundle 'com.companyname.Xappy' installed on 'FAR iP6s'
Upload succeeded.
190721_Xappy_warnings.txt
Thanks.
More FYI:
I attempted to include all nightly builds even remotely associated with this issue.
Please see the attached report with specific details: Could not add packages.
Are there any specific packages to include and/or to exclude?
All I'm trying to do is to get some consistent state that will build and run without gobbling up all of my scarce time, with the objective of using some of the working code as an example of what Xamarin can do.
Thanks for doing all the hard work on the fundamentals.
190721_Xappy_could_not_add_packages.txt
Hi @farr64,
For me the iOS build and run works with the configuration Debug | iPhoneSimulator > iPhone XR iOS 12.2.
I have not tried it with a real device or other configurations. Maybe that helps.
Best Alex
P.S. If you still have errors it may help if you attach the Application Output besides the Tool Output (build).
Hi Alex,
It most certainly helped: Xappy works wonderfully on the simulator. Thanks for the great tip ;-)
Now, if we can only get this baby out of the simulator and into the real world, that would be a giant step for Xumanity.
Almost there, an important step at a time.
I appreciate all of your work. I know how challenging each step is.
Well . . .
I let the simulator run for a few minutes and then the simulator decided to eject Xappy (or perhaps vice versa).
I enclose the crash log. I didn't bother to send it to Apple, so I just copied and pasted and saved it for you.
Thanks.
190721_Xappy_Simulator_crash.txt
So the native crash log doesn't help.
I have fixed one more crash that occurs when navigating to About, but the pull request is still missing.
I will see if I can do it this evening (CEST).
It looks like the iOS version needs some general love ;)
| gharchive/pull-request | 2019-07-20T20:30:30 | 2025-04-01T06:38:20.910947 | {
"authors": [
"Alex-Witkowski",
"farr64"
],
"repo": "davidortinau/Xappy",
"url": "https://github.com/davidortinau/Xappy/pull/41",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2165376793 | VSCode plugin login failure
The CodeGPT plugin login does redirect to http://localhost:54112/auth/callback?code=XXX
Once logged in with Google, the page shows Connection Success and redirects back to the VSCode app.
Launched external handler for 'vscode://'.
However, nothing really happens; the plugin still shows as logged out.
The only log entry is this -
[15:15:51] Registering CodeGPT Copilot provider
Re-installing the plugin did not help.
Removing plugin files from ~/.vscode/extensions/danielsanmedium.dscodegpt-3.2.1/ with clean install did not help.
VSCode upgrade 1.86.2 to 1.87.0 did not help.
Another confusing thing is version - plugin version is 3.2.1, Chat shows 3.1.1, welcome banner shows 2.0.
VSCode Version: 1.87.0
Commit: 019f4d1419fbc8219a181fab7892ebccf7ee29a2
Date: 2024-02-27T23:42:51.279Z
Electron: 27.3.2
ElectronBuildId: 26836302
Chromium: 118.0.5993.159
Node.js: 18.17.1
V8: 11.8.172.18-electron.0
OS: Darwin arm64 23.3.0
+1
Having the exact same issue.
@gudzenkov @red-daut Hi guys! Try updating Node.js 18.17.1 to Node.js 20.
That is the Node.js bundled with VSCode 1.87.0.
My local node is already 21.6.2
Facing the same issue: no redirect back to VSCode after the web login.
Version: 1.87.0 (user setup)
Commit: 019f4d1419fbc8219a181fab7892ebccf7ee29a2
Date: 2024-02-27T23:41:44.469Z
Electron: 27.3.2
ElectronBuildId: 26836302
Chromium: 118.0.5993.159
Node.js: 18.17.1
V8: 11.8.172.18-electron.0
OS: Windows_NT x64 10.0.22621
local node version 20.11
Having the same issue.
Mostly the same happening here. Ubuntu 23.10, VS Code Insiders (latest as of this post).
I get the "Connection Success" message" but when I click on "Open in VSCode Insider", Chrome silently fails to reach my running VSCode instance.
Same issue here. Can't log in. Is there any alternative way to log in to the CodeGPT VSCode extension?
Same issue too. I click on "Open in VSCode" after web login but nothing happens. I tried with with Firefox and Chrome, same issue. I'm on Linux Mint.
Same issue here, vscode on macos.
CodeGPT: v3.2.4
VSCode: 1.87.2
I cannot login, stuck here:
If I click signin it opens the browser at this local page:
http://localhost:54112/login
But get this:
VSCODE:
Versione: 1.87.2 (user setup)
Commit: 863d2581ecda6849923a2118d93a088b0745d9d6
Data: 2024-03-08T15:20:17.278Z
Electron: 27.3.2
ElectronBuildId: 26836302
Chromium: 118.0.5993.159
Node.js: 18.17.1
V8: 11.8.172.18-electron.0
Sistema operativo: Windows_NT x64 10.0.19045
same problem here. Is this going to be fixed? if not I'll just use cursor
We have not been able to replicate the error; so far, every time we log in, the account remains connected in the extension.
The only error that currently exists is that, depending on the browser, the buttons to return to VSCode do not work; this will be resolved in the next version.
You could follow this tutorial to confirm that the entire Login process is correctly working on your accounts.
https://youtu.be/yErnyqXobcI?si=p3Tr925PcvjvzR_h
If after following the tutorial the Connected icon still does not appear in CodeGPT, could you send me an email to daniel@codegpt.co with a video or images of the complete flow you are doing so we can fix the problem.
Thank you very much for your help reporting the error, we will try to solve it as soon as possible
Emailed you with the video!
@bgeneto @matiaszanolli @clintonruairi Hi guys! I just tested it on Ubuntu 22.04 and it works!
Just follow the tutorial; it is the same for Ubuntu: https://youtu.be/yErnyqXobcI?si=p3Tr925PcvjvzR_h
Can you guys provide a way to copy the token and paste it into the extension via the command line? I'm stuck, and I can't log in to Cursor either.
No need for the token; just set the connection in the menu. Please follow the tutorial: https://youtu.be/yErnyqXobcI?si=p3Tr925PcvjvzR_h
I followed the tutorial the first time. It didn't work. After you posted this message, I made sure to follow it again, very slowly, to ensure I was not doing anything incorrectly. Same result.
When prompted to sign in by the extension, it is still redirected to a dead page on localhost.
The codeGPT plugin still gets stuck on an infinite loading screen.
When manually clicking sign in from the extension bar, it still redirects to a dead page on localhost.
This problem has been echoed by dozens of your users. If this extension worked I would happily pay the annual subscription fee. If your 'solution' is that we should all watch your Youtube video again, and that it works for you, I guess we should all just use cursor instead.
https://cursor.sh/
Screencast from 2024-04-17 22-21-23.webm
@clintonruairi
Now that we can see the video, possibly the problem is that port 54112 or 54113 is being used by another service.
Could you check whether, after turning off the services that use those ports, the extension now starts up?
Nope, no process running on those ports. Ran the following commands:
sudo lsof -i :54112
sudo lsof -i :54113
sudo fuser 54112/tcp
sudo fuser 54113/tcp
sudo fuser -k 54113/tcp
sudo fuser -k 54112/tcp
First to see if there were any processes running on those ports - returned nothing. Then killing any processes on those ports, just to be sure. Then installed codeGPT, opened the sidebar - still stuck on infinite loading. Clicked the sign in prompt in the bottom right of vs code, redirected to 'Not found' link on localhost.
Screenshot:
Other things I have tried so far (all unsuccessful), each after uninstalling codeGPT, closing VS code, and then reinstalling codeGPT:
disabled firewall
disabled VPN
cleared browser cache for codeGPT
changed browser to firefox
disabled adblocker
made sure VS code up to date
made sure codeGPT up to date
tried on multiple other wifi networks
tried on 2 other machines. Windows 11 and MacOs Sonoma 14.4.1
restarted/ Reinstalled VS code
Behaviour is the same across all of the above configs. Leads me to believe this is almost certainly a problem with codeGPT itself, or conflicting behaviour with another extensions. My installed extensions:
code --list-extensions
cweijan.vscode-office
dbaeumer.vscode-eslint
ecmel.vscode-html-css
esbenp.prettier-vscode
file-icons.file-icons
github.github-vscode-theme
grapecity.gc-excelviewer
infeng.vscode-react-typescript
magicstack.magicpython
mkxml.vscode-filesize
monokai.theme-monokai-pro-vscode
ms-azuretools.vscode-docker
ms-python.debugpy
ms-python.python
ms-python.vscode-pylance
ms-vscode-remote.remote-containers
ms-vscode.remote-repositories
ms-vscode.vscode-typescript-next
pmneo.tsimporter
prisma.prisma
rust-lang.rust
rust-lang.rust-analyzer
tomoki1207.pdf
xabikos.javascriptsnippets
yoavbls.pretty-ts-errors
Got any suggestions? @davila7
Thank you for the information @clintonruairi
We are evaluating with the team what might be happening... we will keep you updated.
Used this command to check if those ports were occupied by a service:
Get-NetTCPConnection | where {$_.LocalPort -eq 54112 -or $_.LocalPort -eq 54113}
Returned nothing when VSCODE wasn't active.
The returned values and PID were of a VSCode process. Suspecting it was CodeGPT, I uninstalled the extension, and now it doesn't detect anything.
Which means the problem does not depend on some other process, because as soon as I reinstalled, those ports were detected as used... by CodeGPT. But it still cannot connect.
Hi,
For info, it is not the right way to reproduce this issue.
You need to do a remote connection, for example through an SSH connection to an GNU/Linux instance.
Thanks
@bgeneto @matiaszanolli @clintonruairi Sorry guys, you are right, I haven't tried it on a remote server yet, but I made a procedure for WSL here: https://medium.com/p/881b91ba193e
I'm getting "unable to connect to the extension services"; however, I have the latest Node and VSCode versions and port 54112 is unused.
| gharchive/issue | 2024-03-03T14:06:34 | 2025-04-01T06:38:20.954672 | {
"authors": [
"JeromeGsq",
"Mayorc1978",
"Neoplayer",
"PilarHidalgo",
"Sarinoty",
"bgeneto",
"camsique",
"clintonruairi",
"davila7",
"djacquensf9",
"gudzenkov",
"luqmanyusof",
"matiaszanolli",
"pokhreldipesh",
"red-daut"
],
"repo": "davila7/code-gpt-docs",
"url": "https://github.com/davila7/code-gpt-docs/issues/237",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
426427539 | Support for reading images in correct orientation using EXIF data.
Current Behavior
I'm working on image processing with some images I collected myself. Dlib's dlib.load_rgb_image('image_path') method swaps the rows and columns on some images while OpenCV's cv2.imread('image_path') method does not.
Check out the results below
img = dlib.load_rgb_image("myimg.jpg")
print(img.shape)
--------------------
OUTPUT: (1944, 2592, 3)
(the resultant image is rotated 90 degrees clockwise)
while OpenCV's method returns the correct shape:
img = cv2.imread("myimg.jpg")
print(img.shape)
--------------------
OUTPUT: (2592, 1944, 3)
dlib.load_rgb_image() does not take into account the EXIF orientation metadata, so some images are read incorrectly.
I don't want to go in and rotate some of these offending images myself manually because I'm creating an app.
Is there a way in Dlib to read images using orientation information?
Note: I asked this question of stackoverflow, one of the comments told me to create an issue here
Version: 19.17.0
Where did you get dlib: pip
Platform: Windows 10 - 64bit
Compiler: python 3.6
Added platform info
Yeah, it doesn't do anything with EXIF data. It would be cool if the loader used it. Someone should submit a pull request that adds that feature :)
I'll see what I can do.
Would this require changes somewhere towards the top of image_loader.h?
That would be sensible.
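Until a change like that lands in dlib itself, a minimal user-side workaround (a sketch assuming Pillow is installed; this is not part of dlib's API) is to apply the EXIF orientation before handing the pixels on:
import numpy as np
from PIL import Image, ImageOps

def load_rgb_image_exif(path):
    """Load an image as an RGB numpy array, honouring the EXIF orientation tag."""
    with Image.open(path) as img:
        img = ImageOps.exif_transpose(img)  # rotate/flip according to the EXIF tag
        return np.asarray(img.convert("RGB"))

img = load_rgb_image_exif("myimg.jpg")
print(img.shape)  # (2592, 1944, 3) for the portrait example above, matching OpenCV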
| gharchive/issue | 2019-03-28T10:57:01 | 2025-04-01T06:38:20.961201 | {
"authors": [
"RafayAK",
"davisking",
"nmaynes"
],
"repo": "davisking/dlib",
"url": "https://github.com/davisking/dlib/issues/1706",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
313335711 | [fix] this change fixes compilation problems under mac
I have noticed the following problem with compilation on Mac:
dlib/tools/python/src/other.cpp:56:1: error: reference to 'list' is ambiguous
    list _max_cost_assignment (
    ^
/Library/Developer/CommandLineTools/usr/include/c++/v1/list:805:28: note: candidate found by name lookup is 'std::__1::list'
    class _LIBCPP_TEMPLATE_VIS list
    ^
/usr/local/include/boost/python/list.hpp:57:7: note: candidate found by name lookup is 'boost::python::list'
    class list : public detail::list_base
    ^
/Users/jaroslaw/code/dlib/tools/python/src/other.cpp:72:11: error: reference to 'list' is ambiguous
    const list& assignment
          ^
/Library/Developer/CommandLineTools/usr/include/c++/v1/list:805:28: note: candidate found by name lookup is 'std::__1::list'
    class _LIBCPP_TEMPLATE_VIS list
    ^
/usr/local/include/boost/python/list.hpp:57:7: note: candidate found by name lookup is 'boost::python::list'
    class list : public detail::list_base
    ^
2 errors generated.
make[2]: *** [CMakeFiles/dlib_.dir/src/other.cpp.o] Error 1
error: cmake build failed!
this change fixes it.
PLEASE DOUBLE CHECK ME - I have not read this code or code in c++ recently... :)
I see the fix in place already :D
| gharchive/pull-request | 2018-04-11T13:56:21 | 2025-04-01T06:38:20.963678 | {
"authors": [
"jaroslawk"
],
"repo": "davisking/dlib",
"url": "https://github.com/davisking/dlib/pull/1253",
"license": "BSL-1.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2074797644 | Cannot access command palette from fullscreen
Hi Davis,
I can't access the command palette from fullscreen mode. Let me know if there's other info you need that would help solve this. Thanks!
Also: I was trying to access the palette because I wanted to split the screen to see two notes at once. I left fullscreen mode, split the screen, and reactivated fullscreen. One of the notes closed. So it seems that you can't split the screen in fullscreen mode. Let me know if you'd like me to open a separate issue thread for this. Thanks again!
Hey, thanks for opening the issue. To achieve Fullscreen Mode I am just calling requestFullscreen on the Editor element. Since Obsidian places the command palette on a completely separate div that is not a child of the editor element, there is no way to make it appear in fullscreen mode as it is implemented at the moment. This also explains why you can only fullscreen one editor, as only one element can be fullscreen.
To solve these issues we'd have to fullscreen the complete Obsidian window and do some CSS tricks to make the editor appear fullscreen. I tried some things, but somehow position: fixed on the editor is not working. I suppose Obsidian is using one of these properties https://stackoverflow.com/a/52937920 on one of the parent nodes. But I could not identify it quickly.
So fixing this requires some more thought and experimentation. I have thus postponed it to a later release.
Gotcha. Thanks for the explanation. For now, I'm using fullscreen with the Minimal theme and Hider plugin. That's what I was doing before, and it works fine.
Hey 👋 in the meantime I have completely rewritten the fullscreen mode, which fixes this issue as well. If any problems remain, feel free to open an issue.
| gharchive/issue | 2024-01-10T17:03:07 | 2025-04-01T06:38:20.967381 | {
"authors": [
"davisriedel",
"seldstein"
],
"repo": "davisriedel/obsidian-typewriter-mode",
"url": "https://github.com/davisriedel/obsidian-typewriter-mode/issues/44",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2382055629 | 🛑 Melody is down
In 5ebaceb, Melody (https://melody.red-mirror.com/) was down:
HTTP code: 0
Response time: 0 ms
Resolved: Melody is back up in 5921703 after 8 minutes.
| gharchive/issue | 2024-06-30T02:42:09 | 2025-04-01T06:38:20.970159 | {
"authors": [
"davorg"
],
"repo": "davorg/uptime",
"url": "https://github.com/davorg/uptime/issues/477",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
320749044 | Fileprovider.Watch("") Does not work as expected
Hello again, I've been using this in-memory file magic some more, and I've found that the root directory (in this case I mean the root directory of any given provider) does not function properly: I cannot watch for changes in this root directory, for example; I get a null argument exception, which I don't get with the physical file provider.
It's quite unsettling for me that my tests have completely different behaviors in this regard than my actual physical file provider.
[Fact]
public async Task GetFilesToBackupQuery_NewApproach_FilewatchIsApplied()
{
// Arrange
var rootLocation = "/RAP";
var directory = new InMemoryDirectory(rootLocation);
var fp = new InMemoryFileProvider(directory);
var watcher = fp.Watch(""); // This line throws, but i expect it to watch rootLocation
var changeMe = false;
watcher.RegisterChangeCallback((state) => { changeMe = true; }, new object());
// Act
directory.AddFile(rootLocation, new StringFileInfo("Test", "Test.txt"));
await Task.Delay(2000);
// Assert
changeMe.Should().Be(true);
}
Interesting. Does Watch("/") work?
It doesn't throw if I use it with "/"; however, this is different from how the physical file provider works (:
It also didn't register any change when I added that file a couple of lines later (:
What do you think?
Sounds like its a bug if its not behaving consistently with how Watch works with other providers like the PhysicalFileProvider.
I'd welcome a PR or added test coverage in this area.
Otherwise it may be a while until I can look at it, but I'll add it to my list!
You can see there are various tests for watch here: https://github.com/dazinator/Dazinator.AspNet.Extensions.FileProviders/blob/master/src/Dazinator.AspNet.Extensions.FileProviders.Tests/InMemoryFileProviderTests.cs#L215
These were produced at the time based on the physical file watcher tests.
However I can't see one that tests watching for a new file - only tests are for watching changes to an existing file.
It would be interesting to see how PhysicalFileWatcher handles that.
Also I am pretty sure watching("") used to be correct, are you using asp.net core 1.X or 2.X ? Perhaps things have changed with physicalfileprovider in 2.X compared to the 1.X version?
My manager has agreed to me spending some of my work time doing a PR, so I'll see what I can figure out (:
I'm using 2.X of ASP.NET Core (:
After digging around in your code a bit, it seems to me that this code was never designed to support watching directories, only files, and the tests seem to back this theory since there aren't any that watch a directory (: So I think it's gonna take a bit of a redesign. This whole filters concept seems to work really nicely for checking if a file has changed, but I can't really make it work with directories, since that's not what it's designed for, sadly.
I'm totally down for helping, but I think you might need to make some design decisions (:
From what I've learned so far it seems that we can check if a path has a file extension (.txt etc.) and if it does not then we can add a "/" wildcard to make the globbing work, but then we get another design issue later when it tries to notify the watcher, because it cannot find a watcher with the path "/", since the watcher would be watching "".
I'm gonna keep going for a while here, I just wanted to put down these thoughts while they were still fresh!
I am up for changes in design that facilitate the end goal of mirroring physicalfileprovider behaviours in terms of watching.
It would be good initially just to get a few failing tests up that we want to pass, and that PhysicalFileProvider passes with - if you get a chance to add a few such tests we can then discuss any design changes in terms of making those tests pass.
Closing here. Thanks for this, it was my first contribution to an OS framework, and it was a fun and educational experience (:
@dazinator Just one last comment: any plans on making a new release, or should I just use the unstable version for a while? (:
If you can use the latest unstable nuget package for now that would be great. I will issue a new release but I want to ensure there has been some time for this change to sink in - if you hit any further issues please let me know.
| gharchive/issue | 2018-05-07T10:11:36 | 2025-04-01T06:38:21.014147 | {
"authors": [
"RPaetau",
"dazinator"
],
"repo": "dazinator/Dazinator.AspNet.Extensions.FileProviders",
"url": "https://github.com/dazinator/Dazinator.AspNet.Extensions.FileProviders/issues/21",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |
788415109 | Incorrect format of generated binary literals
System information:
Operating system (distribution) and version: any
DBeaver version: 7.2.3 and earlier
Java version: any
Additional extensions: none
Connection specification:
Database name and version: HANA Cloud (4.0), also earlier
Driver name: com.sap.db.jdbc.Driver, 2.7.9 and earlier
Do you use tunnels or proxies (SSH, SOCKS, etc)? no
Describe the problem you're observing:
In short: Binary literals are generated as 0x<value> instead of X'<value>', where <value> is the value in hexadecimal format.
Long: Generated binary literals use the non-standard format 0xAABB. However, the SQL standard specifies that X'AABB' should be used instead. Some databases (like MariaDB) still support the non-standard format, probably for compatibility reasons, but others (like HANA) don't (HANA actually treats 0xAABB as a hexadecimal integer value).
Snippet from the ANSI SQL '92 Standard:
<hex string literal> ::=
X <quote> [ <hexit>... ] <quote>
[ { <separator>... <quote> [ <hexit>... ] <quote> }... ]
HANA 2.0.5 docs: https://help.sap.com/viewer/4fe29514fd584807ac9f2a04f6754767/2.0.05/en-US/20a1569875191014b507cf392724b7eb.html
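For illustration, the standard literal can be produced from raw bytes like this (a sketch; in practice parameter binding would avoid generating literals at all):
def sql_binary_literal(value: bytes) -> str:
    """Render bytes as a standard SQL hex string literal, e.g. X'AABB'."""
    return "X'" + value.hex().upper() + "'"

print(sql_binary_literal(b"\xaa\xbb"))  # X'AABB' (instead of the non-standard 0xAABB)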
Steps to reproduce, if exist:
Create a table with a binary primary key.
Go to table data and try to delete a row. Save and refresh the data. The row is still there.
Alternatively, go to table data and right click a row, then generate delete query. Run the generated query to get the following error:
SQL Error [266] [07006]: SAP DBTech JDBC: [266] (at 43): inconsistent datatype: INT type is not comparable with VARBINARY type.: line 2 col 12
verified
| gharchive/issue | 2021-01-18T16:48:12 | 2025-04-01T06:38:21.082306 | {
"authors": [
"alumni",
"uslss"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/11036",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
911427566 | Function header and trailler missing for PostgreSQL
I suspect this is similar to https://github.com/dbeaver/dbeaver/issues/3892 but it impacts PostgreSQL versions well above 8.4.
System information:
Windows
Tested on:
DBeaver 7.3.0
DBeaver 21.1.0
Connection specification:
Tested on:
PostgreSQL 9.5 and 10.10:
PostgreSQL 9.5.23 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit
PostgreSQL 10.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-36), 64-bit
Driver name
PostgreSQL JDBC Driver
Do you use tunnels or proxies (SSH, SOCKS, etc)?
Happens both on and off SSH.
Describe the problem you're observing:
Sometimes when loading a function in PostgreSQL you do not get the header.
You end up with:
Whereas it should look like:
Also missing from the end is:
$function$;
Steps to reproduce, if exist:
Open up a PostgreSQL connection.
Open the functions list in a schema with functions (more than 1 would show the results best).
Open up (view function) one of the functions in the schema. This should load as expected, close the function.
Leave DBeaver running for at least an hour with no interaction.
Open up one of the functions that was not opened in step 3. This will not load the header.
Opening up the same function that was opened in step 3 still loads correctly.
Workaround:
Restart of DBeaver.
Or just Disconnect the connection and reconnect (Invalidate/Reconnect does not seem to be enough).
Include any warning/errors/backtraces from the logs
Logs seem to show:
2021-06-04 12:50:21.729 - Error reading procedure body
org.postgresql.util.PSQLException: This connection has been closed.
at org.postgresql.jdbc.PgConnection.checkClosed(PgConnection.java:767)
at org.postgresql.jdbc.PgConnection.prepareStatement(PgConnection.java:1659)
at org.postgresql.jdbc.PgConnection.prepareStatement(PgConnection.java:373)
at org.jkiss.dbeaver.model.impl.jdbc.exec.JDBCConnectionImpl.prepareStatement(JDBCConnectionImpl.java:244)
at org.jkiss.dbeaver.model.impl.jdbc.JDBCUtils.queryString(JDBCUtils.java:624)
at org.jkiss.dbeaver.ext.postgresql.model.PostgreProcedure.getObjectDefinitionText(PostgreProcedure.java:400)
at org.jkiss.dbeaver.ui.editors.sql.SQLSourceViewer.getSourceText(SQLSourceViewer.java:85)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditorNested$ObjectDocumentProvider$1.lambda$0(SQLEditorNested.java:271)
at org.jkiss.dbeaver.model.exec.DBExecUtils.tryExecuteRecover(DBExecUtils.java:169)
at org.jkiss.dbeaver.ui.editors.sql.SQLEditorNested$ObjectDocumentProvider$1.run(SQLEditorNested.java:269)
at org.jkiss.dbeaver.model.runtime.AbstractJob.run(AbstractJob.java:105)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
thanks for the report
closed as the duplicate of #12649
| gharchive/issue | 2021-06-04T11:58:12 | 2025-04-01T06:38:21.091825 | {
"authors": [
"HeikkiVesanto",
"uslss"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/12735",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
953193262 | IBMi DB2 / AS400 - Auto Generated / Auto Increment not correctly showing
System information:
Operating system (distribution) and version:
Windows 10 / 64bit
DBeaver version:
21.0.1 and 21.0.2. Also verified issue in 6.0.0
Additional extensions:
None
Connection specification:
Database name and version:
com.ibm.as400.access.AS400JDBCDriver, jt400-10.5.jar (10.5)
Driver name:
JT400 / com.ibm.as400.access.AS400JDBCDriver
Do you use tunnels or proxies (SSH, SOCKS, etc)?
No
Describe the problem you're observing:
When viewing the column properties of the table, it is not identifying identity columns properly and does not show that they auto-increment.
Example:
You can see the fields are not marked as auto-incrementing, even though the column is an identity column; the native ACS viewer from IBM shows it correctly:
Steps to reproduce, if exist:
Double-click a table with an identity column in DBeaver to bring up that table's property window.
Include any warning/errors/backtraces from the logs
No warnings.
Hello @xantari
Please add table DDL info for tests. I can't reproduce it for now for DB2 LUW.
This is for DB2 for IBMi, which is a different DB2 dialect than LUW and uses a different JDBC driver than the LUW version.
Here is the DDL:
CREATE TABLE ARRTFLIB.CEACTIVITYAPIUSERS FOR SYSTEM NAME CEACTAPIU (
IDSPROVIDERID FOR COLUMN IDSPROID INTEGER GENERATED ALWAYS AS IDENTITY (
START WITH 1 INCREMENT BY 1
NO MINVALUE NO MAXVALUE
NO CYCLE NO ORDER
NO CACHE )
,
CLIENTID VARCHAR(200) CCSID 37 NOT NULL ,
ALLSPONSORACCESS FOR COLUMN ALLSPAXS SMALLINT NOT NULL DEFAULT 0 ,
CONSTRAINT ARRTFLIB.Q_ARRTFLIB_CEACTAPIU_IDSPROID_00001 PRIMARY KEY( IDSPROVIDERID ) )
RCDFMT CEACTAPIU ;
LABEL ON TABLE ARRTFLIB.CEACTIVITYAPIUSERS
IS '-CEACTIVITYAPIUSERS' ;
LABEL ON COLUMN ARRTFLIB.CEACTIVITYAPIUSERS
( IDSPROVIDERID TEXT IS 'Identity Server PoviderID' ,
CLIENTID TEXT IS 'Client ID' ,
ALLSPONSORACCESS TEXT IS 'All sponsor access flag' ) ;
Ok, thanks for the bug report.
For AS400 DBeaver shows only what the driver gives. So maybe this is a driver issue.
We don't have a DB2 AS400 test environment for testing, unfortunately. Therefore we can't fix it now. Maybe someday.
@xantari This driver can be very sensitive to its properties, and changing their default values can fix some issues for some people but break something for others. So my advice to you for now is to try carefully tweaking them (or first looking in the documentation for a needed property) to test if it can be changed.
@Matvey16 Thanks, I did some experimentation and found a result that does show the auto increment properties properly. But then it messes up the column retrieval information.
Here is what I had:
metadata source: 0
translate binary: true
date format: iso
time format: iso
extended metadata: false
With the above settings you get the column names in addition to the column text underneath the column as follows when you do a select * from table:
To fix the identity column information issue I then changed it to the following:
metadata source: 1
translate binary: true
date format: iso
time format: iso
extended metadata: true
So lines #1 and #5 above were changed. You cannot leave metadata source at 0 and just set "extended metadata" to true, as that still doesn't allow you to view the identity column information, even though the documentation says it should.
The problem now, though, is that with #1 and #5 above changed, I get this when viewing that same table as above:
As you can see in the above image, though I've fixed the issue with the identity column property info now being displayed in DBeaver, I have completely lost the column names on the result sets.
So a bit more experimentation.
Used the following driver properties:
metadata source: 1
translate binary: true
date format: iso
time format: iso
extended metadata: true
When I have this unmarked:
I get this:
When marking it:
I get the column names back:
Now, I wondered how to get the column headers back, and I noticed this property in DBeaver:
It was already marked, and it should show the column description/labels in the header, right? Kinda wondering if this is now a DBeaver bug.
Closing this in favor of #13335 since the original issue this report is about is fixed, but a separate issue has now cropped up.
You may be able to get a free IBM i account at https://pub400.com/.
| gharchive/issue | 2021-07-26T18:58:06 | 2025-04-01T06:38:21.108681 | {
"authors": [
"LonwoLonwo",
"Matvey16",
"bdietz400",
"xantari"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/13322",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1190301921 | No shortcut to open a table from the References panel
Is your feature request related to a problem? Please describe.
When following references between multiple tables, the References panel comes in useful to go from table A to table B. However, if table C has a foreign key referring to table B, it is then inconvenient to get to C.
I see no way to do that with DBeaver 22.0.1.
Describe the solution you'd like
A simple way would be to allow opening the References panel in its own full tab, so that we can then use References again.
Describe alternatives you've considered
Perhaps a more elaborate solution like Toad's Master-Detail Browser would be even better.
Hello @Chealer,
Sorry for the late response.
I'm unsure if I understand you correctly. Can you please describe (or show using a video) a use case for your feature request?
Nothing to be sorry about @ShadelessFox
A use case would involve 2 entities indirectly linked with foreign keys, through a third entity.
For example, an organization can have members, and a member can have skills. If a table for skills has a foreign key to a table for members, and the members table has a foreign key to an ORGANIZATION table, it would be great to be able to quickly go from SKILL to ORGANIZATION through MEMBER, by selecting SKILL's foreign key, then MEMBER's foreign key.
So, basically, you want this combo menu to show references from the table that is currently shown in that panel?
I'm not sure what data that panel should display in such a case. Should it result in a query that looks something like this?
SELECT * FROM skills WHERE member_id IN (SELECT member_id FROM members WHERE organization_id = <selected row>);
Thanks.
That is not really what I wanted @ShadelessFox, but please excuse me and disregard my previous comment. I haven't used DBeaver in a while and was confused. I retested, and here is the actual use case which is problematic:
For example, an organization can have members, and a member can have skills. If a table for skills has a foreign key to a table for members, and the members table has a foreign key to an ORGANIZATION table, it would be great to be able to quickly go from ORGANIZATION to SKILL through MEMBER, by selecting MEMBER's reverse foreign key from ORGANIZATION, then SKILL's reverse foreign key. This would make it possible to quickly find the skills present in an organization.
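To make the use case concrete, here is a tiny self-contained illustration (Python's sqlite3, with a hypothetical schema — the table and column names are invented) of the ORGANIZATION → MEMBER → SKILL chain and the kind of query those two References-panel hops would stand in for:

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE organization (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE member (id INTEGER PRIMARY KEY,
                         organization_id INTEGER REFERENCES organization(id));
    CREATE TABLE skill (id INTEGER PRIMARY KEY,
                        member_id INTEGER REFERENCES member(id),
                        name TEXT);
    INSERT INTO organization VALUES (1, 'Acme');
    INSERT INTO member VALUES (10, 1);
    INSERT INTO skill VALUES (100, 10, 'SQL');
""")

# The rows one would like to reach in two hops from a selected organization row:
rows = con.execute("""
    SELECT skill.*
    FROM skill
    JOIN member ON member.id = skill.member_id
    WHERE member.organization_id = ?
""", (1,)).fetchall()
print(rows)  # [(100, 10, 'SQL')]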
I don't know the best way to do this, but one way would be to add a button in the References panel allowing to make it a full-fledged tab (rather than a sub-tab). There could be some "Separate into new tab" button. To achieve the above, I would go to ORGANIZATION, select a row, open the References panel, select a member in the References panel, use the new "Separate into new tab" button, and from there open the References panel again.
By the way, while Toad's Master Detail can serve as inspiration, its design is not that intuitive, so I would not advise to replicate it exactly.
The provided solution is actually pretty useful, so we can stick with it.
I was unable to take a look at Toad's Master-Detail browser. Can you please, if possible, provide footage that shows its functionality?
There is a workaround that involves using the References tab in the metadata editor (it's not the same as the References panel):
| gharchive/issue | 2022-04-01T21:49:59 | 2025-04-01T06:38:21.116784 | {
"authors": [
"Chealer",
"ShadelessFox"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/16049",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1243165763 | is there a way not to have to reinstall plugins after every updates ?
Hi,
I'm using the DBeaver ZIP version on Windows.
Every time I migrate to a newer version I have to re-download and reinstall the plugins.
Fortunately I'm using only one.
I get this message, for instance, with the Office add-on:
Thanks for the suggestion.
Closed as a duplicate of #5317.
| gharchive/issue | 2022-05-20T13:39:45 | 2025-04-01T06:38:21.118890 | {
"authors": [
"itphonim",
"uslss"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/16555",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
2345274530 | After connecting to Doris for a while, it can no longer connect
Description
After connecting to Doris the first time, it becomes impossible to connect to Doris again after a while. The Doris cluster has no problems at all. After restarting the computer and opening DBeaver, everything works again, but after a while the connection failure reappears; disconnecting and reconnecting also fails to reach Doris. How can this problem be solved?
DBeaver Version
Community Edition 24.1.0
Operating System
Windows 11
Database and driver
No response
Steps to reproduce
Doris version 2.1.2, DBeaver Community Edition 24.1.0
Additional context
No response
Resolved.
| gharchive/issue | 2024-06-11T03:02:30 | 2025-04-01T06:38:21.121912 | {
"authors": [
"a582687883"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/34324",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
338944714 | I can't modify a cell in the result set. Earlier this function worked. It is a very necessary function, and it does not work.
Can't reproduce. Please provide more details.
What is your database?
How do you edit cells (in the table editor or in custom query results)?
Are you sure that exactly the same results were editable in earlier versions?
Sorry. I rechecked and confirmed that I can edit cells in query results for MySQL and PostgreSQL.
I can't edit cells for Amazon Redshift after updating the driver for this DB.
It is a driver issue (I think), or an interaction between DBeaver and the Amazon Redshift driver.
I tried editing a cell in the custom query results.
This was fixed in 5.1.5.
| gharchive/issue | 2018-07-06T13:43:15 | 2025-04-01T06:38:21.124340 | {
"authors": [
"nkiseev",
"serge-rider"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/3763",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
346571521 | SQL Editor Issue
Whenever I press Enter multiple times and then press Backspace, scroll bars appear on the editor screen; it's irritating after a few occurrences.
Please check workaround in #3916
| gharchive/issue | 2018-08-01T12:07:14 | 2025-04-01T06:38:21.126168 | {
"authors": [
"khushalc",
"serge-rider"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/3900",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
161077493 | Additional Sound Notifications
Hey Serge,
I see that in 3.6.7 you added sound notification support "beep after query finish".
Is it possible we can expand on this capability, for example rather than beep after each query could we have a notification sound after the run of all queries within one SQL editor?
Also could there be a different notification sound based on the outcome per query/per SQL script.
See example notification sounds below:
Success: http://www.soundsnap.com/node/92951
Failed: http://www.soundsnap.com/error_tone_chime_blastwavefx_16379
This would really help me and hopefully other people who run many queries and want to monitor them without always having to be at their desk.
Thanks,
Ben
Agreed, that would be a good feature.
Hello All, +1 vote for this...
Hello All, I also +1 vote for this.
This small feature can contribute so much to efficient time utilization while waiting for long-running SQL.
+1 for this!
+1
I would also suggest the ability to trigger a macOS notification on query completion too. (Should that be a separate issue?) I sometimes need to kick off a long-running query and then I go do work in some other app while I'm waiting. For various reasons, sound notifications aren't always viable.
+1
I would also suggest the ability to trigger a macOS notification on query completion too. (Should that be a separate issue?) I sometimes need to kick off a long-running query and then I go do work in some other app while I'm waiting. For various reasons, sound notifications aren't always viable.
i don't even know how to set the notification:(
What's the latest on this feature? Is it still planned?
+1
Don't forget us!
Any updates on this? Has it been abandoned?
| gharchive/issue | 2016-06-19T15:17:18 | 2025-04-01T06:38:21.132182 | {
"authors": [
"SirBenJammin",
"andreescastano",
"dburtonDRW",
"earsonlau",
"eng543",
"lalato",
"serge-rider",
"shungabubus",
"xenago",
"yonisade"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/546",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
680401425 | DBeaver: When editing an INTEGER value in a grid, the cell is not opened if you start typing a minus or plus sign
DBeaver - Version 7.1.4 CE
DBeaver driver: MS SQL Server / Microsoft Driver (the new Microsoft Driver)
Operating System: Windows 7 / Windows 8.1 / Windows 10
Database Server: Microsoft SQL Express 2014, 2016, 2017
When editing an INTEGER value in a grid, the cell is not opened if I start typing a minus "-" or a plus "+"; it works if I enter a number "0".."9" instead.
The grid ought to put the INTEGER cell into edit mode not only when I press "0".."9" on the keyboard; pressing "+" or "-" ought to open the cell in edit mode too.
Create a test table and fill it with test data:
CREATE TABLE TestInteger (Id INTEGER, Value INTEGER);
GO
INSERT INTO TestInteger
VALUES (1, 10), (2, 20);
View the table data TestInteger in a grid.
Click on the 1st row Value column (with the value of 10).
Press number "5" on the keyboard.
The cell is changed to edit mode. This is OK.
Click on the 2nd row Value column (with the value of 20).
Press minus sign (the "-" character) or the plus sign (the "+" character) on the keyboard.
Nothing happens. This is an ERROR.
My guess is that the "-" and the "+" character should be added to characters that change the grid cell into edit mode.
Thanks for the bug report.
Fixed
"+" still doesn't put the int cell into edit mode.
Keypad button handling was added.
verified
| gharchive/issue | 2020-08-17T17:26:06 | 2025-04-01T06:38:21.138657 | {
"authors": [
"kseniiaguzeeva",
"pdanie",
"serge-rider",
"uslss"
],
"repo": "dbeaver/dbeaver",
"url": "https://github.com/dbeaver/dbeaver/issues/9567",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
822385727 | [BUG] helper command fails due to losreader problem
Describe the bug
When running the 2nd raiderDelay helper command, it fails due to array dimension error in the losreader.py file.
To Reproduce
Steps to reproduce the behavior:
Command used
raiderDelay.py --date 20200103 --time 23:00:00 -b 39 40 -79 -78 --model GMAO --zref 15000 --heightlvs 0 100 200 -v
Error Output
Weather model GMAO is available from 2014-02-20 00:00:00-Present
WARNING: Rounded given hour from 23 to 0
Traceback (most recent call last):
File "/opt/anaconda3/envs/RAiDER/bin/raiderDelay.py", line 4, in <module>
__import__('pkg_resources').run_script('RAiDER==0.0.1', 'raiderDelay.py')
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/pkg_resources/__init__.py", line 665, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1463, in run_script
exec(code, namespace, namespace)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/EGG-INFO/scripts/raiderDelay.py", line 12, in <module>
parseCMD()
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 196, in parseCMD
_tropo_delay(new_args)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 208, in _tropo_delay
(_, _) = tropo_delay(args_copy)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/delay.py", line 153, in tropo_delay
los = getLookVectors(los, lats, lons, hgts, zref)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/losreader.py", line 338, in getLookVectors
look_vecs = _getZenithLookVecs(lat, lon, hgt, zref=zref)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/losreader.py", line 320, in _getZenithLookVecs
zenLookVecs = (np.array((e, n, u)).T * (zref - heights)[..., np.newaxis])
ValueError: operands could not be broadcast together with shapes (2,3) (3,1)
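For anyone reading along, here is a minimal sketch of the broadcasting mismatch the traceback shows (illustration only, with made-up values for two query points and the three requested height levels — this is not the fix that was pushed):

import numpy as np

# Two query points, three height levels (matching --heightlvs 0 100 200).
e = np.array([0.1, 0.2])
n = np.array([0.3, 0.4])
u = np.array([0.9, 0.8])
heights = np.array([0.0, 100.0, 200.0])
zref = 15000.0

enu = np.array((e, n, u)).T                 # shape (2, 3): one ENU unit vector per point
scale = (zref - heights)[..., np.newaxis]   # shape (3, 1): one scale per height level
# enu * scale                               # ValueError: shapes (2, 3) and (3, 1), as above

# One way to combine per-point vectors with per-level scales is to broadcast into an
# explicit (levels, points, 3) array:
look_vecs = enu[np.newaxis, :, :] * scale[:, np.newaxis, :]
print(look_vecs.shape)                      # (3, 2, 3)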
@leiyangleon yes I saw this bug last night when I was testing as well. I think I have the fix in place, will push asap.
@jlmaurer the command still fails but with a new error now:
Traceback (most recent call last):
File "/opt/anaconda3/envs/RAiDER/bin/raiderDelay.py", line 4, in <module>
__import__('pkg_resources').run_script('RAiDER==0.0.1', 'raiderDelay.py')
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/pkg_resources/__init__.py", line 665, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1463, in run_script
exec(code, namespace, namespace)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/EGG-INFO/scripts/raiderDelay.py", line 12, in <module>
parseCMD()
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 154, in parseCMD
args = checkArgs(args, p)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/checkArgs.py", line 42, in checkArgs
lat, lon, llproj, bounds, flag, pnts_file = readLL(args.query_area)
File "/opt/anaconda3/envs/RAiDER/lib/python3.7/site-packages/RAiDER-0.0.1-py3.7-macosx-10.9-x86_64.egg/RAiDER/llreader.py", line 33, in readLL
fname = ' '.join(*args)
TypeError: sequence item 0: expected str instance, float found
@leiyangleon This is actually not the same bug, but I can't reproduce the error either way because I'm getting a KeyError in the _load_model_level function in gmao.py. Can you tell me if there is a quick fix for this?
(RAiDER) jlmd9g@rt01jlmd9g tmp1 % raiderDelay.py --date 20200103 --time 23:00:00 -b 39 40 -79 -78 --model GMAO --zref 15000 --heightlvs 0 100 200 -v
Weather model GMAO is available from 2014-02-20 00:00:00-Present
WARNING: Rounded given hour from 23 to 0
ERROR: Unable to save weathermodel to file
Traceback (most recent call last):
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/models/gmao.py", line 143, in _fetch
writeWeatherVars2NETCDF4(self, lats, lons, h, q, p, t, outName=out)
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/utilFcns.py", line 730, in writeWeatherVars2NETCDF4
nc_outfile = write2NETCDF4core(nc_outfile, dimension_dict, dataset_dict, tran, mapping_name='WGS84')
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/utilFcns.py", line 801, in write2NETCDF4core
dataset_dict[data]['dataset'][np.isnan(dataset_dict[data]['dataset'])] = FillValue
TypeError: only integer scalar arrays can be converted to a scalar index
Traceback (most recent call last):
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/bin/raiderDelay.py", line 4, in
__import__('pkg_resources').run_script('RAiDER==0.0.1', 'raiderDelay.py')
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/pkg_resources/__init__.py", line 650, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/pkg_resources/__init__.py", line 1446, in run_script
exec(code, namespace, namespace)
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/EGG-INFO/scripts/raiderDelay.py", line 12, in
parseCMD()
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 196, in parseCMD
_tropo_delay(new_args)
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/runProgram.py", line 208, in _tropo_delay
(_, _) = tropo_delay(args_copy)
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/delay.py", line 113, in tropo_delay
weather_model_file = prepareWeatherModel(
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/processWM.py", line 91, in prepareWeatherModel
f = weather_model.load(
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/models/weatherModel.py", line 201, in load
self.load_weather(*args, **kwargs)
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/models/gmao.py", line 156, in load_weather
self._load_model_level(f)
File "/Users/jlmd9g/software/miniconda3/envs/RAiDER/lib/python3.8/site-packages/RAiDER-0.0.1-py3.8-macosx-10.9-x86_64.egg/RAiDER/models/gmao.py", line 168, in _load_model_level
h = np.array(f.variables['H'][:])
KeyError: 'H'
@jlmaurer I'm not sure what the cause is, as I have never seen that error before. I would suggest trying other models. I suspect this is not related only to GMAO, as I haven't touched the GMAO code for months...
| gharchive/issue | 2021-03-04T18:23:45 | 2025-04-01T06:38:21.145291 | {
"authors": [
"jlmaurer",
"leiyangleon"
],
"repo": "dbekaert/RAiDER",
"url": "https://github.com/dbekaert/RAiDER/issues/273",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1713170223 | Training yolact on resnet18 and got some error
Hi,
I was training the YOLACT model on a ResNet-18 backbone and it was going well, but then I suddenly got an error and the training aborted:
[ 2] 32930 || B: 3.996 | C: 4.955 | M: 4.408 | S: 1.036 | T: 14.395 || ETA: 1 day, 4:21:46 || timer: 0.127
[ 2] 32940 || B: 4.020 | C: 5.072 | M: 4.458 | S: 1.104 | T: 14.655 || ETA: 1 day, 4:23:08 || timer: 0.132
[ 2] 32950 || B: 4.063 | C: 5.246 | M: 4.552 | S: 1.199 | T: 15.060 || ETA: 1 day, 4:23:34 || timer: 0.151
[ 2] 32960 || B: 4.057 | C: 5.435 | M: 4.608 | S: 1.271 | T: 15.371 || ETA: 1 day, 4:23:31 || timer: 0.158
[ 2] 32970 || B: 4.071 | C: 5.574 | M: 4.675 | S: 1.306 | T: 15.626 || ETA: 1 day, 4:24:23 || timer: 0.166
[ 2] 32980 || B: 4.033 | C: 5.746 | M: 4.758 | S: 1.381 | T: 15.918 || ETA: 1 day, 4:26:38 || timer: 0.140
[ 2] 32990 || B: 4.031 | C: 5.817 | M: 4.741 | S: 1.411 | T: 15.999 || ETA: 1 day, 4:25:21 || timer: 0.139
[ 2] 33000 || B: 4.055 | C: 5.763 | M: 4.799 | S: 1.412 | T: 16.028 || ETA: 1 day, 4:25:51 || timer: 0.128
Traceback (most recent call last):
File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/queues.py", line 245, in _feed
obj = _ForkingPickler.dumps(obj)
File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 364, in reduce_storage
shared_cache[cache_key] = StorageWeakRef(storage)
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 65, in setitem
self.free_dead_references()
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 70, in free_dead_references
if storage_ref.expired():
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 35, in expired
return torch.Storage._expired(self.cdata) # type: ignore[attr-defined]
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 757, in _expired
return eval(cls.__module__)._UntypedStorage._expired(*args, **kwargs)
AttributeError: module 'torch.cuda' has no attribute '_UntypedStorage'
Traceback (most recent call last):
File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/queues.py", line 245, in _feed
obj = _ForkingPickler.dumps(obj)
File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 364, in reduce_storage
shared_cache[cache_key] = StorageWeakRef(storage)
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 65, in setitem
self.free_dead_references()
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 70, in free_dead_references
if storage_ref.expired():
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 35, in expired
return torch.Storage._expired(self.cdata) # type: ignore[attr-defined]
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 757, in _expired
return eval(cls.__module__)._UntypedStorage._expired(*args, **kwargs)
AttributeError: module 'torch.cuda' has no attribute '_UntypedStorage'
[ 2] 33010 || B: 4.178 | C: 5.768 | M: 4.934 | S: 1.417 | T: 16.296 || ETA: 1 day, 4:49:09 || timer: 0.126
Traceback (most recent call last):
File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/queues.py", line 245, in _feed
obj = _ForkingPickler.dumps(obj)
File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 364, in reduce_storage
shared_cache[cache_key] = StorageWeakRef(storage)
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 65, in setitem
self.free_dead_references()
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 70, in free_dead_references
if storage_ref.expired():
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 35, in expired
return torch.Storage._expired(self.cdata) # type: ignore[attr-defined]
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 757, in _expired
return eval(cls.__module__)._UntypedStorage._expired(*args, **kwargs)
AttributeError: module 'torch.cuda' has no attribute '_UntypedStorage'
Traceback (most recent call last):
File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/queues.py", line 245, in _feed
obj = _ForkingPickler.dumps(obj)
File "/home/gangwa/miniconda3/lib/python3.9/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 364, in reduce_storage
shared_cache[cache_key] = StorageWeakRef(storage)
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 65, in setitem
self.free_dead_references()
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 70, in free_dead_references
if storage_ref.expired():
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/multiprocessing/reductions.py", line 35, in expired
return torch.Storage._expired(self.cdata) # type: ignore[attr-defined]
File "/home/gangwa/miniconda3/lib/python3.9/site-packages/torch/storage.py", line 757, in _expired
return eval(cls.__module__)._UntypedStorage._expired(*args, **kwargs)
AttributeError: module 'torch.cuda' has no attribute '_UntypedStorage'
Does anyone have any idea why I got this after around 1-2 hours of training?
Thanks
This repo has not been updated in 3 years.
It's better for you to train with YOLOv8-seg.
I am able to make it run, but I'm getting very slow training speed.
Solution:
Don't use torch 1.12. Either upgrade or downgrade the version of torch (with a matching CUDA build).
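A minimal sketch of a guard one could add near the top of the training script, assuming (as this thread concludes) that the failure is tied to the torch 1.12 series; nothing here comes from the yolact codebase itself:

import warnings
import torch

# The AttributeError above surfaced inside DataLoader worker processes for users on
# torch 1.12.x, so warn early rather than hours into training.
if torch.__version__.startswith("1.12"):
    warnings.warn(
        f"torch {torch.__version__} has been reported to raise "
        "\"module 'torch.cuda' has no attribute '_UntypedStorage'\" in DataLoader "
        "workers; consider a different torch+CUDA build or num_workers=0 as a workaround."
    )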
| gharchive/issue | 2023-05-17T05:51:36 | 2025-04-01T06:38:21.170281 | {
"authors": [
"abd-gang",
"sdimantsd"
],
"repo": "dbolya/yolact",
"url": "https://github.com/dbolya/yolact/issues/817",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
711315936 | OAuth/IClaimsPrincipal collector
Grab the current IClaimsPrincipal for an ASP.NET application
extract the name and id, using the approved XML namespaces
https://github.com/dbones-labs/auditable/commit/d88e68c6e2a46e7919ef6cc36458f1c36f5f743e
| gharchive/issue | 2020-09-29T17:32:51 | 2025-04-01T06:38:21.171761 | {
"authors": [
"dbones"
],
"repo": "dbones-labs/auditable",
"url": "https://github.com/dbones-labs/auditable/issues/20",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
215679656 | Add support for concrete nested serializers
This is a proposal to add support for editing the fields of nested serializers not on a per-request basis, but at instantiation time.
Here is an example. There is a serializer for users called UserSerializer; it has rather a lot of fields:
class UserSerializer(...):
    class Meta:
        fields = ('id', 'url', 'name', 'email', 'accounts', 'friends', 'most_recent_activity')
The UserSerializer is nested inside another serializer
class MessageSerializer(...):
    from_users = UserSerializer(many=True, read_only=True)
    class Meta:
        fields = ('id', 'url', 'from_users', 'to_users', 'account', 'created_at', 'modified_at')
But we only want a few bits of info for each user in the MessageSerializer, just name, email, id and url. Just enough context to help the front end render without relying on a user lookup.
This proposal is made to solve that situation:
class MessageSerializer(...):
    from_users = UserSerializer(many=True, read_only=True, fields=('id', 'url', 'name', 'email'))
I have already coded something like this up, and I can see how there is some overlap with this project. Enough to justify putting it in, I think. Unfortunately it doesn't directly support the purpose of this project, which is dynamic per-request fields. This is more like dynamic fields at runtime.
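For readers curious what such an API can look like, here is a sketch in the spirit of the well-known DRF "dynamic fields" mixin — not the author's actual implementation, and the model references are placeholders:

from rest_framework import serializers

class FieldsKwargMixin:
    """Accept a `fields=(...)` kwarg at instantiation time and drop everything else."""
    def __init__(self, *args, **kwargs):
        fields = kwargs.pop("fields", None)
        super().__init__(*args, **kwargs)
        if fields is not None:
            for name in set(self.fields) - set(fields):
                self.fields.pop(name)

class UserSerializer(FieldsKwargMixin, serializers.ModelSerializer):
    class Meta:
        model = None  # placeholder: the project's User model
        fields = ('id', 'url', 'name', 'email', 'accounts', 'friends')

class MessageSerializer(serializers.ModelSerializer):
    # Only the nested fields this serializer actually needs:
    from_users = UserSerializer(many=True, read_only=True,
                                fields=('id', 'url', 'name', 'email'))
    class Meta:
        model = None  # placeholder: the project's Message model
        fields = ('id', 'url', 'from_users', 'to_users', 'account')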
I'm not interested in this feature anymore. Nested ModelSerializers pay a significant penalty when instantiated. This was my main use-case. Instead I create a normal Serializer for my nested serializers. The performance benefits are huge.
In case anyone was wondering, it is the get_fields function in ModelSerializer that is particularly taxing.
| gharchive/issue | 2017-03-21T09:46:15 | 2025-04-01T06:38:21.175408 | {
"authors": [
"jtrain"
],
"repo": "dbrgn/drf-dynamic-fields",
"url": "https://github.com/dbrgn/drf-dynamic-fields/issues/13",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2150141605 | Upgrade Jinja2 dependency version specification to address CVE-2024-22195
resolves CVE-2024-22195
Description
CVE-2024-22195 identified an issue in Jinja2 versions <= 3.1.2. As such we've changed our dependency requirement specification to 3.1.3 or greater (but less than 4).
Note: Previously we were using the ~= version specifier. However, due to some issues with ~= we've moved to using >= in combination with <. This gives us the same range that ~= gave us, but avoids a pip resolution issue when multiple packages in an environment use ~= for the same dependency.
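To illustrate the equivalence the note describes (the version bounds here are illustrative — the exact pin lives in the project's dependency files), the packaging library can be used to check that a >=/< pair admits the same releases as the corresponding ~= spec:

from packaging.specifiers import SpecifierSet

tilde = SpecifierSet("~=3.1")        # compatible-release operator: >=3.1, <4
explicit = SpecifierSet(">=3.1,<4")  # same admitted versions, spelled out

for candidate in ("3.0.3", "3.1.2", "3.1.3", "4.0.0"):
    # Both specs accept or reject each candidate identically; the explicit form just
    # avoids the pip resolution quirk when several packages use ~= on the same dependency.
    assert (candidate in tilde) == (candidate in explicit)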
Checklist
[x] I have read the contributing guide and understand what's expected of me
[x] I have signed the CLA
[x] I have run this code in development and it appears to resolve the stated issue
[x] I have opened an issue to add/update docs, or docs changes are not required/relevant for this PR
[x] I have run changie new to create a changelog entry
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 54.04%. Comparing base (c61d318) to head (7b3f164).
Additional details and impacted files
@@ Coverage Diff @@
## main #85 +/- ##
=======================================
Coverage 54.04% 54.04%
=======================================
Files 49 49
Lines 2866 2866
=======================================
Hits 1549 1549
Misses 1317 1317
Flag | Coverage Δ
unit | 54.04% <ø> (ø)
Flags with carried forward coverage won't be shown. Click here to find out more.
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
| gharchive/pull-request | 2024-02-22T23:57:54 | 2025-04-01T06:38:21.185248 | {
"authors": [
"QMalcolm",
"codecov-commenter"
],
"repo": "dbt-labs/dbt-common",
"url": "https://github.com/dbt-labs/dbt-common/pull/85",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1290041562 | [CT-794] Export Lineage Graph as SVG or PDF
Describe the feature
Problem: if we want to generate an image for our Lineage Graph, there is the option to export the graph as PNG, but this image has no quality options.
Example:
To add more flexibility, dbt could allow generating not just a PNG, but an SVG or PDF file with vectorized images. Reference: Scalable Vector Graphics
Describe alternatives you've considered
You can try to add PNG configuration options such as PPI and dimensions, but a scalable image format is a more general solution.
Who will this benefit?
Users who want to use Lineage Graphs in slides or documentation.
Users who want to edit Lineage Graphs using graphic design programs such as Illustrator.
Are you interested in contributing this feature?
Maybe, but I'm not so familiar with the dbt repo.
@nelsoncardenas Sorry for the delay getting back to you!
The logic for the graph export is neatly self-contained in just a few lines of code. We use the cytoscape library's built-in .png() function to create a PNG here:
https://github.com/dbt-labs/dbt-docs/blob/85dec858c5d213699fbc2cefa388ba1e80c94889/src/app/components/graph/graph-viz.js#L210-L214
It looks like the cytoscape library has built-in support for PNG, JPG, and JSON as export options (no SVG): https://js.cytoscape.org/#core/export
But it also looks like someone has developed an extension to the cytoscape library, for SVG exports: https://github.com/kinimesi/cytoscape-svg
Is that something you'd be interested in experimenting with?
@nelsoncardenas @jtcohen6 Is this still open?
@abhijithp05 sorry, I have been busy, and I don't think I'll have any time soon to work on this problem.
@nelsoncardenas I have an issue while running the project related to the assets/css reference.
@nelsoncardenas Can you tell me where to find the export button?
@nelsoncardenas Created a PR for the issue. Please review and merge.
| gharchive/issue | 2022-06-30T12:08:54 | 2025-04-01T06:38:21.193362 | {
"authors": [
"abhijithp05",
"jtcohen6",
"nelsoncardenas"
],
"repo": "dbt-labs/dbt-docs",
"url": "https://github.com/dbt-labs/dbt-docs/issues/283",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1617288479 | ADAP-56: Python 3.11 Support
resolves #524
Description
Add support for Python 3.11
Checklist
[x] I have read the contributing guide and understand what's expected of me
[x] I have signed the CLA
[ ] I have run this code in development and it appears to resolve the stated issue
[x] This PR includes tests, or tests are not required/relevant for this PR
[ ] I have opened an issue to add/update docs, or docs changes are not required/relevant for this PR
[x] I have run changie new to create a changelog entry
We have a better way of doing this.
We have a better way of doing this.
@mikealfare Can you link to the better way if ready please?
We have a better way of doing this.
@mikealfare Can you link to the better way if ready please?
Fair point. Here's the PR we merged: https://github.com/dbt-labs/dbt-spark/pull/818
| gharchive/pull-request | 2023-03-09T13:43:08 | 2025-04-01T06:38:21.198537 | {
"authors": [
"followingell",
"mikealfare"
],
"repo": "dbt-labs/dbt-spark",
"url": "https://github.com/dbt-labs/dbt-spark/pull/676",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1499963785 | A large enterprise moving from Core to Cloud to enable more dbt developers
Contact Details
@boxysean
I have read the dbt Developer Blog contribution guidelines.
[X] I have read the dbt Developer Blog contribution guidelines.
Which of these best describes you?
[ ] I am a dbt Community member or partner contributing to the Developer Blog
[X] I work for dbt Labs and am creating this issue for a community or marketing approved piece.
What is the topic of your post?
This post is a success story targeted towards dbt Core users at large enterprises who are looking to expand their dbt usage by adopting dbt Cloud. It will include key technical challenges and solutions used to solve them that others can follow.
Link to an initial outline.
https://www.notion.so/dbtlabs/848888d520f541a78c11e9e147a31581
Hey @boxysean - this is an awesome topic for a post. We have been wanting to do a guide on moving from Core to Cloud for a while and I still think we should do that, but starting with a single example and going deep makes a ton of sense. Let's plan on you, me and @dave-connors-3 spending some time digging into this in early Jan.
| gharchive/issue | 2022-12-16T10:30:55 | 2025-04-01T06:38:21.202213 | {
"authors": [
"boxysean",
"jasnonaz"
],
"repo": "dbt-labs/docs.getdbt.com",
"url": "https://github.com/dbt-labs/docs.getdbt.com/issues/2592",
"license": "Apache-2.0",
"license_type": "permissive",
"license_source": "github-api"
} |
1039660475 | add an EAX preprint badge
cosmetic
ta very mooch
| gharchive/pull-request | 2021-10-29T15:00:42 | 2025-04-01T06:38:21.212421 | {
"authors": [
"dbuscombe-usgs",
"ebgoldstein"
],
"repo": "dbuscombe-usgs/dash_doodler",
"url": "https://github.com/dbuscombe-usgs/dash_doodler/pull/18",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
2642510359 | thrimshim: use send_file to serve templates from database
I don't claim to understand why this works, but it makes downloading the thumbnail templates (to preview on the thumbnail management page, or to select the "hole" in the advanced crop settings in the editor) way faster. It's something to do with how Flask chunks the memoryview object when serving it.
Applying this changed the timings to download "fiddling.png" from 818ms waiting and 6770ms receiving, to 777ms waiting and 14ms receiving.
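For context, a minimal sketch of the two approaches being compared (the route, helper, and column contents are hypothetical, not wubloader's actual code):

import io
from flask import Flask, send_file

app = Flask(__name__)

def fetch_template_image(name):
    # Stand-in for the real database lookup: drivers typically hand back a memoryview
    # over the stored image bytes. (Not a valid PNG -- placeholder data only.)
    return memoryview(b"\x89PNG fake image bytes")

@app.route("/template/<name>.png")
def template_png(name):
    image = fetch_template_image(name)
    # Wrapping in BytesIO and using send_file avoids Flask iterating the memoryview
    # in tiny chunks, which appears to be what made the responses so slow to receive.
    return send_file(io.BytesIO(bytes(image)), mimetype="image/png")
    # Per the follow-up, simply returning bytes(image) with the right mimetype
    # is apparently enough on its own.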
Resolves #458
It will not spot all Python bugs but running a linter such as Pyflakes before pushing is good practice.
ekim has a better fix (bytes(image) apparently is enough), closing
| gharchive/pull-request | 2024-11-08T00:24:53 | 2025-04-01T06:38:21.214252 | {
"authors": [
"chrusher",
"dcollinsn"
],
"repo": "dbvideostriketeam/wubloader",
"url": "https://github.com/dbvideostriketeam/wubloader/pull/461",
"license": "MIT",
"license_type": "permissive",
"license_source": "github-api"
} |
172833232 | add stronger typing to poetic plotting, fixes #882
Related to #881 and #871
Coverage increased (+0.2%) to 65.67% when pulling 47336cb3de67c6fdff901d0d08834033d0ce758b on tlnagy:pull-request/24f5d936 into 4a3683797227463b2a5bb4f736ce64fc19fd016d on dcjones:master.
Coverage increased (+0.02%) to 65.458% when pulling 47336cb3de67c6fdff901d0d08834033d0ce758b on tlnagy:pull-request/24f5d936 into 4a3683797227463b2a5bb4f736ce64fc19fd016d on dcjones:master.
Wow, this is a pretty action-at-a-distance kind of problem. I guess this fix is warranted though.
Indeed. I would've preferred a different workaround to this (something more like #874), but that was a lucky fix. However, this is a much more robust fix for this error and it looks like the tests pass. Hopefully no one is passing anything too funky to the a and b parameters.
Also, based on your suggestion in #882, I added a test for the ambiguity method error.
Coverage increased (+0.02%) to 65.458% when pulling 8804dde2af4781692cacbdd0333a291ec1762e81 on tlnagy:pull-request/24f5d936 into 4a3683797227463b2a5bb4f736ce64fc19fd016d on dcjones:master.
Coverage increased (+0.02%) to 65.458% when pulling 8804dde2af4781692cacbdd0333a291ec1762e81 on tlnagy:pull-request/24f5d936 into 4a3683797227463b2a5bb4f736ce64fc19fd016d on dcjones:master.
Hopefully no one is passing anything too funky to the a and b parameters.
julia> brightness(x::RGB) = (x.r+x.g+x.b)/3
brightness (generic function with 1 method)
julia> plot([brightness], colorant"black", colorant"white")
ERROR: MethodError: `isless` has no method matching isless(::ColorTypes.RGB{FixedPointNumbers.UFixed{UInt8,8}}, ::ColorTypes.RGB{FixedPointNumbers.UFixed{UInt8,8}})
Closest candidates are:
isless(::DataArrays.NAtype, ::Any)
isless(::Any, ::DataArrays.NAtype)
in plot at /home/shashi/.julia/v0.4/Gadfly/src/poetry.jl:44
haha.
yeah, if someone's using funky a and b then they can open an issue.
| gharchive/pull-request | 2016-08-23T23:44:06 | 2025-04-01T06:38:21.230630 | {
"authors": [
"coveralls",
"shashi",
"tlnagy"
],
"repo": "dcjones/Gadfly.jl",
"url": "https://github.com/dcjones/Gadfly.jl/pull/883",
"license": "mit",
"license_type": "permissive",
"license_source": "bigquery"
} |