Dataset schema: date (string, length 10), nb_tokens (int64, range 60 to 629k), text_size (int64, range 234 to 1.02M), content (string, length 234 to 1.02M)
2018/03/14
722
2,592
<issue_start>username_0: So for various reasons (such as its language-independence) I want to use tensorflow's [saved\_model](https://www.tensorflow.org/programmers_guide/saved_model#apis_to_build_and_load_a_savedmodel) API for saving/loading models. I can save everything (and restore it successfully) with a call to `builder.add_meta_graph_and_variables()` at the end of training, but I don't see any way to save periodically. Tensorflow docs on this are very sparse, and the template code they provide ([here](https://www.tensorflow.org/api_docs/python/tf/saved_model/builder/SavedModelBuilder)) doesn't help me: ``` ... builder = tf.saved_model.builder.SavedModelBuilder(export_dir) with tf.Session(graph=tf.Graph()) as sess: ... builder.add_meta_graph_and_variables(sess, ["foo-tag"], signature_def_map=foo_signatures, assets_collection=foo_assets) ... with tf.Session(graph=tf.Graph()) as sess: ... builder.add_meta_graph(["bar-tag", "baz-tag"]) ... builder.save() ``` Calling `builder.save()` does not save the new variables into the model. It just updates the model protobuf. What am I missing? How do I save after e.g. the nth epoch using `saved_model`?<issue_comment>username_1: Well, after looking through the tensorflow code [here](https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/python/saved_model/builder_impl.py) and elsewhere, it looks like the answer is "you can't". `SavedModelBuilder` is really just designed for models outside of the training phase, and it allows you to add metagraphs and choose which sets of variables to load/save (i.e. TRAINING vs. SERVING) but that's it. `SavedModelBuilder.add_meta_graph_and_variables`, for example, can be called exactly once, and there is no `SavedModelBuilder.update_variables` or anything like that. While training, on the other hand, you need to use the `Saver` class and save checkpoints and those associated files. Why there isn't a unified system for this I have no idea but apparently that's the way it is. Upvotes: 2 [selected_answer]<issue_comment>username_2: The variables are saved when you call `builder.add_meta_graph_and_variables()`, because `saver.save()` is called inside it. [See here](https://github.com/tensorflow/tensorflow/blob/a6d8ffae097d0132989ae4688d224121ec6d8f35/tensorflow/python/saved_model/builder_impl.py#L421) Solution: Just call `saver.save(sess, export_dir+'/variables/variables', write_meta_graph=False, write_state=False)` before `builder.save()`. Upvotes: 0
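A minimal sketch (not from the original thread) of the workaround the answers converge on, in TF 1.x style: checkpoint with `tf.train.Saver` during training, then export once with `SavedModelBuilder` after training finishes. Paths, tags, and `num_epochs` are illustrative.

```python
import tensorflow as tf  # TF 1.x API, as in the question

num_epochs = 10  # illustrative
# ... build the graph, define train_op, losses, etc. (omitted)

saver = tf.train.Saver()  # handles the periodic saves during training

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(num_epochs):
        # ... run training for one epoch (omitted)
        saver.save(sess, "ckpt/model", global_step=epoch)  # save every epoch

    # Export once, at the end, via the SavedModel API
    builder = tf.saved_model.builder.SavedModelBuilder("export_dir")
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING])
    builder.save()
```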
2018/03/14
517
1,754
<issue_start>username_0: I am new to unittest and still understanding how things work. If I have a list of dictionaries, e.g.: ``` mylist = [{"y": "xval", "v": "x1val"}, {"y": "yval", "v": "y1val"}, {"y": "zval", "v": "z1val"}] ``` What sort of assertion/test would I perform to validate that the value for "v" is "y1val" when "y" is "yval"? It may also be the case that a dictionary with "y": "yval" does not exist in the list.
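A minimal sketch (not from the original thread) of one way to write such an assertion with `unittest`; the class and test names are illustrative:

```python
import unittest

mylist = [{"y": "xval", "v": "x1val"},
          {"y": "yval", "v": "y1val"},
          {"y": "zval", "v": "z1val"}]

class TestMyList(unittest.TestCase):
    def test_v_for_yval(self):
        # Find the first dict whose "y" is "yval", or None if absent.
        match = next((d for d in mylist if d.get("y") == "yval"), None)
        self.assertIsNotNone(match, 'no dict with "y" == "yval" found')
        self.assertEqual(match["v"], "y1val")

if __name__ == "__main__":
    unittest.main()
```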
2018/03/14
548
1,964
<issue_start>username_0: I have tried different variations but couldn't find one that is applicable to my requirements. I need a regex for the following case: `someString.someString.someString` This pattern (`'someString.'`) can repeat any number of times, but the dot should not be at the end. Moreover, spaces and other symbols are not allowed at the end either. The following are invalid: `someString.someString.someString ?` `someString.someString.someString eq` I tried something like `^([a-zA-Z]+)(\.)([a-zA-Z]+).*[^\s?]$` but it doesn't handle redundant characters at the end properly. If anyone has an idea regarding the correct regex, please leave a comment.
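Not part of the original thread, but one pattern that appears to satisfy the stated constraints is `^[a-zA-Z]+(\.[a-zA-Z]+)*$`: letter runs separated by single dots, with nothing allowed after the last segment. A quick check in Python:

```python
import re

pattern = re.compile(r"[a-zA-Z]+(?:\.[a-zA-Z]+)*")

tests = [
    "someString.someString.someString",     # valid
    "someString.someString.someString ?",   # invalid: trailing symbol
    "someString.someString.someString eq",  # invalid: trailing word
    "someString.",                          # invalid: trailing dot
]
for s in tests:
    # fullmatch anchors at both ends, so nothing may follow the last segment
    print(repr(s), "->", bool(pattern.fullmatch(s)))
```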
2018/03/14
472
1,773
<issue_start>username_0: I'm trying to code an app that can pick the color from a picture selected from a gallery. When coding one of the overrides, I got this message. ``` Incompatible types. Required: java.lang.String[] Found: java.lang.String ``` The code is as follows: ``` @Override protected void onActivity(int requestCode, int resultCode, Intent data) { if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null !=data) { Uri selectedImage = data.getData(); String[] filePathColumn = (MediaStore.Images.Media.DATA); Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null); cursor.moveToFirst(); int columnIndex = cursor.getColumnIndex(filePathColumn[0]); } } ``` The problem occurs in this `onActivity` method. How do I fix this error?<issue_comment>username_1: ``` String[] filePathColumn = (MediaStore.Images.Media.DATA); ``` This line attempts to create an array by placing a string in parentheses `()`. But the parentheses only clarify order of operations; here they do nothing. To create an array, wrap it in braces `{}` as an array initializer. ``` String[] filePathColumn = {MediaStore.Images.Media.DATA}; ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: `MediaStore.Images.Media.DATA` is just a String object, not an Array of Strings. So assigning it to a `String []` raises the error. It's like saying a page is equal to a book. You can solve this using two ways: 1. Use braces to initialize the array with the value in-line: ``` String[] filePathColumn = { MediaStore.Images.Media.DATA }; ``` 2. Or do it separately: ``` String[] filePathColumn = new String[1]; filePathColumn[0] = MediaStore.Images.Media.DATA; ``` Upvotes: 0
2018/03/14
1,092
3,250
<issue_start>username_0: I would like to draw a rotated rectangle I've got the top left point and bottom right point, width and height of box. As well as the angle. But I can't seem work out how you draw the rotated rectangle using OpenCV in **Python**. Please note that I do not want to rotate the image. Thanks<issue_comment>username_1: There are many ways to draw a rectangle in OpenCV. From the OpenCV documentatation: [Drawing Functions](https://docs.opencv.org/3.0-beta/modules/imgproc/doc/drawing_functions.html) > > **rectangle** > > > Draws a simple, thick, or filled up-right rectangle. > > > So this function doesn't help as you want to draw it rotated. A rectangle is nothing but a special 4-sided polygon. So simply use the function for drawing polygons instead. > > **polylines** > > > Draws several polygonal curves. > > > Python: > > > > ``` > cv2.polylines(img, pts, isClosed, color[, thickness[, lineType[, shift]]]) → img > > ``` > > and insert the 4 vertices of your rotated rectangle or draw the 4 sides separately using > > **line** > > > Draws a line segment connecting two points. > > > or > > **drawContours** > > > Draws contours outlines or filled contours. > > > The points can be obtained using simple math or for example using OpenCV's RotatedRect <https://docs.opencv.org/2.4/modules/core/doc/basic_structures.html#rotatedrect> Upvotes: 1 <issue_comment>username_2: ``` class Point: def __init__(self, x, y): self.x = int(x) self.y = int(y) class Rectangle: def __init__(self, x, y, w, h, angle): # Center Point self.x = x self.y = y # Height and Width self.w = w self.h = h self.angle = angle def rotate_rectangle(self, theta): pt0, pt1, pt2, pt3 = self.get_vertices_points() # Point 0 rotated_x = math.cos(theta) * (pt0.x - self.x) - math.sin(theta) * (pt0.y - self.y) + self.x rotated_y = math.sin(theta) * (pt0.x - self.x) + math.cos(theta) * (pt0.y - self.y) + self.y point_0 = Point(rotated_x, rotated_y) # Point 1 rotated_x = math.cos(theta) * (pt1.x - self.x) - math.sin(theta) * (pt1.y - self.y) + self.x rotated_y = math.sin(theta) * (pt1.x - self.x) + math.cos(theta) * (pt1.y - self.y) + self.y point_1 = Point(rotated_x, rotated_y) # Point 2 rotated_x = math.cos(theta) * (pt2.x - self.x) - math.sin(theta) * (pt2.y - self.y) + self.x rotated_y = math.sin(theta) * (pt2.x - self.x) + math.cos(theta) * (pt2.y - self.y) + self.y point_2 = Point(rotated_x, rotated_y) # Point 3 rotated_x = math.cos(theta) * (pt3.x - self.x) - math.sin(theta) * (pt3.y - self.y) + self.x rotated_y = math.sin(theta) * (pt3.x - self.x) + math.cos(theta) * (pt3.y - self.y) + self.y point_3 = Point(rotated_x, rotated_y) return point_0, point_1, point_2, point_3 ``` Returns four new points that have been translated by theta <https://github.com/rij12/YOPO/blob/yopo/darkflow/net/yopo/calulating_IOU.py> Upvotes: -1
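A compact sketch (not from the original answers) of the `RotatedRect`-style approach mentioned above, using `cv2.boxPoints` (OpenCV 3+) to get the four corners; the center/size/angle values are illustrative:

```python
import cv2
import numpy as np

img = np.zeros((300, 300, 3), dtype=np.uint8)

# (center, (width, height), angle in degrees): OpenCV's RotatedRect layout
rect = ((150, 150), (120, 60), 30)
box = cv2.boxPoints(rect)   # 4 corner points as floats
box = np.int32(box)         # integer pixel coordinates

# Draw the outline without rotating the image itself
cv2.polylines(img, [box], isClosed=True, color=(0, 255, 0), thickness=2)
# or: cv2.drawContours(img, [box], 0, (0, 255, 0), 2)
```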
2018/03/14
1,887
5,877
<issue_start>username_0: The goal of the exercise is to calculate a complex number Z according to some formula and create an array of n such complex numbers. Here's the function that calculates Z ``` double complex convert(double R, int p) { double complex Z=0+0*I; double complex A, B, C; A=exp(M_PI/4) + 0*I; B=cos(11*M_PI/6 + 2*p*M_PI) + 0*I; C=I*sin(R*M_PI/6); Z=A*((R*B)+C); return Z; } ``` The function that creates the array: ``` double complex *array_function (double *a, int n) { int i; double complex array[100]; for (i=0; i ``` And int main: ``` int main() { int N, i; double complex *new_array[100]; double array[100]; printf("Enter the length of the array = "); scanf("%d", &N); for (i=0; i ``` But I keep getting the same error message: "assignment to expression with array type" in regards to the line: "new\_array=array\_function(array, N);" **Edit:** Here's the edited code: ``` double complex convert(double R, int p) { double complex Z=0+0*I; double complex A, B, C; A=exp(M_PI/4) + 0*I; B=cos(11*M_PI/6 + 2*p*M_PI) + 0*I; C=I*sin(R*M_PI/6); Z=A*((R*B)+C); return Z; } double complex *array_function (double *a, int n) { int i; double complex *array = malloc(100 * sizeof *array); for (i=0; i ```<issue_comment>username_1: You cannot assign to arrays in C. You can only assign to **array elements**. If you want to change arrays dynamically, declare a pointer of the appropriate type and assign the result of `malloc` and/or `realloc`. Upvotes: 2 <issue_comment>username_2: `double complex *new_array[100];` declares `new_array` to be an array of 100 pointers to `double complex`. That is not what you want. You merely want a pointer to `double complex` (which will point to the first element of an array that is provided by the function). The declaration for this is `double complex *new_array;`. However, in `array_function`, you attempt to return `array`, where `array` is defined inside the function with `double complex array[100];`. That declaration, when used inside a function, declares an array that lasts only until the function returns. If you return its address (or the address of its first element), the pointer to that address will be invalid. The proper way to return a new array from a function is to dynamically allocate the array, as with: ``` double complex *array = malloc(100 * sizeof *array); if (!array) { fprintf(stderr, "Error, failed to allocate memory.\n"); exit(EXIT_FAILURE); } … // Assign values to the array elements. return array; ``` Then the caller is responsible for releasing the array at some later time, by passing the address to the `free` routine. (To use `malloc`, `free`, and `exit`, add `#include` to your program.) Upvotes: 0 <issue_comment>username_3: If you want to let the `array_function` create the content of `new_array`, you can send the array pointer as a parameter to the function, and let the function use that. You also need to change the definition of new\_array to `double complex new_array[100]` That is, ``` void array_function (double *a, double complex array[], int n) { int i; for (i=0; i ``` And in main(): ``` double complex new_array[100]; ... array_function(array, new_array, N); ``` Upvotes: 0 <issue_comment>username_4: If you have made the changes to insure you are reading doubles with `scanf` by adding the `'l'` modifier to your `%f` *format specifier* (e.g. 
`"%lf"`) and you have fixed your attempt to return a statically declared array, by declaring a pointer in `main()` to which you assign the return from `array_function`, and properly allocated the array in `array_function`, then your code should be working without crashing. Also, `M_PI` should be properly typed as `double` eliminating the *integer division* concern. You must **VALIDATE ALL USER INPUT** (sorry for all caps, but if you learn nothing else here, learn that). That means validating the **return** of `scanf` and checking the range of the value entered where appropriate. Putting those pieces together, you could do something like the following (with the code sufficiently spaced so old-eyes can read it): ``` #include #include #include #include #define MAXC 100 /\* if you need a constant, define one \*/ double complex convert (double R, int p) { double complex Z = 0 + 0 \* I; /\* space your code so it is readable \*/ double complex A, B, C; /\* (especially for older eyes......) \*/ A = exp (M\_PI / 4.0) + 0 \* I; B = cos (11 \* M\_PI / 6.0 + 2 \* p \* M\_PI) + 0 \* I; C = I \* sin (R \* M\_PI / 6.0); Z = A \* ((R \* B) + C); return Z; } double complex \*array\_function (double \*a, int n) { int i; double complex \*array = calloc (MAXC, sizeof \*array); /\* allocate \*/ if (!array) { /\* validate allocation succeeded \*/ perror ("calloc-array"); exit (EXIT\_FAILURE); } for (i = 0; i < n; i++) /\* convert your values \*/ array[i] = convert (a[i], i); return array; /\* return pointer \*/ } int main (void) { int N, i; double complex \*new\_array; /\* declare pointer to receive return \*/ double array[MAXC]; printf ("Enter array length: "); if (scanf("%d", &N) != 1 || N > MAXC) { /\* VALIDATE ALL USER INPUT \*/ fprintf (stderr, "error: invalid input or out of range.\n"); return 1; } for (i=0; i ``` (**note:** you should `free` all memory you allocate) **Example Use/Output** ``` $ ./bin/complex Enter array length: 5 enter array[ 0]: 1.81 enter array[ 1]: 1.97 enter array[ 2]: .31 enter array[ 3]: 2.51 enter array[ 4]: 6.021 The new array is: 3.43798 + i1.781127 3.74189 + i1.881977 0.58883 + i0.354442 4.76758 + i2.121489 11.43651 + i-0.024116 ``` Look things over and let me know if you have further questions. Upvotes: 1
2018/03/14
2,097
7,139
<issue_start>username_0: I am trying to write a menu-driven program in Java to calculate the cost of attendance for various types of students at a specific university. I am running into problems with my output: when I run my program, none of my cases executes upon entering the corresponding letter. Any tips on fixing my output problem would be greatly appreciated. ``` int cred, sem, cost, tuition; char choice; Scanner sc = new Scanner(System.in); do { System.out.println("Enter a for a non resident Grad student/n"); System.out.println("Enter b for a resident Grad student/n"); System.out.println("Enter c for an international Grad student/n"); System.out.println("Enter d for a non resident UnderGrad student/n"); System.out.println("Enter e for a resident UnderGrad student/n"); System.out.println("Enter f for an international UnderGrad student/n"); choice = sc.next().charAt(0); tuition = sc.nextInt(); switch (choice) { case 'a': System.out.println("Enter number of credits being taken: "); cred = sc.nextInt(); cost = (780 * cred) + (145 * cred); break; case 'b': System.out.println("Enter number of credits being taken: "); cred = sc.nextInt(); cost = (510 * cred) + (110 * cred); break; case 'c': System.out.println("Enter number of credits being taken: "); cred = sc.nextInt(); cost = (850 * cred) + (155 * cred); break; case 'd': System.out.println("Enter number of credits being taken: "); cred = sc.nextInt(); if (cred > 18) tuition = (475 * (cred - 18) + 5850); else if (cred < 12) tuition = (475 * cred); else tuition = 5850; case 'e': System.out.println("Enter number of credits being taken: "); cred = sc.nextInt(); if (cred > 18) tuition = (325 * (cred - 18) + 4000); else if (cred < 12) tuition = (325 * cred); else tuition = 4000; case 'f': System.out.println("Enter number of credits being taken: "); cred = sc.nextInt(); if (cred > 18) tuition = (625 * (cred - 18) + 7550); else if (cred < 12) tuition = (625 * cred); else tuition = 7550; } } while (choice != 's'); System.out.println("Cost of attendance= " + tuition); ```
2018/03/14
656
2,385
<issue_start>username_0: I'm trying to add subject to a teacher's workload. I have a validation before inserting the user input to my workload table, mostly for detecting schedule conflict. Now, what I want is to retain the selected value of the user and display it again after the validation fails. So that the user doesn't have to select the subject name, class name, and class adviser again. I've attached my code for the class adviser selection. ```html php $link = mysqli_connect("localhost", "root", "", "smis"); $sql = "SELECT * FROM teacherData"; $result = mysqli_query($link,$sql); echo "<select id='modalINPUT' name='teacherID' required"; echo"Select Adviser"; while ($row = mysqli_fetch_array($result)) { echo "".$row['Fname'] ." ". $row['Mname'] ." " .$row['Lname'] . ""; } echo ""; ?> /* Once validation fails, is there a way where I can display the selected value? Like display the name of the teacher. */ ```<issue_comment>username_1: You could compare the posted value against the id of the current row. And add `selected` attribute if the values are the same. ``` // create $selected variable to get "selected" or "": $selected = isset($_POST['teacherID']) && $_POST['teacherID'] == $row['teacherID'] ? 'selected' : '' ; // add $selected to the tag: echo "" . $row['Fname'] ." ". $row['Mname'] ." " .$row['Lname'] . ""; ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: Yes, you can use: ``` $teacherID = !empty($_POST['teacherID']) ? $_POST['teacherID'] : ''; ``` right before your sql query Then to display selected value on select box, you can check if posted id matches with current id and print out selected parameter for the according option field: ``` echo ''.htmlspecialchars($row['Fname']) .' '. htmlspecialchars($row['Mname']) .' ' .htmlspecialchars($row['Lname']) . ''; ``` EDIT: Take a look on my edited answer - use single quotes for better performance (so PHP doesnt have to scan your double quoted string for vars), and ALWAYS process your information, which you get from DB with htmlspecialchars, so you don't mess up your html, when something improper is saved into DB EDIT: Also do not select all data from MYSQL table, unless you need to. To gain performance, select only fields you need in your query: ``` $sql = 'SELECT teacherID, Fname, Mname, Lname FROM teacherData'; ``` Upvotes: 1
2018/03/14
674
2,132
<issue_start>username_0: I have a dataframe I'm working with that has a large number of columns, and I'm trying to format them as efficiently as possible. I have a bunch of columns that all end in .pct that need to be formatted as percentages, some that end in .cost that need to be formatted as currency, etc. I know I can do something like this: ``` cost_calc.style.format({'c.somecolumn.cost' : "${:,.2f}", 'c.somecolumn.cost' : "${:,.2f}", 'e.somecolumn.cost' : "${:,.2f}", 'e.somecolumn.cost' : "${:,.2f}",... ``` and format each column individually, but I was hoping there was a way to do something similar to this: ``` cost_calc.style.format({'*.cost' : "${:,.2f}", '*.pct' : "{:,.2%}",... ``` Any ideas? Thanks!<issue_comment>username_1: The first way doesn't seem bad if you can automatically build that dictionary... you can generate a list of all columns fitting the \*.cost description with something like ``` costcols = [x for x in df.columns.values if x[-5:] == '.cost'] ``` then build your dict like: ``` formatdict = {} for costcol in costcols: formatdict[costcol] = "${:,.2f}" ``` then as you suggested: ``` cost_calc.style.format(formatdict) ``` You can easily add the .pct cases similarly. Hope this helps! Upvotes: 3 [selected_answer]<issue_comment>username_2: I would use regEx with dict generators: ``` import re mylist = cost_calc.columns r = re.compile(r'.*cost') cost_cols = {key: "${:,.2f}" for key in mylist if r.match(key)} r = re.compile(r'.*pct') pct_cols = {key: "${:,.2f}" for key in mylist if r.match(key)} cost_calc.style.format({**cost_cols, **pct_cols}) ``` note: code for Python 2.7 and 3 onwards Upvotes: 2 <issue_comment>username_3: ``` import re mylist = cost_calc.columns r = re.compile(r'.*cost') cost_cols = {key: (lambda x: f'{locale.format_string("%.2f", x, True)} €') for key in mylist if r.match(key)} r = re.compile(r'.*pct') pct_cols = {key: "{:.2%}" for key in mylist if r.match(key)} ``` note: version for euro Upvotes: 0
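To make the accepted approach concrete, here is a small self-contained sketch (not from the original answers; column names and values are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "c.widget.cost": [1234.5, 99.99],
    "e.widget.pct":  [0.1234, 0.5],
    "user":          ["fred", "barney"],
})

# Build one format dict for all matching columns by suffix
fmt = {c: "${:,.2f}" for c in df.columns if c.endswith(".cost")}
fmt.update({c: "{:,.2%}" for c in df.columns if c.endswith(".pct")})

styled = df.style.format(fmt)  # cost columns as currency, pct columns as %
```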
2018/03/14
3,232
10,683
<issue_start>username_0: I'm trying to create an application using Qt3D that where I can create multiple view windows on the same scene. I started with the code from the Qt3DWindow and the [Simple C++ Example](https://doc.qt.io/qt-5.10/qt3d-simple-cpp-example.html) and started moving things around. What I figured is that each view window would define its own frame graph (just using a simple QForwardRenderer for now) and camera and then I would add each window's frame graph to the main frame graph in my scene. Everything appears to be working fine as I create multiple windows, but when I close the windows and start removing frame graphs, the application crashes. It's crashing on a background thread somewhere down in the Qt3DCore or Qt3DRender module and I can't get to the source code. As I understand it I should be able to modify the frame graph dynamically at run time, but is that not thread safe? Are you expected to wholesale replace one frame graph with another as opposed to modifying the active frame graph like I'm doing? **--- Edit ---** I did a little more testing and if I delay destroying the QWindow (i.e., the surface that it's trying to render to) a bit after removing its frame graph from the parent frame graph, I don't get the crash. I *do* however get some warnings on the console that say: > > Qt3D.Renderer.Backend: bool \_\_cdecl Qt3DRender::Render::GraphicsContext::makeCurrent(class QSurface \*) makeCurrent failed > > > My guess is it's a threading issue, that the backend is still trying to use the QSurface to render to after it has been destroyed on the main thread. I don't really like my solution (I just used a single shot timer to delay destroying the window by 1 second), but it's better than crashing. **RenderWindow.h** ``` #ifndef RENDERWINDOW_H #define RENDERWINDOW_H #include #include #include #include #include class RenderWindow : public QWindow { public: RenderWindow(QScreen\* screen = nullptr); ~RenderWindow(); Qt3DRender::QCamera\* camera() const; Qt3DRender::QFrameGraphNode\* frameGraph() const; protected: void resizeEvent(QResizeEvent \*) override; private: // Rendering Qt3DRender::QFrameGraphNode\* mpFrameGraph; Qt3DRender::QCamera\* mpCamera; static bool msFormatDefined; }; #endif // RENDERWINDOW\_H ``` **RenderWindow.cpp** ``` #include "renderwindow.h" #include bool RenderWindow::msFormatDefined = false; namespace { // Different clear colors so that it's obvious each window is using a // different camera and frame graph. 
static QColor sClearColors[] = { Qt::darkBlue, Qt::blue, Qt::darkCyan, Qt::cyan }; static int sViewCount = 0; } RenderWindow::RenderWindow(QScreen\* screen) : QWindow(screen) , mpFrameGraph(nullptr) , mpCamera(new Qt3DRender::QCamera) { setSurfaceType(QWindow::OpenGLSurface); // Set the default surface format once if (!msFormatDefined) { QSurfaceFormat format; format.setVersion(4, 3); format.setProfile(QSurfaceFormat::CoreProfile); format.setDepthBufferSize(24); format.setSamples(4); format.setStencilBufferSize(8); setFormat(format); QSurfaceFormat::setDefaultFormat(format); msFormatDefined = true; } // Camera mpCamera->lens()->setPerspectiveProjection(45.0f, 16.0f/9.0f, 0.1f, 1000.0f); mpCamera->setPosition(QVector3D(0, 0, 40.0f)); mpCamera->setViewCenter(QVector3D(0, 0, 0)); // Frame Graph (using forward renderer for now) Qt3DExtras::QForwardRenderer\* renderer = new Qt3DExtras::QForwardRenderer; renderer->setCamera(mpCamera); renderer->setSurface(this); renderer->setClearColor(sClearColors[sViewCount++ % 4]); mpFrameGraph = renderer; } RenderWindow::~RenderWindow() { qDebug() << "start ~RenderWindow"; // Unparent objects. Probably not necessary but it makes me feel // good inside. mpFrameGraph->setParent(static\_cast(nullptr)); mpCamera->setParent(static\_cast(nullptr)); delete mpFrameGraph; delete mpCamera; qDebug() << "end ~RenderWindow"; } Qt3DRender::QCamera\* RenderWindow::camera() const { return mpCamera; } Qt3DRender::QFrameGraphNode\* RenderWindow::frameGraph() const { return mpFrameGraph; } void RenderWindow::resizeEvent(QResizeEvent \*) { mpCamera->setAspectRatio((float)width()/(float)height()); } ``` **Scene.h** ``` #ifndef SCENE_H #define SCENE_H #include #include #include #include #include class RenderWindow; class Scene { public: Scene(); ~Scene(); Qt3DCore::QEntityPtr rootNode() const; void addView(RenderWindow\* window); private: void setupScene(); private: Qt3DCore::QEntityPtr mpRoot; // Frame Graph Qt3DRender::QFrameGraphNode\* mpFrameGraph; Qt3DRender::QRenderSettings\* mpRenderSettings; // Aspects Qt3DCore::QAspectEngine\* mpEngine; Qt3DRender::QRenderAspect\* mpRenderAspect; Qt3DInput::QInputAspect\* mpInputAspect; }; #endif // SCENE\_H ``` **Scene.cpp** ``` #include "scene.h" #include #include #include #include #include #include #include #include #include "orbittransformcontroller.h" #include "RenderWindow.h" Scene::Scene() : mpRoot(nullptr) , mpFrameGraph(new Qt3DRender::QFrameGraphNode) , mpRenderSettings(new Qt3DRender::QRenderSettings) , mpEngine(new Qt3DCore::QAspectEngine) , mpRenderAspect(new Qt3DRender::QRenderAspect) , mpInputAspect(new Qt3DInput::QInputAspect) { mpEngine->registerAspect(mpRenderAspect); mpRenderSettings->setActiveFrameGraph(mpFrameGraph); setupScene(); mpRoot->addComponent(mpRenderSettings); mpEngine->setRootEntity(mpRoot); } Scene::~Scene() { qDebug() << "start ~Scene"; mpEngine->setRootEntity(Qt3DCore::QEntityPtr()); mpRoot.clear(); delete mpEngine; // mpRenderSettings and mpFrameGraph are children of the // root node and are automatically destroyed when it is. 
qDebug() << "end ~Scene"; } Qt3DCore::QEntityPtr Scene::rootNode() const { return mpRoot; } void Scene::addView(RenderWindow\* window) { // Add the window's frame graph to the main frame graph if (window->frameGraph()) { window->frameGraph()->setParent(mpFrameGraph); } } void Scene::setupScene() { mpRoot.reset(new Qt3DCore::QEntity); Qt3DCore::QEntity\* entity = new Qt3DCore::QEntity; entity->setParent(mpRoot.data()); // Create the material Qt3DExtras::QPhongMaterial \*material = new Qt3DExtras::QPhongMaterial(entity); material->setAmbient(Qt::black); material->setDiffuse(QColor(196, 196, 32)); material->setSpecular(Qt::white); // Torrus Qt3DCore::QEntity \*torusEntity = new Qt3DCore::QEntity(entity); Qt3DExtras::QTorusMesh \*torusMesh = new Qt3DExtras::QTorusMesh; torusMesh->setRadius(5); torusMesh->setMinorRadius(1); torusMesh->setRings(100); torusMesh->setSlices(20); Qt3DCore::QTransform \*torusTransform = new Qt3DCore::QTransform; torusTransform->setScale3D(QVector3D(1.5, 1, 0.5)); torusTransform->setRotation(QQuaternion::fromAxisAndAngle(QVector3D(1, 0, 0), -45.0f)); torusEntity->addComponent(torusMesh); torusEntity->addComponent(torusTransform); torusEntity->addComponent(material); // Sphere Qt3DCore::QEntity \*sphereEntity = new Qt3DCore::QEntity(entity); Qt3DExtras::QSphereMesh \*sphereMesh = new Qt3DExtras::QSphereMesh; sphereMesh->setRadius(3); Qt3DCore::QTransform \*sphereTransform = new Qt3DCore::QTransform; /\*OrbitTransformController \*controller = new OrbitTransformController(sphereTransform); controller->setTarget(sphereTransform); controller->setRadius(20.0f); QPropertyAnimation \*sphereRotateTransformAnimation = new QPropertyAnimation(sphereTransform); sphereRotateTransformAnimation->setTargetObject(controller); sphereRotateTransformAnimation->setPropertyName("angle"); sphereRotateTransformAnimation->setStartValue(QVariant::fromValue(0)); sphereRotateTransformAnimation->setEndValue(QVariant::fromValue(360)); sphereRotateTransformAnimation->setDuration(10000); sphereRotateTransformAnimation->setLoopCount(-1); sphereRotateTransformAnimation->start();\*/ sphereEntity->addComponent(sphereMesh); sphereEntity->addComponent(sphereTransform); sphereEntity->addComponent(material); } ``` **MainWindow.h** ``` #ifndef MAINWINDOW_H #define MAINWINDOW_H #include #include "scene.h" namespace Ui { class MainWindow; } class MainWindow : public QMainWindow { Q\_OBJECT public: explicit MainWindow(QWidget \*parent = 0); ~MainWindow(); void createWindow(); private: Ui::MainWindow \*ui; Scene\* scene; }; #endif // MAINWINDOW\_H ``` **MainWindow.cpp** ``` #include "mainwindow.h" #include #include "ui\_mainwindow.h" #include "renderwindow.h" MainWindow::MainWindow(QWidget \*parent) : QMainWindow(parent), ui(new Ui::MainWindow), scene(new Scene()) { ui->setupUi(this); connect(ui->createButton, &QPushButton::clicked, this, &MainWindow::createWindow); } MainWindow::~MainWindow() { qDebug() << "~MainWindow"; delete scene; delete ui; } void MainWindow::createWindow() { RenderWindow\* window = new RenderWindow(); scene->addView(window); window->resize(640, 480); window->show(); QVector3D pos[] = { QVector3D(0, 0, 40), QVector3D(0, 25, -30), QVector3D(-20, -20, -20), QVector3D(40, 0, 0) }; static int count = 0; window->camera()->setPosition(pos[count++%4]); window->camera()->setViewCenter(QVector3D(0, 0, 0)); // Delete the window when it is closed. 
connect(window, &QWindow::visibilityChanged, this, [=](bool on) { if (!on) window->deleteLater(); }); } ```<issue_comment>username_1: I've thoroughly tested your example and draw the same conclusions. When you destroy the window too quickly, the application crashes, probably because Qt3D still tries to issue some OpenGL commands to the underlying QSurface. I think this is a bug that should be reported. A 'cleaner' workaround for this problem could be to track the generated 3D windows in the main window. You could maintain a list of pointers to all windows that were generated (and probably closed by the user at some point). The windows are finally destroyed in the destructor of the main window. Upvotes: 2 <issue_comment>username_2: I had exactly the same problem. I was creating a class derived from Qt3DWindow in a dialog box so the user could preview the effects of the choices made, and the program crashed when the dialog exited. In fact, on Windows this crash causes the debugger and Qt Creator to crash too! I tried working around this in a variety of ways, and some helped, because it turns out that it is a threading issue that was fixed on the 23rd of October: <https://github.com/qt/qt3d/commit/3314694004b825263c9b9f2d69bd85da806ccbbc> The fix, for now, is to apply the patch and recompile Qt. 5.11.3 (or perhaps 5.12) will be out quite soon, I expect, but this bug is a killer if you are using Qt3D in dialogs. Upvotes: 0
2018/03/14
596
2,329
<issue_start>username_0: I have a function that invokes a command to start a new PS session on a remote server. The invoked script block has an Exit clause; however, it is not exiting. ``` Function CreateID{ Invoke-Command -Session $Script:sesh -ScriptBlock{ Set-Location c:\ Import-Module ActiveDirectory Try { If (Get-ADGroupMember "$Using:IDGroup" | Where-Object Name -match "$Using:Computer") { Write-Host "Already in $using:IDGroup Exiting Script" Disconnect-PSSession -Session $Script:sesh Exit-PSSession Exit } } Catch { } Write-Host "Did not Exit" } } ``` The Get-AD command works fine, but where the script should not display "Did not Exit", it does. How can I exit from a script block in a remote PS session? I am trying Disconnect-PSSession and Exit-PSSession to see if they would do the same as a simple Exit, but none of those are working. I have also tried Break, with no luck.
2018/03/14
450
1,680
<issue_start>username_0: I am hiding a `div` when the user hovers over one of the menu elements. I am using this code for that purpose: ``` jQuery("#menu-item-15").hover(function(){ jQuery(".bootom-menu").css("display", "none"); }); ``` But I want to display the `bootom-menu` div again when the user is no longer hovering (un-hover) over this specific menu element.
2018/03/14
1,476
3,702
<issue_start>username_0: I'm trying to sort an array of objects by descending and beginning with number first, here's what I'm having: ``` var users = [   { 'user': 'fred',   'a': 48 },   { 'user': 'barney', 'a': 'b' },   { 'user': 'fred',   'a': 40 },   { 'user': 'barney', 'a': 'c' } ]; _.orderBy(users, 'a', 'desc'); ``` result: ``` 0: Object {a: "b", user: "barney"} 1: Object {a: "c", user: "barney"} 2: Object {a: 48, user: "fred"} 3: Object {a: 40, user: "fred"} ``` expected result: ``` 0: Object {a: 48, user: "fred"} 1: Object {a: 40, user: "fred"} 2: Object {a: "b", user: "barney"} 3: Object {a: "c", user: "barney"} ```<issue_comment>username_1: If two elements which are going to be compared are letters, check by string comparison. If one of them is a letter, put it after the second always. If two are numbers just do a simple check. ```js var users = [ { 'user': 'fred', 'a': 48 }, { 'user': 'barney', 'a': 'b' }, { 'user': 'fred', 'a': 40 }, { 'user': 'barney', 'a': 'c' } ]; users.sort((obj1, obj2) => { if(typeof obj1.a ==='string' && typeof obj2.a ==='string' ) { return obj1.a.localeCompare(obj2.a); } if(typeof obj1.a ==='string') { return 1; } return obj1.a - obj2.a; }); console.log(users); ``` In your expected result your numbers are going in the descending order. For that you need just to replace `obj1.a - obj2.a` with `obj2.a - obj1.a`. Upvotes: 0 <issue_comment>username_2: You could apply a check for type first and return the delta of the boolean values. If not equal, check the type and return either the result of string comparison (ascending) or the delta of the numerical values (descending). ```js var users = [{ 'user': 'fred', 'a': 48 }, { 'user': 'barney', 'a': 'b' }, { 'user': 'fred', 'a': 40 }, { 'user': 'fred', 'a': 0 }, { 'user': 'fred', 'a': 0 }, { 'user': 'barney', 'a': 'a' }, { 'user': 'barney', 'a': 'd' }, { 'user': 'fred', 'a': 0 }, { 'user': 'barney', 'a': 'c' }, { 'user': 'fred', 'a': 47 }, { 'user': 'fred', 'a': 46 }]; users.sort(({ a: a }, { a: b }) => (typeof a === 'string') - (typeof b === 'string') || (typeof a === 'string' ? a.localeCompare(b) : b - a)); console.log(users); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 1 <issue_comment>username_3: You can provide multiple sort keys ("iteratees") and the order for each key ``` _.orderBy(users, [x => isNaN(x.a), 'a'], ['asc', 'desc']); ``` This will, however, sort the strings descending as well (that is, the result will be `48, 40, c, b`. ```js var users = [ { 'user': 'x', 'a': 'a' }, { 'user': 'x', 'a': 948 }, { 'user': 'x', 'a': 48 }, { 'user': 'x', 'a': -480 }, { 'user': 'x', 'a': 'c' }, { 'user': 'x', 'a': 548 }, { 'user': 'x', 'a': 4558 }, { 'user': 'x', 'a': 148 }, { 'user': 'x', 'a': 4834534 }, { 'user': 'x', 'a': 1 }, { 'user': 'x', 'a': 'b' }, ]; users = _.orderBy(users, [x => isNaN(x.a), 'a'], ['asc', 'desc']); console.log(users) ``` Upvotes: 2 [selected_answer]<issue_comment>username_4: Check the types to see if they match. If so, then sort based on what the types are. If not, then return `1` if they're strings, else `-1`. This will send the strings to the end. ```js var users = [ { 'user': 'fred', 'a': 48 }, { 'user': 'barney', 'a': 'b' }, { 'user': 'fred', 'a': 40 }, { 'user': 'barney', 'a': 'c' } ]; users.sort((a, b) => { const aTyp = typeof a.a; const isStr = aTyp == "string"; if (aTyp == typeof b.a) { return isStr ? a.a.localeCompare(b.a) : b.a - a.a; } return isStr ? 1 : -1; }); console.log(users); ``` Upvotes: 0
2018/03/14
1,324
4,644
<issue_start>username_0: I write desktop application that works with map, and I want to react on pan and long press events. It is possible to use `QGestureEvent` on Qt/Linux/X11 with ordinary mouse? I took [Qt gesture example](http://doc.qt.io/qt-5/qtwidgets-gestures-imagegestures-example.html), it works on tablet, but not reaction on press left mouse button and move (I expect that application recognizes it as tap or swipe event). Then I added to [Qt gesture example](http://doc.qt.io/qt-5/qtwidgets-gestures-imagegestures-example.html) `app.setAttribute(Qt::AA_SynthesizeTouchForUnhandledMouseEvents, true);` at `main` and such code to `imagewidget.cpp`: ``` void ImageWidget::mousePressEvent(QMouseEvent *e) { e->ignore(); } void ImageWidget::mouseReleaseEvent(QMouseEvent *e) { e->ignore(); } void ImageWidget::mouseMoveEvent(QMouseEvent *e) { e->ignore(); } ``` this code still works on tablet, but again no reaction on mouse on Linux/X11. Any way to enable qgesture on linux/x11, should I write my own gesture recognition for mouse?<issue_comment>username_1: Look into this image widget gestures example. (search for mouseDoubleClickEvent) <http://doc.qt.io/qt-5/qtwidgets-gestures-imagegestures-imagewidget-cpp.html> Based on that you need to reimplement the required mouse events. ``` MyWidget::MyWidget() { --- --- } bool MyWidget::event(QEvent *ev) { --- --- } void MyWidget::mouseReleaseEvent(QMouseEvent *event) { } void MyWidget::mouseMoveEvent(QMouseEvent *event) { } ``` And declare those two functions in header ``` void mouseReleaseEvent(QMouseEvent *event); void mouseMoveEvent(QMouseEvent *event); ``` Upvotes: 0 <issue_comment>username_2: The official way to make gestures out of mouse events in Qt is deriving from the `QGestureRecognizer` class, which allows to listen to relevant mouse events, set gesture properties accordingly, then trigger the gesture (or cancel it). Here follows an example for pan gestures only, just to give an idea of what has to be done. 
Have a `QGestureRecognizer` subclass like this: ``` #include #include class PanGestureRecognizer : public QGestureRecognizer { QPointF startpoint; bool panning; public: PanGestureRecognizer() : panning(false){} QGesture \*create(QObject \*target); Result recognize(QGesture \*state, QObject \*watched, QEvent \*event); }; ``` The `create` method has been overridden to return a new instance of our gesture of interest: ``` QGesture *PanGestureRecognizer::create(QObject *target) { return new QPanGesture(); } ``` The `recognize` method override is the core of our recognizer class, where events are passed in, gesture properties set, gesture events triggered: ``` QGestureRecognizer::Result PanGestureRecognizer::recognize(QGesture *state, QObject *, QEvent *event) { QMouseEvent * mouse = dynamic_cast(event); if(mouse != 0) { if(mouse->type() == QMouseEvent::MouseButtonPress) { QPanGesture \* gesture = dynamic\_cast(state); if(gesture != 0) { panning = true; startpoint = mouse->pos(); gesture->setLastOffset(QPointF()); gesture->setOffset(QPointF()); return TriggerGesture; } } if(panning && (mouse->type() == QMouseEvent::MouseMove)) { QPanGesture \* gesture = dynamic\_cast(state); if(gesture != 0) { gesture->setLastOffset(gesture->offset()); gesture->setOffset(mouse->pos() - startpoint); return TriggerGesture; } } if(mouse->type() == QMouseEvent::MouseButtonRelease) { QPanGesture \* gesture = dynamic\_cast(state); if(gesture != 0) { QPointF endpoint = mouse->pos(); if(startpoint == endpoint) { return CancelGesture; } panning = false; gesture->setLastOffset(gesture->offset()); gesture->setOffset(mouse->pos() - startpoint); return FinishGesture; } } if(mouse->type() == QMouseEvent::MouseButtonDblClick) { panning = false; return CancelGesture; } } return Ignore; } ``` Basically, we track mouse events, updating a couple of properties of our own (`panning` and `startpoint`) and the passed in gesture properties as well. For each mouse event type, we also return a [QGestureRecognizer::Result](http://doc.qt.io/qt-5/qgesturerecognizer.html#ResultFlag-enum) . All other events are discarded (the method returns `Ignore`). This code can be tested with the [Image Gestures Example](http://doc.qt.io/qt-5/qtwidgets-gestures-imagegestures-example.html), though: just add the class to the project and this line in the `ImageWidget` constructor: ``` QGestureRecognizer::registerRecognizer(new PanGestureRecognizer()); ``` This should let the user grab the picture and move it around, using a mouse. Upvotes: 4 [selected_answer]
2018/03/14
170
734
<issue_start>username_0: I am getting the error below when I try to open a solution in Visual Studio: "Creation of the virtual directory <http://localhost://1111> failed with the error: Object reference not set to an instance of an object" However, the same project loads fine inside another solution file. I am not sure what I am missing. Thanks for your help.<issue_comment>username_1: Reinstalling Visual Studio resolved my issue. Thanks Upvotes: 0 <issue_comment>username_2: This happens due to default settings on the .vs folder. Look at the "Properties > Advanced" window of the .vs folder in the solution directory and uncheck "Encrypt contents to secure data". That should resolve this issue without needing to reinstall the IDE. Upvotes: 1
2018/03/14
162
703
<issue_start>username_0: I have a 3D scene with an infinite horizontal plane (parallel to the xz coordinates) at a height H along the Y vertical axis. I would like to know how to determine the intersection between the axis of my camera and this plane. The camera is defined by a view-matrix and a projection-matrix.
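A sketch (not from the original thread) of the underlying math in numpy, assuming an OpenGL-style view matrix where the camera looks down its local -Z axis; the projection matrix is not needed for the camera's central axis:

```python
import numpy as np

def camera_axis_plane_intersection(view, H):
    """Intersect the camera's viewing axis with the plane y = H.

    view : 4x4 view matrix (world -> camera), OpenGL-style convention.
    Returns the world-space intersection point, or None if the axis
    is parallel to the plane or the plane is behind the camera.
    """
    cam_to_world = np.linalg.inv(view)
    origin = cam_to_world @ np.array([0.0, 0.0, 0.0, 1.0])    # camera position
    forward = cam_to_world @ np.array([0.0, 0.0, -1.0, 0.0])  # view direction
    o, d = origin[:3], forward[:3]

    if abs(d[1]) < 1e-9:      # axis parallel to the horizontal plane
        return None
    t = (H - o[1]) / d[1]
    if t < 0:                 # plane lies behind the camera
        return None
    return o + t * d
```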
2018/03/14
626
2,439
<issue_start>username_0: I've got an angular page where I query a webservice and then do a bunch of processing on the data and end up with say 100 names. On the HTML page I'm just showing the number 100, but if I click on that I want to go to a **new page** where I display all 100 actual names. Obviously that's too much data to pass in the URL string. I'm not sure how to post to an angular page directly though. What's the right way to deal with this? This is not a parent/child relationship on the same page.<issue_comment>username_1: The interaction between components is done through their `Input` and `Output` properties. Refer to [this](https://angular.io/guide/template-syntax#inputs-outputs) link for more information. So basically you pass your data by using such properties from one component to the other. Upvotes: 0 <issue_comment>username_2: Your best bet is to store those 100 names in a client-side Angular service. Then you can access the service from either page to show the 100 or show the 100 names. A service is a simple class that can be implemented as a singleton, meaning data you assign to the service remains in the service even after you move from one component to another. I have an example here: <https://blogs.msmvps.com/deborahk/build-a-simple-angular-service-to-share-data/> ``` import { Injectable } from '@angular/core'; @Injectable() export class DataService { serviceData: string; // <-- Your data would be stored here } ``` After retrieving the data, you can store it in the service using simple code like this: ``` export class MainComponent { constructor(public dataService: DataService) { } getData() { // whatever code you are using now to get the data. this.dataService.serviceData = retrievedData; } } ``` Then in the new component that you are routing to, you would simply get the data like this: ``` export class OtherComponent { get data():string { return this.dataService.serviceData; } constructor(public dataService: DataService) { } } ``` Then your OtherComponent can bind to the data using the `data` property. Upvotes: 2 <issue_comment>username_3: Since Angular is a Framework for SPAs it's not a perfect solution to POST pass your data to the "details view". You could follow the getting [started guidelines](https://angular.io/tutorial/toh-pt3) and pass the information to a child component using `@Input` properties. Upvotes: 0
2018/03/14
900
2,155
<issue_start>username_0: I have a list of lists called `my_list_of_lists`, from which I want to select a certain number of elements. * from element 1 of `my_list_of_lists`, I want to select 1 element at random * from element 2 of `my_list_of_lists`, I want to select 1 element at random * from element 3 of `my_list_of_lists`, I want to select 2 elements at random Here are `my_list_of_lists` and `number_to_select`: ``` my_list_of_lists <- list( c(147, 313, 337, 546), c(35, 135, 281, 283, 325, 326, 357), c(311, 334, 403, 427, 436, 507, 520, 566, 595, 632)) number_to_select <- c(1, 1, 2) ``` I can do this individually no problem. For example: ``` sample(my_list_of_lists[[3]],number_to_select[[3]]) #[1] 520 436 ``` But when I try to use `lapply`, I don't get it: ``` selected_vals = lapply(my_list_of_lists, function(x) { sample(x, number_to_select)}) selected_vals[[3]] #[1] 334 ``` How can I use `lapply` to choose 1 element from the first list, 1 element from the second list, and 2 elements from the third list?<issue_comment>username_1: You want to iterate over multiple collections, so you should use `Map`. For example ``` Map(sample, my_list_of_lists, number_to_select) ``` will do what you want by calling `sample` multiple times with corresponding values of `my_list_of_lists` and `numbers_to_select`. Upvotes: 2 <issue_comment>username_2: Here is a corresponding solution with `purrr::map2` from the `tidyverse`. You can't use `lapply` here because you want to map over two objects simultaneously. In general, it's helpful to provide your input in a reproducible format rather than just the `head()`. ```r library(tidyverse) my_list_of_lists <- list( c(147, 313, 337, 546), c(35, 135, 281, 283, 325, 326, 357), c(311, 334, 403, 427, 436, 507, 520, 566, 595, 632) ) number_to_select <- c(1, 1, 2) selected_vals <- map2( .x = my_list_of_lists, .y = number_to_select, .f = function(x,y) base::sample(x, y) ) print(selected_vals) #> [[1]] #> [1] 546 #> #> [[2]] #> [1] 283 #> #> [[3]] #> [1] 507 311 ``` Created on 2018-03-14 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0). Upvotes: 0
2018/03/14
1,587
4,040
<issue_start>username_0: So I am trying to write a function which has a generic which extends a certain object thus constrains it. Next I would like to use this generic together with a definition of a parameter to generate a new "enhanced" parameter. This is all good but as soon as I want to introduce a default value to the parameter TypeScript complains with a message as follow (Some different variations of this in the [playground](https://www.typescriptlang.org/play/#src=const%20test1%20%3D%20%3CT%20extends%20%7B%20foo%3F%3A%20string%20%7D%3E(options%3A%20T%20%26%20%7B%20bar%3F%3A%20boolean%20%7D%20%3D%20%7Bfoo%3A%20''%7D)%20%3D%3E%20%7B%0D%0A%20%20%20%20console.log(options)%3B%0D%0A%7D%0D%0A%0D%0Aconst%20test2%20%3D%20%3CT%20extends%20%7B%20foo%3F%3A%20string%20%7D%3E(options%3A%20T%20%26%20%7B%20bar%3F%3A%20boolean%20%7D%20%3D%20%7B%7D)%20%3D%3E%20%7B%0D%0A%20%20%20%20console.log(options)%3B%0D%0A%7D%0D%0A%0D%0Aconst%20test3%20%3D%20%3CT%3E(options%3A%20T%20%26%20%7B%20bar%3F%3A%20boolean%20%7D%20%3D%20%7Bbar%3A%20false%7D)%20%3D%3E%20%7B%0D%0A%20%20%20%20console.log(options)%3B%0D%0A%7D%0D%0A%0D%0Aconst%20test4%20%3D%20%3CT%20extends%20%7Bbar%3F%3A%20boolean%7D%3E(options%3A%20T%20%26%20%7B%20bar%3F%3A%20boolean%20%7D%20%3D%20%7Bbar%3A%20false%7D)%20%3D%3E%20%7B%0D%0A%20%20%20%20console.log(options)%3B%0D%0A%7D%0D%0A%0D%0Aconst%20test5%20%3D%20(options%3A%20%7Bfoo%3F%3A%20string%7D%20%26%20%7B%20bar%3F%3A%20boolean%20%7D%20%3D%20%7Bbar%3A%20false%7D)%20%3D%3E%20%7B%0D%0A%20%20%20%20console.log(options)%3B%0D%0A%7D)): Function: ``` const test1 = (options: T & { bar?: boolean } = {foo: ''}) => { console.log(options); } ``` The error: > > Type '{ foo: string; }' is not assignable to type 'T & { bar?: > boolean; }'. > Object literal may only specify known properties, but 'foo' does not exist in type 'T & { bar?: boolean; }'. Did you mean to write > 'foo'? > > > The compiler warns me that I probably wanted to use foo, which I actually did. Is it simply not possible to use a generic in this way or is this a bug in TypeScript?<issue_comment>username_1: The reason why none of the initializations work is that you can't initialize something you don't know. Consider the following call: ``` test1<{ foo: string, goo: boolean}>(); ``` The generic parameter is valid for the function, but it has an extra property, `goo`, which is mandatory, but unspecified in your default value for options. This is why the compiler complains about the assignment, you don't know the final shape of `T`, you only know the minimum requirements for it, so you can't build an object that will be compatible with`T` If you are ok with `options` not having all mandatory properties specified on the generic type parameter, you can use a type assertion ``` const test1 = (options: T & { bar?: boolean } = {foo: ''}) => { console.log(options); } ``` Upvotes: 1 <issue_comment>username_2: Type `T` at the definition time is unknown, so the compiler throws this error that you cannot initialize something you are unaware of. There are a couple of workarounds I can think of, but not sure how useful they are for your case. 
You can leave type `T` as is, and use union types for the `options` parameter as follows: ``` const test1 = <T extends { foo?: string }>(options: T | { bar?: boolean } | { foo?: string } = { foo: '' }) => { console.log(options); }; ``` Another workaround is to use a type assertion and manually tell the compiler that the initializer is of the type it needs to be: ``` const test2 = <T extends { foo?: string }>(options: T & { bar?: boolean } = { foo: '' } as T & { bar?: boolean }) => { console.log(options); }; ``` But keep in mind that these are just workarounds and whenever you have to use a workaround, it implies a flaw in the design. Maybe you can take a second look at your logic and improve it in order to remove the need for these workarounds. Or maybe, you can skip argument initialization and add `options.foo = options.foo || '';` as the first line of code in your function. Just some ideas. Upvotes: 4 [selected_answer]
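A quick sketch of how the assertion workaround behaves at the call site — the `Opts` type below is hypothetical, added only for illustration:

```typescript
// Hypothetical caller type; not part of the original question.
type Opts = { foo?: string; extra?: number };

const test2 = <T extends { foo?: string }>(
  options: T & { bar?: boolean } = { foo: '' } as T & { bar?: boolean }
) => {
  console.log(options);
};

test2<Opts>();                                  // falls back to { foo: '' }; 'extra' is simply absent
test2<Opts>({ foo: 'x', extra: 1, bar: true }); // an explicit argument still type-checks normally
```

The assertion trades away the guarantee that the default really satisfies every possible `T`, which is exactly the flaw the answer warns about.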
2018/03/14
875
2,625
<issue_start>username_0: I have a text file with random words in it. I want to find out which words have maximum occurrence as a pair ('hi,hello' or 'good,bye'). Simple.txt ``` hi there. hello this a dummy file. hello world. you did good job. bye for now. ``` I have written this command to get the count for each word (hi, hello, good, bye). ``` cat simple.txt| tr -cs '[:alnum:]' '[\n*]' | sort | uniq -c|grep -E -i "\<hi\>|\<hello\>|\<good\>|\<bye\>" ``` This gives me the occurrence of each word with a count (number of times it occurs) in the file, but how do I refine this and get a direct output such as "hi/hello is the pair with maximum occurrence"?<issue_comment>username_1: To make it more interesting, let's consider this test file: ``` $ cat >file.txt You say hello. I say good bye. good bye. good bye. ``` To get a count of all pairs of words: ``` $ awk -v RS='[[:space:][:punct:]]+' 'NR>1{a[last","$0]++} {last=$0} END{for (pair in a) print a[pair], pair}' file.txt 3 good,bye 1 say,good 2 bye,good 1 I,say 1 You,say 1 hello,I 1 say,hello ``` To get the single pair with the highest count, we need to sort: ``` $ awk -v RS='[[:space:][:punct:]]+' 'NR>1{a[last","$0]++} {last=$0} END{for (pair in a) print a[pair], pair}' file.txt | sort -nr | head -1 3 good,bye ``` ### How it works * `-v RS='[[:space:][:punct:]]+'` This tells awk to use any combination of white space or punctuation as a record separator. This means that each word becomes a record. * `NR>1{a[last","$0]++}` For every word after the first, increment the count in associative array `a` for the combination of the previous and current word. * `last=$0` Save the current word in the variable `last`. * `END{for (pair in a) print a[pair], pair}` After we have finished reading the input, print out the results for each pair. * `sort -nr` Sort the output numerically in reverse (highest number first) order. * `head -1` Select the first line (giving us the pair with the highest count). ### Multiline version For those who prefer their code spread out over multiple lines: ``` awk -v RS='[[:space:][:punct:]]+' ' NR>1 { a[last","$0]++ } { last=$0 } END { for (pair in a) print a[pair], pair }' file.txt | sort -nr | head -1 ``` Upvotes: 2 <issue_comment>username_2: some terse perl: ``` perl -MList::Util=max,sum0 -slne ' for $word (m/(\w+)/g) {$count{$word}++} } END { $pair{$_} = sum0 @count{+split} for ($a, $b); $max = max values %pair; print "$max => ", {reverse %pair}->{$max}; ' -- -a="hi hello" -b="good bye" simple.txt ``` ``` 3 => hi hello ``` Upvotes: 1
2018/03/14
1,071
3,014
<issue_start>username_0: I am trying to left join two data sets based on a four digit code in each. One data set has the codes filled in to varying degrees (2, 3, or all 4 digits) with trailing zeroes as needed. The other data set has the codes completed to all four digits. If the last two digits of CodeA are 00 then I want to join to any CodeB with the same first two digits. If only the last digit of CodeA is 0 then I want to join to all CodeBs that have the same first three digits. If CodeA has all four digits then I want to join to those exact same codes in CodeB. Example: ``` CodeA data set Example CodeA Field1 1 2500 w 2 4110 x 3 2525 y 4 5345 z CodeB data set CodeB Field2 1234 a 2525 b 4113 c 6543 d 5341 e 2522 f 4122 g 5345 h ``` I want my result data set to look like this: ``` Ex CodeA Field1 CodeB Field2 1 2500 w 2525 b 1 2500 w 2522 f 2 4110 x 4113 c 3 2525 y 2525 b 4 5345 z 5345 h ```
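One way to express the rule above as a single join condition — a sketch only: `CodeA_table` and `CodeB_table` are hypothetical names for the two data sets, codes are assumed to be stored as 4-character strings, and `CONCAT()`/`LEFT()` syntax follows MySQL and SQL Server 2012+:

```sql
-- The CASE picks a LIKE pattern whose length depends on CodeA's trailing zeroes:
-- 'xx00' -> 'xx%', 'xxx0' -> 'xxx%', otherwise an exact 4-digit match.
-- The '%00' branch is tested first, so 'xx00' is never treated as 'xxx0'.
SELECT a.Example AS Ex, a.CodeA, a.Field1, b.CodeB, b.Field2
FROM CodeA_table a
LEFT JOIN CodeB_table b
  ON b.CodeB LIKE
     CASE
       WHEN a.CodeA LIKE '%00' THEN CONCAT(LEFT(a.CodeA, 2), '%')
       WHEN a.CodeA LIKE '%0'  THEN CONCAT(LEFT(a.CodeA, 3), '%')
       ELSE a.CodeA
     END;
```

Run against the sample data this yields the rows shown in the desired result (2500 matches 2525 and 2522, 4110 matches 4113, and the fully specified codes match exactly).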
2018/03/14
1,217
4,581
<issue_start>username_0: I've created a function that attempts to return a `SubForm` data type. This function is used by various parent `Forms`. The function looks like this: ``` Public Function mySubFrm(name As String, subformName As String) As SubForm Dim frm As Form Dim subFrm As SubForm Set frm = Forms(name) Set subFrm = frm.Controls(subformName) mySubFrm = subFrm End Function ``` I've attempted to use it by the following: ``` Dim testSubForm As SubForm testSubForm = mySubFrm("testForm", "testSubForm") ``` Immediately, it follows with compile error: > > Invalid use of property > > > What I've attempted to do was add a watch at `frm.Controls(subformName)` and I see its return type is `SubForm/SubForm`, so I feel as though I am declaring and setting the right data type, but then again I'm not sure? Can someone assist me with what I'm not doing properly? Thanks<issue_comment>username_1: You receive the error because you're trying to set an object, but are not using the `Set` keyword. Perhaps it should be the following: ``` Set mySubFrm = subFrm ``` Or ``` Set mySubFrm = frm.Controls(subformName) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I don't know much Access, but I know VBA pretty well. Your function is returning an object reference: > > > ``` > Public Function mySubFrm(name As String, subformName As String) As SubForm > > ``` > > As such, its return value **must** be assigned using the `Set` keyword: ``` Set mySubFrm = subFrm ``` The reason why you're getting this confusing error is because of a lovely (not!) thing in VBA called *default properties*. See, a form has a *default property*, most likely its `Controls` collection - and that property is only exposing a `Public Property Get` accessor, which makes it read-only. So when you omit the `Set` keyword: ``` mySubFrm = subFrm ``` VBA assumes the code is legal, and so the only thing you could possibly be wanting to do, is to assign that default property - in other words, it's behaving exactly as if you would have written: ``` mySubFrm.Controls = subFrm ``` But the `Controls` class' own default property is its `Item` member, which is also read-only: there's no way the default `Controls` property can appear on the left-hand side of an assignment. Hence, *invalid use of property*. My open-source project [Rubberduck](http://rubberduckvba.com) will soon have an inspection that will issue a result whenever you're implicitly referring to an object's default property, offering to make the call explicit. Writing code that *says what it does, and does what it says* is hard when you don't even know you're referring to a default property. Upvotes: 3 <issue_comment>username_3: I would like to suggest a different approach. Typically when you create a form with a subform, it's quite rare that you actually need the subform to be dynamic. There is almost always a relationship between the parent form and the child form, and I would consider a parent form without its child a broken object in general. Therefore, when I need to access things that are on the subform, I prefer to use the explicit version: ``` Dim mySubForm As Access.Form Set mySubForm = Me.Controls("mySubForm").Form mySubForm.SomeControlOnMySubForm.Value = 123 ``` Yes it's 3 lines now but the advantage is that the reference to a *specific* form is now made explicit in your parent form. You now can see from the code that the parent form *depends* on *that* subform. 
More importantly, if you delete or rename the control `SomeControlOnMySubForm`, the code above will allow the VBA compiler to warn you that there is no such object on the subform, enabling you to verify your code. In other words, try your best to convert any potential runtime errors into compile-time errors because compile-time errors are much easier to validate and prevent than runtime errors. The same principle works in reverse; when you want to describe your subform as explicitly depending on a parent form, you can do: ``` Dim myParent As Form_frmParent Set myParent = Me.Parent ``` NOTE: The code all assumes that there are code-behinds for all forms involved. If a form has no code-behind, it won't have a VBA class for you to reference. In such a case, I set the form's property `HasModule` (located in the `Others` tab) to `Yes` even if there's ultimately no code-behind. (The reason is that it used to be that you could create a "lightweight" form by creating it with no code-behind, but with modern hardware there is hardly any difference) Upvotes: 1
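Note that the calling code in the question has the same problem as the function body: it assigns the function's object result without `Set`. A minimal sketch of the corrected call, using the question's own names:

```vba
Dim testSubForm As SubForm
' Object assignments in VBA always need the Set keyword,
' at the call site just as inside the function:
Set testSubForm = mySubFrm("testForm", "testSubForm")
Debug.Print testSubForm.Name
```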
2018/03/14
693
2,420
<issue_start>username_0: I'm attempting to add parsing validation tests and wanted to check that the initial JSON I was sent could be turned into an object and that object in turn turned into JSON. In the end the validation would be that both dictionaries are equal. What I'm seeing however is that, while date parsing works, the conversion to a string replaces `+00:00` with `Z`. In my research I've found that these are interchangeable and I'm aware that I could in theory replace `Z` with `+00:00` for the comparison but I was wondering if there is a way on the `ISO8601DateFormatter` or any `DateFormatter` to say that you would prefer `+00:00` over `Z`? For those who like to see some code this is my quick playground example. ``` var date = "2018-01-30T22:13:12+00:00" let df = ISO8601DateFormatter() df.formatOptions = [.withInternetDateTime] let newDate = df.date(from: date) let newString = df.string(from: newDate!) ```<issue_comment>username_1: The ISO 8601 date format states that `Z` should be used when the date's timezone offset is 0. Many of the timezone date formatting symbols used with a `DateFormatter` also specifically result in `Z` if the date's timezone offset is 0. If you want to generate a string from a `Date` and you want to ensure that you get `+00:00` instead of `Z`, then use `DateFormatter` with the appropriate date formatter specifier. The format specifier `xxx` will give you a timezone in the format `+00:00`. `XXX` and `ZZZZZ` will also give you that same format but will give you `Z` in the result if the offset is 0. More on these can be seen on the [Unicode Technical Specification #35](http://www.unicode.org/reports/tr35/tr35-31/tr35-dates.html#Date_Format_Patterns) page. The documentation for `ISO8601DateFormatter` and its `formatOptions` states that `ZZZZZ` is used for the timezone. So you will always get `Z` for a timezone offset of 0. A `DateFormatter` with a date format of `yyyy-MM-dd'T'HH:mm:ssxxx` will give you the same result you are looking for. But also be sure to set the date formatter's locale to `en_US_POSIX`. You will also need to ensure the output comes out in the UTC timezone. Set the formatter's `timeZone` property to `TimeZone(secondsFromGMT: 0)`. Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` df.formatOptions = [.withInternetDateTime, .withTimeZone, .withColonSeparatorInTimeZone] ``` will get you what you want. Upvotes: 0
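A short sketch of the configuration the accepted answer describes, with the format pattern and properties exactly as named there:

```swift
import Foundation

let df = DateFormatter()
df.locale = Locale(identifier: "en_US_POSIX")   // stable, locale-independent parsing/formatting
df.timeZone = TimeZone(secondsFromGMT: 0)       // emit in UTC so the offset is +00:00
df.dateFormat = "yyyy-MM-dd'T'HH:mm:ssxxx"      // xxx never collapses a zero offset to Z

let date = df.date(from: "2018-01-30T22:13:12+00:00")!
print(df.string(from: date))                    // "2018-01-30T22:13:12+00:00"
```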
2018/03/14
365
1,336
<issue_start>username_0: Is it possible to upload a jar as a file into a database? I need to upload jars into mongodb. I don't know how to do that. I know about file upload with Spring Boot, and I know it is possible to upload a zip into a database, but I can't find information about JAR/WAR files.<issue_comment>username_1: JAR and WAR files are nothing more than renamed ZIP files. If you want to see it yourself rename `something.jar` to `something.zip` and open it using an archive manager. Since you said you know how to upload a ZIP you should follow the same procedure. If the file is small (e.g. less than 4MB) perhaps using BSON is the best approach. See [Storing Large Objects and Files in MongoDB](https://www.mongodb.com/blog/post/storing-large-objects-and-files-in-mongodb). Upvotes: 3 [selected_answer]<issue_comment>username_2: If you mean saving a jar file into a database - it depends on the database's support of [BLOB](https://en.wikipedia.org/wiki/Binary_large_object) data types. And if you mean using Java-based stored procedures from a JAR file - with [Oracle](https://docs.oracle.com/cd/B13789_01/java.101/b12021/storproc.htm) and [PostgreSQL](https://www.javacodegeeks.com/2012/10/introduction-to-postgresql-pljava.html) this is possible. MongoDB supports server-side JavaScript stored procedures only. Upvotes: 1
2018/03/14
418
1,171
<issue_start>username_0: I have a textfile that is ~ 10k lines long. There are always 216 lines describing a fact with a total of 17 values. I want to build a tensor that is 216 lines high, 13 columns wide and about 1000 layers deep. That would be the input. The output would be one line high, 4 columns wide and also about 1000 layers deep. Current status: ``` x_train = x_train.reshape (1308, 13, 216) y_train = y_train.reshape (1308, 4, 216) result = y_train [:,:, 0] ``` Conv: ``` model.add (Convolution2D (1, kernel_size = (13, 5), activation = 'relu', input_shape = (1308, 13, 216))) ``` Afterwards a little max pooling, etc., which shouldn't matter here. I just can't get the reshapes right. It would be great if someone could help me. Current error message: > > Input arrays should have the same number of samples as target arrays. > Found 1 input samples and 1308 target samples. > > > Many thanks<issue_comment>username_1: I think changing from `input_shape = (1308, 13, 216)` to `input_shape = (13, 216)` should work. Upvotes: 1 <issue_comment>username_2: I needed to change it into ``` input_shape = (13, 216, 1) ``` Upvotes: 2
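Putting the two answers together, a minimal sketch assuming Keras with a channels-last image format — the layer comes from the question, only the shapes change:

```python
# x_train holds 1308 samples of 13x216 values; Conv2D expects an explicit
# channel axis, so append a single channel to the data as well.
x_train = x_train.reshape(1308, 13, 216, 1)

# input_shape describes ONE sample (no batch dimension): 13x216 with 1 channel.
model.add(Convolution2D(1, kernel_size=(13, 5), activation='relu',
                        input_shape=(13, 216, 1)))
```

The original error arose because the batch size 1308 was baked into `input_shape`, so Keras treated the whole training set as a single sample.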
2018/03/14
1,550
5,650
<issue_start>username_0: I am developing a REST API using nodeJS, express, mongoose etc. with mongodb. I am uploading a file and saving it to a folder using multer. Now I want to save the path of the file to a mongodb document. However, I am saving data to mongodb using a mongoose schema. First I created the model. When a post request is made, I read it using bodyParser (req.body) and save this object by creating a new instance, or, as a shortcut: ```js Product.create(req.body).then(function(product){ res.send(product); }).catch(next); ``` But when I am using multer to upload a file and want to save the path to the model, I can't do it using the create() function. So what is the way?<issue_comment>username_1: If using multer you will get the uploaded file's path in `req.file.path` and you just need to save that in your database. Upvotes: 1 <issue_comment>username_2: Here you can upload the image to any destination you want; see this [reference](https://steemit.com/utopian-io/@morningtundra/storing-and-retreiving-images-in-mongodb-with-nodejs) for more detailed information, including how to access stored images in MongoDB, and the multer documentation [here](https://github.com/expressjs/multer). ``` var express = require('express') , router = express.Router() , MongoClient = require('mongodb').MongoClient , ObjectId = require('mongodb').ObjectId , fs = require('fs-extra') // Your mongodb or mLabs connection string , url = 'mongodb://username:password@yourinstanced.mlab.com:29459/yourdb' , multer = require('multer') , util = require('util') , upload = multer({limits: {fileSize: 2000000 },dest:'/uploads/'}) app.post('/profile', upload.single('avatar'), function (req, res, next) { // req.file is the `avatar` file // req.body will hold the text fields, if there were any if (req.file == null) { // If Submit was accidentally clicked with no file selected... } else { MongoClient.connect(url, function(err, db){ // this landing will give you any option of file information that you can collect console.log('landing here', req.file) // read the img file from tmp in-memory location var newImg = fs.readFileSync(req.file.path); // encode the file as a base64 string. var encImg = newImg.toString('base64'); // define your new document var newItem = { contentType: req.file.mimetype, size: req.file.size, name: req.file.originalname, path: req.file.path }; db.collection('yourcollectionname') .insert(newItem, function(err, result){ if (err) { console.log(err); }; var newoid = new ObjectId(result.ops[0]._id); fs.remove(req.file.path, function(err) { if (err) { console.log(err) }; res.send(newItem); }); }); }); }}); ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: In MongoDB we can store single or multiple images. For storing multiple images I am using a productPictures array. 
**1: First, create the Product model** ``` const mongoose = require('mongoose'); const productSchema = new mongoose.Schema({ name: { type: String, required: true, trim: true, }, productPictures: [{ img: { type: String } }], }); module.exports = mongoose.model('Product', productSchema); ``` **2: Create the product controller** ``` const Product = require('../models/product'); exports.createProduct = (req, res) => { const { name } = req.body; let productPictures = []; if (req.files.length > 0) { productPictures = req.files.map((file) => { return { img: file.filename }; }); } const product = new Product({ name, productPictures, }); product.save((error, product) => { if (error) return res.status(400).json({ error }); if (product) { res.status(201).json({ product }); } }); }; ``` **3: Create products route file** * I am using **nanoid** to generate a unique name for images * Create an uploads folder inside the src folder ``` const express = require('express'); const path = require('path'); const multer = require('multer'); const { nanoid } = require('nanoid'); const { createProduct } = require('../controllers/product'); const router = express.Router(); const storage = multer.diskStorage({ destination: function (req, file, cb) { cb(null, path.join(path.dirname(__dirname), 'uploads')); }, filename: function (req, file, cb) { cb(null, nanoid() + '-' + file.originalname); }, }); const upload = multer({ storage: storage }); router.post( '/products/create', upload.array('productPicture'), // for storing a single image: upload.single('productPicture') createProduct ); module.exports = router; ``` **4: Create server.js file** ``` const env = require('dotenv'); const express = require('express'); const mongoose = require('mongoose'); const app = express(); // routes const productRoutes = require('./routes/product'); env.config(); mongoose .connect(`${process.env.MONGO_URI}`, { useNewUrlParser: true, useUnifiedTopology: true, useCreateIndex: true, }) .then(() => { console.log('Database connected'); }); // Body parser (you can use **body-parser**) app.use(express.json()); app.use('/api', productRoutes); app.listen(process.env.PORT, () => { console.log(`Server is running on port ${process.env.PORT}`); }); ``` **5: Finally you can create the product using Postman** [![enter image description here](https://i.stack.imgur.com/3aXQO.png)](https://i.stack.imgur.com/3aXQO.png) Upvotes: 2
2018/03/14
400
1,643
<issue_start>username_0: How do I get otherImages to return the string in it, so that I can replace a word within it when called from the 'narrow' method? ``` def otherImages(self): self.wfile.write(bytes("[![](../images/menu_button.png)](/narrow)", "utf8")) #^word I want to replace def contentList(self, skip_name=''): # All content methods in list methods = [self.title, self.containerDIV, self.heading, self.stopSection, self.offlineSection, self.onlineSection, self.endDIV, self.otherImages] for m in methods: if m.__name__ != skip_name: m() def narrow(self): try: self.reply() self.contentList('onlineSection') # removed onlineSection for words in self.otherImages(): words.replace("narrow", "index") ```<issue_comment>username_1: `self.otherImages` doesn't `return` anything! When a function does not return an explicit value in Python, it returns `None`. You cannot iterate over `None`. Upvotes: 2 <issue_comment>username_2: Here are the changes I made which solve my problem. It allows me to edit the string when called from the 'narrow' method. ``` def otherImages(self): return["[![](../images/menu_button.png)](/narrow)"] def narrow(self): try: self.reply() self.contentList('onlineSection') # removed onlineSection for words in self.otherImages(): words = words.replace("/narrow", "/") self.wfile.write(bytes(words, "utf8")) return ``` Upvotes: 1 [selected_answer]
2018/03/14
527
2,418
<issue_start>username_0: Is it possible to transfer virtual environment data from a local host to a docker image via the ADD command? Rather than doing pip installs inside the container, I would rather the user have all of that done locally and simply transfer the virtual environment into the container. Granted, all of the files have the same names locally as in the docker container, along with all directories being nested properly. This would save minutes to hours if it was possible to transfer virtual environment settings into a docker image. Maybe I am thinking about this in the wrong abstraction. It just feels very inefficient doing pip installs via a requirements.txt that was passed into the container, as opposed to doing it all locally; otherwise each time the image is built it has to re-install the same dependencies that have not changed since the previous build.<issue_comment>username_1: While possible, it's not recommended. * Dependencies (library versions, globally installed packages) can be different on host machine and container. * Image builds will not be 100% reproducible on other hosts. * Impact of pip install is not big. Each RUN command creates its own layer, which is cached locally and also in the repository, so pip install will be re-run only when requirements.txt is changed (or previous layers are rebuilt). To trigger pip install only on requirements.txt changes, the Dockerfile should start this way: ``` ... COPY requirements.txt ./ RUN pip install -r requirements.txt COPY src/ ./ ... ``` Also, it will be run only on image build, not container startup. If you have multiple containers with the same dependencies, you can build an intermediate image with all the dependencies and build other images `FROM` it. Upvotes: 2 <issue_comment>username_2: We had run into this problem earlier and here are a few things we considered: 1. Consider building base images that have common packages installed. The app containers can then use one of these base containers and install the differential. 2. [Cache the Pip packages](https://packaging.python.org/guides/index-mirrors-and-caches/ "Package Caching") on a local path that can be mounted on the container. That'll save the time to download the packages. Depending on the complexity of your project one may suit better than the other - you may also consider a hybrid approach to find maximum optimization. Upvotes: 4 [selected_answer]
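Related to the caching idea in the second answer: on newer Docker versions with BuildKit enabled, a cache mount keeps pip's download cache between builds without baking it into an image layer. A sketch (not from the original answers; the base image is illustrative):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3
COPY requirements.txt ./
# The pip cache persists across builds on the build host, so unchanged
# packages are not re-downloaded even when requirements.txt changes.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install -r requirements.txt
COPY src/ ./
```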
2018/03/14
546
2,287
<issue_start>username_0: I was scratching my head when I was trying to find some sample code for designing an inline editing form input component in Angular 5. I ran across the following lines: ``` public onChange: any = Function.prototype; public onTouched: any = Function.prototype; ``` My question is: What do they do? The example then goes on and implements the `ControlValueAccessor` interface. It would implement some members like this: ``` public registerOnChange(fn: (_: any) => {}): void { this.onChange = fn; } public registerOnTouched(fn: () => {}): void { this.onTouched = fn; } ``` So it seems you would assign some function to the global Function prototype. Is that good practice and what is the author trying to do with that?
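A hedged note on the pattern itself: `Function.prototype` is a callable function object — calling it does nothing and returns `undefined` — so these lines merely give `onChange`/`onTouched` a safe no-op default until Angular calls `registerOnChange`/`registerOnTouched` and swaps in the real callbacks. Nothing is assigned *to* the global `Function` prototype; it is only read. A more explicit equivalent, written as members of the component class:

```typescript
// Equivalent, more explicit defaults: a no-op arrow function instead of
// Function.prototype (which itself ignores its arguments and returns undefined).
public onChange: (value: any) => void = () => {};
public onTouched: () => void = () => {};
```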
2018/03/14
4,299
14,987
<issue_start>username_0: I would like to know why this query takes is slow (about 10 to 20 seconds), the three tables used have 500,000 records, this is the query: ``` SELECT *, 'rg_egresos' AS nombre_tabla FROM rg_detallexml DE INNER JOIN rg_egresos EG INNER JOIN rg_emisor EM ON DE.idContador = EG.id AND DE.idDetalleXml = EG.idDetalleXml AND DE.idContador = EM.idContador AND DE.idDetalleXml = EM.idDetalleXml WHERE DE.idContador = '14894' AND DATE_FORMAT(dateFechaHora, '%Y-%m-%d') BETWEEN '2017-10-01' AND '2017-10-31' AND strTipodeComprobante = 'egreso' AND version_xml = '3.2' AND estado_factura = 0 AND modificado = 0; ``` And this is what it shows when I use `EXPLAIN` ``` *************************** 1. row *************************** id: 1 select_type: SIMPLE table: EG type: index_merge possible_keys: idx_idDetallexml,idx_estado_factura,idx_modificado,idx_idContador key: idx_idContador,idx_estado_factura,idx_modificado key_len: 4,4,4 ref: NULL rows: 2111 Extra: Using intersect(idx_idContador,idx_estado_factura,idx_modificado); Using where *************************** 2. row *************************** id: 1 select_type: SIMPLE table: DE type: eq_ref possible_keys: PRIMARY,idx_strTipodeComprobante,idx_idContador,idx_version_xml key: PRIMARY key_len: 4 ref: db_pwf.EG.idDetalleXml rows: 1 Extra: Using where *************************** 3. row *************************** id: 1 select_type: SIMPLE table: EM type: ref possible_keys: idx_idContador,idx_idDetallexml key: idx_idDetallexml key_len: 4 ref: db_pwf.DE.idDetalleXml rows: 1 Extra: Using where ``` Can you see a way to improve the query?, I have other queries working with bigger tables and they are faster, all the required fields have its index, thanks. Table rg\_detallexml: ``` +---------------------------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------------------------+--------------+------+-----+---------+----------------+ | idDetalleXml | int(10) | NO | PRI | NULL | auto_increment | | UUID | varchar(50) | NO | MUL | NULL | | | dateFechaSubida | varchar(7) | YES | | NULL | | | idContador | int(10) | NO | MUL | NULL | | | dateFechaHora | datetime | YES | MUL | NULL | | | dateFechaHoraCertificacion | datetime | YES | | NULL | | | dateFechaPago | datetime | YES | | NULL | | | intFolio | int(10) | YES | | NULL | | | strSerie | varchar(2) | YES | | A | | | doubleDescuento | double | YES | | NULL | | | doubleTotal | double | YES | | NULL | | | doubleSubtotal | double | YES | | NULL | | | duobleTotalImpuestosTrasladados | double | YES | | NULL | | | doubleTotalImpuestosRetenidos | double | YES | | NULL | | | doubleTotalRetencionesLocales | double | YES | | NULL | | | doubleTotalTrasladosLocales | double | YES | | NULL | | | strTipodeComprobante | varchar(15) | YES | MUL | NULL | | | strMetodoDePago | varchar(150) | YES | | NULL | | | strFormaDePago | varchar(150) | YES | | NULL | | | strMoneda | varchar(10) | YES | | NULL | | | tipoCambio | double | NO | | NULL | | | strLugarExpedicion | varchar(150) | YES | | NULL | | | DIOT | int(1) | YES | | 0 | | | version_xml | varchar(10) | NO | MUL | NULL | | +---------------------------------+--------------+------+-----+---------+----------------+ ``` Table rg\_egresos: ``` +---------------------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------------------+--------------+------+-----+---------+----------------+ | id_egreso | int(11) | NO | PRI | 
NULL | auto_increment | | id | int(11) | NO | MUL | NULL | | | idDetalleXml | int(10) | NO | MUL | NULL | | | idCatalogo | int(19) | NO | MUL | NULL | | | tipoCuenta | int(11) | NO | MUL | NULL | | | intRubro | int(1) | NO | | NULL | | | RFC | varchar(20) | NO | MUL | NULL | | | compra_gastos_0_porciento | float | NO | MUL | NULL | | | deducible | int(1) | NO | | NULL | | | compra_gastos_exentos | float | NO | | NULL | | | no_deducibles | float | NO | | NULL | | | estado_factura | int(11) | NO | MUL | NULL | | | fecha | date | NO | MUL | NULL | | | total_xml | double | NO | | NULL | | | subtotal_xml | double | NO | | NULL | | | iva_xml | double | NO | | NULL | | | total_impuestos | double | NO | | NULL | | | abonado | double | NO | | NULL | | | subtotal | double | NO | | NULL | | | iva | double | NO | | NULL | | | pendiente | double | NO | | NULL | | | subtotal_sin_iva | double | NO | | NULL | | | acreditable | int(1) | NO | MUL | 0 | | | fecha_operacion | datetime | NO | MUL | NULL | | | modificado | int(1) | NO | MUL | NULL | | | UUID | varchar(50) | NO | MUL | NULL | | | IEPS | double | NO | | NULL | | | retencion_iva | double | NO | | NULL | | | retencion_isr | double | NO | | NULL | | | imp_local | double | NO | | 0 | | | enviado_a | int(11) | NO | MUL | NULL | | | enviado_al_iva | int(1) | NO | | NULL | | | EsNomina | int(1) | NO | MUL | 0 | | | dateFechaPago | date | NO | MUL | NULL | | | nota_credito | int(1) | NO | MUL | NULL | | | extranjero | int(1) | NO | MUL | NULL | | | pago_banco | int(1) | NO | MUL | NULL | | | idBanco_Pago | int(20) | NO | MUL | NULL | | | movimientoPago | int(10) | NO | | NULL | | | saldo_banco | varchar(50) | NO | | NULL | | | tipo_pago | int(1) | NO | | 0 | | | responsable | varchar(100) | NO | | NULL | | +---------------------------+--------------+------+-----+---------+----------------+ ``` Table rg\_emisor: ``` +-----------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-----------------+--------------+------+-----+---------+----------------+ | idEmisor | int(10) | NO | PRI | NULL | auto_increment | | idDetalleXml | int(10) | NO | MUL | NULL | | | idContador | int(10) | NO | MUL | NULL | | | strRFC | varchar(13) | NO | | NULL | | | strNombreEmisor | varchar(200) | YES | | NULL | | | strRegimen | varchar(250) | YES | | NULL | | | strPais | varchar(40) | YES | | MX | | | strEstado | varchar(50) | YES | | NULL | | | intCP | int(5) | YES | | NULL | | | strMunicipio | varchar(250) | YES | | NULL | | | strLocalidad | varchar(250) | YES | | NULL | | | strColonia | varchar(250) | YES | | NULL | | | intNumExt | int(10) | YES | | NULL | | | intNumInt | int(10) | YES | | NULL | | | strCalle | varchar(250) | YES | | NULL | | | regimenFiscal | varchar(20) | YES | | NULL | | +-----------------+--------------+------+-----+---------+----------------+ ```<issue_comment>username_1: The biggest problem I see is on this part: ``` DATE_FORMAT(dateFechaHora, '%Y-%m-%d') BETWEEN '2017-10-01' AND '2017-10-31' ``` is dateFechaHora a datetime field? Why are you converting a datetime field to a string (DATE\_FORMAT)? even if you have an index on the dateFechaHora field, it won't be used. I would suggest you to use this code instead: ``` and DateFechaHora >= '2017-10-01' and DateFechaHora < '2017-11-01' ^^^^^^^^^^ ``` yes it's the following day and it won't be included. 
So your query might look like this: ``` select *, 'rg_egresos' AS nombre_tabla from rg_detallexml DE inner join rg_egresos EG on DE.idContador = EG.id and DE.idDetalleXml = EG.idDetalleXml inner join rg_emisor EM on DE.idContador = EM.idContador and DE.idDetalleXml = EM.idDetalleXml where DE.idContador = '14894' and dateFechaHora >= '2017-10-01' and dateFechaHora < '2017-11-01' and strTipodeComprobante = 'egreso' and version_xml = '3.2' and estado_factura = 0 and modificado = 0 ; ``` Upvotes: 1 <issue_comment>username_2: **Now that you've shown the tables, we see that `rg_egresos.id` is not the table's ID. There can hence be multiple records for one contador in the table. Let's look at the tables and the query more closely:** All tables contain a contador ID and a DetalleXml ID. You want to join them all on these two fields. So you start with the `rg_detallexml` and get all records for the contador. With the `idDetalleXml` thus found, you search for `rg_egresos` and `rg_emisors`. This is a bit strange. First of all an `rg_detallexml` is obviously linked to one contador, but in the other tables the `rg_detallexml` can be linked to another contador. Well, that may be possible (some kind of from/to relation maybe). But with five `rg_egresos` records and four `rg_emisors` records for an `rg_detallexml`/contador, you'd select twenty records, because you are combining `rg_egresos` records with `rg_emisors` records that are not really related. Anyway: you want to find `rg_detallexml` quickly. ``` create index idx_de on rg_detallexml(idcontador, strtipodecomprobante, version_xml, datefechahora, iddetallexml); ``` Then you look for `rg_egresos`: ``` create index idx_eg on rg_egresos(id, iddetallexml, estado_factura, modificado); ``` At last you look for `rg_emisor`: ``` create index idx_em on rg_emisor(idcontador, iddetallexml); ``` As the columns are present in all tables, we could of course go through them in any order. Starting with `rg_detallexml` seems most natural and most restrictive, too, but that is not necessarily best. 
The query, after my suggestions: ``` SELECT DE.*, EG.*, EM.*, 'rg_egresos' AS nombre_tabla FROM rg_detallexml DE INNER JOIN rg_egresos EG ON DE.idContador = EG.id AND DE.idDetalleXml = EG.idDetalleXml INNER JOIN rg_emisor EM ON DE.idContador = EM.idContador AND DE.idDetalleXml = EM.idDetalleXml WHERE DE.idContador = '14894' AND DE.dateFechaHora >= '2017-10-01' AND DE.dateFechaHora < '2017-10-01' + INTERVAL 1 MONTH AND DE.strTipodeComprobante = 'egreso' AND DE.version_xml = '3.2' AND EG.estado_factura = 0 AND EG.modificado = 0; ``` Upvotes: 1
2018/03/14
1,790
5,687
<issue_start>username_0: I'm trying to evaluate an equation randomly generated by the system, where the two integers are stored in an array and the operator is in a separate string array. I want to compare the answer with the user's answer. Here is my code: ``` Integer[] array; String[] operators = {"+", "-", "*", "/"}; String question; int operator = 0; public void generateSum(){ for (int x = 0; x < 2; x++) { Random randomNumber = new Random(); //creating random object int number = randomNumber.nextInt(100) + 1; Random randomOperator = new Random(); operator = randomOperator.nextInt(3); array[x] = (number); } question = array[0].toString() + operators[operator] + array[1].toString() + "="; TextView txt = findViewById(R.id.txtQuestion); txt.setText(question); } ```
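A sketch of one way to evaluate the generated expression for comparison with the user's input. Two details of the question's code matter here: `array` must be allocated (e.g. `array = new Integer[2]`) before `generateSum()` writes into it, and `nextInt(3)` only returns 0–2, so `"/"` would never be picked — `nextInt(4)` covers all four operators. The method name below is hypothetical:

```java
// Compute the expected result for the stored operands and operator index.
private double evaluate() {
    double a = array[0], b = array[1];   // unboxed to double so "/" is real division
    switch (operators[operator]) {
        case "+": return a + b;
        case "-": return a - b;
        case "*": return a * b;
        case "/": return a / b;
        default:  throw new IllegalStateException("unknown operator");
    }
}

// Later, compare with the user's answer parsed from an input field, e.g.:
// boolean correct = Double.parseDouble(userText) == evaluate();
```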
2018/03/14
827
3,444
<issue_start>username_0: I have a Java program in IntelliJ which has a pom.xml and uses Maven. The packages were downloaded and currently they are found by IntelliJ. I'm a little confused though because the Maven repository is not part of the CLASSPATH as far as I can tell. So does IntelliJ just do a bit of magic where it looks into its Maven repository to find the packages? (I think that IntelliJ has its own Maven repo. I separately have Maven 3 installed, but I think it isn't using it.) But more generally: If you build a JAR using Maven then I guess it will put the dependencies in the JAR where the Java program can find them, so there won't be a problem. But if you just run a Java program directly, do you need to add the Maven repository to your classpath or does something else happen? Thanks for any information you can provide to lessen my confusion :)<issue_comment>username_1: When you start the program from IntelliJ using a runtime configuration for your `main()` method IntelliJ constructs the classpath from all the project dependencies. You can see this in the Run window, the first log line is the `java` command used to start the `main()`. It's a long line but it usually looks similar to: ``` java -javaagent:/opt/idea/idea-IC-173.3727.127/lib/idea_rt.jar=40165:/opt/infra/idea/idea-IC-173.3727.127/bin -Dfile.encoding=UTF-8 -classpath /home/ [...] ``` IntelliJ constructs the `-classpath` argument adding both the module `target` directory and the Maven dependencies referenced from the local Maven repository. When you package the project using Maven `mvn clean package` usually it becomes a standalone JAR with just your code (unless you changed the defaults). Now you have a few choices how to provide dependencies needed to start your `main()`: 1. Provide them using the `-classpath` parameter just like IntelliJ. 2. Add `maven-shade-plugin` and use the `shade` goal to build [a runnable Uber JAR](https://maven.apache.org/plugins/maven-shade-plugin/). This creates a fat JAR which doesn't require `-classpath`. 3. Use some other Maven plugin to perform point 2 e.g. the Spring Boot `spring-boot:repackage` goal. Upvotes: 2 <issue_comment>username_2: All the required dependencies, defined in the pom.xml file(s), are downloaded from Maven Central (or others if configured) to the local Maven repository. That repository is located at `~/.m2/repository` (your user home). Maven generates/calculates a dependency tree to know all the required dependencies for the project. (you can also dump that tree with the command `mvn dependency:tree`. I always pipe the result to a file, because the tree can be large `mvn dependency:tree > deptree.txt`). **Maven puts them all on the classpath** when executing a maven command like `mvn compile` IntelliJ also uses/calculates the dependency tree and adds all the jar files to the project's classpath (pointing to the files in the `~/.m2/repository` folder). You can see them all in the list with External Libraries, and they will be used / on the classpath for compilation and running the application. When building a JAR file the dependencies are NOT added to the JAR. Only the bytecode (java classes) and resources from your own project are packaged into the JAR file. (Source files can also be packaged if you configure that) By adding a Maven plugin (`maven-shade-plugin`) you can configure your project to also pack dependencies into the JAR. Spring Boot projects will also do that. Upvotes: 2
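For the "run a Java program directly" part of the question, the maven-dependency-plugin can export the dependency classpath for you — a sketch (the main class name is a placeholder; use `;` instead of `:` as the path separator on Windows):

```
mvn dependency:build-classpath -Dmdep.outputFile=cp.txt
java -cp "target/classes:$(cat cp.txt)" com.example.Main
```

This writes the colon-separated list of jar paths in the local repository to `cp.txt`, which is essentially the same `-classpath` IntelliJ builds for you.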
2018/03/14
868
2,221
<issue_start>username_0: I have a `DataFrame` like this: ``` A B ---------- c d e f ``` I'd like to introduce a third column, made up of a concatenation of `A`, `B` and the index, so that the `DataFrame` becomes: ``` A B C --------------- c d cd0 e f ef1 ``` I'd like to do that like so: ``` df['C'] = df['A'] + df['B'] + # and here I don't know how to reference the row index. ``` How can I do this?<issue_comment>username_1: ``` df['C'] = df['A'].astype(str) + df['B'].astype(str) + np.array(list(map(str, df.index.values))) ``` Basically you access the df index with df.index, and to turn that into a numpy array you add the .values, and to convert that into a string (to easily add to the previous columns, which are strings), you can use a map function wrapped in `list()` (in Python 3, `map` returns an iterator, which NumPy would otherwise not expand into an array). Edit: added .astype(str) to columns A and B, to convert them to strings. If they are already strings, this won't be necessary. Upvotes: 0 <issue_comment>username_2: **Option 1** For better scalability, use `assign` + `agg`: ``` df['C'] = df.assign(index=df.index.astype(str)).agg(''.join, 1) df A B C 0 c d cd0 1 e f ef1 ``` Or, using `np.add.reduce` in a similar fashion: ``` df['C'] = np.add.reduce(df.assign(index=df.index.astype(str)), axis=1) df A B C 0 c d cd0 1 e f ef1 ``` --- **Option 2** A less scalable option using vectorised string concatenation: ``` df['C'] = df['A'] + df['B'] + df.index.astype(str) df A B C 0 c d cd0 1 e f ef1 ``` Upvotes: 4 [selected_answer]<issue_comment>username_3: **With `pd.DataFrame.itertuples`** Python 3.6 ``` df.assign(C=[f'{a}{b}{i}' for i, a, b in df.itertuples()]) A B C 0 c d cd0 1 e f ef1 ``` --- **With `pd.Series.str.cat`** ``` df.assign(C=df.A.str.cat(df.B).str.cat(df.index.astype(str))) A B C 0 c d cd0 1 e f ef1 ``` --- **Mish/Mash** ``` from operator import add from functools import reduce from itertools import chain df.assign(C=reduce(add, chain((df[c] for c in df), [df.index.astype(str)]))) A B C 0 c d cd0 1 e f ef1 ``` --- **Summation** ``` df.assign(C=df.sum(1) + df.index.astype(str)) A B C 0 c d cd0 1 e f ef1 ``` Upvotes: 2
2018/03/14
434
1,458
<issue_start>username_0: I'd like to be able to run my detox tests and my Jest unit tests separately. For example, run detox tests with `detox build && detox test`, and my Jest unit tests with `npm test`. After implementing detox (using mocha as the test runner), running `npm test` results in an immediate error, and it looks like it's trying to run my detox tests (not what I'd expect)! Here's the first error I get. `FAIL e2e/auth.spec.js` Not sure why it's trying to run detox tests, when my package.json is pointing the test script to Jest. `"scripts": { "start": "node node_modules/react-native/local-cli/cli.js start", "test": "jest" }` How do I run my jest tests now?<issue_comment>username_1: By default `jest` runs all files in your project directory, that have the `.test.` or `.spec.` extension to them. That's why it picks up your detox test files and fails to execute them. <https://facebook.github.io/jest/docs/en/configuration.html#testmatch-array-string> You have to override this default behavior in order for the two not to clash. Here's what we use in our `package.json` just for reference, you might want to change it: ``` "jest": { "testMatch": [ "<rootDir>/__tests__/**/*.test.js?(x)", "<rootDir>/src/**/*.test.js" ] } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: If you don't keep your files in one folder like `__tests__`, just add this to package.json ``` "jest": { "testMatch": ["**/*+(.test.js)"] } ``` Upvotes: 1
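An alternative worth noting (a standard Jest option, not mentioned in the answers above): keep Jest's default matching and exclude the detox folder instead, via `testPathIgnorePatterns` — the `e2e` path follows the question's error message:

```json
"jest": {
  "testPathIgnorePatterns": ["/node_modules/", "/e2e/"]
}
```

Since setting this option replaces Jest's default ignore list, `/node_modules/` is repeated explicitly.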
2018/03/14
1,655
6,304
<issue_start>username_0: Can I require classes implementing an interface to have a certain static field or method and access/invoke that field or method through a generic type argument? I have an interface, `Arithmetical<T>`, which specifies several functions like `T plus(T o)` and `T times(T o)`. I have as well a `Vector<N extends Arithmetical<N>>` class, which is intended for vectors (of variable dimension) with components of type `N`. I ran into an issue, however, when trying to implement the [dot product](https://en.wikipedia.org/wiki/Dot_product). I want to implement the method `N dot(Vector<N> o)`. For this, I plan to start with whatever `N`'s zero is and iterate through both `Vector`s' `List`s, adding the product of each pair of elements to my total. Is there a way to specify in `Arithmetical` that all implementing classes must have a static (and preferably final) field `ZERO` and start `dot(Vector<N> o)`'s body with something along the lines of `N sum = N.ZERO;`? If not, what other approaches might there be to this problem? I want to allow 0-dimensional vectors, so I can't just begin by multiplying the vectors' first components. Is there a way to instantiate an object of a generic type, so I can merely specify a `T zero()` method in `Arithmetical`? I have a reason for not using Java's numerical types—I want to have vectors with complex components. Here's Arithmetical: ``` public interface Arithmetical<T> { public T plus(T o); public T minus(T o); public T negate(); public T times(T o); public T over(T o); public T inverse(); // Can I put a line here that requires class Complex (below) to define ZERO? } ``` Vector: ``` public class Vector<N extends Arithmetical<N>> { private List<N> components; public Vector(List<N> cs) { this.components = new ArrayList<N>(cs); } public N dot(Vector<N> o) { // Here's where I need help. } } ``` And Complex: ``` public class Complex implements Arithmetical<Complex> { public static final Complex ZERO = new Complex(0, 0); // Can I access this value through N (if N extends Arithmetical<Complex>)? private double real; private double imag; public Complex(double r, double i) { this.real = r; this.imag = i; } /* Implementation of Arithmetical<Complex> (and some more stuff) not shown... */ } ``` I'm quite new to Java (and programming in general); I will likely not understand complex (ha) explanations and workarounds. Thanks! (Python is a suggested tag... [Huh.](https://stackoverflow.com/q/17811855/5499914))<issue_comment>username_1: > > Can I put a line here that requires class Complex (below) to define ZERO? > > > No. The best you can do is to define an interface, for example: ``` interface ZeroProvider<A extends Arithmetical<A>> { A zero(); } ``` and then supply a compatible instance of that where you need to provide a zero, for example: ``` class ComplexZeroProvider implements ZeroProvider<Complex> { public Complex zero() { return new Complex(0, 0); } } ``` Upvotes: 2 <issue_comment>username_2: You need a "zero" for every possible implementation type. A constant in the interface won't do, because a constant cannot be overridden and must remain the same. The solution is to add a new method to your `Arithmetical` interface: ``` public T zero(); ``` Each implementation is forced to implement this and return its own version of zero. In this case, you're using it as a starting point for adding; it's the additive identity. The `Complex` class implementation would look like this. ``` @Override public Complex zero() { return ZERO; } ``` If your instances are mutable, then don't use a constant; just return `new Complex(0, 0)`. 
Another idea is to borrow from what `Stream`s do when `reduce`-ing items and combining them to one single item -- take an identity value that represents the initial state, i.e. no items collected yet -- zero. ``` public N dot(Vector<N> o, N identity) { N dotProduct = identity; // Perform operations on each item in your collection // to accumulate and return a dot product. } ``` The caller will have to supply the identity value. ``` Complex dotProduct = vectorOfComplex.dotProduct(otherVector, new Complex(0, 0)); ``` Upvotes: 4 [selected_answer]<issue_comment>username_3: There's something you can do sometimes using reflection in situations like this. If you put the following method in the `Vector` class, it will invoke a static method `N.zero()` (with caveats, below): ``` protected N zero() { try { Type s = getClass().getGenericSuperclass(); @SuppressWarnings("unchecked") Class<N> n = (Class<N>) ((ParameterizedType) s).getActualTypeArguments()[0]; Method zero = n.getMethod("zero"); return n.cast(zero.invoke(null)); } catch (RuntimeException | ReflectiveOperationException x) { // probably better to make a custom exception type throw new IllegalArgumentException("illegal type argument", x); } } ``` However, it's important to understand what this is actually doing. This is getting the type argument from the class file of the direct superclass of `this`. In other words, there must actually be a superclass of `this` with an actual type argument (which is a class). The usual idiom then is that you'd create all of your vectors like this: ``` new Vector<Complex>() {} ``` instead of this: ``` new Vector<Complex>() ``` Or you'd declare subclasses like this: ``` public class Vector<N extends Arithmetical<N>> { // ... public static class OfComplex extends Vector<Complex> { } } ``` Since you need an actual superclass with a type argument which is a class, instantiations like in the following examples will fail: ``` new Vector<N>() {} // type variable, not an actual class new Vector() // never use this anyway new Vector() {} // never use this anyway // also, you can't do stuff like this: public Vector<N> copy() { return new Vector<N>(this) {}; } ``` In your case I think the suggestions in the other answers are better, but I wanted to post this answer along with the proper explanation and caveats which are sometimes not included. There are cases where this technique is actually good, mainly when you have pretty tight restrictions on how the class in question is extended. [Guava `TypeToken`](http://google.github.io/guava/releases/snapshot-jre/api/docs/com/google/common/reflect/TypeToken.html) will also do some of the reflection for you. Also, this is the best Java can do at doing exactly what you're asking for (at the moment), so it's worthwhile to point out just as a comparison. Upvotes: 1
2018/03/14
469
1,639
<issue_start>username_0: I'm trying to perform a simple operation: take a file and put "> " at the front of every line. However, when I try to use Visual Studio Code to do it, the regular expression "^" doesn't match all the lines. In particular, it matches: * blank lines * lines starting with "-", "{" or " " but not * lines starting with a letter This makes no sense to me, I'm told it uses Rust's regular expression engine but I can't see anything in the documentation that would suggest this would happen. Why does this happen and how do I fix it? [![well this isn't ideal](https://i.stack.imgur.com/RRXfS.png)](https://i.stack.imgur.com/RRXfS.png) This is what happens if I try "^.". [![^.](https://i.stack.imgur.com/f8SHG.png)](https://i.stack.imgur.com/f8SHG.png)<issue_comment>username_1: The Visual Studio text editor has a Regex implementation. You could populate this with some of your data and develop your Regex expression manually before you code it. I'm looking at Visual Studio Code (an MS product) on Linux, and using the equivalent of Search `^(.*)$` Replace `>$1` in the editor I may have solved your problem. ``` -999 {42 uuu AAA ``` becomes ``` >-999 >{42 > > uuu >AAA ``` This Regex technique is called group capturing. Upvotes: 3 <issue_comment>username_2: It turns out the correct answer is: because "match whole word" is switched on. This is visible in the screenshot above, but not very obvious. Upvotes: 3 [selected_answer]<issue_comment>username_3: Just add an anchor `^` ``` ^my_pattern ``` That is, if you are just trying to match the start of the line (like the question asks). Upvotes: 0
2018/03/14
893
2,772
<issue_start>username_0: I am doing an assignment for PowerShell and one of the functions is to say when the last boot was. I am printing the date and the 'time since'; the date works fine, but I think there is too much code for displaying the 'time since'. I want the first value to not be zero. Like this:

> 1 Hour, 0 Minutes, 34 Seconds

and not like this:

> 0 Days, 1 Hours, 0 Minutes, 34 Seconds

```
$bootDate = (Get-CimInstance Win32_OperatingSystem).LastBootUpTime
$bootTime = $(Get-Date).Subtract($bootDate)

# Think there is an easier way, but couldn't find any :/
$time = ""
if($bootTime.Days -ne 0) {
    $time = "$($bootTime.Days) Days, $($bootTime.Hours) Hours, $($bootTime.Minutes) Minutes, "
}
elseif($bootTime.Hours -ne 0){
    $time = "$($bootTime.Hours) Hours, $($bootTime.Minutes) Minutes, "
}
elseif($bootTime.Minutes -ne 0){
    $time = "$($bootTime.Minutes) Minutes, "
}

echo "Time since last boot: $time$($bootTime.Seconds) Seconds"
echo "Date and time: $($bootDate.DateTime)"
```

This code prints it as I want it to be, but it just seems like too much code for something so little. Is there an easier way?<issue_comment>username_1: Make sure you inspect `TotalDays` rather than `Days`. Additionally, I would split the code into a separate function:

```
function Get-TruncatedTimeSpan {
    param([timespan]$TimeSpan)

    $time = ""
    if($TimeSpan.TotalDays -ge 1) {
        $time += "$($TimeSpan.Days) Days, "
    }
    if($TimeSpan.TotalHours -ge 1){
        $time += "$($TimeSpan.Hours) Hours, "
    }
    if($TimeSpan.TotalMinutes -ge 1){
        $time += "$($TimeSpan.Minutes) Minutes, "
    }
    return "$time$($TimeSpan.Seconds) Seconds"
}

$bootDate = (Get-CimInstance Win32_OperatingSystem).LastBootUpTime
$bootTime = $(Get-Date).Subtract($bootDate)

echo "Time since last boot: $(Get-TruncatedTimeSpan $bootTime)"
echo "Date and time: $($bootDate.DateTime)"
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: A concise solution based on removing the longest run of `0`-valued components from the start, using the `-replace` operator, which uses a regular expression for matching (and by not specifying a replacement string effectively *removes* the match):

```
function get-FriendlyTimespan {
    param([timespan] $TimeSpan)
    "{0} Days, {1} Hours, {2} Minutes, {3} Seconds" -f $TimeSpan.Days, $TimeSpan.Hours, $TimeSpan.Minutes, $TimeSpan.Seconds -replace '^0 Days, (0 Hours, (0 Minutes, )?)?'
}

# Invoke with sample values (using string-based initialization shortcuts):
"0:0:1", "0:1:0", "1:0:0", "1", "0:2:33" | % { get-FriendlyTimespan $_ }
```

The above yields:

```
1 Seconds
1 Minutes, 0 Seconds
1 Hours, 0 Minutes, 0 Seconds
1 Days, 0 Hours, 0 Minutes, 0 Seconds
2 Minutes, 33 Seconds
```

Upvotes: 1
2018/03/14
388
1,244
<issue_start>username_0: I have an object (in real code there are more values inside):

```
req.session.params = {valueA: a, valueB: b, result: result}
```

I would like to pass its pairs to res.render() along with the others. For now I'm doing:

```
res.render('mainView.ejs', {
    otherVal: x,
    valueA: a,
    valueB: b,
    result: result
});
```

but is there a way to do that more quickly? Something like:

```
res.render('mainView.ejs', {otherVal: x, {req.session.passedParams} });
```<issue_comment>username_1: You can use something like below

```
res.render('mainView.ejs', Object.assign({otherVal: x}, req.session.passedParams));
```

or

```
res.render('mainView.ejs', {otherVal: x, ...req.session.passedParams});
```

PS: See [Surely ES6+ must have a way to merge two javascript objects together, what is it?](https://stackoverflow.com/questions/13852852/surely-es6-must-have-a-way-to-merge-two-javascript-objects-together-what-is-it)

Upvotes: 2 [selected_answer]<issue_comment>username_2: You could use the lodash package to accomplish this easily

```
var _ = require('lodash');

var params = _.assign({otherVal: x}, req.session.passedParams);
console.log(params);
```

I hope this helps

Upvotes: 0
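As a side note, object spread in literals needs Node 8.3 or newer (otherwise fall back to `Object.assign`). A quick standalone sketch of the merged render locals -- the variable names here are illustrative:

```
const session = { passedParams: { valueA: 1, valueB: 2, result: 3 } };
const locals = { otherVal: 'x', ...session.passedParams };
console.log(locals); // { otherVal: 'x', valueA: 1, valueB: 2, result: 3 }
```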
2018/03/14
576
2,167
<issue_start>username_0: I have a tableview definition in which I am attempting to invoke a UIAlertController popup. I installed a button in the prototype tableView cell; when the button is touched, an IBAction handles the event. The problem is that the compiler won't let me.

```
present(alertController, animated: true, completion: nil)
```

generates the compiler error: "Use of unresolved identifier 'present'"

Here is the code:

```
class allListsCell: UITableViewCell {

    @IBOutlet var cellLable: UIView!
    @IBOutlet var cellSelected: UILabel!
    var colorIndex = Int()

    @IBAction func cellMarkButton(_ sender: UIButton, forEvent event: UIEvent) {
        if colors[self.colorIndex].selected == false {
            colors[self.colorIndex].selected = true
            cellSelected.text = "•"

            let alertController = UIAlertController(title: "???", message: "alertA", preferredStyle: .alert)
            let OKAction = UIAlertAction(title: "dismiss", style: .default) { (action:UIAlertAction!) in
                print("Sand: you have pressed the Dismiss button");
            }
            alertController.addAction(OKAction)
            present(alertController, animated: true, completion: nil) // ERROR
        } else {
            colors[self.colorIndex].selected = false
            cellSelected.text = ""
        }
    }
```

If I comment that one line, the app runs correctly for each cell...
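For context, `present(_:animated:completion:)` is a `UIViewController` method, not a `UITableViewCell` one, which is why the identifier is unresolved inside the cell. A minimal sketch of the common workaround -- handing the tap back to the owning view controller through a closure (names here are illustrative, not from the thread):

```
import UIKit

class AllListsCell: UITableViewCell {
    // Set by the view controller in cellForRowAt.
    var onMark: (() -> Void)?

    @IBAction func cellMarkButton(_ sender: UIButton, forEvent event: UIEvent) {
        onMark?()
    }
}

// In the owning UIViewController, when configuring the cell:
// cell.onMark = { [weak self] in
//     let alert = UIAlertController(title: "???", message: "alertA", preferredStyle: .alert)
//     alert.addAction(UIAlertAction(title: "dismiss", style: .default))
//     self?.present(alert, animated: true)
// }
```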
2018/03/14
1,332
4,480
<issue_start>username_0: So for some time now I have been trying to make a proper shuffle script, using the YouTube API, in order to play my YouTube playlist. I've found a lot of examples, but none of them seem to be working very well. Some do shuffle the list but not the first song being played, and some do the precise opposite.

What I would like is to shuffle the full playlist and then start playing. So the first played song should be random and the next one played should be random/shuffled as well.

I've found the script below to shuffle the playlist. However the first song played is not shuffled. Can someone help me out with this? Thanks a million!

```
// 2. This code loads the IFrame Player API code asynchronously.
var tag = document.createElement('script');
tag.src = "https://www.youtube.com/iframe_api";
var firstScriptTag = document.getElementsByTagName('script')[0];
firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

// 3. This function creates an <iframe> (and YouTube player)
// after the API code downloads.
function onYouTubeIframeAPIReady() {
  var player = new YT.Player("player", {
    height: '390',
    width: '640',
    events: {
      'onReady': function (event) {
        event.target.cuePlaylist({list: "PLFgquLnL59anYA8FwzqNFMp3KMcbKwMaT"});
        event.target.playVideo();
        setTimeout(function() {
          event.target.setShuffle({'shufflePlaylist' : true});
        }, 1000);
      }
    }
  });
}
```<issue_comment>username_1: This worked for me!

```
// 2. This code loads the IFrame Player API code asynchronously.
var tag = document.createElement('script');
tag.src = "https://www.youtube.com/iframe_api";
var firstScriptTag = document.getElementsByTagName('script')[0];
firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

// 3. This function creates an <iframe> (and YouTube player)
// after the API code downloads.
function onYouTubeIframeAPIReady() {
  var numPl = Math.floor((Math.random() * 50) + 1);
  var player = new YT.Player("player", {
    height: '390',
    width: '640',
    playerVars: {
      listType: 'playlist',
      list: 'PLFgquLnL59anYA8FwzqNFMp3KMcbKwMaT',
      index: numPl,
      autoplay: 1,
    },
    events: {
      'onReady': function (event) {
        //event.target.cuePlaylist({list: "PLFgquLnL59anYA8FwzqNFMp3KMcbKwMaT"});
        //event.target.playVideo();
        setTimeout(function() {
          event.target.setShuffle({'shufflePlaylist' : true});
        }, 1000);
      }
    }
  });
}
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: This works using the YouTube API and the user's YouTube-created playlist. It shuffles the playlist into a new order every time the page is refreshed.

1. Load the YT-player API.
2. Load the user's YT playlist id "**.youtube.com/playlist?list=PLo16_*******"
3. To shuffle the playlist use

   > *player.setShuffle(true);*
4. To start the YT-player at video 1 in the shuffled playlist use

   > *player.playVideoAt(0)*

working demo [Responsive shuffled YouTube Playlist on Google sites](https://sites.google.com/view/shuffled-api-list-player/home)

code [jsfiddle.net](https://jsfiddle.net/username_2/xo5tb4u9/)

```
// 2. This code loads the IFrame Player API code asynchronously.
var tag = document.createElement('script');
tag.src = "https://www.youtube.com/iframe_api";
var firstScriptTag = document.getElementsByTagName('script')[0];
firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);

// 3. This function creates an <iframe> (and YouTube player)
// after the API code downloads.
var player;
function onYouTubeIframeAPIReady() {
  player = new YT.Player('player', {
    playerVars: {
      autoplay: 0,
      loop: 1,
      controls: 1,
      showinfo: 1,
      frameborder: 1,
      'listType': 'playlist',
      'list': "PLo16_DLriHp4A8BvkJFZfO_4KDVv7yGgy", // your YouTube playlist id here
    },
    events: {
      'onReady': onPlayerReady,
      'onStateChange': onPlayerStateChange
    }
  });
}

// 4. The API will call this function when the video player is ready.
function onPlayerReady(event) {
  // first shuffle the playlist
  player.setShuffle(true);
  // Onload and on refresh this shuffles the user's YT playlist, but it always
  // starts playing the first video of the original list at its new index
  // position, ie. video (1) = video (new shuffled pos ?).
  // To get over this we can start the player at the new shuffled playlist's
  // video 1 (index=0); this changes every time it's refreshed.
  player.playVideoAt(0)
}

// 5. The API calls this function when the player's state changes.
// option to add bits
function onPlayerStateChange(event) {
  const player = event.target;
}
```

Upvotes: 0
2018/03/14
662
2,550
<issue_start>username_0: Whenever I make a code change in Android Studio (version 3.0.1) I need to clean before I build in order for that change to take effect. Making a change and hitting the green arrow build/run button looks like it builds, but the new changes are not incorporated unless the project is cleaned beforehand.

For example, if I add some logging and then build/run, the new logs don't appear until I clean and then build/run again. This seems to be the case for almost all changes. Sometimes it works; most of the time it doesn't. The compiler should detect changes to the code and rebuild those files every single time. It feels like they prioritized build speed over correctness.

Has anyone else solved this problem? If not, is there some setting that forces a clean before every build/run?<issue_comment>username_1: I experienced that when I left my Android Studio open for a long time. The solutions I found were:

1. Hit the "Sync Project with Gradle files" button.

[![enter image description here](https://i.stack.imgur.com/HfQcy.png)](https://i.stack.imgur.com/HfQcy.png)

2. Restart Android Studio.

I hope this helps you.

Upvotes: 1 <issue_comment>username_2: I had the same issue after updating Android Studio to version 3.1. It seems that the `Before launch` action of the default run configuration has been changed to `Instant App Provision`. Check it out and change it to `Gradle-aware Make` here:

[![Run/Debug Configurations window](https://i.stack.imgur.com/LRlDd.png)](https://i.stack.imgur.com/LRlDd.png)

Upvotes: 4 <issue_comment>username_3: I have been using Android Studio for about 2 months and this had never happened to me before. For me, a clean and rebuild of the project fixed it.

Upvotes: -1 <issue_comment>username_4: This suddenly started: code changes were not being reflected in the app / were not taking effect. My "Run/Debug Configurations" settings were as described by @username_2. Still, I was facing the problem.

```
"File -> Invalidate Caches / Restart ..."
```

fixed it. This could have been the result of my machine intermittently rebooting (because of hardware issues).

Upvotes: 2 <issue_comment>username_5: I found that I had to force the APK to install on every build - or risk that some of my changes would not be present. It seems that Google has broken something with instant run in AS 4.1.2 -- to fix it, uncheck "Skip installation if APK has not changed" in the "Run app" dialog.

[![Android Studio 4.1.2 Run Dialog](https://i.stack.imgur.com/ENldV.png)](https://i.stack.imgur.com/ENldV.png)

Upvotes: 0
2018/03/14
757
2,679
<issue_start>username_0: How do you call a fat arrow function like the one below (stored in a string) without using eval?

```
"g => { alert(g); }"
```

With eval, the code below works fine, but I want to avoid using eval.

```
eval("g => { alert(g); }")('hello')
```

I was hoping I could do something like the following using "new Function", but I have had no luck so far.

```
new Function("g => { alert(g); }")('hello')
```

Thanks so much in advance.<issue_comment>username_1: In theory, this works. It still seems like a bad idea.

```
var fatso = "g => { alert(g); }";

var s = document.createElement("script");
s.innerText = "function callMe(f) { var a = " + fatso + "; a(f) }"
document.head.appendChild(s);

callMe("hello");
```

Upvotes: 0 <issue_comment>username_2: From [MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function)

> The Function constructor creates a new Function object. Calling the constructor directly can create functions dynamically, but suffers from security and performance issues similar to **eval**.

That said, you can easily parse your string to use the Function constructor, for example:

```js
const str = [g,body] = "g => { console.log(g); }".split(" => ");
var myFunc = new Function(g, body);
myFunc("hello");

const str2 = [args,body] = "(h,i) => {console.log(h,i);}".split(" => ");
const [h,i] = args.replace(/\(|\)/g,"").split(",");
var myFunc2 = new Function(h,i, body);
myFunc2("hello","buddy");
```

Upvotes: 1 <issue_comment>username_3: **Anything you do, including your own JS interpreter, is going to be a functional equivalent to `eval`.**

Calling the `eval()` function is not a security gap by itself. Executing unsanitized code is what is a security issue, and that's the thing you are trying to do here. **Using `eval` or *any equivalent* creates an XSS vulnerability.** If you use a real interpreter written in JS, you may be able to sandbox the code and mitigate the risk, but it's still a grey area.

OTOH, if you have valid reasons to consider the code in question to be sanitized, there's no good reason not to use `eval`. Be aware that you can't reliably sanitize JS code. [Anything can be done with very little JS syntax available](http://www.jsfuck.com/), so **either the code is from a trusted source or it should never be run**.

All that said, perhaps the simplest way to get your function compiled with `new Function` is

```
(new Function('return ('+code+')'))()
```

This is almost an exact equivalent to calling `eval(code)`, just without any access to the local namespace. Using `eval()` directly should be slightly faster (it omits creating an unnecessary function object).

Upvotes: 1
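A quick usage sketch of the `new Function('return (' + code + ')')` wrapper from the last answer, using `console.log` instead of `alert` so it also runs outside a browser:

```
const code = "g => { console.log(g); }";
const fn = new Function('return (' + code + ')')();
fn('hello'); // logs "hello" without calling eval directly
```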
2018/03/14
744
2,758
<issue_start>username_0: I am able to implement the filter on an input field with the pipe, but I am not able to do so with a checkbox filter. Below is my code for the input filter.

```
import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
  name: 'filter'
})
export class FilterPipe implements PipeTransform {
  transform(users: any, searchTerm: any): any {
    // check if search term is undefined
    if (searchTerm === undefined) {
      return users;
    }
    // return updated users array
    return users.filter(function(user) {
      return user.name.toLowerCase().includes(searchTerm.toLowerCase());
    });
  }
}
```

UserComponent.html

```
| | |
| --- | --- |
| {{user.name}} | {{user.type}} |
```

Now for `type` I want to filter on checkbox select.

```
| {{role.type}} |
```

In the input field I can get the value using `[(ngModel)]`, but with a checkbox I am not able to do so. Please let me know how I could achieve this using checkbox select. Thank you<issue_comment>username_1: Yes, you can use ngModel like this:

```
```

And I have a blog post on filtering here: <https://blogs.msmvps.com/deborahk/filtering-in-angular/>

It shows how to filter with code instead of with a pipe so you can more easily filter on checkbox values.

Upvotes: 0 <issue_comment>username_2: First of all, you shouldn't use checkboxes when it comes to state switching. Always use radio buttons in this case.

Put this code in your HTML template

```
Admin
Student
Staff
```

And make sure that searchTerm is a member of your TypeScript file, e.g.:

```
private searchTerm: string = 'search';
```

That should work. If you now click one of those radio buttons, searchTerm is set and Angular will filter using your pipe. Moreover, if you enter 'admin', 'student' or 'staff' manually, the corresponding radio button will get activated.

Upvotes: 0 <issue_comment>username_3: **Component code:**

```
public roleList = {studentRole:false,adminRole:false,staffRole:false};
```

**HTML code:**

```
| {{user.name}} - {{user.roleType}} |

Student
Admin
Staff
```

**Pipe code:**

```
import { Pipe, PipeTransform } from '@angular/core';

@Pipe({
  name: 'roleFilter'
})
export class RoleFilter implements PipeTransform {
  transform(userList: any, roleList: any): any {
    if(!roleList.studentRole && !roleList.adminRole && !roleList.staffRole){
      return userList;
    }
    return userList.filter(user =>
      (roleList.studentRole && user.roleType == "student") ||
      (roleList.adminRole && user.roleType == "admin") ||
      (roleList.staffRole && user.roleType == "staff"))
  }
}
```

Upvotes: 1
2018/03/14
1,083
3,562
<issue_start>username_0: I have already read and tried a lot of posts on SO, but none has solved this issue. I set two environment variables for the JDK and JRE releases. I have added the same values within both sections, "user's variables for USER" and "system variables".

The variables that I have added are: `JAVA_HOME->C:\PROGRA~1\Java\JDK9U4~1` and `JRE_HOME->C:\PROGRA~1\Java\JRE9U4~1`. My installation directories for the JDK and JRE are respectively "C:\Program Files\Java\JDK 9u4" and "C:\Program Files\Java\JRE 9u4".

When I run catalina_start.bat, I am getting this error:

```
[XAMPP]: Searching JDK HOME with reg query ...
Errore: The registry key or the specified value could not be found.
. [XAMPP]: Cannot find current JDK installation!
. [XAMPP]: Cannot set JAVA_HOME. Aborting ...
done.
```

The text of my catalina_start.bat file (and I think this is the default) is:

```
@echo off
::::::::::::::::::::::::::::::::::::
:: Set JAVA_HOME and               ::
::::::::::::::::::::::::::::::::::::

IF EXIST tomcat\logs\catalina.pid (
    del /F/Q tomcat\logs\catalina.pid
)

echo.
echo [XAMPP]: Searching JDK HOME with reg query ...
set KeyName=HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Development Kit

reg query "%KeyName%" /s
if %ERRORLEVEL% == 1 (
    echo . [XAMPP]: Cannot find current JDK installation!
    echo . [XAMPP]: Cannot set JAVA_HOME. Aborting ...
    goto :END
)

set "CURRENT_DIR=%cd%"
set "CATALINA_HOME=%CURRENT_DIR%"

:: only for windows 32 bit if you have problems with the tcnative-1.dll
:: set CATALINA_OPTS=-Djava.library.path="%CATALINA_HOME%\bin"

set Cmd=reg query "%KeyName%" /s
for /f "tokens=2*" %%i in ('%Cmd% ^| find "JavaHome"') do set JAVA_HOME=%%j

echo.
echo [XAMPP]: Seems fine!
echo [XAMPP]: Set JAVA_HOME : %JAVA_HOME%
echo [XAMPP]: Set CATALINA_HOME : %CATALINA_HOME%
echo.

if %ERRORLEVEL% == 0 (
    echo run > logs\catalina.pid
)

"%CATALINA_HOME%\bin\catalina.bat" run

:END
echo done.
pause
```

I have already tried many solutions, but nothing has helped me. I wonder if someone could help me with this; I would be very grateful. I hope that I have explained myself clearly (sorry for my English). Thanks for the advice.<issue_comment>username_1: I solved the problem for me by downloading Java 11.0.2 using the zip-file. Thus it didn't create the registry entry, so I manually added it. The code above only searches for the registry key in `KeyName`, so just create the key as

```
HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Development Kit\
```

There is no need to add any values, just create the path. You can do this by

* pressing Win+R
* typing "regedit"
* then going to HKEY_LOCAL_MACHINE -> SOFTWARE
* right click on SOFTWARE
* select New -> Key
* name the new folder "JavaSoft"
* right click on the newly created JavaSoft folder
* select New -> Key
* name the new folder "Java Development Kit".

![](https://i.stack.imgur.com/EGFmL.png)

Upvotes: 2 <issue_comment>username_2: I solved the issue by changing the line

```
set KeyName=HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\Java Development Kit
```

to

```
set KeyName=HKEY_LOCAL_MACHINE\SOFTWARE\JavaSoft\JDK
```

I had a look into my registry and found the expected key had been created with the commonly used abbreviation `JDK`. As an extra, I also changed my `tomcat_service_install.bat` to the above-mentioned key name. This made the installation as a Windows service possible.

Upvotes: 2 <issue_comment>username_3: Add a line to your catalina_start.bat as below:

```
set "CURRENT_DIR=%cd%"
set "CATALINA_HOME=%CURRENT_DIR%"
set JAVA_HOME=C:\yourJDKpath
```

Upvotes: 0
2018/03/14
1,175
4,250
<issue_start>username_0: I'm a newbie in the Android world. When I try to get data from the database, I get this error; below is my database access code:

```
public static boolean Checkduplicate(String activity_name, String location, String date) {
    SQLiteDatabase dtb = ActivityHandler.db;
    String Query = "Select * from Activity where activity_name = " + activity_name + "and location =" + location + "and _date =" + date;
    Cursor cursor = dtb.rawQuery(Query, null);
    if(cursor.getCount() <= 0){
        cursor.close();
        return true;
    }
    cursor.close();
    return false;
}
```

Here is the error

```
FATAL EXCEPTION: main
Process: com.example.vinhg.comp1661_nguyengiavinh, PID: 31092
java.lang.NullPointerException: Attempt to invoke virtual method 'android.database.Cursor android.database.sqlite.SQLiteDatabase.rawQuery(java.lang.String, java.lang.String[])' on a null object reference
    at com.example.vinhg.comp1661_nguyengiavinh.ActivityHandler.Checkduplicate(ActivityHandler.java:32)
    at com.example.vinhg.comp1661_nguyengiavinh.MainActivity.addData(MainActivity.java:41)
    at com.example.vinhg.comp1661_nguyengiavinh.MainActivity$1.onClick(MainActivity.java:32)
    at android.view.View.performClick(View.java:5340)
    at android.view.View$PerformClick.run(View.java:21610)
    at android.os.Handler.handleCallback(Handler.java:815)
    at android.os.Handler.dispatchMessage(Handler.java:104)
    at android.os.Looper.loop(Looper.java:207)
    at android.app.ActivityThread.main(ActivityThread.java:5763)
    at java.lang.reflect.Method.invoke(Native Method)
    at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:888)
    at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:749)
```

Here is my full ActivityHandler class

```
public class ActivityHandler {
    private static SQLiteDatabase db;

    public ActivityHandler(Context context){
        DatabaseHandler dbDatabaseHandler = new DatabaseHandler(context);
        this.db = dbDatabaseHandler.getWritableDatabase();
    }

    @Override
    protected void finalize() throws Throwable {
        try{
            db.close();
        }catch (Exception ex){

        }
        super.finalize();
    }

    public static boolean Checkduplicate(String activity_name, String location, String date) {
        SQLiteDatabase dtb = ActivityHandler.db;
        String Query = "Select * from Activity where activity_name = " + activity_name + "and location =" + location + "and _date =" + date;
        Cursor cursor = dtb.rawQuery(Query, null);
        if(cursor.getCount() <= 0){
            cursor.close();
            return true;
        }
        cursor.close();
        return false;
    }
```
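Two things stand out in the posted code and trace: `rawQuery` is being invoked while the static `db` field is still null (it is only assigned once `new ActivityHandler(context)` has run), and the concatenated SQL is missing spaces and quotes around its values. A sketch of the usual fix, using placeholders -- method and variable names mirror the question:

```
public static boolean checkDuplicate(String activityName, String location, String date) {
    SQLiteDatabase dtb = ActivityHandler.db;
    if (dtb == null) {
        // new ActivityHandler(context) was never called, hence the NPE
        throw new IllegalStateException("ActivityHandler not initialised");
    }
    Cursor cursor = dtb.rawQuery(
            "SELECT * FROM Activity WHERE activity_name = ? AND location = ? AND _date = ?",
            new String[]{activityName, location, date});
    boolean noMatch = cursor.getCount() <= 0;
    cursor.close();
    return noMatch;
}
```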
2018/03/14
174
724
<issue_start>username_0: In Android Studio 3.0.1 I tried to go to Build > Select Build Variant but the option "Select Build Variant" was greyed out. How do I access this menu option?<issue_comment>username_1: I found that the "Select Build Variant" menu item became available as long as I wasn't viewing the project-level build.gradle file. While editing the project-level build.gradle file, the menu option was grayed out. But if I opened a different file, for example a module-level build.gradle file, the "Select Build Variant" menu item became available. Upvotes: 7 [selected_answer]<issue_comment>username_2: Had the same problem. Switched to Android Level Level View and the menu wasn't greyed out any more Upvotes: 2
2018/03/14
1,827
5,879
<issue_start>username_0: I'm wanting to write a function that will (hopefully) work in the raster calculator in the `raster` package. What I'm trying to do is regress each cell value against a vector of Time. There are multiple examples of this, but what I would like to do is for the method to try 1 type of regression (gls, controlling for AR1 residual errors), but if for some reason that regression throws an error (perhaps there is no AR1 structure in the residuals) then to revert back to simple OLS regression. I've written two functions for the regression. One for `gls`: ``` # function for calculating the trend, variability, SNR, and residuals for each pixel ## this function will control for AR1 structure in the residuals funTrAR1 <- function(x, ...) {if (sum(is.na(x)) >= 1) { NA } else { mod <- nlme::gls(x ~ Year, na = na.omit, method = "REML", verbose = TRUE, correlation = corAR1(form = ~ Year, fixed = FALSE), control = glsControl(tolerance = 1e-3, msTol = 1e-3, opt = c("nlminb", "optim"), singular.ok = TRUE, maxIter = 1000, msMaxIter = 1000)) slope <- mod$coefficients[2] names(slope) <- "Trend" var <- sd(mod$residuals) names(var) <- "Variability" snr <- slope/var names(snr) <- "SNR" residuals <- c(stats::quantile( mod$residuals, probs = seq(0,1,0.25), na.rm = TRUE, names = TRUE, type = 8), base::mean(mod$residuals, na.rm = TRUE)) names(residuals) <- c("P0", "P25", "P50", "P75", "P100", "AvgResid") return(c(slope, var, snr, residuals))} } ``` and for `OLS`: ``` # function for calculating the trend, variability, SNR, and residuals for each pixel ## this function performs simple OLS funTrOLS <- function(x, ...) {if (sum(is.na(x)) >= 1) { NA } else { mod <- lm(x ~ Year, na.action = na.omit) slope <- mod$coefficients[2] names(slope) <- "TrendOLS" var <- sd(mod$residuals) names(var) <- "VariabilityOLS" snr <- slope/var names(snr) <- "SNROLS" residuals <- c(stats::quantile( mod$residuals, probs = seq(0,1,0.25), na.rm = TRUE, names = TRUE, type = 8), base::mean(mod$residuals, na.rm = TRUE)) names(residuals) <- c("P0", "P25", "P50", "P75", "P100", "AvgResid") return(c(slope, var, snr, residuals))} } ``` I'm trying to wrap these in a tryCatch expression which can be passed to `raster::calc` ``` xReg <- tryCatch( { funTrAR1 }, error = function(e) { ## this should create a text file if a model throws an error sink(paste0(inDir, "/Outputs/localOLSErrors.txt"), append = TRUE) cat(paste0("Used OLS regression (grid-cell) for model: ", m, ". Scenario: ", t, ". Variable: ", v, ". Realisation/Ensemble: ", r, ". \n")) sink() ## run the second regression function funTrOLS } ) ``` This function is then passed to `raster::calc` like so ``` cellResults <- calc(rasterStack, fun = xReg) ``` Which if everything works will produce a raster stack of the output variables that looks similar to this [![calc output](https://i.stack.imgur.com/suJX0.png)](https://i.stack.imgur.com/suJX0.png) However, for some of my datasets the loop that I'm running all of this in stops and I receive the following error: ``` Error in nlme::gls(x ~ Year, na = na.omit, method = "REML", verbose = TRUE, : false convergence (8) ``` Which is directly from `nlme::gls` and what I was hoping to avoid. I've never used `tryCatch` before (this might be very obvious), but does anyone know how to get the `tryCatch()` to move to the second regression function if the first (AR1) regression fails?<issue_comment>username_1: Here is another way to code this, perhaps that helps: ``` xReg <- function(x, ...) 
{ r <- try(funTrAR1(x, ...), silent=TRUE) # if (class(r) == 'try-error') { if (!is.numeric(r)) { # perhaps a faster test than the one above r <- c(funTrOLS(x, ...), 2) } else { r <- c(r, 1) } r } ``` I add a layer that shows which model was used for each cell. You can also do ``` xReg <- function(x, ...) { r <- funTrOLS(x, ...) try( r <- funTrAR1(x, ...), silent=TRUE) r } ``` Or use calc twice and use `cover` after that ``` xReg1 <- function(x, ...) { r <- c(NA, NA, NA, NA) try( r <- funTrAR1(x, ...), silent=TRUE) r } xReg2 <- function(x, ...) { funTrOLS(x, ...) } a <- calc(rasterStack, xReg1) b <- calc(rasterStack, xReg2) d <- cover(a, b) ``` And `a` will show you where xReg1 failed. Upvotes: 2 [selected_answer]<issue_comment>username_2: After doing a bit more reading, and also looking at @RobertH answer, I wrote a bit of (very) ugly code that checks if GLS will fail and if it does, performs OLS instead. I'm positive that there is a nicer way to do this, but it works and maintains raster layer names as they were defined in my functions, it also exports any errors to a txt file. ``` for (i in 1) { j <- tempCentredRas cat(paste("Checking to see if gls(AR1) will work for model", m, r,"cell based calculations\n", sep = " ")) ### This check is particularly annoying as it has to do this for every grid-cell ### it therefore has to perform GLS/OLS on every grid cell twice ### First to check if it (GLS) will fail, and then again if it does fail (use OLS) or doesn't (use GLS) possibleLocalError <- tryCatch( raster::calc(j, fun = funTrAR1), error = function(err) err ) if (inherits(possibleLocalError, "error")) { cat(paste("GLS regression failed for model", m, r, "using OLS instead for cell based results.","\n", sep = " ")) cellResults <- raster::calc(j, fun = funTrOLS) } else { cellResults <- raster::calc(j, fun = funTrAR1) } } ``` Upvotes: 0
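One detail worth spelling out about the original attempt: `tryCatch({ funTrAR1 }, error = ...)` only *returns* the function object -- nothing is evaluated at that point, so the handler can never fire, and the error only surfaces later when `calc` calls the function. A tiny sketch of that behaviour:

```
# tryCatch evaluates its expression, but here the expression is just a
# function definition -- creating it raises no error, so no fallback occurs.
f <- tryCatch({ function(x) stop("boom") }, error = function(e) "fallback")
print(f)   # the error-prone function itself, not "fallback"
try(f(1))  # the error only appears once the function is called
```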
2018/03/14
639
2,240
<issue_start>username_0: So I keep getting this error on my live server but not on localhost. Everything works normally on localhost, but when I deploy my website to the online server it keeps giving me this error:

> file_put_contents(C:\appname\storage\framework\views/27f511f5644086daa68b2cf835bf49f5148aba43.php): failed to open stream: No such file or directory

I tried `php artisan config:cache` but nothing really worked.<issue_comment>username_1: Try using the command `php artisan cache:clear`; it will clear the application cache.

Upvotes: -1 <issue_comment>username_2: I'm going to guess that you have the wrong path being used, so try this to ensure that regardless of the server, you will have the right path:

```
$path = dirname(__FILE__);
file_put_contents($path.DIRECTORY_SEPARATOR.'27f511f5644086daa68b2cf835bf49f5148aba43.php');
```

This assumes that you are trying to write to the same folder as the code file that is running this. You can also use the following to determine exactly what the path is on each server:

```
echo getcwd();
```

Upvotes: 0 <issue_comment>username_3: Looks like your storage/ directory is not writable. Laravel requires that directory to be writable. If you're on Linux/Mac, run this:

```
chmod 0755 storage/ -R
```

Hope that helps.

Upvotes: 0 <issue_comment>username_4: I suppose your issue is due to the [Laravel Configuration Caching](https://laravel.com/docs/5.6/configuration#configuration-caching). I suggest you

1. Remove the configuration cache file
2. Flush the application cache
3. Create a cache file for faster configuration loading

To do this, run the following Artisan commands on your command line

1. php artisan config:clear
2. php artisan cache:clear
3. php artisan config:cache

Where you don't have access to the command line on your server, you can [programmatically execute commands](https://laravel.com/docs/5.6/artisan#programmatically-executing-commands) like this:

```
Route::get('/clear-cache', function() {
    $exitCode = Artisan::call('config:clear');
    $exitCode = Artisan::call('cache:clear');
    $exitCode = Artisan::call('config:cache');
    return 'DONE'; //Return anything
});
```

I hope this is helpful.

Upvotes: 3 [selected_answer]
2018/03/14
1,253
4,047
<issue_start>username_0: I am creating a data provider class that will hold data, perform transformations and make it available to other classes. If the user creates an instance of this class and passes some data at instantiation, I would like to store it twice: once for all transformations and once as a copy of the original data. Let's assume the data itself has a `copy` method.

I am using the `attrs` package to create classes, but would also be interested in best approaches to this in general (perhaps there is a better way of getting what I am after?)

Here is what I have so far:

```
@attr.s
class DataContainer(object):
    """Interface for managing data. Reads and write data,
    acts as a provider to other classes.
    """

    data = attr.ib(default=attr.Factory(list))
    data_copy = data.copy()

    def my_func(self, param1='all'):
        """Do something useful"""
        return param1
```

This doesn't work: `AttributeError: '_CountingAttr' object has no attribute 'copy'`

I also cannot call `data_copy = self.data.copy()`, I get the error: `NameError: name 'self' is not defined`.

The working equivalent without the `attrs` package would be:

```
class DataContainer(object):
    """Interface for managing data. Reads and write data,
    acts as a provider to other classes.
    """

    def __init__(self, data):
        "Init method, saving passed data and a backup copy"
        self.data = data
        self.data_copy = data
```

### EDIT:

As pointed out by @hynek, my simple init method above needs to be corrected to make an actual copy of the data: i.e. `self.data_copy = data.copy()`. Otherwise both `self.data` and `self.data_copy` would point to the same object.<issue_comment>username_1: After looking through [the documentation a little more deeply](http://www.attrs.org/en/stable/examples.html?highlight=init#other-goodies) (scroll right to the bottom), I found that there is a kind of post-init hook for classes that are created by `attrs`. You can just include a special `__attrs_post_init__` method that can do the more complicated things one might want to do in an `__init__` method, beyond simple assignment.

Here is my final working code:

```
In [1]: @attr.s
   ...: class DataContainer(object):
   ...:     """Interface for managing data. Reads and write data,
   ...:     acts as a provider to other classes.
   ...:     """
   ...:
   ...:     data = attr.ib()
   ...:
   ...:     def __attrs_post_init__(self):
   ...:         """Perform additional init work on instantiation.
   ...:         Make a copy of the raw input data.
   ...:         """
   ...:         self.data_copy = self.data.copy()

In [2]: some_data = np.array([[1, 2, 3], [4, 5, 6]])

In [3]: foo = DataContainer(some_data)

In [4]: foo.data
Out[5]:
array([[1, 2, 3],
       [4, 5, 6]])

In [6]: foo.data_copy
Out[7]:
array([[1, 2, 3],
       [4, 5, 6]])
```

Just to be doubly sure, I checked to see that the two attributes are not referencing the same object. In this case they are not, which is likely thanks to the `copy` method on the NumPy array.

```
In [8]: foo.data[0,0] = 999

In [9]: foo.data
Out[10]:
array([[999,   2,   3],
       [  4,   5,   6]])

In [11]: foo.data_copy
Out[12]:
array([[1, 2, 3],
       [4, 5, 6]])
```

Upvotes: 0 <issue_comment>username_2: You can do two things here. The first one you've found yourself: you use `__attrs_post_init__`. The second one is to have a default:

```
>>> import attr
>>> @attr.s
... class C:
...     x = attr.ib()
...     _x_backup = attr.ib()
...     @_x_backup.default
...     def _copy_x(self):
...         return self.x.copy()
>>> l = [1, 2, 3]
>>> i = C(l)
>>> i
C(x=[1, 2, 3], _x_backup=[1, 2, 3])
>>> i.x.append(4)
>>> i
C(x=[1, 2, 3, 4], _x_backup=[1, 2, 3])
```

JFTR, your example of

```
def __init__(self, data):
    self.data = data
    self.data_copy = data
```

is wrong because you'd assign the same object twice, which means that modifying `self.data` also modifies `self.data_copy` and vice versa.

Upvotes: 2 [selected_answer]
2018/03/14
715
3,319
<issue_start>username_0: I have a piece of code below which, on the click of a button, will prompt the user to select a .csv file from their storage. After this, the file will populate the datagridview with its contents.

```
public void button1_Click(object sender, EventArgs e)
{
    string delimiter = ",";
    string tablename = "AudioTable";

    DataSet dataset = new DataSet();

    OpenFileDialog openFileDialog1 = new OpenFileDialog();
    openFileDialog1.Filter = "CSV Files (*.csv)|*.csv|All Files (*.*)|*.*";
    openFileDialog1.FilterIndex = 1;

    if (openFileDialog1.ShowDialog() == DialogResult.OK)
    {
        if (MessageBox.Show("Are you sure you want to import the data from \n " + openFileDialog1.FileName + "?", "Are you sure?", MessageBoxButtons.YesNo) == DialogResult.Yes)
        {
            string filename = openFileDialog1.FileName;
            StreamReader sr = new StreamReader(filename);
            string csv = File.ReadAllText(openFileDialog1.FileName);

            dataset.Tables.Add(tablename);
            dataset.Tables[tablename].Columns.Add("QID");
            dataset.Tables[tablename].Columns.Add("Text");

            string allData = sr.ReadToEnd();
            string[] rows = allData.Split("\r".ToCharArray());

            foreach (string r in rows)
            {
                string[] items = r.Split(delimiter.ToCharArray());
                dataset.Tables[tablename].Rows.Add(items);
            }

            this.dataGridView1.DataSource = dataset.Tables[0].DefaultView;
            MessageBox.Show(filename + " was successfully imported.", "Successful", MessageBoxButtons.OK);
        }
    }
}
```

This code runs fine, consistently displaying the data within the datagridview. However, when you attempt to upload a second, different .csv, this file's data will overwrite the existing data from the 1st .csv. Is there any possible way to change this so that, instead of overwriting the existing data within the datagridview, it will just append it into new rows below the existing data?

To try and make it a bit clearer: the first time the user uploads a .csv there is no existing data within the datagridview, so it loads fine, but if that same user wants to upload a second .csv after the first, that's when the existing data from the 1st .csv will be overwritten.

I have searched around and seen a few issues similar to this relating to datatables in web-based solutions, however I was unsure how to translate those to suit my own. Thanks<issue_comment>username_1: You could add more columns to your dataset and then call this.dataGridView1.DataSource = dataset.Tables[0].DefaultView; again.

Upvotes: 0 <issue_comment>username_2: Each time you do this you create a new dataset on top of the previous one -- that's why the data is overwritten. Save the contents of what you want to add into a separate DataTable, and then merge it into the datagridview's data source:

```
DataTable dt2 = new DataTable();
// fill your table
(dataGridView1.DataSource as DataTable).Merge(dt2);
```

Upvotes: 1
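A minimal sketch of how the merge suggestion fits the question's code -- note the grid must be bound to a `DataTable` rather than to `dataset.Tables[0].DefaultView` for the `as DataTable` cast above to succeed (names follow the question):

```
// After filling dataset.Tables[tablename] from the CSV:
DataTable imported = dataset.Tables[tablename];

if (dataGridView1.DataSource is DataTable existing)
    existing.Merge(imported);                  // append the new CSV rows
else
    this.dataGridView1.DataSource = imported;  // first import binds the table
```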
2018/03/14
399
1,429
<issue_start>username_0: In Ruby, you can do this:

```
prc = lambda{|x, y=42, *other|}
prc.parameters #=> [[:req, :x], [:opt, :y], [:rest, :other]]
```

In particular, I'm interested in being able to get the names of the parameters, which are `x` and `y` in the above example.

In Crystal, I have the following situation:

```
def my_method(&block)
  # I would like the name of the arguments of the block here
end
```

How would one do this in Crystal?<issue_comment>username_1: While this already sounds weird in Ruby, there's no way to do it in Crystal since in your example the block already takes no arguments.

The other problem is that such information is already lost after compilation. So we would need to access this at compile time. But you cannot access a runtime method argument at compile time.

However you can access the block with a macro, and that then even allows arbitrary signatures of the block without explicitly giving them:

```
macro foo(&block)
  {{ block.args.first.stringify }}
end

p foo {|x| 0 } # => "x"
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: To expand on the great answer by username_1, the equivalent of the Ruby `parameters` method would be something like this:

```
macro block_args(&block)
  {{ block.args.map &.symbolize }}
end

p block_args {|x, y, *other| } # => [:x, :y, :other]
```

Note that block arguments are always required in Crystal and can't have default values.

Upvotes: 3
2018/03/14
2,582
8,029
<issue_start>username_0: I try to build a solution stored in an external GIT-Repository on Visual Studio Online. It has the following steps: > > 1: Git Restore - Works > > > 2: NuGet Restore - Works > > > 3: Build - Does NOT work > > > My first guess when looking at the logs is that MsBuild is not looking for the Packages where NuGet had stored them. Some Lines from NuGet Restore: ``` 2018-03-14T21:10:11.0352862Z Completed installation of AngleSharp 0.9.9 2018-03-14T21:10:11.0353230Z Adding package 'AngleSharp.0.9.9' to folder 'D:\a\1\s\packages' 2018-03-14T21:10:11.0353563Z Added package 'AngleSharp.0.9.9' to folder 'D:\a\1\s\packages' 2018-03-14T21:10:11.0354972Z Added package 'AngleSharp.0.9.9' to folder 'D:\a\1\s\packages' from source 'https://api.nuget.org/v3/index.json' 'Microsoft.SharePointOnline.CSOM.16.1.7317.1200' to folder 'D:\a\1\s\packages' ``` Some lines from MsBuild: ``` 018-03-14T21:10:21.2105399Z PrepareForBuild: 2018-03-14T21:10:21.2105793Z Creating directory "bin\Release\". 2018-03-14T21:10:21.2424947Z Creating directory "obj\Release\". 2018-03-14T21:10:30.3569560Z ResolveAssemblyReferences: 2018-03-14T21:10:30.3570425Z Primary reference "AngleSharp, Version=0.9.9.0, Culture=neutral, PublicKeyToken=<KEY>, processorArchitecture=MSIL". 2018-03-14T21:10:30.3670272Z ##[warning]C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\Microsoft.Common.CurrentVersion.targets(2041,5): Warning MSB3245: Could not resolve this reference. Could not locate the assembly "AngleSharp, Version=0.9.9.0, Culture=neutral, PublicKeyToken=<KEY>, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors. ``` My solution/packages structure is: ``` ....\mysolution\myproject\myproject.csproj ....\mysolution\myproject\packages.config ``` Current Config: [![NUGET](https://i.stack.imgur.com/xNl45.png)](https://i.stack.imgur.com/xNl45.png) [![enter image description here](https://i.stack.imgur.com/d0nnP.png)](https://i.stack.imgur.com/d0nnP.png) So how can I change the Nuget and/or msbuild-behavior to make this work? *(Update)*: To clear this up: I have this problem with **every** package. They all are in the packages.config, each one is downloaded from Nuget, but each one also isn't found from MsBuild *(Update2)* The Commands generated are currently the following: NUGET: ``` D:\a\_tool\NuGet\4.4.1\x64\nuget.exe restore D:\a\1\s\AweCsomeO365\packages.config -PackagesDirectory D:\a\1\a\packages -Verbosity Detailed -NonInteractive -ConfigFile D:\a\1\Nuget\tempNuGet_22.config ``` MSBUILD: ``` C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\msbuild.exe" "D:\a\1\s\AweCsomeO365\AweCsomeO365.csproj" /nologo /nr:false /dl:CentralLogger,"D:\a\_tasks\VSBuild_(GUID)\1.126.0\ps_modules\MSBuildHelpers\Microsoft.TeamFoundation.DistributedTask.MSBuild.Logger.dll";"RootDetailId=(GUID)|SolutionDir=D:\a\1\s\AweCsomeO365"*ForwardingLogger,"D:\a\_tasks\VSBuild_(GUID)\1.126.0\ps_modules\MSBuildHelpers\Microsoft.TeamFoundation.DistributedTask.MSBuild.Logger.dll" /p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation=D:\a\1\a /p:ReferencePath=D:\a\1\a\packages /p:platform="anyCPU" /p:configuration="Release" /p:VisualStudioVersion="15.0" /p:_MSDeployUserAgent="VSTS_(GUID)_build_4_22 ``` I replaced the GUIDs; tempNuGetConfig is something that seems to be generated by VSTS dynamically Still. 
even if the log states that nuget stores the packages ``` Added package 'AngleSharp.0.9.9' to folder 'D:\a\1\a\packages' ``` MsBuild does not seem to find them there: ``` For SearchPath "D:\a\1\a\packages". 2018-03-16T13:57:42.4625155Z Considered "D:\a\1\a\packages\AngleSharp.winmd", but it didn't exist. 2018-03-16T13:57:42.4625456Z Considered "D:\a\1\a\packages\AngleSharp.dll", but it didn't exist. 2018-03-16T13:57:42.4625730Z Considered "D:\a\1\a\packages\AngleSharp.exe", but it didn't exist. ``` VSTS-Configurationvalues: MsBuild: `/p:ReferencePath=$(Build.StagingDirectory)\packages` Nuget-DestiantionDirectory: `$(Build.StagingDirectory)\packages` (update3): I have **no** solution file, but only a csproj-file in that repository<issue_comment>username_1: I believe that your MSBuild "ReferencePath" parameter is not correct. you are telling MS Build that all your references (nuget packages and their dlls included) are going to be located at "D:\a\1\a\packages" but that is not where nuget will download and store the packages and dlls. Nuget will download and extract files into D:\a\1\a\packages\{packageName}\{version}\lib\{environment}\package.dll. I think you need to remove that last parameter (ReferencePath) from your MSBuild arguments. I also noticed that your PackageLocation parameter is not the same as the destination for the NuGet restore task, do you need to add the "\packages" to that parameter like the destination in the restore task? Upvotes: 1 <issue_comment>username_2: Change the nuget restore destination directory to $(Build.SourcesDirectory)\packages and remove the msbuild ReferencePath parameter. Upvotes: 0 <issue_comment>username_3: The issue was that inside the project there was a hintpath for the packages directing to a location that was not within the GIT-Repository (and shouldn't): ``` ..\..\AweCsome365Test\packages\AngleSharp.0.9.9\lib\net45\AngleSharp.dll ``` My original approach was to define a target directory to NuGet and a Source Directory for MSBuild to use *another* location to the packages that both understand. The issue though (as far as I understand) is, that NuGet always creates a subfolder-structure `"./packages/{PackagesName}/lib/net45/{file}"` and MSBuild does not look recursivly when setting `"./packages"` as source path. *The above is just an explanation for the future guy running into the same problem* So my solution was to mimic the local behavior for nuget and changing the output directory to match the HintPath (even if there is no "AweCsome365Test")-directory in the repository: [![Nuget Destination directory setting](https://i.stack.imgur.com/FbKLZ.png)](https://i.stack.imgur.com/FbKLZ.png) *(I will leave this question open as this solution smells fishy. If anyone has a better solution that allows to chain nuget and msbuild without using the HintPath I am happily willing to spend my bounty on it)* Upvotes: 4 [selected_answer]<issue_comment>username_4: The answers here are largely right. However it's worth noting another cause that can result in this behaviour. My toolchain was using Azure DevOps which is basically the same as Visual Studio Online, just a few years later. Cause: * Reference your project from a different solution (cross-repo), for instance for debugging purposes * Update NuGet references in the problematic project from the external place you referenced it from What this does is make use of the **solution** location for packages when the package gets installed. 
For .Net core/standard projects, using Update-Package -reinstall appears to fix things. However, for .Net Framework projects, even though `packages.config` may get rebuilt, the `<HintPath>` node in the `.csproj` gets left as is - with a reference to a packages folder that Azure will never create.

Simple fix:

1. Right click on the offending solutions locally, and choose *Unload*
2. Right click on the unloaded project, choose *edit .csproj*
3. Find any hintpaths that look like `../../OtherRepo/packages` (the slash in use may vary), and change them to `../packages`
4. Confirm the solution still builds locally
5. Push the changes to Azure, and cross your fingers

This approach will fix the issue caused by consolidating / updating packages from the wrong place, rather than requiring a change to the build pipeline to spoof that location (which in my case wasn't working very well either).

Upvotes: 0
2018/03/14
1,977
6,134
<issue_start>username_0: I have this code which just Mnist tesorflow example and I would to do get the prediction for test data ``` from __future__ import absolute_import from __future__ import division from __future__ import print_function # Imports import numpy as np import tensorflow as tf tf.logging.set_verbosity(tf.logging.INFO) # Our application logic will be added here def cnn_model_fn(features, labels, mode): """Model function for CNN.""" # Input Layer input_layer = tf.reshape(features["x"], [-1, 28, 28, 1]) # Convolutional Layer #1 conv1 = tf.layers.conv2d( inputs=input_layer, filters=32, kernel_size=[5, 5], padding="same", activation=tf.nn.relu) # Pooling Layer #1 pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2) # Convolutional Layer #2 and Pooling Layer #2 conv2 = tf.layers.conv2d( inputs=pool1, filters=64, kernel_size=[5, 5], padding="same", activation=tf.nn.relu) pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2) # Dense Layer pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64]) dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu) dropout = tf.layers.dropout( inputs=dense, rate=0.4, training=mode == tf.estimator.ModeKeys.TRAIN) # Logits Layer logits = tf.layers.dense(inputs=dropout, units=10) predictions = { # Generate predictions (for PREDICT and EVAL mode) "classes": tf.argmax(input=logits, axis=1), # Add `softmax_tensor` to the graph. It is used for PREDICT and by the # `logging_hook`. "probabilities": tf.nn.softmax(logits, name="softmax_tensor") } if mode == tf.estimator.ModeKeys.PREDICT: return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions) # Calculate Loss (for both TRAIN and EVAL modes) loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits) # Configure the Training Op (for TRAIN mode) if mode == tf.estimator.ModeKeys.TRAIN: optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001) train_op = optimizer.minimize( loss=loss, global_step=tf.train.get_global_step()) return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) # Add evaluation metrics (for EVAL mode) if mode == tf.estimator.ModeKeys.EVAL: eval_metric_ops = { "accuracy": tf.metrics.accuracy( labels=labels, predictions=predictions["classes"])} return tf.estimator.EstimatorSpec( mode=mode, loss=loss, eval_metric_ops=eval_metric_ops) predicted_classes = tf.argmax(logits, 1) if mode == tf.estimator.ModeKeys.PREDICT: predictions = { 'class_ids': predicted_classes[:, tf.newaxis], 'probabilities': tf.nn.softmax(logits), 'logits': logits, } return tf.estimator.EstimatorSpec(mode, predictions=predictions) def main(unused_argv): # Load training and eval data mnist = tf.contrib.learn.datasets.load_dataset("mnist") train_data = mnist.train.images[:54000] # Returns np.array train_labels = np.asarray(mnist.train.labels, dtype=np.int32)[:54000] eval_data = train_data[:500] # Returns np.array eval_labels = train_labels[:500] # np.asarray(mnist.test.labels, dtype=np.int32) test_data = train_data[1000:] test_label = train_labels[1000:] # eval_data = mnist.test.images # Returns np.array # eval_labels = np.asarray(mnist.test.labels, dtype=np.int32) # Create the Estimator mnist_classifier = tf.estimator.Estimator( model_fn=cnn_model_fn, model_dir="./tmp/mnist_convnet_model") # Set up logging for predictions tensors_train_to_log = {"probabilities": "softmax_tensor"} # tensors_eval_to_log = {"accuracy": "classes"} logging_train_hook = tf.train.LoggingTensorHook( tensors=tensors_train_to_log, 
        every_n_iter=6000)
    # logging_eval_hook = tf.train.LoggingTensorHook(
    #     tensors=tensors_eval_to_log, every_n_iter=1000)

    # Train the model
    print("Training Data length:", np.shape(train_data))
    train_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": train_data},
        y=train_labels,
        batch_size=10,
        num_epochs=1,
        shuffle=True)

    eval_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": eval_data},
        y=eval_labels,
        num_epochs=1,
        shuffle=True)

    # input_fn=train_input_fn,
    # steps=20000,
    # hooks=[logging_hook])

    # Evaluate the model and print results
    # eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
    # print(eval_results)

    train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=6500)
    eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)
    tf.estimator.train_and_evaluate(estimator=mnist_classifier, train_spec=train_spec, eval_spec=eval_spec)

    test_input_fn = tf.estimator.inputs.numpy_input_fn(
        x={"x": test_data[0]},
        y=test_label,
        num_epochs=1,
        shuffle=True)

    # mnist_classifier.train(
    test_spec = tf.estimator.EvalSpec(input_fn=test_input_fn)

    predictions = mnist_classifier.predict(test_spec)
    print(predictions["logits"][0])
    # print(predictions["logits"]) # I got an error when I tried to print this

if __name__ == "__main__":
    tf.app.run()
```

The code works -- I get a trained model -- but I still couldn't find a way to print the predictions. Has anyone run this example and printed the predicted data, not just the evaluation accuracy?<issue_comment>username_1: It is a generator object and, to print it, you should use `print(list(predictions)[0])`

Upvotes: 0 <issue_comment>username_2: The following should print all the predictions -

```
for i in range(300):
    print(list(predictions)[0])
```

Upvotes: 0 <issue_comment>username_3: This should work

```
outputs = [list(next(predictions).values())[0] for i in range(300)]
```

Upvotes: 0 <issue_comment>username_4: try this:

```
training_predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
training_predictions = np.array([item['predictions'][0] for item in training_predictions])

validation_predictions = linear_regressor.predict(input_fn=predict_validation_input_fn)
validation_predictions = np.array([item['predictions'][0] for item in validation_predictions])
```

Upvotes: 1
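A sketch of reading the results -- `Estimator.predict` expects an `input_fn` argument (the question passes an `EvalSpec`) and returns a one-shot generator, so iterate it rather than indexing it; the key names follow the `predictions` dict defined in the model function:

```
predictions = mnist_classifier.predict(input_fn=test_input_fn)
for i, pred in enumerate(predictions):
    print(pred["classes"], pred["probabilities"].max())
    if i >= 9:  # stop after the first ten predictions
        break
```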
2018/03/14
706
1,907
<issue_start>username_0: I have a dataset that includes a column called BirthYear containing lots of years in which people were born, and I need to create a new column that prints "young" if their BirthYear is > 1993 and "old" if their BirthYear is < 1993. I've tried using the if function but I can't seem to achieve it. I would appreciate it if you let me know how to do it, thanks!<issue_comment>username_1: I also like [`cut()`](https://www.rdocumentation.org/packages/base/versions/3.4.3/topics/cut) for this, especially if you want the result to be a [factor](https://stat.ethz.ch/R-manual/R-devel/library/base/html/factor.html).

```
year <- sample(1989:1999, size=20, replace=T)  # Arbitrary vector of years

breaks <- c(-Inf, 1993, Inf)  # The 3 bounds of the 2 intervals
labels <- c("old", "young")   # The 2 labels of the 2 intervals

binary <- cut(x=year, breaks=breaks, labels=labels, right=F)

# Inspect
data.frame(year, binary)
```

The result:

```
   year binary
1  1993  young
2  1997  young
3  1989    old
4  1998  young
5  1999  young
6  1989    old
7  1994  young
8  1991    old
9  1991    old
10 1991    old
...
```

This is close to a [duplicate](https://stackoverflow.com/questions/5746544/r-cut-by-defined-interval), but involves custom labels. If you have to inspect more than one variable eventually, look at [`dplyr::case_when()`](https://www.rdocumentation.org/packages/dplyr/versions/0.7.3/topics/case_when).

Upvotes: 3 [selected_answer]<issue_comment>username_2: Another option could be to use `dplyr::recode_factor` as below:

```
set.seed(1)
year <- sample(1970:2005, size=10, replace=T)

> year
#[1] 2001 1975 1979 1994 1974 1973 1985 1994 1975 1981

recode_factor(as.factor(year > 1993), 'TRUE' = "Old", 'FALSE' = "Young")
#[1] Old Young Young Old Young Young Young Old Young Young
#Levels: Old Young
```

Upvotes: 1
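For reference, the most direct route is a vectorised `ifelse()` -- a minimal sketch assuming the data frame is called `df` (the question leaves the 1993 boundary itself open; here it falls into "old"):

```
df$AgeGroup <- ifelse(df$BirthYear > 1993, "young", "old")
head(df)
```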
2018/03/14
1,036
3,832
<issue_start>username_0: This is my component:

```
import { Component, OnInit, ContentChildren, QueryList } from '@angular/core';
import { IconBoxComponent } from '../icon-box/icon-box.component';

@Component({
  selector: 'app-three-icon-box',
  templateUrl: './three-icon-box.component.html',
  styleUrls: ['./three-icon-box.component.scss']
})
export class ThreeIconBoxComponent implements OnInit {
  @ContentChildren(IconBoxComponent) boxes: QueryList<IconBoxComponent>;

  constructor() { }

  ngOnInit() { }

  ngAfterContentInit() {
    console.log(this.boxes);
  }
}
```

Its template looks like this:

```
{{ box }}
```

This is how I'm rendering it:

```
content 1
content 2
content 3
```

In that second block of code, I'm trying to render the , but I can't figure out how. `{{ box }}` was just an idea of what I'm trying to do, but I just get `[object Object]`.<issue_comment>username_1: It's a bit unclear what you need. But I guess it would be enough to use ng-content. Remove the ContentChildren and also the ngFor and just use

```
```

in your template. Then you have to add the classes directly where your box components are declared.

```
```

To enrich projected components within the ThreeIconBoxComponent you will need a totally different approach using templates and a template outlet.

Upvotes: 4 <issue_comment>username_2: You should use this pattern:

1. Create a TemplateMarker Directive to mark which [templates](https://angular.io/api/core/TemplateRef) you want to pass as parameters (to prevent grabbing [other templates](https://valor-software.com/ngx-bootstrap/#/tooltip#dynamic-content)).
2. Inject the markers using `@ContentChildren`.
3. Render them where you need using [NgTemplateOutlet](https://angular.io/api/common/NgTemplateOutlet).

Hint: You can render each template multiple times and [send them parameters](https://stackoverflow.com/questions/42978082/what-is-let-in-angular-2-templates).

Example:

```
import { Component, Input, Directive, TemplateRef, ContentChildren, QueryList } from '@angular/core';

@Directive({
  selector: '[templateMarker]'
})
export class TemplateMarker {
  constructor(public template: TemplateRef) {}
}

@Component({
  selector: 'template-owner',
  template: `
  `,
})
export class TemplateOwner {
  @ContentChildren(TemplateMarker) markers: QueryList<TemplateMarker>;
}

@Component({
  selector: 'hello',
  template: `
    first template
    second template
  `,
})
export class HelloComponent {}
```

Upvotes: 4 <issue_comment>username_3: This is an old question but I was just asking this same question in an application I'm developing. I'm unsure how "correct" my solution is, but this is how I tackled this problem. In my case my parent `component` is only supposed to accept content of a specific type (in your case this would be the `IconBoxComponent`). Within that child `component` (`IconBoxComponent` here) I exposed a `TemplateRef` which can then be used by the parent `component` for rendering. Although using a `directive` also worked, I didn't like the idea of having to use both my custom child `component` as well as a custom `directive`.

**Note**: In my example below I removed all of the `CSS` `classes` and some `HTML` elements to make it more readable. They weren't removed for some technical reason.

#### IconBoxComponent

```
@Component({
  selector: 'app-icon-box',
  template: `
    My Icon Box
  `,
  styleUrls: ['./icon-box.component.scss']
})
export class IconBoxComponent {
  @ViewChild('template', { static: true }) template!: TemplateRef;
}
```

#### ThreeIconBoxComponent

```
@Component({
  selector: 'app-three-icon-box',
  template: `
  `,
  styleUrls: ['./three-icon-box.component.scss']
})
export class ThreeIconBoxComponent {
  @ContentChildren(IconBoxComponent) boxes = new QueryList<IconBoxComponent>();
}
```

#### Usage

```
content 1
content 2
content 3
```

Upvotes: 0
2018/03/14
543
2,144
<issue_start>username_0: I wrote a React Native application. The application is simple and mostly informational; it uses Redux, Saga, and several linked npm packages. The app runs in normal mode, not full-screen. The structure of the application was built on the basis of Ignite. The problem is that on the phone (Samsung Note8) the application is recognized as a game. E.g. while the app is running there is a message "The game is running" on the lock screen, and there are additional buttons for the gamepad or something like that. In addition, the app has padding at the top and at the bottom when it's running on the real device (Samsung Note8); the same effect appears when some games are running. When the app runs on another device (e.g. ZTE Blade 610) it runs as usual, without any side effects. My main suspicion is that the cause of all this is Game Tools, which exists on the Samsung Note8, but other apps have no similar effects and run as expected. Is there a way to make a React Native app register as a regular app and not a game? Why does Game Tools recognize my app as a game? Or what is the reason and how can it be affected? Thanks.<issue_comment>username_1: I think there are a few possibilities.

1. You (or one of your dependencies) have included the Google Play services API, which contains a module for games that Samsung will automatically treat as a game. You could find which of your dependencies is loading the Google Play services API and create an exclude like:

```
compile (project ('your.dependency')){
    exclude group: 'com.google.android.gms', module: 'play-services-games'
}
```

2. Your application id (you can see it in build.gradle) is registered in Samsung's game database. You could check by going into the Play Store and searching for your application id.

Upvotes: 1 <issue_comment>username_2: This is something that can happen on Samsung phones due to the package name of your app. We can't change this after the initial release; you must contact Samsung developer support and they can fix it on the fly. I wrote a gist on GitHub about it: <https://gist.github.com/Adnan-Bacic/718eb3b4e70380696c91dc21c4804112>

Upvotes: 0
2018/03/14
511
1,573
<issue_start>username_0: I am trying to access DB2 tables in a Java project. I am able to access the tables when I manually add the jar files `db2jcc4.jar` and `db2jcc_license_cisuz.jar`; no issues in accessing the tables. But when I try to add these jar files through Maven, they won't add to the project.

```
<dependency>
    <groupId>com.ibm.db2</groupId>
    <artifactId>db2jcc4</artifactId>
    <version>9.7.0.4</version>
</dependency>
```

Error message - `Missing artifact id`. Also, the latest `db2jcc4.jar` files (version 11.1) are not present in the Maven repository. Is there any other place I can access it from?<issue_comment>username_1: You have to download the right driver from IBM: <http://www-01.ibm.com/support/docview.wss?uid=swg21363866>

Then install it to your local Maven repository (a sketch follows at the end of this thread): <http://maven.apache.org/plugins/maven-install-plugin/install-file-mojo.html>

Upvotes: 2 <issue_comment>username_2: As written in the Maven central repository, the artifact is in another repo. Add it to your pom and it will work.

```
<repository>
    <id>Alfresco</id>
    <name>Alfresco</name>
    <url>https://artifacts.alfresco.com/nexus/content/repositories/public/</url>
</repository>
```

Upvotes: 1 <issue_comment>username_3: According to the Maven central repository the artifact is in another repository. Include these two in your pom.xml and it should work:

```
<dependency>
    <groupId>com.ibm.db2.jcc</groupId>
    <artifactId>db2jcc4</artifactId>
    <version>10.1</version>
</dependency>

<repository>
    <id>com.ibm.db2.jcc</id>
    <url>https://artifacts.alfresco.com/nexus/content/repositories/public/</url>
</repository>
```

Upvotes: 0 <issue_comment>username_4: Assuming that is not an option, using a shared drive is the way to go.

```
<dependency>
    <groupId>com.ibm.db2.jcc</groupId>
    <artifactId>licences</artifactId>
    <version>0.7</version>
    <scope>system</scope>
    <systemPath>R:\JDBC drivers\IBM DB2\_db2_2.2.0.v20130525_0720\db2jcc_license_cisuz.jar</systemPath>
</dependency>
```

Upvotes: 1
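For username_1's suggestion, a minimal sketch of installing the downloaded driver into the local Maven repository; the jar path and version here are assumptions, so adjust them to the file you actually downloaded from IBM:

```
# Install the downloaded DB2 driver jar into the local Maven repository
# (file path and version are placeholders for the jar obtained from IBM)
mvn install:install-file -Dfile=/path/to/db2jcc4.jar \
    -DgroupId=com.ibm.db2 -DartifactId=db2jcc4 \
    -Dversion=9.7.0.4 -Dpackaging=jar
```

After that, the `<dependency>` coordinates from the question resolve against the local repository.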
2018/03/14
1,311
5,033
<issue_start>username_0: Thanks firstly for bearing with me as a relative newcomer to the world of Python. I'm working on a simple set of code and have been racking my brain to understand where I am going wrong. I suspect it is a relatively simple thing to correct, but all searches so far have been fruitless. If this has been covered before then please be gentle, I have looked for a couple of days! I'm working on the following and, after catching and correcting a number of issues, I suspect that I'm on the last hurdle:

```
def main():
    our_list = []
    ne = int(input('How many numbers do you wish to enter? '))
    for i in range(0, (ne)):  # set up loop to run user specified number of time
        number = int(input('Choose a number:- '))
        our_list.append(number)  # append to our_list
    print('The list of numbers you have entered is ')
    print(our_list)

main()

while True:
    op = input('For the mean type <1>, for the median type <2>, for the mode type <3>, to enter a new set of numbers type <4> or 5 to exit')
    import statistics
    if op == "1":
        mn = statistics.mean(our_list)
        print("The mean of the values you have entered is:- ", mn)
    if op == "2":
        me = statistics.median(our_list)
        print("The median of the values you have entered is:- ", me)
    if op == "3":
        mo = statistics.mode(our_list)
        print("The mode of the values you have entered is:- ", mo)
    if op == "5":
        main()
    else:
        print("Goodbye")
        break
```

For some reason the appended `our_list` is not being recognised within the while True loop, rendering the statistics calculation void. Any steer would be really appreciated as to where I am missing the obvious, thanks in advance.

Cheers

Bryan<issue_comment>username_1: I'm not sure exactly what you mean by "not being recognized", but `our_list` is a local variable inside `main`, so it can't be used anywhere but inside `main`. So, if you try to use it elsewhere, you should get a `NameError`.

If your code actually has a global variable with the same name as the local variable that we aren't seeing here, things can be more confusing: you won't get a `NameError`, you'll get the value of the global variable, which isn't what you want.

The best solution here is to return the value from the function, and then have the caller use the returned value. For example:

```
def main():
    our_list = []
    ne = int(input('How many numbers do you wish to enter? '))
    for i in range(0, (ne)):  # set up loop to run user specified number of time
        number = int(input('Choose a number:- '))
        our_list.append(number)  # append to our_list
    print('The list of numbers you have entered is ')
    print(our_list)
    return our_list

the_list = main()

while True:
    op = input('For the mean type <1>, for the median type <2>, for the mode type <3>, to enter a new set of numbers type <4> or 5 to exit')
    import statistics
    if op == "1":
        mn = statistics.mean(the_list)
        print("The mean of the values you have entered is:- ", mn)
    if op == "2":
        me = statistics.median(the_list)
        print("The median of the values you have entered is:- ", me)
    if op == "3":
        mo = statistics.mode(the_list)
        print("The mode of the values you have entered is:- ", mo)
    if op == "5":
        the_list = main()
    else:
        print("Goodbye")
        break
```

There are other options: you could pass in an empty list for `main` to fill, or use a global variable (or, better, a more restricted equivalent like an attribute on a class instance or a closure variable), or refactor your code so everyone who needs to access `our_list` is inside the same function… but I think this is the cleanest way to do what you're trying to do here.

---

By the way, this isn't *quite* the last hurdle, but you're very close:

* After any mean, median, or mode, it's going to hit the "Goodbye" and exit instead of going back through the loop. Do you know about `elif`?
* You mixed up `'5'` and `'4'` in the menu.
* If the user enters `2` and `3` and asks for the mode, your code will dump a `ValueError` traceback to the screen; probably not what you want. Do you know `try`/`except`? (A short sketch follows at the end of this thread.)

That's all I noticed, and they're all pretty simple things to add, so congrats in advance.

Upvotes: 1 <issue_comment>username_2: The issue is that `our_list` was defined in the `main()` function, and is not visible outside of the `main()` function scope. Since you're doing everything in one chunk, you could remove lines 1 and 6, taking the code from your `main()` function and putting it on the same indentation level as the code which follows.

Upvotes: 0 <issue_comment>username_3: This seems to be because you defined `our_list` within the `main()` function. You should probably define it as a global variable by creating it outside the `main()` function. You could also put the while loop inside a function and pass in `our_list` as a parameter.

Upvotes: 0
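Following up on the `try`/`except` hint above, a minimal sketch; `the_list` is assumed to hold the user's numbers as in the first answer, and `statistics.mode` raises `statistics.StatisticsError` (a `ValueError` subclass) when there is no unique mode:

```
import statistics

try:
    mo = statistics.mode(the_list)
    print("The mode of the values you have entered is:- ", mo)
except statistics.StatisticsError:
    # raised when the data has no single most common value, e.g. [2, 3]
    print("Those values have no unique mode")
```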
2018/03/14
2,009
6,732
<issue_start>username_0: Scenario: I am getting companies' name, address, city and contact from a flat file imported into SQL Server. I am trying to import this data into a platform that only accepts unique company names. Let's look at an example of the flat data:

```
CompanyName   City         Address                     Contact
------------------------------------------------------------------------
Starbucks     Seattle      Null                        Pedram
Starbucks     Seattle      44 East Ave                 Daniel
Starbucks     Seattle      2701 Freedom way            April
Starbucks     Seattle      3500 E Destination Drive    Steve
Starbucks     Luxembourg   N2915 Countrt Road AB       Hans
Starbucks     Orleans      2800 Rice St.               Emily
Starbucks     St. Paul     6500 Henri-Bourassa BE      Casey
Starbucks     St. Paul     6500 Henri-Bourassa BE      Kathy
```

With a data set like this, I am trying to get a result as shown below:

```
CompanyName              City         Address
-------------------------------------------------------------------------
Starbucks (Seattle)      Seattle      Null
Starbucks (Seattle-1)    Seattle      44 East Ave
Starbucks (Seattle-2)    Seattle      2701 Freedom way
Starbucks (Seattle-3)    Seattle      3500 E Destination Drive
Starbucks (Luxembourg)   Luxembourg   N2915 Countrt Road AB
Starbucks (Orleans)      Orleans      2800 Rice St.
Starbucks (St. Paul)     St. Paul     6500 Henri-Bourassa BE
```

I have tried using `Row_number` and `Partition By`, but the issue that I have is: how do I generate that number in front? Below is the code for the data set and what I have tried:

```
Create table #Company
(
    companyname nvarchar(255),
    City nvarchar(100),
    [Address] nvarchar(255),
    Contact nvarchar(255)
)

insert into #Company (companyname, City, Contact)
values ('Starbucks', 'Seattle', 'Pedram');

insert into #Company (companyname, City, [Address], Contact)
values ('Starbucks', 'Seattle', '44 East Ave', 'Daniel'),
       ('Starbucks', 'Seattle', '2701 Freedom way', 'April'),
       ('Starbucks', 'Seattle', '3500 E Destination Drive', 'Steve'),
       ('Starbucks', 'Luxembourg', 'N2915 Countrt Road AB', 'Hans'),
       ('Starbucks', 'Orleans', '2800 Rice St.', 'Emily'),
       ('Starbucks', 'St. Paul', '6500 Henri-Bourassa BE', 'Casey'),
       ('Starbucks', 'St. Paul', '6500 Henri-Bourassa BE', 'Kathy');

SELECT * FROM #Company

SELECT ROW_NUMBER() OVER (PARTITION BY companyname, city, [address] ORDER BY companyname),
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY companyname, city, [address]
                                    ORDER BY companyname, city, [address]) = 1
            THEN companyname + ' ' + '(' + ISNULL(city,'') + ')'
            ELSE companyname
       END,
       --+ CAST(ROW_NUMBER() OVER (PARTITION BY T1.COMPANY, T1.CITY ORDER BY T1.[Address 1]) AS VARCHAR(3))
       *
FROM #Company
```<issue_comment>username_1: Using some CTEs to format it out and using row_number():

```
--Number the rows
;with cte AS (
    SELECT ROW_NUMBER() OVER(PARTITION BY companyname, city ORDER BY Address) AS counter_, *
    FROM #company
--convert the number 1's to blanks and subtract each number by 1 so the 2nd record has a 1 appended
), cte2 AS (
    SELECT *,
           CASE WHEN CAST(counter_ AS varchar(3)) = 1
                THEN ''
                ELSE CAST(CAST(counter_ AS INT) - 1 AS VARCHAR(3))
           END AS numberformatting
    FROM CTE
)
--if the formatted number is a blank, don't append anything.
--If it is non-blank, append a hyphen and the number
SELECT DISTINCT
       CASE WHEN numberformatting = ''
            THEN companyname + '(' + city + ')'
            ELSE companyname + '(' + city + '-' + numberformatting + ')'
       END AS formattedName,
       companyname, city, address
FROM cte2
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: Let me suggest to set this order:

```
order by companyname, city, [address]

;WITH X AS
(
    SELECT ROW_NUMBER() over (Partition by companyname, city, [address]
                              order by companyname, city, [address]) rn,
           companyname, city, address, contact
    FROM #Company
)
SELECT CONCAT(companyname, ' (', city,
              iif(rn = 1, '', '-' + CAST(rn as varchar(10))), ')') CompanyName,
       City, Address, Contact
FROM X;
GO

CompanyName            | City       | Address                  | Contact
:--------------------- | :--------- | :----------------------- | :------
Starbucks (Luxembourg) | Luxembourg | N2915 Countrt Road AB    | Hans
Starbucks (Orleans)    | Orleans    | 2800 Rice St.            | Emily
Starbucks (Seattle)    | Seattle    | *null*                   | Pedram
Starbucks (Seattle)    | Seattle    | 2701 Freedom way         | April
Starbucks (Seattle)    | Seattle    | 3500 E Destination Drive | Steve
Starbucks (Seattle)    | Seattle    | 44 East Ave              | Daniel
Starbucks (St. Paul)   | St. Paul   | 6500 Henri-Bourassa BE   | Casey
Starbucks (St. Paul-2) | St. Paul   | 6500 Henri-Bourassa BE   | Kathy
```

*dbfiddle [here](http://dbfiddle.uk/?rdbms=sqlserver_2014&fiddle=53ed46b05151e08b1d568ece9858f3fd)*

Upvotes: 2 <issue_comment>username_3: **Query**

```
SELECT companyname + ' (' + City +
       REPLACE(' - ' + CAST(ROW_NUMBER() OVER (PARTITION BY companyname, City
                ORDER BY CASE WHEN [Address] IS NULL THEN '0' ELSE [Address] END) - 1
                AS VARCHAR(10)) + ')', ' - 0', '') AS CompanyNameNew,
       City,
       [Address]
FROM #Company
ORDER BY CompanyName, [Address]
```

**Result Set**

```
╔══════════════════════════╦════════════╦══════════════════════════╗
║      CompanyNameNew      ║    City    ║         Address          ║
╠══════════════════════════╬════════════╬══════════════════════════╣
║ Starbucks (Seattle)      ║ Seattle    ║ NULL                     ║
║ Starbucks (Seattle - 1)  ║ Seattle    ║ 2701 Freedom way         ║
║ Starbucks (Seattle - 2)  ║ Seattle    ║ 3500 E Destination Drive ║
║ Starbucks (Seattle - 3)  ║ Seattle    ║ 44 East Ave              ║
║ Starbucks (Orleans)      ║ Orleans    ║ 2800 Rice St.            ║
║ Starbucks (St. Paul)     ║ St. Paul   ║ 6500 Henri-Bourassa BE   ║
║ Starbucks (St. Paul - 1) ║ St. Paul   ║ 6500 Henri-Bourassa BE   ║
║ Starbucks (Luxembourg)   ║ Luxembourg ║ N2915 Countrt Road AB    ║
╚══════════════════════════╩════════════╩══════════════════════════╝
```

Upvotes: 2
2018/03/14
1,251
4,034
<issue_start>username_0: Is there a way to print a dictionary into a text? I have a dictionary with 100+ keys and would like to print it into a tab delimited text file if possible. This seems so simple but I cannot figure it out--i.e. I am a new VBA user.

```
'Defining text file variable
Dim FilePath As String

'Text file path
FilePath = path & "\OrderStatus.txt"

'Open the text file
Open FilePath For Output As #1

'OrderStatus is the dictionary
Write #1, OrderStatus

Close #1
```
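A minimal sketch of one way to do this, assuming `OrderStatus` is a `Scripting.Dictionary` with simple values; the dictionary object itself cannot be passed to `Write #1`, so loop over the keys and write one tab-delimited line per entry:

```
'Minimal sketch: assumes OrderStatus is a Scripting.Dictionary
Dim key As Variant

Open FilePath For Output As #1
For Each key In OrderStatus.Keys
    'Print writes literal text; vbTab supplies the tab delimiter
    Print #1, key & vbTab & OrderStatus(key)
Next key
Close #1
```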
2018/03/14
639
1,962
<issue_start>username_0: The data that I am ingesting in my R script is full of white space (the bane of my existence). Thus far I have been using trimws within my functions so that my joins return true results. I am wondering if it is possible to trim the white space in all columns of all the data frames that I have stored in a list.

```
ParsedFile <- grep("ItemDetail", names(.GlobalEnv), value = TRUE)
```

This creates a list of the data frames in which I want to remove the white space from all the fields. I thought this would work, but lapply does not seem to want to write the information back to the data frame.

```
as.data.frame(lapply(get(ParsedFile), trimws))
```

Moreover, I see it only prints 1 result to the console where I expect a result for each data frame. Can someone please help me out? Thanks<issue_comment>username_1: IIUC, here's a way to do it. I don't know what your desired output is, but this method returns a list of data frames. This might give you some thoughts:

```
# here df1, df2 are data frames
df1 = data.frame(name = c(' mani ','san ',' fdfg '))
df2 = data.frame(name = c(' mani ','gh ',' fdfg '))

# do this
lapply(c(df1, df2), function(x) do.call(rbind, lapply(x, trimws)))

$name
     [,1]
[1,] "mani"
[2,] "san"
[3,] "fdfg"

$name
     [,1]
[1,] "mani"
[2,] "gh"
[3,] "fdfg"
```

**Second Method:** For your case:

```
# save intermediate list
temp <- lapply(get(ParsedFile), function(x) do.call(rbind, lapply(x, trimws)))

# convert list to df
new_df <- data.frame(new_col = do.call(rbind, temp))
print(new_df)

  new_col
1    mani
2     san
3    fdfg
4    mani
5      gh
6    fdfg
```

Upvotes: 0 <issue_comment>username_2: Use `purrr` and its `map` function to iterate over the list of data frames, then `map_df` to iterate over the columns in each data frame, which will return the results as `data_frames`.

```
library(purrr)

ParsedFile %>%
  map(~map_df(., ~trimws(.)))
```

Upvotes: 3 [selected_answer]
2018/03/14
1,067
3,423
<issue_start>username_0: Edit: I have found a way to work with CompareTo to help with this problem, but for some reason I cannot get the count down to work. It's a negative number that needs to get more negative to meet the requirements, but I am missing something here. When I execute the down section it closes the program, so to me this means that I have something messed up that the program isn't catching before closing.

We are supposed to:

> Ask the user for an integer then ask the user if he/she wants to count up or down. Display a table of numbers where the first column contains the counter, the second column contains the counter plus 10, and the third column contains the counter plus 100. Make it so each number takes up 5 spaces total.
>
> If counting up, the first column should contain numbers 1 through the user input; If counting down, the first column should contain numbers -1 through the negative of the user input;
>
> Do user input validation on the words "up" and "down". Allow for any case.

```
import java.util.Scanner;

public class ps1 {
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);

        //Comparision string already declared
        String up = "up";
        String down = "down";

        //initialize the counters sum
        int sum = 0;

        //ask the user for a number
        System.out.println("Enter an ending value");
        int num1 = keyboard.nextInt();
        keyboard.nextLine();

        System.out.println("Count up or down?");
        String input = keyboard.nextLine();

        while (input.equalsIgnoreCase(up) || input.equalsIgnoreCase(down)) {
            System.out.println("Count up or down?");
            input = keyboard.nextLine();
        }

        if (input.compareToIgnoreCase(up) == 0) {
            if (num1 >= 0)
                for (int c = 1; c <= num1; c++) {
                    sum = sum + c;
                    System.out.printf("%5d%5d%5d\n", c, c + 10, c + 100);
            else
                System.out.println("Up numbers must be positive");

            if (input.compareToIgnoreCase(down) == 0) {
                for (int c1 = -1; c1 <= num1; c1--) {
                    sum = sum + c1;
                    System.out.printf("%5d%5d%5d\n", c1, c1 + 10, c1 + 100);
                }
            }
        }
    }
}
```
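A minimal sketch of the two spots that keep the down-count from working; this is a suggested fix, not code from the original post, and it assumes `num1` holds the positive ending value the user entered. The validation loop should repeat while the input is *not* "up" or "down", and the down-count condition must compare against `-num1` rather than `num1` (otherwise `c1--` moves away from the bound and the loop never ends normally):

```
// Re-prompt while the input is NOT one of the valid choices
while (!(input.equalsIgnoreCase(up) || input.equalsIgnoreCase(down))) {
    System.out.println("Count up or down?");
    input = keyboard.nextLine();
}

// Count down from -1 to -num1 (num1 is the positive value the user entered)
for (int c1 = -1; c1 >= -num1; c1--) {
    System.out.printf("%5d%5d%5d%n", c1, c1 + 10, c1 + 100);
}
```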
2018/03/14
840
2,593
<issue_start>username_0: I am working for the first time with dask and trying to run predict() from a trained keras model. If I don't use dask, the function works fine (i.e. pd.DataFrame() versus dd.DataFrame()). With dask, the error is below. Is this not a common use case (aside from scoring a groupby perhaps)?

```python
def calc_HR_ind_dsk(grp):
    model = keras.models.load_model('/home/embedding_model.h5')
    topk = 10
    x = [grp['user'].values, grp['item'].values]
    pred_act = list(zip(model.predict(x)[:, 0], grp['respond'].values))
    top = sorted(pred_act, key=lambda x: -x[0])[0:topk]
    hit = sum([x[1] for x in top])
    return(hit)


import dask.dataframe as dd

#step 1 - read in data as a dask df. We could reference more than 1 files using '*' wildcard
df = dd.read_csv('/home/test_coded_final.csv', dtype='int64')

results = df.groupby('user').apply(calc_HR_ind_dsk).compute()
```

TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_30:0", shape=(55188, 32), dtype=float32) is not an element of this graph.<issue_comment>username_1: Have a look at: <http://dask.pydata.org/en/latest/dataframe-api.html#dask.dataframe.groupby.DataFrameGroupBy.apply>

Unlike pandas, in dask many functions that let you define your own custom op need the meta parameter. Without it, dask will somehow test your custom function and pass weird placeholder data to keras, which might not happen when calling compute.

Upvotes: 1 <issue_comment>username_2: I found the answer. It is an issue with keras or tensorflow: <https://github.com/keras-team/keras/issues/2397>

The code below worked, and using dask shaved 50% from the time versus a standard pandas groupby.

```
#dask
model = keras.models.load_model('/home/embedding_model.h5')

#this part
import tensorflow as tf
global graph
graph = tf.get_default_graph()


def calc_HR_ind_dsk(grp):
    topk = 10
    x = [grp['user'].values, grp['item'].values]
    with graph.as_default():  #and this part from https://github.com/keras-team/keras/issues/2397
        pred_act = list(zip(model.predict(x)[:, 0], grp['respond'].values))
    top = sorted(pred_act, key=lambda x: -x[0])[0:topk]
    hit = sum([x[1] for x in top])
    return(hit)


import dask.dataframe as dd

df = dd.read_csv('/home/test_coded_final.csv', dtype='int64')

results = df.groupby('user').apply(calc_HR_ind_dsk).compute()
```

Upvotes: 4 [selected_answer]<issue_comment>username_3: A different answer I wrote might help here (the use-case was using Dask with a pre-trained ML model to predict on 1,000,000 examples): <https://stackoverflow.com/a/59015702/4900327>

Upvotes: 0
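As a concrete illustration of the `meta` parameter the first answer mentions, a hedged sketch; the output name and dtype are assumptions based on the question's function, which returns an integer hit count per group:

```python
# meta declares the output's name/dtype up front, so dask does not have to
# probe the custom function with dummy data (name and dtype are assumed here)
results = df.groupby('user').apply(calc_HR_ind_dsk, meta=('hit', 'int64')).compute()
```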
2018/03/14
428
1,117
<issue_start>username_0: I have been trying to filter, in the most efficient manner, for the rows whose value lies a given number of standard deviations away from the mean.

DF

```
Cashier#   Store   Sales_ct   Refunds_ct
001        001     100        10
002        001     200        9
003        001     900        8
004        002     200        10
005        002     400        9
006        002     200        8
```

How to get the results that are 2 std away from the mean to return:

```
Cashier#   Store   Sales_ct   Refunds_ct   sales_std_away_mean
003        001     900        8
```<issue_comment>username_1:

```
def abs_z(s):
    return s.sub(s.mean()).div(s.std(ddof=0)).abs()

df[abs_z(df.Sales_ct).ge(2)]

   Cashier#  Store  Sales_ct  Refunds_ct
2         3      1       900           8
```

Upvotes: 2 <issue_comment>username_2: You can calculate the mean and std of the Sales_ct column with

```
sales_mean = np.mean(df.Sales_ct)
sales_std = np.std(df.Sales_ct)
```

Then make a new column as you noted:

```
df['sales_std_away_mean'] = np.abs((df.Sales_ct - sales_mean)/sales_std)
```

Then slice to select the rows above a threshold:

```
subdf = df[df.sales_std_away_mean > 2.]
```

Upvotes: 2
2018/03/14
999
3,545
<issue_start>username_0: I have some order data that is in JSON format that I would like to populate a custom object with. Looking at the JSON.NET documentation it appears I can use [LINQ to JSON](https://www.newtonsoft.com/json/help/html/DeserializeWithLinq.htm) to deserialize JSON to a .NET type. My question is: can you do this with a class that references another custom class as a property? Assume I have two classes, `Order` and `OrderDetail`, where `Order` has a property that contains a collection of `OrderDetail` objects.

```
class Order
{
    public string OrderId {get; set;}
    public string OrderDescription {get; set;}
    public List<OrderDetail> OrderItems {get; set;} // Collection of OrderDetails
}

class OrderDetail
{
    public string ProductId {get; set;}
    public string ProductName {get; set;}
    public string UnitPrice {get; set;}
    public int Quantity {get; set;}
}
```

Can LINQ to JSON be used to query the data and populate the objects?

**Here is some sample JSON**

```
{
    "orders": [{
        "orderId": 111,
        "orderDescription": "Giant Food Mart",
        "orderItems": [{
                "productId": 65,
                "productName": "Dried Beef",
                "unitPrice": 10.00,
                "quantity": 7
            },
            {
                "productId": 23,
                "productName": "Carrots",
                "unitPrice": 1.25,
                "quantity": 100
            }
        ]
    },
    {
        "orderId": 112,
        "orderDescription": "Bob's Corner Variety",
        "orderItems": [{
                "productId": 523,
                "productName": "Red Licorice",
                "unitPrice": 0.50,
                "quantity": 27
            },
            {
                "productId": 321,
                "productName": "Gummy Worms",
                "unitPrice": 1.50,
                "quantity": 50
            }
        ]
    }]
}
```

It is how to parse the `OrderItems` into the **OrderItems** property of the `Order` object that is confusing me.

```
JArray parsedJson = JArray.Parse("JSON data here");

IList<Order> orders = parsedJson.Select(x => new Order
{
    OrderId = (int)x["orderId"],
    OrderDescription = (string)x["orderDescription"],
    OrderItems = x["orderItems"], // Here is where I get stuck
}).ToList();
```<issue_comment>username_1: You're making this harder than it needs to be. Use [json2csharp.com](http://json2csharp.com "json2csharp") to generate your classes to match the JSON.

```
public class OrderItem
{
    public int productId { get; set; }
    public string productName { get; set; }
    public double unitPrice { get; set; }
    public int quantity { get; set; }
}

public class Order
{
    public int orderId { get; set; }
    public string orderDescription { get; set; }
    public List<OrderItem> orderItems { get; set; }
}

public class RootObject
{
    public List<Order> orders { get; set; }
}
```

Then simply deserialize into those classes.

```
JsonConvert.DeserializeObject<RootObject>(myJsonString);
```

At this point you can use LINQ-to-Objects to do any queries you might need to.

Upvotes: 2 <issue_comment>username_2: You were pretty close; this worked using LINQ and you'll get everything in your orders list.

```
var parsedJson = JObject.Parse(test);

var orders = parsedJson.Values().Children()
    .Select(x => new Order
    {
        OrderId = (int)x["orderId"],
        OrderDescription = (string)x["orderDescription"],
        OrderItems = JsonConvert.DeserializeObject<List<OrderDetail>>(x["orderItems"].ToString())
    }).ToList();
```

Note: I had to change:

```
public string OrderId { get; set; } //you were casting to int but had a string.
```

to

```
public int OrderId { get; set; }
```

Upvotes: 2 [selected_answer]
2018/03/14
1,028
3,585
<issue_start>username_0: So I have a SQL query that I would like to convert to LINQ. Here is said query:

```
SELECT *
FROM DatabaseA.SchemaA.TableA ta
LEFT OUTER JOIN DatabaseA.SchemaA.TableB tb ON tb.ShipId = ta.ShipId
INNER JOIN DatabaseA.SchemaA.TableC tc ON tc.PostageId = tb.PostageId
WHERE tc.PostageCode = 'Package'
AND ta.MailId = 'Specification'
```

The problem I am struggling with is I cannot seem to figure out how to do a left join in LINQ before an inner join, since doing a left join in LINQ is not as clear, to me at least. I have found numerous examples of a LINQ inner join and then a left join, but not a left join and then an inner join. If it helps, here is the LINQ query I have been playing around with:

```
var query = from m in tableA
            join s in tableB on m.ShipId equals s.ShipId into queryDetails
            from qd in queryDetails.DefaultIfEmpty()
            join p in tableC on qd.PostageId equals p.PostageId
            where m.MailId == "Specification" && p.PostageCode == "Package"
            select m.MailId;
```

I have tried this a few different ways but I keep getting an "Object reference not set to an instance of an object" error on qd.PostageId. LINQ is very new to me and I love learning it, so any help on this would be much appreciated. Thanks!<issue_comment>username_1: Use:

```
var query = from m in tableA
            join s in tableB on m.ShipId equals s.ShipId
            join p in tableC on s.PostageId equals p.PostageId
            where m.MailId == "Specification" && p.PostageCode == "Package"
            select m.MailId;
```

Your query uses a `LEFT OUTER JOIN` but it doesn't need it. It will, in practice, function as an `INNER JOIN` due to your `tc.PostageCode = 'Package'` clause. *If you compare to a column value in a table in a `WHERE` clause (and there are no `OR` clauses and you aren't comparing to `NULL`) then effectively **all** joins to get to that table will be treated as `INNER`.*

That clause will **never** be true if `TableB` is `null` (which is why you use `LEFT OUTER JOIN` vs `INNER JOIN`), so you should just use an `INNER JOIN` to make the problem simpler.

Upvotes: 2 [selected_answer]<issue_comment>username_2: From my [SQL conversion recipe](https://stackoverflow.com/a/49245786/2557128):

1. `JOIN` conditions that aren't all equality tests with `AND` must be handled using `where` clauses outside the join, or with cross product (`from` ... `from` ...) and then `where`
2. `JOIN` conditions that are multiple `AND`ed equality tests between the two tables should be translated into anonymous objects
3. `LEFT JOIN` is simulated by using `into` *joinvariable* and doing another `from` the *joinvariable* followed by `.DefaultIfEmpty()`.

The order of `JOIN` clauses doesn't change how you translate them:

```
var ans = from ta in TableA
          join tb in TableB on ta.ShipId equals tb.ShipId into tbj
          from tb in tbj.DefaultIfEmpty()
          join tc in TableC on tb.PostageId equals tc.PostageId
          where tc.PostageCode == "Package" && ta.MailId == "Specification"
          select new { ta, tb, tc };
```

However, because the `LEFT JOIN` is executed before the `INNER JOIN` and then the `NULL` `PostageId`s in `TableB` for unmatched rows will never match any row in `TableC`, it becomes equivalent to an `INNER JOIN` as well, which translates as:

```
var ans2 = from ta in tableA
           join tb in tableB on ta.ShipId equals tb.ShipId
           join tc in tableC on tb.PostageId equals tc.PostageId
           where tc.PostageCode == "Package" && ta.MailId == "Specification"
           select new { ta, tb, tc };
```

Upvotes: 2
2018/03/14
1,277
4,513
<issue_start>username_0: I'm hoping this is a simple question, since I've never done shell scripting before. I'm trying to filter certain files out of a list of results. While the script executes and prints out a list of files, it's not filtering out the ones I don't want. Thanks for any help you can provide!

```
#!/bin/bash

# Purpose: Identify all *md files in H2 repo where there is no audit date
#
# Example call: no_audits.sh
# If that call doesn't work, try ./no_audits.sh
#
# NOTE: Script assumes you are executing from within the scripts directory of
#       your local H2 git repo.
#
# Process:
#   1) Go to H2 repo content directory (assumption is you are in the scripts dir)
#   2) Use for loop to go through all *md files in each content sub dir
#      and list all file names and directories where audit date is null

#set counter
count=0

# Go to content directory and loop through all 'md' files in sub dirs
cd ../content
FILES=`find . -type f -name '*md' -print`

for f in $FILES
do
    if [[ $f == "*all*" ]] || [[ $f == "*index*" ]] ; then
        # code to skip
        echo " Skipping file: " $f
        continue
    else
        # find audit_date in file metadata
        adate=`grep audit_date $f`
        # separate actual dates from rest of the grepped line
        aadate=`echo $adate | awk -F\' '{print $2}'`
        # if create date is null - proceed
        if [[ -z "$aadate" ]] ; then
            # print a list of all files without audit dates
            echo "Audit date: " $aadate " " $f;
            count=$((count+1));
        fi
    fi
done

echo $count " files without audit dates "
```<issue_comment>username_1: First, to address the immediate issue:

```
[[ $f == "*all*" ]]
```

is only true if the exact contents of *f* is the string `*all*` -- with the wildcards as literal characters. If you want to check for a substring, then the asterisks shouldn't be quoted:

```
[[ $f = *all* ]]
```

...is a better-practice solution. (Note the use of `=` rather than `==` -- this isn't essential, but is a good habit to be in, as the [POSIX `test` command](http://pubs.opengroup.org/onlinepubs/9699919799/utilities/test.html) is only specified to permit `=` as a string comparison operator; if one writes `[ "$f" == foo ]` by habit, one can get unexpected failures on platforms with a strictly compliant `/bin/sh`).

---

That said, a ground-up implementation of this script intended to follow best practices might look more like the following:

```
#!/usr/bin/env bash

count=0
while IFS= read -r -d '' filename; do
  aadate=$(awk -F"'" '/audit_date/ { print $2; exit; }' <"$filename")
  if [[ -z $aadate ]]; then
    (( ++count ))
    printf 'File %q has no audit date\n' "$filename"
  else
    printf 'File %q has audit date %s\n' "$filename" "$aadate"
  fi
done < <(find . -not '(' -name '*all*' -o -name '*index*' ')' -type f -name '*md' -print0)

echo "Found $count files without audit dates" >&2
```

Note:

* An arbitrary list of filenames cannot be stored in a single bash string (because all characters that might otherwise be used to determine where the first name ends and the next name begins could be present in the name itself). Instead, read one NUL-delimited filename at a time -- emitted with `find -print0`, read with `IFS= read -r -d ''`; this is discussed in [BashFAQ #1].
* Filtering out unwanted names can be done internal to `find`.
* There's no need to preprocess input to `awk` using `grep`, as `awk` is capable of searching through input files itself.
* `< <(...)` is used to avoid the behavior in [BashFAQ #24](https://mywiki.wooledge.org/BashFAQ/024), wherein content piped to a `while` loop causes variables set or modified within that loop to become unavailable after its exit.
* `printf '...%q...\n' "$name"` is safer than `echo "...$name..."` when handling unknown filenames, as `printf` will emit printable content that accurately represents those names even if they contain unprintable characters or characters which, when emitted directly to a terminal, act to modify that terminal's configuration.

Upvotes: 2 <issue_comment>username_2: Nevermind, I found the answer here: [bash script to check file name begins with expected string](https://stackoverflow.com/questions/25416991/bash-script-to-check-file-name-begins-with-expected-string)

I tried various versions of the wildcard/filename and ended up with:

```
if [[ "$f" == *all.md ]] || [[ "$f" == *index.md ]] ;
```

The link above said not to put those in quotes, and removing the quotes did the trick!

Upvotes: 0
2018/03/14
727
2,850
<issue_start>username_0: Why does my Python code not seem to want to run? I have tried everything. On IDLE it says "RESTART" and then the file name, and on PyCharm it says "Process finished with exit code 0", so I have no idea what I've done wrong. I would also like to add that if there is something stupid I've missed, I'm sorry; I've just started Python not too long ago and this is a starting project I wanted to make. Help would be well appreciated.

```
import time
import os

def main_interface():
    print("Welcome to The Adventures!")
    print('''/nNew Game/nOptions/nExit''')
    loading_choose = input()
    loading_choose = loading_choose.lower()
    if loading_choose == 'New Game':
        new_game()
    elif loading_choose == 'Options':
        options()

def options():
    print("Welcome To Options")
    time.sleep(1)
    print("\nGraphics\nAudio\nExit")
    options = input()
    options.lower()
    if options == 'Graphics':
        graphic_interface()
    elif options == 'Audio':
        audio_interface()
    elif options == 'Exit':
        main_interface()

def graphic_interface():
    print("What Graphics Would You Like?")
    print("\nLow\nMedium\nHigh")
    graphics = input()
    graphics = graphics.lower()
    if graphics == 'Low':
        print("Graphics Set To Low")
        options()
    elif graphics == 'Medium':
        print("Graphics Set To Medium")
        time.sleep(1)
        options()
    elif graphics == 'High':
        print("Graphics Set To High")
        time.sleep(1)
        options()

def audio_interface():
    print("What Do You Want Your Volume To Be?")
    print("0-100")
    volume = input()
    if volume <100:
        print("Volume Greater Then 100")
        time.sleep(1)
        print("Try Again!")
        audio_interface()
    else:
        print("Volume Set To"+volume)
        options()
```<issue_comment>username_1: The reason why IDLE/PyCharm is not doing anything is because your code isn't doing anything. What I mean is that no functions are being called by your program; it's just a bunch of functions. To fix it, just call a function in your program (write this outside of a function). It's a simple mistake to make, but it is completely understandable given that you are beginning Python.

But also, I see that there is no `new_game()` function defined, which you call in the `main_interface()` function. Also, in the `audio_interface()` function, the `input()` function is returning a `str`, not an `int`, so you would want to convert the result to an `int` like so: `volume = int(input())`

Happy Learning!

Upvotes: 2 <issue_comment>username_2: add

```
main_interface()
```

to the end of the script to run the function main_interface. Remove the line `loading_choose = loading_choose.lower()` because it turns the input text to lowercase and it won't match any option.

Upvotes: 0
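Pulling both answers together, a minimal sketch of the entry-point wiring; it assumes the rest of the functions stay as in the question (and `new_game()` still needs to be written):

```
# Entry point: without an actual call, the script only defines functions
# and exits immediately with code 0
if __name__ == "__main__":
    main_interface()
```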
2018/03/14
1,671
5,828
<issue_start>username_0: I am trying to receive sensor data from an Arduino and write the readings into a file using the Processing IDE (serial communication over USB). After doing a large number of tests, I am pretty sure it's the Processing side that is unable to process the data. **Only the first few (< 100) "samples" are written successfully; after that `Serial.available()` always returns false.** I am sending two-byte chunks from the Arduino, 57600 baud, default settings (8 bit, no parity, 1 stop bit).

Arduino code:

```cpp
unsigned int data = 0;
unsigned char buf[2];

void setup() {
    Serial.begin(57600);
}

void loop() {
    data = analogRead(A0);
    buf[0] = data & 0xFF; // low byte
    buf[1] = data >> 8;   // high byte
    Serial.write(buf, 2);
    Serial.flush();
    delayMicroseconds(300);
}
```

Processing code:

```java
import processing.serial.*;

Serial serialPort;
String serialData;
PrintWriter output;

int recordingTime = 1000; // how many miliseconds of data stream to record
byte[] dataBuffer = new byte[2]; // reserve memory for 2 bytes and initialize to 0 (java stuff)
int receivedBytes = 0;

void setup() {
    serialPort = new Serial(this, "/dev/ttyUSB0", 57600);
    output = createWriter("serialData.txt");
}

void draw() {
    while(millis() < recordingTime) {
        if (serialPort.available() > 0) {
            receivedBytes = serialPort.readBytes(dataBuffer);
            output.print("\n");
            output.print("Received bytes: ");
            output.print(receivedBytes);
            output.print("\n");
            output.println(binary(dataBuffer[0])); // low byte
            output.println(binary(dataBuffer[1])); // high byte
            output.println("");
        } else {
            output.print("\n");
            output.println("No data available");
        }
    }
    output.flush();
    output.close();
    exit();
}
```

Output:

```
Received bytes: 2
11101001
00000011

Received bytes: 2
11101001
00000011

Received bytes: 2
11101001
00000011

...after some lines...

No data available

No data available

No data available
```

**Why is this happening?** Why is there "no data available" after a few samples? If I watch the serial monitor output in the Arduino IDE, it works fine.<issue_comment>username_1: I can read the serial data from the Arduino using `screen` and using Python. I still cannot get it to work in Processing; I only receive a few samples (17, exactly).

Screen command: `$ screen` (press `ctrl`+`a` and then `shift`+`k` to stop the program; add the -L flag for logging to a file). Works as expected.

I managed to achieve the same results using the [Pyserial library](https://pyserial.readthedocs.io/en/latest/pyserial.html) in Python:

```
#
# log_serial.py
# Writes incoming serial data to file.

import time
import serial

# Edit these parameters ========================================================
serialPort = "/dev/ttyUSB0"
baudrate = 57600
recordTime = 1000  # milliseconds
# ==============================================================================

def millis():
    """ Returns current (wall-)time in milliseconds """
    return int(round(time.time() * 1000))

ser = serial.Serial(serialPort, baudrate)

with open("output.txt", "w") as f:
    startTime = millis()
    f.write("Recording started at: ")
    f.write(str(startTime))
    f.write("\n")

    while (millis() - startTime) <= recordTime:
        inData = ser.read(2)  # reads two bytes
        inInt = int.from_bytes(inData, byteorder='little')  # merges them into an integer
        f.write(str(inInt))
        f.write("\n")

    f.write("Recording finished at: ")
    f.write(str(millis()))
    f.write("\n")
```

Still don't know why the Processing version can't handle it... There was a bug (wrong while loop condition) in my first Processing code.

Here is the updated version, using `Serial.read()` rather than `Serial.readBytes(buffer)`. It still doesn't solve my problem; I'm only getting 17 samples:

```
import processing.serial.*;

Serial serialPort;
String serialData;
PrintWriter output;

int recordingTime = 5000; // how many miliseconds of data stream to record

void setup() {
    serialPort = new Serial(this, "/dev/ttyUSB0", 57600);
    output = createWriter("serialData.txt");
}

void draw() {
    int startTime = millis();
    while((millis() - startTime) <= recordingTime) {
        if (serialPort.available() > 0) {
            int b1 = serialPort.read();
            int b2 = serialPort.read();
            int value = (b2 << 8) + (b1 & 0xFF);
            output.println(value);
        }
    }
    output.flush();
    output.close();
    exit();
}
```

Upvotes: 1 <issue_comment>username_2: I have no problems with the serial communication between Processing and my Arduino Uno/Mega using `serialPort.read()` instead of `serialPort.readBytes(dataBuffer)`. Maybe this makes a difference... Have you already tried to increase the variable `recordingTime`?

Upvotes: 0 <issue_comment>username_2: Please try to run this code and post the content of the written file afterwards:

```
import processing.serial.*;

Serial serialPort;
String serialData;
PrintWriter output;

int recordingTime = 5000; // how many miliseconds of data stream to record

void setup() {
    serialPort = new Serial(this, "/dev/ttyUSB0", 57600);
    output = createWriter("serialData.txt");
}

void draw() {
    int startTime = millis();
    while((millis() - startTime) <= recordingTime) {
        if (serialPort.available() > 0) {
            int b1 = serialPort.read();
            int b2 = serialPort.read();
            int value = (b2 << 8) + (b1 & 0xFF);
            output.print(millis()); //to each received value there is written the actual millis()-value
            output.print(",");
            output.println(value);
        }
    }
    output.flush();
    output.close();
    exit();
}
```

Upvotes: 1
2018/03/14
354
1,304
<issue_start>username_0: When I try to run a functional test on Symfony 4, I get this:

```
The routing file "{__PATH__}config/routes/admin.yaml" contains unsupported keys for "admin_home": "controller". Expected one of: "resource", "type", "prefix", "path", "host", "schemes", "methods", "defaults", "requirements", "options", "condition", "ControllerTest"
```

I don't understand why, because my routing configuration follows the official documentation:

```
admin_home:
    path: ''
    controller: App\Controller\Admin\HomeController::home
```

Official doc: <http://symfony.com/doc/current/routing.html> (in the YAML tabs; I didn't install the annotations package), where "controller" is a supported key.

I installed the PHPUnit package with `composer require --dev symfony/phpunit-bridge`, then I run `./vendor/bin/simple-phpunit`.<issue_comment>username_1: This is a new syntax, [introduced in Symfony 3.4/4.0](https://github.com/symfony/symfony/pull/23227). On older versions, you should use:

```yaml
admin_home:
    path: ''
    defaults: { _controller: App\Controller\Admin\HomeController::home }
```

Upvotes: 1 <issue_comment>username_2: As I could not reproduce the issue in a new project, **even with the same composer.json file**, I retried removing the vendor directory. It works.

Upvotes: 0
2018/03/14
2,251
8,490
<issue_start>username_0: In a component, we use a ngrx selector to retrieve different parts of the state. ``` public isListLoading$ = this.store.select(fromStore.getLoading); public users$ = this.store.select(fromStore.getUsers); ``` the `fromStore.method` is created using ngrx `createSelector` method. For example: ``` export const getState = createFeatureSelector('users'); export const getLoading = createSelector( getState, (state: UsersState) => state.loading ); ``` I use these observables in the template: ``` * {{user.name}} ``` I would like to write a test where i could do something like: ``` store.select.and.returnValue(someSubject) ``` to be able to change subject value and test the template of the component agains these values. The fact is we struggle to find a proper way to test that. How to write my "andReturn" method since the `select` method is called two times in my component, with two different methods (MemoizedSelector) as arguments? We don't want to use real selector and so mocking a state then using real selector seems not to be a proper unit test way (tests wouldn't be isolated and would use real methods to test a component behavior).<issue_comment>username_1: I ran into the same challenge and solved it once and for all by wrapping my selectors in services, so my components just used the service to get their data rather than directly going through the store. I found this cleaned up my code, made my tests implementation-agnostic, and made mocking much easier: ``` mockUserService = { get users$() { return of(mockUsers); }, get otherUserRelatedData$() { return of(otherMockData); } } TestBed.configureTestingModule({ providers: [{ provide: UserService, useValue: mockUserService }] }); ``` --- Before I did that however, I had to solve the issue in your question. The solution for you will depend on where you are saving the data. If you are saving it in the `constructor` like: ``` constructor(private store: Store) { this.users$ = store.select(getUsers); } ``` Then you will need to recreate the test component every time you want to change the value returned by the `store`. To do that, make a function along these lines: ``` const createComponent = (): MyComponent => { fixture = TestBed.createComponent(MyComponent); component = fixture.componentInstance; fixture.detectChanges(); return component; }; ``` And then call that after you change the value of what your store spy returns: ``` describe('test', () => { it('should get users from the store', () => { const users: User[] = [{username: 'BlackHoleGalaxy'}]; store.select.and.returnValue(of(users)); const cmp = createComponent(); // proceed with assertions }); }); ``` Alternatively, if you are setting the value in `ngOnInit`: ``` constructor(private store: Store) {} ngOnInit() { this.users$ = this.store.select(getUsers); } ``` Things are a bit easier, as you can create the component once and just recall `ngOnInit` every time you want to change the return value from the store: ``` describe('test', () => { it('should get users from the store', () => { const users: User[] = [{username: 'BlackHoleGalaxy'}]; store.select.and.returnValue(of(users)); component.ngOnInit(); // proceed with assertions }); }); ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: I also ran into this problem and using services to wrap the selectors is no option for me, too. Especially not only for testing purposes and because I use the store to replace services. 
Therefore I came up with the following (also not perfect) solution: I use a different 'Store' for each component and each different aspect. In your example I would define the following Stores&States: ``` export class UserStore extends Store {} export class LoadingStatusStore extends Store {} ``` And inject them in the User-Component: ``` constructor( private userStore: UserStore, private LoadingStatusStore: LoadingStatusStore ) {} ``` Mock them inside the User-Component-Test-Class: ``` TestBed.configureTestingModule({ imports: [...], declarations: [...], providers: [ { provide: UserStore, useClass: MockStore }, { provide: LoadingStatusStore, useClass: MockStore } ] }).compileComponents(); ``` Inject them into the beforeEach() or it() test method: ``` beforeEach( inject( [UserStore, LoadingStatusStore], ( userStore: MockStore, loadingStatusStore: MockStore ) => {...} ``` Then you can use them to spy on the different pipe methods: ``` const userPipeSpy = spyOn(userStore, 'pipe').and.returnValue(of(user)); const loadingStatusPipeSpy = spyOn(loadingStatusStore, 'pipe') .and.returnValue(of(false)); ``` The drawback of this method is that you still can't test more than one part of a state of a store in one test-method. But for now this works as a workaround for me. Upvotes: 1 <issue_comment>username_3: You could use something like that: ```js spyOn(store, 'select').and.callFake(selectFake); function pipeFake(op1: OperatorFunction): Observable { if (op1.toString() === fromStore.getLoading.toString()) { return of(true); } if (op1.toString() === fromStore.getUsers.toString()) { return of(fakeUsers); } return of({}); } ``` Upvotes: -1 <issue_comment>username_4: I created a helper like that: ``` class MockStore { constructor(public selectors: any[]) { } select(calledSelector) { const filteredSelectors = this.selectors.filter(s => s.selector === calledSelector); if (filteredSelectors.length < 1) { throw new Error('Some selector has not been mocked'); } return cold('a', {a: filteredSelectors[0].value}); } } ``` And now my tests look like this: ``` const mock = new MockStore([ { selector: selectEditMode, value: initialState.editMode }, { selector: selectLoading, value: initialState.isLoading } ]); it('should be initialized', function () { const store = jasmine.createSpyObj('store', ['dispatch', 'select']); store.select.and.callFake(selector => mock.select(selector)); const comp = new MyComponent(store); comp.ngOnInit(); expect(comp.editMode$).toBeObservable(cold('a', {a: false})); expect(comp.isLoading$).toBeObservable(cold('a', {a: false})); }); ``` Upvotes: 2 <issue_comment>username_5: Moving your selectors into a service will not eliminate the need to mock selectors, if you are going to test selectors themselves. ngrx now has its own way of mocking and it is described here: <https://ngrx.io/guide/store/testing> Upvotes: 3 <issue_comment>username_6: The best solution I've found is using a **switch** statement to return the data you want for each selector. The [solution](https://stackoverflow.com/a/49301274/1974681) @username_1 provides only works when mocking one select request. For example: ```js jest.spyOn(store, "select").mockImplementation((selector) => { switch (selector) { case selectSelectedKey: return of(key); case selectCountry: return of(CANADA_MOCK); } return EMPTY; }); ``` Upvotes: 1 <issue_comment>username_7: If what you want to accomplish is to mock a state update so that your subscription to your selector receives a new value, you should use what NgRx suggests here. 
<https://ngrx.io/guide/store/testing#using-mock-selectors> Using the "overrideSelector" you can overwrite a selector you created and make it return a mocked value. For example: ``` store = TestBed.inject(MockStore); store.overrideSelector(getState, mockValue); ``` Upvotes: 2 <issue_comment>username_8: Using the **overrideSelector** worked for me. This video helped me solve the problem: <https://www.youtube.com/watch?v=NOT-nJLDnyg> ``` import { ... } from '...'; import { MockStore, provideMockStore } from '@ngrx/store/testing'; describe('MyComponent', () => { let store: MockStore; let fixture: ComponentFixture<MyComponent>; let component: MyComponent; beforeEach(async () => { await TestBed.configureTestingModule({ declarations: [ MyComponent ], imports: [ ... ], providers: [ ... provideMockStore({initialState}), ], schemas: [NO_ERRORS_SCHEMA], }) .compileComponents(); fixture = TestBed.createComponent(MyComponent); store = TestBed.inject(MockStore); component = fixture.componentInstance; store.overrideSelector(mySelector, { id: 0, name: 'test', value: 50, }) fixture.detectChanges(); spyOn(store, 'dispatch').and.callFake(() => {}); }); }); ``` Upvotes: 2
2018/03/14
292
1,058
<issue_start>username_0: When I run the command `rails new app-path -d postgresql --skip-test`, what does it call my app? Or is it just the path I put it in?<issue_comment>username_1: If you run `rails new foo/bar/baz`, the new app will be created inside of `foo/bar/baz` and will be named after the last segment of the provided path, `Baz` in this instance. Upvotes: 0 <issue_comment>username_2: If you run "rails new myapp -d mysql", a new folder will be created with the name myapp. If you run "rails new path/myapp -d mysql", a new folder called "path/myapp" will be created. The project name is myapp. Upvotes: 0 <issue_comment>username_3: If you run `rails new app-path -d postgresql --skip-test`, it will create a Rails app named after the last part of `app-path`. For example, if you run `rails new /Users/apple/work/demo_app -d postgresql --skip-test`, it will create an app named `demo_app` inside the directory `/Users/apple/work`. If the specified directory doesn't exist, the path will be created, and the last part of the path will be the app name. Upvotes: 1
2018/03/14
2,533
4,931
<issue_start>username_0: I want to know if it is possible to rewrite `A` as a Ramda pipe where `D` waits on `C` without passing the result of `C` into `D`: ``` const A = async payload => { const resultB = await B(payload); const resultC = await C(resultB); const resultD = await D(resultB); return resultB; }; ``` Edit: this doesn't seem to be producing desired results: ``` const R = require('ramda'); const then = R.curry((f, p) => p.then(f)); const trace = R.curry(async(name, data) => `${name}(${data})`); const B = trace('B'); const C = trace('C'); const D = trace('D'); const A = async payload => { const resultB = await B(payload); await C(resultB); await D(resultB); return resultB; }; const A_Pipe = R.pipe (B, then(C), then(D)); A('x').then(console.log); // -> B(x) A_Pipe('x').then(console.log); // -> D(C(B(x))) ```<issue_comment>username_1: Apparently [Ramda plans to add `R.then`](https://github.com/ramda/ramda/pull/1906) but it looks like they haven't gotten around to it yet Until then, you can make your own ``` const then = R.curry((f, p) => p.then(f)) const A = R.pipe(B, then(C), then(D)) ``` Here's a complete program you can paste in the [Ramda REPL](http://ramdajs.com/repl/#?const%20then%20%3D%20f%20%3D%3E%20p%20%3D%3E%0A%20%20p.then%20%28f%29%0A%0Aconst%20effect%20%3D%20f%20%3D%3E%20x%20%3D%3E%0A%20%20%28f%20%28x%29%2C%20x%29%0A%20%20%0Aconst%20trace%20%3D%0A%20%20effect%20%28console.log%29%0A%0Aconst%20fakeFetch%20%3D%20x%20%3D%3E%0A%20%20new%20Promise%20%28r%20%3D%3E%20setTimeout%20%28r%2C%20200%2C%20x%29%29%0A%0Aconst%20B%20%3D%20x%20%3D%3E%0A%20%20fakeFetch%20%28trace%20%28%60%5BB%3A%20%24%7Bx%7D%5D%60%29%29%0A%0Aconst%20C%20%3D%20x%20%3D%3E%0A%20%20fakeFetch%20%28trace%20%28%60%5BC%3A%20%24%7Bx%7D%5D%60%29%29%0A%20%20%0Aconst%20D%20%3D%20x%20%3D%3E%0A%20%20fakeFetch%20%28trace%20%28%60%5BD%3A%20%24%7Bx%7D%5D%60%29%29%0A%20%20%20%20%0Aconst%20A%20%3D%0A%20%20pipe%20%28B%2C%20then%20%28C%29%2C%20then%20%28D%29%29%0A%0AA%20%281%29%0A%2F%2F%20%3D%3E%20%7B%20Promise%20%22%5BD%3A%20%5BC%3A%20%5BB%3A%201%5D%5D%5D%22%20%7D) ``` const then = f => p => p.then (f) const effect = f => x => (f (x), x) const trace = effect (console.log) const fakeFetch = x => new Promise (r => setTimeout (r, 200, x)) const B = x => fakeFetch (trace (`[B: ${x}]`)) const C = x => fakeFetch (trace (`[C: ${x}]`)) const D = x => fakeFetch (trace (`[D: ${x}]`)) const A = pipe (B, then (C), then (D)) A (1) // => { Promise "[D: [C: [B: 1]]]" } ``` Output ``` [B: 1] [C: [B: 1]] [D: [C: [B: 1]]] ``` --- **I see what you did there** Upon closer inspection, `C` and `D` are side-effecting functions whose return value is discarded – `resultC` and `resultD` are not used. Rather, `resultB` is the only value you seem to care about ``` const A = async payload => { const **resultB** = await B(payload) ~~const resultC =~~ await C(**resultB**) ~~const resultD =~~ await D(**resultB**) return **resultB** } ``` When composing functions with `R.compose` or `R.pipe`, the return value of one is passed to the next. In your case however, `C` and `D` should not impact the input. 
I introduce `asyncTap` to encode your intention – compare to [`R.tap`](http://ramdajs.com/docs/#tap) or `effect` above ``` const asyncTap = f => p => p.then (R.tap (f)) const A = pipe (B, asyncTap (C), asyncTap (D)) A (1) .then (console.log, console.error) // => { Promise "[B: 1]" } ``` Output – see complete program in the [Ramda REPL](http://ramdajs.com/repl/#?const%20asyncTap%20%3D%20f%20%3D%3E%20p%20%3D%3E%0A%20%20p.then%20%28R.tap%20%28f%29%29%0A%0Aconst%20trace%20%3D%0A%20%20R.tap%20%28console.log%29%0A%20%20%0Aconst%20fakeFetch%20%3D%20x%20%3D%3E%0A%20%20new%20Promise%20%28r%20%3D%3E%20setTimeout%20%28r%2C%20200%2C%20x%29%29%0A%0Aconst%20B%20%3D%20x%20%3D%3E%0A%20%20fakeFetch%20%28trace%20%28%60%5BB%3A%20%24%7Bx%7D%5D%60%29%29%0A%0Aconst%20C%20%3D%20x%20%3D%3E%0A%20%20fakeFetch%20%28trace%20%28%60%5BC%3A%20%24%7Bx%7D%5D%60%29%29%0A%20%20%0Aconst%20D%20%3D%20x%20%3D%3E%0A%20%20fakeFetch%20%28trace%20%28%60%5BD%3A%20%24%7Bx%7D%5D%60%29%29%0A%20%20%20%20%0Aconst%20A%20%3D%0A%20%20pipe%20%28B%2C%20asyncTap%20%28C%29%2C%20asyncTap%20%28D%29%29%0A%0AA%20%281%29%20.then%20%28console.log%2C%20console.error%29%0A%2F%2F%20%3D%3E%20%7B%20Promise%20%22%5BB%3A%201%5D%22%20%7D%0A) ``` [B: 1] [C: [B: 1]] [D: [B: 1]] [B: 1] ``` This begs the question: what *are* you doing with `resultC` and `resultD`? Functional programming is about writing programs with pure, side-effect-free functions. When you're having difficulty expressing your program in a functional way, it can sometimes indicate you're not thinking in a functional way. --- Upvotes: 2 <issue_comment>username_2: Not ramda, but it does what you want. ```js const { pipe, tap } = require('rubico') const A = pipe([ B, tap(C), tap(D), ]) ``` Upvotes: 0
2018/03/14
975
3,182
<issue_start>username_0: is it possible to call a function using a variable as the name for a variable? Example: ``` def my_method(foo="bar"): print(foo) var = "foo='baz'" my_method(var) >>> baz ``` Now, I can't figure out a way to do this kind of thing (substitute the value in a variable for a variable name). Is this kind of thing possible? I know there are things you can do that are analogous to this, for instance: ``` def my_method(foo, bar, baz): print(foo, bar, baz) var = ['one','two','three'] my_method(*var) >>> one two three ``` But I can't find a uniform, generalized solution to any metaprogramming I might need in python. Is there one? Perhaps the language just isn't capable of a generalized metaprogramming solution.<issue_comment>username_1: You can provide `exec` with a dictionary inside which it will store variables and then unwrap it as your function's keyword arguments. ``` def my_method(foo="bar"): print(foo) var_a = "foo='baz'" kwargs = {} # See safety note at the bottom of this answer. exec(var_a, {'__builtins__': {}}, kwargs) my_method(**kwargs ) # prints: 'baz' ``` You could even use a decorator to give that behaviour to functions. ``` def kwargs_as_string(f): def wrapper(string, **more_kwargs): kwargs = {} # See safety note at the bottom of this answer. exec(string, {'__builtins__': {}}, kwargs) return f(**kwargs, **more_kwargs) return wrapper @kwargs_as_string def my_method(foo="bar"): print(foo) my_method("foo='baz'") # prints: 'baz' ``` ### Safety note To be safe, we provide `exec` with an empty global `__builtins__`, otherwise a reference to the dictionary of the built-in module is inserted under that key. This can lead to trouble. ``` var_a = '__import__("sys").stdout.write("You are in trouble")' exec(var_a, {}, {}) # prints: You are in trouble exec(var_a, {'__builtins__': {}}, {}) # raises a NameError: name '__import__' is not defined ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Assuming you're allowed to have JSON formatted strings... ``` import json args = json.loads( """ { "kwargs": { "a": 2, "b": 1 }, "args": [3, 4] } """) def foo(a, b): print("a: {}".format(a)) print("b: {}".format(b)) foo(**args["kwargs"]) foo(*args["args"]) # output: # a: 2 # b: 1 # a: 3 # b: 4 ``` Upvotes: 1 <issue_comment>username_3: [can you use getattr to call a function within your scope?](https://stackoverflow.com/questions/52763125/can-you-use-getattr-to-call-a-function-within-your-scope/52763715#52763715) I found these three options, the last one to be the most useful in my case. ``` def foo(): def bar(baz): print('dynamically called bar method using', baz) packages = {'bar': bar} getattr(packages['bar'], "__call__")('getattr with map') packages['bar']('just the map') locals()['bar']('just locals()') foo() ``` > > python test\_dynamic\_calling.py > > > ``` dynamically called bar method using getattr with map dynamically called bar method using just the map dynamically called bar method using just locals() ``` Upvotes: 0
2018/03/14
305
1,119
<issue_start>username_0: Is there any way to customize the search-bar element that NativeScript provides, and add some buttons to it? I am trying to get something like this (the search-bar in this app) [![enter image description here](https://i.stack.imgur.com/ZlO0p.png)](https://i.stack.imgur.com/ZlO0p.png) I've been searching a bit but found nothing about it.<issue_comment>username_1: The search-bar from tns-core-modules doesn't provide what you're looking for (see the API at <https://docs.nativescript.org/api-reference/modules/_ui_search_bar_>). I'd recommend implementing the component yourself. Upvotes: 0 <issue_comment>username_2: Basic demo here: <https://play.nativescript.org/?template=play-vue&id=y6iFw9> You can always hide the default action bar with `actionBarHidden="true"` on your element and then create your own action bar. In this case you can use `GridLayout` and put each element in its own column. Something like: ``` <GridLayout columns="auto, *, auto"> <Label col="0" text="Back" /> <SearchBar col="1" hint="Search" /> <Label col="2" text="Filter" /> </GridLayout> ``` Just replace the labels with your icons, and add `@tap="yourFunction"` to fire when the icon is pressed. To turn labels into icons you can use a package like Fonticon. Upvotes: 2
2018/03/14
1,236
3,421
<issue_start>username_0: I'm trying to delete comments from a file, but importantly I want to leave specific strings: ``` ## Something # START # END ``` These have to stay, along with the rest of the non-commented lines, and I want to remove the rest with "d" - this is important. I don't want to use print negation or other tricks because this sed command also processes other things later with an additional "-e". Here is a sample file: ``` # START group1: <EMAIL>, <EMAIL>, <EMAIL> group2: <EMAIL>, <EMAIL> # S #STAR # start # star # comment is here ## Owner1 group3: <EMAIL>, <EMAIL> ## Owner2 group4: <EMAIL>, <EMAIL> group3: <EMAIL>, <EMAIL> # END group5: <EMAIL> alias1: <EMAIL> ``` I tried to use a command like: ``` sed -e '/^#[^#]/d' sample.file ``` which removes each line starting with "#" where the next character is NOT "#", so it leaves the "##" lines, but how do I remove the rest without losing the # START and # END lines? I need to do this in the same command without pipes or "p"/"!p" variants; it has to be this "d"-based version. I tried things like: ``` sed -e '/^#[^#][^S][^T][^A][^R][^T]/d' ``` or ``` sed -e '/^#[^#]\([^S][^T][^A][^R][^T]\|[^E][^N][^D]\)/d' ``` but nothing is working the way I want. I'm not sure if this is possible this way. Expected output: ``` # START group1: <EMAIL>, <EMAIL>, <EMAIL> group2: <EMAIL>, <EMAIL> ## Owner1 group3: <EMAIL>, <EMAIL> ## Owner2 group4: <EMAIL>, <EMAIL> group3: <EMAIL>, <EMAIL> # END group5: <EMAIL> alias1: <EMAIL> ``` Greetings & thanks for help :)<issue_comment>username_1: Try: ``` sed -E '/^##|^# START|^# END/bskip; /^#/d; :skip' file ``` ### Example ``` $ sed -E '/^##|^# START|^# END/bskip; /^#/d; :skip' file # START group1: <EMAIL>, <EMAIL>, <EMAIL> group2: <EMAIL>, <EMAIL> ## Owner1 group3: <EMAIL>, <EMAIL> ## Owner2 group4: <EMAIL>, <EMAIL> group3: <EMAIL>, <EMAIL> # END group5: <EMAIL> alias1: <EMAIL> ``` ### How it works * `/^##|^# START|^# END/bskip` For any line that matches `^##` or `^# START` or `^# END`, we branch to the label `skip`. * `/^#/d` For all other lines that start with `#`, we delete. * `:skip` This defines the label `skip`. ### BSD/macOS The above was tested with GNU sed. For BSD/macOS sed, try: ``` sed -E -e '/^##|^# START|^# END/bskip' -e '/^#/d' -e ':skip' file ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This is more verbose than username_1's answer, but works too: ``` sed -r 's/# ((START)|(END)).*/## \1/;/^#[^#].*/d;s/## ((START)|(END))/# \1/;' sample.conf ``` Transfer the # START/END comments to the protected ## format, then do the transformation, then transform them back to # START/END. At first I overlooked the 'no-/p' requirement; ignoring it, the obvious solution is: ``` sed -r '/# (START)|(END).*/p;/^#[^#].*/d' sample.conf ``` Instead of deleting with a complicated delete-pattern /d, you can use a simple print-pattern /p. Note that `[^S][^T][^A][^R][^T]` would match "`END`" (with 2 trailing spaces - maybe unlikely, but if another 3- or 5-letter exception needs treatment, it gets ugly, if it isn't already). Upvotes: 0
2018/03/14
540
1,520
<issue_start>username_0: I have encountered this problem multiple times. I want to export a summary data set that I made in **R** to a table in Word. The best I can do for now is to first export the data to Excel, and then copy the table from Excel to Word. My sample data: ``` > sum_tab col1 col2 col3 2 move up 10 10 3 no change 4 9 1 move down 12 7 21 move up 11 5 31 no change 4 16 11 move down 11 5 22 move up 9 6 32 no change 10 14 12 move down 7 6 ``` Export to Excel: ``` library(xlsx) write.xlsx(sum_tab, file = "sum_tab.xlsx") ``` Is there a neat way to export the `sum_tab` data to a table in Word with 10 rows and 4 columns?<issue_comment>username_1: You can use one of these two options: rmarkdown or the sjPlot package. ``` sum_tab = data.frame(col1 = c("move up","no change", "move down", "move up", "no change","move down","move up","no change","move down"), col2 = c(10,4,12,11,4,11,9,10,7), col3 = c(10,9,7,5,16,5,6,14,6)) row.names(sum_tab) <- c(2,3,1,21,31,11,22,32,12) sum_tab library(sjPlot) tab_df(sum_tab) ``` In the viewer you can select the table with the cursor and paste it into Word. Upvotes: 3 [selected_answer]<issue_comment>username_2: The sjPlot package has updated its functions with the important `file` parameter, with which you can specify a `.doc` file (note: not `.docx`). So to save the `sum_tab` dataframe, you merely write: ``` sjPlot::tab_df(sum_tab, file = "output.doc") ``` Upvotes: 0
2018/03/14
1,900
5,269
<issue_start>username_0: I am using a bootstrap mega menu and I have it styled and working as I need, however I need it to open for desktop devices on hover (instead of on click). I thought I would be able to easily do this with just css, but I cannot get it to work. Any suggestions? I am open to using jQuery or jscript, but I am not very fluid with these just yet. Working jsfiddle: <https://jsfiddle.net/L2o657p6/4/> HTML: ``` Mega Menuu Toggle navigation [Menu Logo](#) * [Today (current)](#) [![](holder.js/100x100)](#) [![](holder.js/46x46)](#) [![](holder.js/46x46)](#) ##### Today's Featured Collections + [Press Every Button](#) + [Travel with Technology](#) + [Smart Choice](#) + [Fall in Love with Tech](#) + [Smartphones at a Snip](#) [![](holder.js/100x100)](#) [![](holder.js/46x46)](#) [![](holder.js/46x46)](#) ##### Today's Trending Collections + [Harley-Davidson](#) + [Will you be my Valentine?](#) + [Summer Wedding Bridesmaid Dresses](#) + [Pink Wedding Centerpiece Ideas](#) + [Wedding Party Table Runners](#) + [Backyard Wedding Reception](#) ##### My Collections You currently have no collections. [Learn how to create one](#). * [Fashion](#) ##### Top categories + + [Men's](#) + [Women's](#) + [Kids](#) ##### Shop for + + [Jewelry & Watches](#) + [Handbags & Accessories](#) + [Health & Beauty](#) + [Shoes](#) + [Sales & Events](#) ![](holder.js/100px200) * [Electronics](#) ##### Top categories + [Cell Phones & Accessories](#) + [Cameras & Photo](#) + [Computers & Tablets](#) ##### Other categories + [Car Audio, Video & GPS](#) + [iPhone](#) + [iPad](#) + [TV, Audio](#) + [Video Games & Consoles](#) ![](holder.js/100px200) * [Deals](#) ##### Best deals of the day + [Car Audio, Video & GPS](#) + [iPhone](#) + [iPad](#) + [TV, Audio](#) + [Video Games & Consoles](#) [![](holder.js/100px140) ##### Waterproof cellphone cover $5.99](#) [![](holder.js/100px140) ##### Large 20 slot leather watch box organizer $25.99](#) [![](holder.js/100px140) ##### Samsung Galaxy Tab A SM-P550NZAAXAR 9.7-Inch W-Fi Tablet (Titanium with S-Pen) $319](#) * [Contact Us](#) ##### Contact us Feel free to drop us a line, we will respond as sson as possible. 
Email address Text Submit ![](holder.js/100px260?text=Map) ``` CSS: ``` .nav > .dropdown-megamenu { position: static; } @media (max-width: 767px) { .navbar-nav .open .dropdown-container { position: static; float: none; width: auto; margin-top: 0; border: 0; box-shadow: none; border-radius: 0; } } .dropdown-megamenu > .dropdown-container { position: absolute; top: 100%; left: 0; right: 0; max-width: 100%; padding: 15px; } .dropdown-megamenu .dropdown-menu { display: block; } .link-image .media-object { float: left; margin-bottom: 7.5px; } .link-image-sm + .link-image-sm .media-object { margin-left: 7.5px; } .thumbnail .caption { min-height: 120px; } .thumbnail:hover { text-decoration: none; } /* Link list */ .list-links { list-style: none; padding: 0; } .list-links li { line-height: 1.71428571; } .list-links a { color: #555; } .list-links a:hover, .list-links a:focus, .list-links a:active { color: #22527b; } html, body { height: 100%; min-height: 500px; } body { background: -webkit-linear-gradient(top, #59a874 0, #449a63 100%); background: linear-gradient(to bottom, #59a874 0, #449a63 100%); } h3 { font-family: 'Open Sans', sans-serif; font-weight: bold; text-align: center; line-height: 1.3; margin-bottom: 2rem; color: #fff; } ```<issue_comment>username_1: You can just add a desktop only media query: ``` @media (min-width: 768px){ .navbar-nav .dropdown-megamenu:hover .dropdown-container { display: block; } } ``` Here's a fiddle: <https://jsfiddle.net/vqubh18j/> You could trim the selector down to `dropdown:hover .dropdown-container` if you wish. Also note there is a 2px top margin on the dropdown that makes a tiny gap between the navbar and the dropdown, allowing slower mouse movers to have the menu disappear unintentionally: ``` .dropdown-container { ... /* Should probably be removed or replaced with margin: 0; border-top: 2px solid transparent; */ margin: 2px 0 0; ... } ``` Upvotes: 2 <issue_comment>username_2: When one clicks on `dropdown-toggle`, class `open` is added to `dropdown-megamenu`. ``` .nav > .dropdown-megamenu.open .dropdown-container > .dropdown-menu, .nav > .dropdown-megamenu.open > .dropdown-container { display: block; } ``` Adapting the above CSS selector to the `hover` pseudo-class will give the following: ``` .nav > .dropdown-megamenu:hover .dropdown-container > .dropdown-menu, .nav > .dropdown-megamenu:hover > .dropdown-container { display: block; } ``` As hovering is not available on touch devices, the above selector is better wrapped in a media query. ``` @media (min-width: 768px) { .nav > .dropdown-megamenu:hover .dropdown-container > .dropdown-menu, .nav > .dropdown-megamenu:hover > .dropdown-container { display: block; } } ``` [Updated JSFiddle](https://jsfiddle.net/L2o657p6/21/) Upvotes: 0
2018/03/14
720
2,611
<issue_start>username_0: I deployed an app successfully following [**this**](https://cloud.google.com/python/django/flexible-environment) link. After deployment, I am having trouble connecting to Cloud SQL. In my IPython notebook, before I deploy my app, I can use the following statement to connect to my cloud instance using the Google SDK: ``` cloud_sql_proxy.exe -instances="project_name:us-east1:instance_name"=tcp:3306 ``` After entering the above, I get a notification in Google Cloud Shell: ``` "listening on 127.0.0.1:3306 for project_name:us-east1:instance_name ready for new connections" ``` I then use my IPython notebook to test the connection: ``` host = '127.0.0.1' (also changed to my IP address for Google Cloud SQL) user = 'cloud_sql_user' password = '<PASSWORD>' conn1 = pymysql.connect(host=host, user=user, password=<PASSWORD>, db='mydb') cur1 = conn1.cursor() ``` Local test results: I can connect to Cloud SQL from IPython and query the cloud database. Next step: deploy ``` gcloud app deploy ``` Result: App deployed. However, upon navigating to my website and typing names into the input field, it takes me to a new URL and I get the error: ``` OperationalError at /search/ (20033), "Can't connect to MySQL server on 127.0.0.1 (timed out)) ``` My main questions are: * How can we get PyMySQL to query the cloud database after deployment? * Do I need Gunicorn if I'm using Windows and need to connect to their cloud database? * Is SQLAlchemy needed for me? I'm not using an ORM. The online instructions aren't really that clear. My local computer runs Windows 7, Python 3 and Django. **Edit:** I edited the file based on the suggestion by the user below. I still get the error 'connection timed out'<issue_comment>username_1: This post is already a bit old, I hope you already solved this! You check if you're in production like this: `if os.getenv('GAE_INSTANCE'):` In the documentation, they manage it this way: `if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine'):` I think that because this condition is wrong, you are overwriting the `DATABASES['default']['HOST']` value with `'127.0.0.1'`. Hope this is the answer you were looking for! Upvotes: 0 <issue_comment>username_2: Found it. Change the connection in your pymysql call to use unix_socket = "your cloud connection string name". Let host be 'localhost', user = 'your cloud username' and password = '<PASSWORD>'. Edit: don't forget the /cloudsql/ part in the connection string name. Upvotes: 1
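To make username_2's fix concrete, here is a minimal sketch of that connection, assuming pymysql on the App Engine flexible environment; the instance connection name, user, password, and database are placeholders taken from the question, not verified values:

```python
import pymysql

# Placeholders only -- substitute your own instance connection name,
# user, password, and database. Note the /cloudsql/ prefix on the socket path.
conn = pymysql.connect(
    unix_socket='/cloudsql/project_name:us-east1:instance_name',
    user='cloud_sql_user',
    password='<PASSWORD>',
    db='mydb',
)

with conn.cursor() as cur:
    cur.execute('SELECT 1')   # simple smoke test of the connection
    print(cur.fetchone())
```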
2018/03/14
426
1,561
<issue_start>username_0: I am currently using webpack 4 with react-loadable to create chunks. It does create chunks at the split points. However, the vendor size remains the same. Is react-loadable not supporting webpack 4 yet, or am I missing some setup? The CSS actually does seem to be split into the chunks, though. ``` { output: { path: 'tothe path', publicPath: `/publicPath/`, filename: '[name]' + '.js', chunkFilename: '[name]', }, resolve: { extensions: ['.js', '.json', '.css'], alias: aliases }, stats: { warnings: false, children: false, }, optimization: { splitChunks: { chunks: "all", name: true, cacheGroups: { common: { name: "vendor" + ".js", test: /[\\/]node_modules[\\/]/, chunks: "all", enforce: true, reuseExistingChunk: false, }, } } } } ```<issue_comment>username_1: React-loadable doesn't work well with Webpack 4 yet, take a look at [this](https://github.com/jamiebuilds/react-loadable/pull/110) pull request. There's a [fork](https://www.npmjs.com/package/@7rulnik/react-loadable) of react-loadable (by the author of the PR), but it didn't work for me either. I had a problem where some components wrapped in Loadable wouldn't load. Upvotes: 2 <issue_comment>username_2: @username_1 I also ran into this problem. I found that components which import styles can't load. If I remove the style import, the component loads normally. I moved all the styles to the entry file as a workaround. Upvotes: 0
2018/03/14
592
2,380
<issue_start>username_0: I want to have multiple components associated to the root path in order to display one landing page view for anonymous users and an inbox for the authenticated users, without having to use manual navigation and path-change crutches. I've tried to enable my scenario with a routing block like this: ``` { path: '', component: LandingComponent, canActivate: [ ForbidAuthGuard ] }, { path: '', component: LocationsComponent, canActivate: [ RequireAuthGuard ] } ``` Angular is indeed calling `ForbidAuthGuard`, which is failing on an authenticated user and therefore cancelling the navigation event altogether, ignoring the `RequireAuthGuard` route. As implied by their conflicting names, both guards are exclusive to each other, so only one of the routes will ever be active, yet Angular seems to be ignoring the second route. Is this mechanic viable at all? Or is there any other technique to achieve the end goal of the first paragraph? For completeness' sake, I am using @angular/router and @angular/core at version 5.2.8.<issue_comment>username_1: You can do something like this: ``` { path: '', component: MyService.DoACheck() ? LandingComponent : LocationsComponent }, ``` But that then would not use your guards. The more common solution, I'm assuming, is the one you don't want: define one route with a guard. In that route guard, determine if the user can access the route; if not, navigate to the other route. Like this: ``` export class AuthGuard implements CanActivate { constructor(private authService: AuthService, private router: Router) { } canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): boolean { return this.checkLoggedIn(state.url); } checkLoggedIn(url: string): boolean { if (this.authService.isLoggedIn()) { return true; } // Retain the attempted URL for redirection this.authService.redirectUrl = url; this.router.navigate(['/login']); return false; } } ``` Upvotes: 1 <issue_comment>username_2: Yes, you can have multiple Components associated to a single route: **app.component.html** ``` <router-outlet></router-outlet> <router-outlet name="aux"></router-outlet> ``` **Routing Config** ``` { path: 'my-single-url', children: [ { path: '', component: ComponentOne }, { path: '', component: ComponentTwo, outlet: 'aux' } ] } ``` Upvotes: 0
2018/03/14
10,616
23,954
<issue_start>username_0: I am trying to do some analysis of teaching evaluations from multiple sessions with different teachers. Each student's evaluation is stores as a .csv file (although they are tab separated). My usual approach of combining the csv files into a single data frame wont work because each file has a different number of columns and have the teacher's name in the column name. So there is a mismatch between names and dimensions. I skipped the first line and set header to FALSE, but the different number of columns still throws an error. So, I read each .csv file into it's own data frame with the same name as the file with: ``` for(i in file_names){ assign(i, read.csv(i, sep="\t", fileEncoding = "utf-16")) } ``` Is there a way to use dplyr in a loop to rbind columns with specific text in the name across all data frames (50 data frames for this first round)? Specifically I want to pull the `Created.At` variable and variables containing `..."Over.all.rating.for.teacher"`. Edit to add sample data: ``` Data1 <- dput(Data1) structure(list(Created.At = structure(c(3L, 4L, 5L, 6L, 7L, 8L, 9L, 1L, 2L, 10L), .Label = c("2016/01/19 10:16:08 PM", "2016/01/19 11:08:58 PM", "2016/01/19 3:36:24 PM", "2016/01/19 4:06:32 PM", "2016/01/19 4:08:52 PM", "2016/01/19 4:40:26 PM", "2016/01/19 6:38:57 PM", "2016/01/19 8:18:20 PM", "2016/01/19 8:58:38 PM", "2016/01/20 8:16:28 PM"), class = "factor"), Please.rate.teacher..John.Doe...Skills.of.interaction.and.rapport.with.learners = c(4L, 5L, 4L, NA, 4L, 5L, 5L, 4L, 4L, 3L), Please.rate.teacher..John.Doe...Clearly.communicated.goals.outcomes.for.the.session = c(4L, 5L, 4L, NA, 4L, 5L, 4L, 5L, 4L, 4L), Please.rate.teacher..John.Doe...Knowledge.of.subject.was.clearly.demonstrated = c(5L, 5L, 4L, NA, 4L, 5L, 5L, 5L, 4L, 3L), Please.rate.teacher..John.Doe...Conveys.the.significance.of.the.information = c(4L, 5L, 3L, NA, 4L, 5L, 5L, 4L, 4L, 3L), Please.rate.teacher..John.Doe...Class.preparation.materials....referred.to.or.used = c(NA, NA, 4L, 5L, 4L, 5L, 5L, NA, 4L, 2L), Please.rate.teacher..John.Doe...Teaching.methods.facilitated.achievement.of.goals.for.session = c(4L, 5L, 4L, NA, 4L, 5L, 5L, 5L, 5L, 3L), Please.rate.teacher..John.Doe...Uses.time.effectively = c(5L, 5L, 4L, NA, 4L, 4L, 4L, 5L, 3L, 3L), Please.rate.teacher..John.Doe...Compared.to.other.teachers..this.one.is... = c(4L, 5L, 4L, NA, 4L, 5L, 5L, 4L, 4L, 4L), Please.rate.teacher..John.Doe...Over.all.rating.for.teacher = c(4L, 5L, 4L, 5L, 4L, 5L, 5L, 4L, 4L, 4L), Please.rate.teacher..Jane.Doe....Skills.of.interaction.and.rapport.with.learners = c(4L, 4L, 4L, 5L, 4L, 5L, 5L, 4L, 5L, 3L), Please.rate.teacher..Jane.Doe...Clearly.communicated.goals.outcomes.for.the.session = c(4L, 5L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 3L), Please.rate.teacher..Jane.Doe....Knowledge.of.subject.was.clearly.demonstrated = c(4L, 4L, 3L, 4L, 4L, 5L, 5L, 4L, 5L, 4L), Please.rate.teacher..Jane.Doe....Conveys.the.significance.of.the.information = c(4L, 4L, 4L, 4L, 4L, 5L, 5L, 4L, 5L, 3L), Please.rate.teacher..Jane.Doe....Class.preparation.materials....referred.to.or.used = c(NA, NA, 4L, NA, 4L, 5L, 5L, NA, 5L, 2L), Please.rate.teacher..Jane.Doe....Teaching.methods.facilitated.achievement.of.goals.for.session = c(4L, 5L, 4L, 5L, 4L, 5L, 4L, 5L, 5L, 3L), Please.rate.teacher..Jane.Doe....Uses.time.effectively = c(4L, 5L, 4L, 5L, 4L, 5L, 5L, 5L, 5L, 3L), Please.rate.teacher..Jane.Doe...Compared.to.other.teachers..this.one.is... 
= c(4L, 4L, 4L, 4L, 4L, 5L, 4L, 4L, 4L, 3L), Please.rate.teacher..Jane.Doe....Over.all.rating.for.teacher = c(4L, 4L, 4L, 4L, 4L, 5L, 5L, 4L, 4L, 3L), Please.rate.teacher..Sue.Smith....Skills.of.interaction.and.rapport.with.learners = c(4L, 4L, 4L, 5L, 4L, 4L, 5L, 4L, 4L, 2L), Please.rate.teacher..Sue.Smith....Clearly.communicated.goals.outcomes.for.the.session = c(4L, 4L, 4L, 5L, 4L, 5L, 5L, 5L, 4L, 2L), Please.rate.teacher..Sue.Smith....Knowledge.of.subject.was.clearly.demonstrated = c(4L, 3L, 4L, 5L, 4L, NA, 5L, 4L, 4L, 2L), Please.rate.teacher..Sue.Smith....Conveys.the.significance.of.the.information = c(3L, 4L, 4L, 4L, 4L, NA, 5L, 4L, 3L, 2L), Please.rate.teacher..Sue.Smith....Class.preparation.materials....referred.to.or.used = c(NA, NA, 4L, 4L, 4L, NA, 5L, NA, 4L, 2L), Please.rate.teacher..Sue.Smith....Teaching.methods.facilitated.achievement.of.goals.for.session = c(4L, 4L, 4L, 5L, 4L, NA, 5L, 4L, 5L, 2L), Please.rate.teacher..Sue.Smith....Uses.time.effectively = c(3L, 4L, 4L, 5L, 4L, NA, 5L, 5L, 4L, 2L), Please.rate.teacher..Sue.Smith....Compared.to.other.teachers..this.one.is... = c(4L, 3L, 4L, 5L, 4L, 4L, 5L, 4L, 4L, 3L), Please.rate.teacher..Sue.Smith....Over.all.rating.for.teacher = c(NA, 3L, 4L, 5L, 4L, 5L, 5L, 4L, 4L, 2L), Please.rate.the.following.....I.feel.that.I.achieved.the.learning.objectives.for.today.s.sessions = c(4L, 5L, 4L, 5L, 4L, 5L, 5L, 5L, 5L, 4L), Please.rate.the.following.....The.session.promoted.ideas.for.dissemination.of.concepts.in.my.home.department.or.other.areas = c(3L, 5L, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 4L), Please.rate.the.following.....I.feel.prepared.to.disseminate.these.ideas.concepts.in.my.home.department = c(3L, 4L, 4L, 5L, 3L, 5L, 3L, 4L, 4L, 2L), Please.rate.the.following.....I.can.see.myself.making.use.of.handouts.and.follow.up.material.references.that.were.provided.in.class.today = c(NA, 4L, 4L, 4L, 3L, 5L, 3L, NA, 5L, 2L), Overall.I.found.this.session.to.be... 
= c(4L, 5L, 4L, 5L, 4L, 5L, 5L, 5L, 5L, 3L)), .Names = c("Created.At", "Please.rate.teacher..John.Doe...Skills.of.interaction.and.rapport.with.learners", "Please.rate.teacher..John.Doe...Clearly.communicated.goals.outcomes.for.the.session", "Please.rate.teacher..John.Doe...Knowledge.of.subject.was.clearly.demonstrated", "Please.rate.teacher..John.Doe...Conveys.the.significance.of.the.information", "Please.rate.teacher..John.Doe...Class.preparation.materials....referred.to.or.used", "Please.rate.teacher..John.Doe...Teaching.methods.facilitated.achievement.of.goals.for.session", "Please.rate.teacher..John.Doe...Uses.time.effectively", "Please.rate.teacher..John.Doe...Compared.to.other.teachers..this.one.is...", "Please.rate.teacher..John.Doe...Over.all.rating.for.teacher", "Please.rate.teacher..Jane.Doe....Skills.of.interaction.and.rapport.with.learners", "Please.rate.teacher..Jane.Doe...Clearly.communicated.goals.outcomes.for.the.session", "Please.rate.teacher..Jane.Doe....Knowledge.of.subject.was.clearly.demonstrated", "Please.rate.teacher..Jane.Doe....Conveys.the.significance.of.the.information", "Please.rate.teacher..Jane.Doe....Class.preparation.materials....referred.to.or.used", "Please.rate.teacher..Jane.Doe....Teaching.methods.facilitated.achievement.of.goals.for.session", "Please.rate.teacher..Jane.Doe....Uses.time.effectively", "Please.rate.teacher..Jane.Doe...Compared.to.other.teachers..this.one.is...", "Please.rate.teacher..Jane.Doe....Over.all.rating.for.teacher", "Please.rate.teacher..Sue.Smith....Skills.of.interaction.and.rapport.with.learners", "Please.rate.teacher..Sue.Smith....Clearly.communicated.goals.outcomes.for.the.session", "Please.rate.teacher..Sue.Smith....Knowledge.of.subject.was.clearly.demonstrated", "Please.rate.teacher..Sue.Smith....Conveys.the.significance.of.the.information", "Please.rate.teacher..Sue.Smith....Class.preparation.materials....referred.to.or.used", "Please.rate.teacher..Sue.Smith....Teaching.methods.facilitated.achievement.of.goals.for.session", "Please.rate.teacher..Sue.Smith....Uses.time.effectively", "Please.rate.teacher..Sue.Smith....Compared.to.other.teachers..this.one.is...", "Please.rate.teacher..Sue.Smith....Over.all.rating.for.teacher", "Please.rate.the.following.....I.feel.that.I.achieved.the.learning.objectives.for.today.s.sessions", "Please.rate.the.following.....The.session.promoted.ideas.for.dissemination.of.concepts.in.my.home.department.or.other.areas", "Please.rate.the.following.....I.feel.prepared.to.disseminate.these.ideas.concepts.in.my.home.department", "Please.rate.the.following.....I.can.see.myself.making.use.of.handouts.and.follow.up.material.references.that.were.provided.in.class.today", "Overall.I.found.this.session.to.be..."), class = "data.frame", row.names = c(NA, -10L)) Data2 <- dput(Data2) structure(list(Created.At = structure(c(1L, 2L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L), .Label = c("2016/09/13 4:28:24 PM", "2016/09/13 4:29:11 PM", "2016/09/13 4:29:23 PM", "2016/09/13 4:29:29 PM", "2016/09/13 4:29:34 PM", "2016/09/13 4:29:37 PM", "2016/09/13 4:29:40 PM", "2016/09/13 4:29:41 PM", "2016/09/13 4:29:49 PM", "2016/09/13 4:30:19 PM", "2016/09/13 4:32:42 PM", "2016/09/13 4:35:50 PM", "2016/09/13 4:41:46 PM", "2016/09/13 9:41:27 PM", "2016/09/26 10:53:28 PM", "2016/10/11 10:30:34 PM" ), class = "factor"), Please.rate.teacher..Foo.Bar...Skills.of.interaction.and.rapport.with.learners = c(5L, 5L, 4L, 4L, 4L, 5L, 5L, 4L, 5L, 5L, 5L, 5L, 5L, 4L, 5L, 5L, 5L ), 
Please.rate.teacher..Foo.Bar...Clearly.communicated.goals.outcomes.for.the.session = c(5L, 5L, 4L, 4L, 4L, 5L, 5L, 4L, 5L, 5L, 4L, 4L, 5L, 4L, 4L, 5L, 5L ), Please.rate.teacher..Foo.Bar...Knowledge.of.subject.was.clearly.demonstrated = c(5L, 3L, 4L, 4L, 5L, 5L, 4L, 3L, 4L, 5L, 4L, 4L, 4L, 5L, 4L, 5L, 5L ), Please.rate.teacher..Foo.Bar...Conveys.the.significance.of.the.information = c(5L, 5L, 4L, 4L, 4L, 5L, 5L, 3L, 4L, 5L, 4L, 4L, 5L, 4L, 4L, 5L, 5L ), Please.rate.teacher..Foo.Bar...Class.preparation.materials....referred.to.or.used = c(5L, 4L, 4L, 4L, 4L, 4L, 5L, 5L, 4L, 5L, 5L, 4L, 5L, 4L, 5L, 5L, 5L ), Please.rate.teacher..Foo.Bar...Teaching.methods.facilitated.achievement.of.goals.for.session = c(5L, 5L, 5L, 4L, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 4L, 5L, 5L, 5L ), Please.rate.teacher..Foo.Bar...Uses.time.effectively = c(5L, 5L, 4L, 4L, 5L, 5L, 5L, 5L, 4L, 5L, 5L, 5L, 4L, 5L, 5L, 5L, 5L ), Please.rate.teacher..Foo.Bar...Compared.to.other.teachers..this.one.is... = c(5L, 5L, 4L, 4L, 4L, 5L, 5L, 4L, 4L, 5L, 4L, 4L, 5L, 4L, 4L, 5L, 5L ), Please.rate.teacher..Foo.Bar...Over.all.rating.for.teacher = c(5L, 4L, 4L, 4L, 4L, 5L, 5L, 4L, 5L, 5L, 5L, 4L, 5L, 5L, 5L, 5L, 5L ), Please.rate.the.following.....I.feel.that.I.achieved.the.learning.objectives.for.today.s.sessions = c(4L, 4L, NA, 4L, 5L, 5L, 4L, 4L, 5L, 5L, 5L, 4L, 4L, 5L, 5L, 4L, 4L ), Please.rate.the.following.....The.session.promoted.ideas.for.dissemination.of.concepts.in.my.home.department.or.other.areas = c(5L, 4L, NA, NA, 4L, 5L, 4L, 4L, 5L, 4L, 5L, 4L, 4L, 4L, 5L, 5L, 5L ), Please.rate.the.following.....I.feel.prepared.to.disseminate.these.ideas.concepts.in.my.home.department = c(4L, 3L, NA, 4L, 4L, 5L, 4L, 3L, 4L, 4L, 5L, 4L, 3L, 4L, 5L, 4L, 4L ), Please.rate.the.following.....I.can.see.myself.making.use.of.handouts.and.follow.up.material.references.that.were.provided.in.class.today = c(5L, 4L, NA, NA, 4L, 5L, 5L, 5L, 5L, 5L, 5L, 4L, 4L, 5L, 5L, 5L, 5L ), Overall.I.found.this.session.to.be... = c(5L, 4L, 4L, 3L, 4L, 5L, 4L, 4L, 4L, 4L, 5L, 4L, 4L, 5L, 5L, 5L, 5L)), .Names = c("Created.At", "Please.rate.teacher..Foo.Bar...Skills.of.interaction.and.rapport.with.learners", "Please.rate.teacher..Foo.Bar...Clearly.communicated.goals.outcomes.for.the.session", "Please.rate.teacher..Foo.Bar...Knowledge.of.subject.was.clearly.demonstrated", "Please.rate.teacher..Foo.Bar...Conveys.the.significance.of.the.information", "Please.rate.teacher..Foo.Bar...Class.preparation.materials....referred.to.or.used", "Please.rate.teacher..Foo.Bar...Teaching.methods.facilitated.achievement.of.goals.for.session", "Please.rate.teacher..Foo.Bar...Uses.time.effectively", "Please.rate.teacher..Foo.Bar...Compared.to.other.teachers..this.one.is...", "Please.rate.teacher..Foo.Bar...Over.all.rating.for.teacher", "Please.rate.the.following.....I.feel.that.I.achieved.the.learning.objectives.for.today.s.sessions", "Please.rate.the.following.....The.session.promoted.ideas.for.dissemination.of.concepts.in.my.home.department.or.other.areas", "Please.rate.the.following.....I.feel.prepared.to.disseminate.these.ideas.concepts.in.my.home.department", "Please.rate.the.following.....I.can.see.myself.making.use.of.handouts.and.follow.up.material.references.that.were.provided.in.class.today", "Overall.I.found.this.session.to.be..."), class = "data.frame", row.names = c(NA, -17L)) ``` Data1 has 3 teachers in the session, Data2 has only a single teacher. I think to make sense of the data, and match with other demographic info, I'll need to create a variable for "teacher name". 
Edit to show desired output: ``` Created.At Rating Var 1 2016/01/19 3:36:24 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 2 2016/01/19 4:06:32 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 3 2016/01/19 4:08:52 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 4 2016/01/19 4:40:26 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 5 2016/01/19 6:38:57 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 6 2016/01/19 8:18:20 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 7 2016/01/19 8:58:38 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 8 2016/01/19 10:16:08 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 9 2016/01/19 11:08:58 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 10 2016/01/20 8:16:28 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 11 2016/01/19 3:36:24 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 12 2016/01/19 4:06:32 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 13 2016/01/19 4:08:52 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 14 2016/01/19 4:40:26 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 15 2016/01/19 6:38:57 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 16 2016/01/19 8:18:20 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 17 2016/01/19 8:58:38 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 18 2016/01/19 10:16:08 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 19 2016/01/19 11:08:58 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 20 2016/01/20 8:16:28 PM 3 Please rate teacher: <NAME> | Over all rating for teacher 21 2016/01/19 3:36:24 PM NA Please rate teacher: <NAME> | Over all rating for teacher 22 2016/01/19 4:06:32 PM 3 Please rate teacher: <NAME> | Over all rating for teacher 23 2016/01/19 4:08:52 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 24 2016/01/19 4:40:26 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 25 2016/01/19 6:38:57 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 26 2016/01/19 8:18:20 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 27 2016/01/19 8:58:38 PM 5 Please rate teacher: <NAME> | Over all rating for teacher 28 2016/01/19 10:16:08 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 29 2016/01/19 11:08:58 PM 4 Please rate teacher: <NAME> | Over all rating for teacher 30 2016/01/20 8:16:28 PM 2 Please rate teacher: <NAME> | Over all rating for teacher 31 2016/09/13 4:28:24 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 32 2016/09/13 4:29:11 PM 4 Please rate teacher: Foo Bar | Over all rating for teacher 33 2016/09/13 4:29:11 PM 4 Please rate teacher: Foo Bar | Over all rating for teacher 34 2016/09/13 4:29:23 PM 4 Please rate teacher: Foo Bar | Over all rating for teacher 35 2016/09/13 4:29:29 PM 4 Please rate teacher: Foo Bar | Over all rating for teacher 36 2016/09/13 4:29:34 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 37 2016/09/13 4:29:37 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 38 2016/09/13 4:29:40 PM 4 Please rate teacher: Foo Bar | Over all rating for teacher 39 2016/09/13 4:29:41 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 40 2016/09/13 4:29:49 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 41 2016/09/13 4:30:19 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 42 
2016/09/13 4:32:42 PM 4 Please rate teacher: Foo Bar | Over all rating for teacher 43 2016/09/13 4:35:50 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 44 2016/09/13 4:41:46 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 45 2016/09/13 9:41:27 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 46 2016/09/26 10:53:28 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher 47 2016/10/11 10:30:34 PM 5 Please rate teacher: Foo Bar | Over all rating for teacher ``` Ideal would be something like: ``` Created.At Overall.Rating Teacher 1 2016/01/19 3:36:24 PM 4 <NAME> 2 2016/01/19 4:06:32 PM 5 <NAME> 3 2016/01/19 4:08:52 PM 4 <NAME> 4 2016/01/19 4:40:26 PM 5 <NAME> 5 2016/01/19 6:38:57 PM 4 <NAME> 6 2016/01/19 8:18:20 PM 5 <NAME> 7 2016/01/19 8:58:38 PM 5 <NAME> 8 2016/01/19 10:16:08 PM 4 <NAME> 9 2016/01/19 11:08:58 PM 4 <NAME> 10 2016/01/20 8:16:28 PM 4 <NAME> 11 2016/01/19 3:36:24 PM 4 <NAME> 12 2016/01/19 4:06:32 PM 4 <NAME> 13 2016/01/19 4:08:52 PM 4 <NAME> 14 2016/01/19 4:40:26 PM 4 <NAME> 15 2016/01/19 6:38:57 PM 4 <NAME> 16 2016/01/19 8:18:20 PM 5 <NAME> 17 2016/01/19 8:58:38 PM 5 <NAME> 18 2016/01/19 10:16:08 PM 4 <NAME> 19 2016/01/19 11:08:58 PM 4 <NAME> 20 2016/01/20 8:16:28 PM 3 <NAME> 21 2016/01/19 3:36:24 PM NA Sue Smith 22 2016/01/19 4:06:32 PM 3 Sue Smith 23 2016/01/19 4:08:52 PM 4 Sue Smith 24 2016/01/19 4:40:26 PM 5 Sue Smith 25 2016/01/19 6:38:57 PM 4 Sue Smith 26 2016/01/19 8:18:20 PM 5 Sue Smith 27 2016/01/19 8:58:38 PM 5 Sue Smith 28 2016/01/19 10:16:08 PM 4 <NAME> 29 2016/01/19 11:08:58 PM 4 <NAME> 30 2016/01/20 8:16:28 PM 2 <NAME> 31 2016/09/13 4:28:24 PM 5 Foo Bar 32 2016/09/13 4:29:11 PM 4 Foo Bar 33 2016/09/13 4:29:11 PM 4 Foo Bar 34 2016/09/13 4:29:23 PM 4 Foo Bar 35 2016/09/13 4:29:29 PM 4 Foo Bar 36 2016/09/13 4:29:34 PM 5 Foo Bar 37 2016/09/13 4:29:37 PM 5 Foo Bar 38 2016/09/13 4:29:40 PM 4 Foo Bar 39 2016/09/13 4:29:41 PM 5 Foo Bar 40 2016/09/13 4:29:49 PM 5 Foo Bar 41 2016/09/13 4:30:19 PM 5 Foo Bar 42 2016/09/13 4:32:42 PM 4 Foo Bar 43 2016/09/13 4:35:50 PM 5 Foo Bar 44 2016/09/13 4:41:46 PM 5 Foo Bar 45 2016/09/13 9:41:27 PM 5 Foo Bar 46 2016/09/26 10:53:28 PM 5 Foo Bar 47 2016/10/11 10:30:34 PM 5 Foo Bar ```<issue_comment>username_1: You could try storing them in a list, rather than assigning each frame into a global variable. ``` library(dplyr) read_data <- function(files) { read.csv(files) %>% dplyr::mutate(id_col = files) } filenames <- list.files(pattern = ".csv") mydata <- lapply(files, read_data) ``` That gets you a list with all the dataframes. Then select the columns you want ``` new_data <- lapply(mydata, function(x){ dplyr::select(x, Created.At, id_col, contains("Over.all.rating.for.teacher")) return(x) }) ``` Note that I cant test this exactly due to the lack of a reproducible example, but this should set you on the right track Upvotes: 0 <issue_comment>username_2: Here's a shorter way to do it. You can get list of dataframes in your current environment using `ls()`. In the example below, I select column `name` from two data frames. 
This is similar to the problem you are solving, I guess: ``` library(purrr) # sample dataframes df1 <- data.frame(name = c('a','b','c'), val1 = c(1,2,3)) df2 <- data.frame(name = c('d','e','f'), val2 = c(1,2,3), val3 = c(7,8,9)) # create a list of dataframes list_of_dataframes <- list(df1, df2) # select columns and create final dataframe output <- do.call(rbind, map(list_of_dataframes, `[`, 'name')) # instead of 'name' here you can specify a vector c('Created.at','another_column','another_column') ``` print(output) ``` name 1 a 2 b 3 c 4 d 5 e 6 f ``` Upvotes: 0 <issue_comment>username_3: One option could be to use `dplyr::select_at` and `dplyr::bind_rows`. The `select_at` will be used to get only the columns containing `Over.all.rating.for.teacher` or `Created.At`: ``` library(dplyr) res <- Data1 %>% select_at(vars(c("Created.At"),grep("Over.all.rating.for.teacher", names(Data1), value = TRUE))) %>% bind_rows(Data2 %>% select_at(vars(c("Created.At"),grep("Over.all.rating.for.teacher", names(Data2), value = TRUE)))) str(res) 'data.frame': 27 obs. of 5 variables: $ Created.At : chr "2016/01/19 3:36:24 PM" "2016/01/19 4:06:32 PM" "2016/01/19 4:08:52 PM" "2016/01/19 4:40:26 PM" ... $ Please.rate.teacher..John.Doe...Over.all.rating.for.teacher : int 4 5 4 5 4 5 5 4 4 4 ... $ Please.rate.teacher..Jane.Doe....Over.all.rating.for.teacher : int 4 4 4 4 4 5 5 4 4 3 ... $ Please.rate.teacher..Sue.Smith....Over.all.rating.for.teacher: int NA 3 4 5 4 5 5 4 4 2 ... $ Please.rate.teacher..Foo.Bar...Over.all.rating.for.teacher : int NA NA NA NA NA NA NA NA NA NA ... ``` ***Note:*** The data shared by the `OP` contains strings as `Factor`, hence the above solution may give a warning. It would be better to convert the columns to `character` before operating on the data frames. Upvotes: 0 <issue_comment>username_4: Based on some of the other answers and some digging and hacking, I tried the following and it seems to work: ``` library(tidyr) library(dplyr) # needed for select(), contains(), bind_rows(), mutate() and %>% # Read in file path to .csv subject files Filepath <- dirname(file.choose()) # Choose a file in the directory # with all the .csv files # Get a list of all files in the directory file_names <- dir(Filepath, full.names = TRUE) # Function to read .csv files listed from directory read_data <- function(file_names) { read.csv(file_names, sep = "\t", fileEncoding = "utf-16", check.names = FALSE, stringsAsFactors = FALSE) } # Create a list of data frames from .csv files data_list <- lapply(file_names, read_data) # Create a wide data frame of all rows from variables Data_Wide <- lapply(data_list, select, `Created At`, contains("Over all rating for teacher")) %>% bind_rows() # Gather to long data All_Data <- gather(Data_Wide, Teacher, Overall_Rating, -`Created At`,na.rm = T) %>% mutate(Teacher = gsub("Please rate teacher: | [|] Over all rating for teacher| [(]Scholarship[)]|[(]Teaching Excellence[)]", "", Teacher ), Teacher = trimws(Teacher), Teacher = tolower(Teacher), Teacher = tools::toTitleCase(Teacher)) ``` If anyone has any more efficient or cleaner approaches, please post :) Upvotes: 1
2018/03/14
1,579
4,867
<issue_start>username_0: ``` mysql> select doc, term, item from relevanssi where doc = 26331; +-------+------------------------+------+ | doc | term | item | +-------+------------------------+------+ | 26331 | yes | 0 | | 26331 | zero | 0 | | 26331 | ??? | 0 | | 26331 | ??? | 0 | | 26331 | ???? | 0 | +-------+------------------------+------+ ``` I have no idea what those "???" are. They do not show up on query: ``` select doc, term, item from relevanssi where doc = 26331 and term = '???'; Empty set (0.00 sec) ``` I really want to delete those "???" rows. How do I do that?
2018/03/14
630
1,870
<issue_start>username_0: This is the vbs file: ``` set w = CreateObject^("WScript.Shell"^) W.Run chr^(34^) & "explore.bat" & chr^(34^), 0 echo set w= Nothing ``` (it will run a batch file, but hidden in the background) I tried a suggestion from [this post](https://stackoverflow.com/a/19471152/9240845)... This is my bat file so far (which is *supposed* to make the vbs file - **not** open it): ``` ( echo set w = CreateObject^("WScript.Shell"^) echo W.Run chr^(34^) & "explore.bat" & chr^(34^), 0 echo set w= Nothing )>"1623.vbs" ``` This doesn't work, it ends up opening `explore.bat` (it shouldn't, it should just make the vbs file). I want to just make the vbs file. I may have this all wrong (I'm very new to this), looking for any advice. --- I also tried using [this advice](https://stackoverflow.com/q/30877300/9240845) and didn't get anywhere (since it's multiple lines)<issue_comment>username_1: You forgot to escape the "&" symbol. Below is the solved version ``` ( echo set w = CreateObject^("WScript.Shell"^) echo W.Run chr^(34^) ^& "explore.bat" ^& chr^(34^), 0 echo set w= Nothing )>"1623.vbs" ``` Upvotes: 2 <issue_comment>username_2: While the issue is not escaping the ampersands, you could have also avoided it altogether by avoiding using `Chr(34)` to begin with: ``` ( echo set w = CreateObject^("WScript.Shell"^) echo W.Run """explore.bat""", 0 echo set w= Nothing )>"1623.vbs" ``` Your `Run` command will pass **explore.bat** as a *single* double-quote (`"explore.bat"`). Essentially, you are *quoting* the quotation marks, the same way you would have quoted the string a: `"a"`. If you were to replace **a** with the double-quote **"**, you would have `"""`. *(essentially just imagine the middle quotation mark as the letter a from the previous example to visualize why this works)*. Upvotes: 3 [selected_answer]
2018/03/14
599
2,013
<issue_start>username_0: I am new to the web side of things and I am currently struggling with Razor Pages. Can someone explain the ways I can get a value from a control in this case. How can I extract the content of the selected option and pass it to a variable in the code behind? ```html @page @model ViewToVM.Pages.IndexModel @{ ViewData["Title"] = "Index"; } Index ----- @using Model; @foreach(City city in Model.Cities) { @city.SelectedCity } ``` with this code behind ``` using Microsoft.AspNetCore.Mvc.RazorPages; using System.Collections.Generic; using ViewToVM.Model; namespace ViewToVM.Pages { public class IndexModel : PageModel { public List<City> Cities = new List<City>() { new City("Sofia"), new City("Plovdiv"), new City("Velingrad") }; public string selectedCities = string.Empty; public void OnGet() { } } } ``` The City class just contains a single string for demo purposes. I know this is probably a pretty bad way to do the code behind but it helps me illustrate the problem better.
2018/03/14
1,810
8,044
<issue_start>username_0: So I'm trying to figure out the structure behind general use cases of a CQRS+ES architecture, and one of the problems I'm having is how aggregates are represented in the event store. If we divide the events into streams, what exactly would a stream represent? In the context of a hypothetical inventory management system that tracks a collection of items, each with an ID, product code, and location, I'm having trouble visualizing the layout of the system.

From what I could gather on the internet, it could be described succinctly as "one stream per aggregate." So I would have an Inventory aggregate, a single stream with ItemAdded, ItemPulled, ItemRestocked, etc. events, each with serialized data containing the item ID, quantity changed, location, etc. The aggregate root would contain a collection of InventoryItem objects (each with their respective quantity, product codes, location, etc.) That seems like it would allow for easily enforcing domain rules, but I see one major flaw to this; when applying those events to the aggregate root, you would have to first rebuild that collection of InventoryItem. Even with snapshotting, that seems to be very inefficient with a large number of items.

Another method would be to have one stream per InventoryItem, tracking all events pertaining to only that item. Each stream is named with the ID of that item. That seems like the simpler route, but now how would you enforce domain rules like ensuring product codes are unique or that you're not putting multiple items into the same location? It seems like you would now have to bring in a Read model, but isn't the whole point to keep commands and queries separate? It just feels wrong.

So my question is 'which is correct?' Partially both? Neither? Like most things, the more I learn, the more I learn that I don't know...<issue_comment>username_1: In a typical event store, each *event stream* is an isolated transaction boundary. Any time you change the model you lock the stream, append new events, and release the lock. (In designs that use optimistic concurrency, the boundaries are the same, but the "locking" mechanism is slightly different).

You will almost certainly want to ensure that any aggregate is enclosed within a single stream -- sharing an aggregate between two streams is analogous to sharing an aggregate across two databases.

A single stream can be dedicated to a single aggregate, to a collection of aggregates, or even to the entire model. Aggregates that are part of the same stream can be changed in the same transaction -- huzzah! -- at the cost of some contention and a bit of extra work to do when loading an aggregate from the stream.

The most commonly discussed design assigns each logical stream to a single aggregate.

> That seems like it would allow for easily enforcing domain rules, but I see one major flaw to this; when applying those events to the aggregate root, you would have to first rebuild that collection of InventoryItem. Even with snapshotting, that seems to be very inefficient with a large number of items.

There are a couple of possibilities; in some models, especially those with a strong temporal component, it makes sense to model some "entities" as a time series of aggregates. For example, in a scheduling system, rather than `Bobs Calendar` you might instead have `Bobs March Calendar`, `Bobs April Calendar` and so on. Chopping the life cycle into smaller installments can keep the event count in check.

Another possibility is snapshots, with an additional trick to it: each snapshot is annotated with metadata that describes where in the stream the snapshot was made, and you simply read the stream forward from that point. This, of course, depends on having an implementation of an event stream that supports random access, or an implementation of a stream that allows you to read last in, first out.

Keep in mind that both of these are really performance optimizations, and the [first rule of optimization](http://wiki.c2.com/?RulesOfOptimization) is... don't.

Upvotes: 4 [selected_answer]<issue_comment>username_2: > So I'm trying to figure out the structure behind general use cases of a CQRS+ES architecture and one of the problems I'm having is how aggregates are represented in the event store

The event store in a DDD project is designed around event-sourced Aggregates:

1. it provides the *efficient* loading of all events previously emitted by an Aggregate root instance (having a given, specified ID)
2. those events must be retrieved in the order they were emitted
3. it must not permit appending events *at the same time* for the same Aggregate root instance
4. all events emitted as the result of a single command must be appended atomically; this means that they should all succeed or all fail

The 4th point could be implemented using transactions, but this is not a necessity. In fact, for scalability reasons, if you can then you should choose a persistence that provides you atomicity without the use of transactions. For example, you could store the events in a MongoDB document, as MongoDB guarantees document-level atomicity.

The 3rd point can be implemented using optimistic locking, using a `version` column with a unique index per (version x AggregateType x AggregateId).

At the same time, there is a DDD *rule* regarding the Aggregates: don't mutate more than one Aggregate per transaction. This rule helps you A LOT to design a scalable system. Break it only if you don't need one.

So, the solution to all these requirements is something that is called an *Event-stream*, which contains all the events previously emitted by an Aggregate instance.

> So I would have an Inventory aggregate

The DDD has higher precedence than the Event-store. So, if you have some business rules that force you to decide that you must have a (big) `Inventory aggregate`, then yes, it would load ALL the previous events generated by itself. Then the `InventoryItem` would be a nested entity that cannot emit events by itself.

> That seems like it would allow for easily enforcing domain rules, but I see one major flaw to this; when applying those events to the aggregate root, you would have to first rebuild that collection of InventoryItem. Even with snapshotting, that seems to be very inefficient with a large number of items.

Yes, indeed. The simplest thing would be for us all to have a single Aggregate, with a single instance. Then the consistency would be the strongest possible. But this is not efficient, so you need to think more carefully about the *real* business requirements.

> Another method would be to have one stream per InventoryItem tracking all events pertaining to only that item. Each stream is named with the ID of that item. That seems like the simpler route, but now how would you enforce domain rules like ensuring product codes are unique or you're not putting multiple items into the same location?

There is another possibility. You should model the assigning of product codes as a Business Process.

For this you could use a Saga/Process manager that would orchestrate the entire process. This Saga could use a collection with a unique index added to the product code column in order to ensure that only one product uses a given product code. You could design the Saga to permit the allocation of an already-taken code to a product and to compensate later, or to reject the invalid allocation in the first place.

> It seems like you would now have to bring in a Read model, but isn't the whole point to keep commands and queries separate? It just feels wrong.

The Saga does indeed use a private state maintained from the domain events, in an eventually consistent way, just like a Read-model, but this does not feel wrong to me. It may use whatever it needs in order to bring (eventually) the *system as a whole* to a consistent state. It complements the Aggregates, whose purpose is to not allow the *building blocks of the system* to get into an invalid state.

Upvotes: 2
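To make the optimistic-locking idea from point 3 above concrete, here is a minimal, illustrative Python sketch; the `InMemoryEventStore` class and its API are assumptions invented for illustration, not part of any real event-store library. A relational implementation would get the same guarantee from a unique index on (stream id, version).

```python
class ConcurrencyError(Exception):
    """Raised when two writers race on the same stream."""


class InMemoryEventStore:
    def __init__(self):
        self._streams = {}  # stream_id -> ordered list of events

    def load(self, stream_id):
        """Return the stream's events plus the version to use for appends."""
        events = self._streams.get(stream_id, [])
        return events, len(events)

    def append(self, stream_id, expected_version, new_events):
        events = self._streams.setdefault(stream_id, [])
        if len(events) != expected_version:
            # Another command appended first: this whole command fails,
            # which also keeps the append atomic (all events or none).
            raise ConcurrencyError(stream_id)
        events.extend(new_events)


store = InMemoryEventStore()
history, version = store.load("inventory-item-42")   # one stream per aggregate
store.append("inventory-item-42", version,
             [{"type": "ItemRestocked", "quantity": 5}])
```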
2018/03/14
602
2,543
<issue_start>username_0: In `keras`, both `model.fit` and `model.predict` have a `batch_size` parameter. My understanding is that the batch size in `model.fit` is related to batch optimization; what's the physical meaning of `batch_size` in `model.predict`? Does it need to be equal to the one used by `model.fit`?<issue_comment>username_1: No, it doesn't. Imagine that inside your model there is a function which increases the amount of memory significantly. Because of that, you might run into resource errors if you try to predict all your data in one go. This is often the case when you use a GPU with limited memory for predicting. So instead you choose to predict only small batches at a time. The batch_size parameter in the predict function will not alter your results in any way, so you can choose any batch_size you want for prediction.

Upvotes: 5 [selected_answer]<issue_comment>username_2: It depends on your model and whether the batch size when training must match the batch size when predicting. For example, if you're using a stateful LSTM then the batch size matters because the entire sequence of data is spread across multiple batches, i.e. it's one long sequence that transcends the batches. In that case the batch size used to predict should match the batch size used when training, because it's important they match in order to define the whole length of the sequence. In a stateless LSTM, or regular feed-forward perceptron models, the batch size doesn't need to match, and you actually don't need to specify it for `predict()`.

Just to add: this is different from `predict_on_batch()`, where you can supply a batch of input samples and get an equal number of prediction outputs. So, if you create a batch of 100 samples and submit it to `predict_on_batch()`, you get 100 predictions, i.e. one for each sample. This can have performance benefits over issuing one at a time to `predict()`.

Upvotes: 2 <issue_comment>username_3: As said above, batch size just controls the amount of training data that is fed in at one go (batches). Increasing it may increase the chance of your computer's resources running out, assuming you are running it on your personal computer. If you are running it on the cloud with higher resources, you should be fine. You can toggle the number as you want, but don't put in a big number; I suggest going up slowly. Also, you may want to read this before you increase your batch size:

<https://stats.stackexchange.com/questions/164876/tradeoff-batch-size-vs-number-of-iterations-to-train-a-neural-network>

Upvotes: 0
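To see the accepted answer in action for a stateless model, here is a small sketch using the `tensorflow.keras` API; the tiny model and random data are made up purely for the demonstration (for a stateful LSTM, as the second answer notes, the batch sizes would have to match instead):

```python
import numpy as np
from tensorflow import keras

# A tiny throwaway model; the layer sizes and random data are arbitrary.
model = keras.Sequential([
    keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(256, 4)
y = np.random.rand(256, 1)
model.fit(x, y, batch_size=32, epochs=1)  # batch_size here affects optimization

# For a stateless model, predict-time batch_size only changes memory/speed,
# not the numbers that come out.
p_small = model.predict(x, batch_size=8)
p_large = model.predict(x, batch_size=256)
print(np.allclose(p_small, p_large, atol=1e-5))  # expected: True
```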
2018/03/14
1,878
6,018
<issue_start>username_0: I was working on a Codility problem:

> You are given two non-empty zero-indexed arrays A and B consisting of N integers. Arrays A and B represent N voracious fish in a river, ordered downstream along the flow of the river.
>
> The fish are numbered from 0 to N − 1. If P and Q are two fish and P < Q, then fish P is initially upstream of fish Q. Initially, each fish has a unique position.
>
> Fish number P is represented by A[P] and B[P]. Array A contains the sizes of the fish. All its elements are unique. Array B contains the directions of the fish. It contains only 0s and/or 1s, where: 0 represents a fish flowing upstream, 1 represents a fish flowing downstream.
>
> If two fish move in opposite directions and there are no other (living) fish between them, they will eventually meet each other. Then only one fish can stay alive − the larger fish eats the smaller one. More precisely, we say that two fish P and Q meet each other when P < Q, B[P] = 1 and B[Q] = 0, and there are no living fish between them. After they meet: if A[P] > A[Q] then P eats Q, and P will still be flowing downstream; if A[Q] > A[P] then Q eats P, and Q will still be flowing upstream. We assume that all the fish are flowing at the same speed. That is, fish moving in the same direction never meet. The goal is to calculate the number of fish that will stay alive.

**Complexity:**

> expected worst-case time complexity is O(N);
> expected worst-case space complexity is O(N), beyond input storage (not counting the storage required for input arguments).

Here is my solution: (100% correct results)

```
public int solution(int[] a, int[] b) {
    int remFish = a.length;
    int i = 0;
    for (i = 0; i < b.length; i++) {
        if(b[i] != 0){
            /*remFish++;
        }else { */
            break;
        }
    }

    Stack<Integer> myQ = new Stack<>();
    for (int j = i; j < b.length; j++) {
        if(b[j] == 1) {
            myQ.add(j);
        }
        while(b[j] == 0 && !myQ.isEmpty()) {
            if(a[j] > a[myQ.peek()]){
                myQ.pop();
                remFish--;
            }else{
                remFish--;
                break;
            }
        }
    }
    return remFish;
}
```

Could someone help me understand whether my solution passes the complexity requirements?<issue_comment>username_1: It's hard to follow this code with the strange data structures and lack of variable names, but I *think* I have the needed understanding ...

**Space complexity**: The only dimensioned space you have is `myQ`, which is bounded by the total quantity of fish. Thus, this is **O(n)**.

**Time Complexity**: Given your strange logic, this was harder to follow. The paired decrements of `remFish` and the use of `while -- break` confused me for a couple of minutes. However, the simpler analysis is ...

Each iteration of the inner `while` loop either pops a fish from `myQ` or breaks out, and since each fish is pushed at most once, the inner loop does at most O(n) total work across the whole run. The outer iteration is the `for` loop, bounded by the quantity of fish. Thus, this is also **O(n)**. Among other properties, note that you decrement `remFish` on each inner iteration, and it never drops as far as 0.

---

Why do you gauge one iteration on a.length and another on b.length? Those must be the same, the starting quantity of fish.

Upvotes: 0 <issue_comment>username_2: `N` fish get a series of `O(1)` checks. That's `O(n)`. The `O(n)` fish swimming downstream get added to `myQ`, which is also `O(1)` each, for another `O(n)` term. Every iteration of the inner loop kills a fish in `O(1)` time. At most `O(n)` fish die, so that is also `O(n)`. Adding it all up, the total is `O(n)`.
Upvotes: 0 <issue_comment>username_3: Your Idea was good. I tried to make it more understandable.

```
import java.util.*;

class Solution {
    public int solution(int[] A, int[] B) {
        int numFishes = A.length;
        // no fishes
        if (numFishes == 0)
            return 0;
        // Deque stores the fishes swimming downstreams (B[i]==1)
        Deque<Integer> downstreams = new ArrayDeque<>();
        for (int i = 0; i < A.length; i++) {
            // Fish is going downstreams
            if (B[i] == 1) {
                // push the fish into the Deque
                downstreams.push(A[i]);
            } // Fish is going upstreams
            else {
                while (!downstreams.isEmpty()) {
                    // Downstream-fish is bigger
                    if (downstreams.peek() > A[i]) {
                        // Upstream-fish gets eaten
                        numFishes--;
                        break;
                    } // Downstream-fish is smaller
                    else if (downstreams.peek() < A[i]) {
                        // Downstream-fish gets eaten
                        numFishes--;
                        downstreams.pop();
                    }
                }
            }
        }
        return numFishes;
    }
}
```

Upvotes: 4 [selected_answer]<issue_comment>username_4: I got 100/100 with this code. I've seen other solutions more concise. But this one is pretty readable.

```
ArrayDeque<Integer> downstreams = new ArrayDeque<>();
int alive = 0;
for (int i = 0; i < A.length; i++) {
    int currFish = A[i] * (B[i] == 1 ? -1 : 1);
    if (currFish < 0) {
        downstreams.push(currFish);
        alive++;
    } else {
        Iterator<Integer> it = downstreams.iterator();
        boolean eaten = false;
        while (it.hasNext()) {
            int down = (int) it.next();
            if (Math.abs(currFish) > Math.abs(down)) {
                it.remove();
                alive--;
                eaten = false;
            } else {
                eaten = true;
                break;
            }
        }
        if (!eaten) {
            alive++;
        }
    }
}
return alive;
```

Upvotes: 0 <issue_comment>username_5: ### Here is python3 try with time complexity O(N)

```
def fish_eater(fish_size, direction):
    stack = []
    fish_alive = len(fish_size)

    if not len(fish_size):
        return 0

    for i in range(len(fish_size)):
        if direction[i] == 1:
            stack.append(fish_size[i])
        else:
            while len(stack):
                if stack[-1] > fish_size[i]:
                    fish_alive -= 1
                    break
                if stack[-1] < fish_size[i]:
                    fish_alive -= 1
                    stack.pop()
    return fish_alive
```

Useful resource about time complexity and big O notation [understanding-time-complexity-with-python](https://towardsdatascience.com/understanding-time-complexity-with-python-examples-2bda6e8158a7)

Upvotes: 0
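As a quick sanity check of the Python version above, here is Codility's worked sample for this task; the harness itself is just an illustration and assumes `fish_eater` from the previous block is in scope:

```python
# Codility's sample case: fish 1 (size 3) swims downstream and eats
# fish 2 and 3; fish 4 (size 5) then eats fish 1; fish 0 and 4 survive.
fish_size = [4, 3, 2, 1, 5]
direction = [0, 1, 0, 0, 0]
assert fish_eater(fish_size, direction) == 2

# Edge cases: everyone swims the same way, so nobody ever meets.
assert fish_eater([1, 2, 3], [1, 1, 1]) == 3
assert fish_eater([1, 2, 3], [0, 0, 0]) == 3
print("all checks passed")
```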
2018/03/14
239
887
<issue_start>username_0: How do I add package sources to my project? In Visual Studio, I go to Tools -> Options -> NuGet Package Manager -> Package Sources and add the package source information, which in this case is not on nuget.org. In App Center I did not find this setting, and because of this my project does not compile.<issue_comment>username_1: You can create a [NuGet.config](https://learn.microsoft.com/en-us/appcenter/build/faq#how-to-restore-a-private-nuget-feed) file as described in the docs.

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- add your package sources (e.g. your private feed) here -->
  </packageSources>
</configuration>
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: Maybe this will also help somebody. For me the above solution didn't work. So what I've discovered: always check that the official package source in your config matches the current one. By now the NuGet source has changed to version 3:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" protocolVersion="3" />
  </packageSources>
</configuration>
```

Upvotes: 0
2018/03/14
1,673
5,718
<issue_start>username_0: I am trying to access *RateCard* info in the *Government Cloud, region usgovvirginia*, working from the example on GitHub: <https://github.com/Azure-Samples/billing-dotnet-usage-api>.

The GitHub sample throws an unhandled exception: AADSTS65005 (see links below). This error is mentioned there, but in reworking the referenced sections of the procedure I haven't found a way to correct it, and I strongly suspect the problem is due to differences in the US Gov Cloud. (See image below for the App settings in the Portal).

My RegisteredApp: RateCardHM, appId/clientID: XXXXXXXX-4ba0-47a3-811e-ca0b0b74118a -> Required Permissions -> (Delegated -- NoApplicationPermissionsAvailable) Access Azure Service Management as organization users (preview)

> RequiresAdmin: No {"AADSTS65005: Invalid resource. The client has requested access to a resource which is not listed in the requested permissions in the client's application registration. Client app ID: XXXXXXXX-XXXX-47a3-811e-ca0b0b74118a. Resource value from request: <https://management.usgovcloudapi.net/>. Resource app ID: 40a69793-8fe6-4db1-9591-dbc5c57b17d8. List of valid resources from app registration: 797f4846-ba00-4fd7-ba43-dac1f8f63013, 00000002-0000-0000-c000-000000000000. Trace ID: 6c1f3716-12ca-489e-b183-99cb6f730300 Correlation ID: 57dbf637-8e01-42f2-873c-4723f1814254 Timestamp: 2018-03-14 18:43:33Z"}

Since there probably isn't a "2-letter ISO code" for usgovvirginia, what should be used? <https://msdn.microsoft.com/en-us/library/azure/mt219004.aspx> indicates:

•Set {RegionInfo} to the 2 letter ISO code where the offer was purchased. Reconcile with the usgovvirginia Region.

I would prefer to get access with PowerShell, but getting the C# sample app to work would likely be sufficient and certainly a good start.

[![My App Configuration](https://i.stack.imgur.com/MdDuj.png)](https://i.stack.imgur.com/MdDuj.png)

UPDATE FOLLOWS (more info):
---------------------------

**After logging in with PowerShell AzureRM (as myself), "Locations" for Microsoft.Commerce and the RateCard API are empty:**

```
(Get-AzureRmResourceProvider -ListAvailable | ? ProviderNamespace -eq Microsoft.Commerce)

# Outputs with LOCATION 'empty':
ProviderNamespace : Microsoft.Commerce
RegistrationState : Registered
ResourceTypes     : {UsageAggregates, RateCard, operations}
Locations         : {}

(Get-AzureRmResourceProvider -ListAvailable | ? ProviderNamespace -eq Microsoft.Commerce).ResourceTypes | ? ResourceTypeName -eq RateCard

# Outputs with LOCATION 'empty' also:
ResourceTypeName : RateCard
Locations        : {}
ApiVersions      : {2016-08-31-preview, 2015-06-01-preview, 2015-05-15}
```

**Possibly this means US Gov Cloud doesn't offer these APIs in any region?**

The following image shows the alert where the App has been added as a Reader:

[![enter image description here](https://i.stack.imgur.com/14U04.png)](https://i.stack.imgur.com/14U04.png)

[![enter image description here](https://i.stack.imgur.com/7Irs6.png)](https://i.stack.imgur.com/7Irs6.png)<issue_comment>username_1: First, you need to make sure the Billing API is actually supported on a Government subscription. I cannot seem to find an official reference over the Internet about the supportability. It'd be much better to ask about that here <https://azure.microsoft.com/en-us/global-infrastructure/government/contact/>

If it is supported, normally you need to add the client app you registered to the Government subscription.

[![enter image description here](https://i.stack.imgur.com/upzUL.png)](https://i.stack.imgur.com/upzUL.png)

Under the Access Control (IAM) blade, click **Add**. Select **Reader** under **Role** (in case you just need to get information without any change). Under Select, you can look up your client app name (the one with the client ID associated); you can also copy the client ID and paste it into this field.

[![enter image description here](https://i.stack.imgur.com/wrqpT.png)](https://i.stack.imgur.com/wrqpT.png)

Without appropriate permission, your registered client app can't read your Government resources to retrieve billing info over the REST API.

P/S: There is also a role named **Billing Reader** if you would like to explicitly control access.

[![enter image description here](https://i.stack.imgur.com/PuWyr.png)](https://i.stack.imgur.com/PuWyr.png)

Upvotes: 1 <issue_comment>username_2: Microsoft Support has now attested that accessing the RateCard API is not available in an Enterprise Account (nor in a CSP account).

<https://learn.microsoft.com/en-us/azure/billing/billing-usage-rate-card-overview#azure-resource-ratecard-api-preview>

Azure Resource RateCard API (Preview)
-------------------------------------

* Use the Azure Resource RateCard API to get the list of available Azure resources and estimated pricing information for each. The API includes: Azure Role-based Access Control - Configure your access policies on the Azure portal or through Azure PowerShell cmdlets to specify which users or applications can get access to the RateCard data. Callers must use standard Azure Active Directory tokens for authentication. Add the caller to either the Reader, Owner, or Contributor role to get access to the usage data for a particular Azure subscription.
* Support for Pay-as-you-go, MSDN, Monetary commitment, and Monetary credit offers (**EA and CSP not supported**) - This API provides Azure offer-level rate information. The caller of this API must pass in the offer information to get resource details and rates. We're currently unable to provide EA rates because EA offers have customized rates per enrollment.

Thanks to everyone who tried to help.

Upvotes: 0
2018/03/14
425
1,468
<issue_start>username_0: I made a function to delete from multiple tables but it does not work.

```
// the 'id' comes through the URL, so why does it not work?
$id = $_GET['id'];
del($id, "DELETE FROM `companies` WHERE id=$id");

function del($id, $query){
    try {
        $con->query($query);
        mysqli_commit($con);
        echo 'Deleted';
    } catch (Exception $ex) {
        mysqli_rollback($con);
        echo $ex->getTraceAsString();
    }
}
```<issue_comment>username_1: Assuming `$con` is defined in the same scope as where you call the function, then try this (note that `$con` is now passed into the function, since it is not visible inside it otherwise):

```
// the 'id' comes through the URL
$id = $_GET['id'];
del($con, "DELETE FROM `companies` WHERE id=$id");

function del($con, $query){
    try {
        $con->query($query);
        mysqli_commit($con);
        echo 'Deleted';
    } catch (Exception $ex) {
        mysqli_rollback($con);
        echo $ex->getTraceAsString();
    }
}
```

Upvotes: 1 <issue_comment>username_2: You should use prepared statements to prevent SQL injection attacks:

```
$id = $_GET['id'];
$sql = "DELETE FROM `companies` WHERE id=?";
del($id, $sql, $con);

function del($id, $sql, $con){
    try {
        $result = $con->prepare($sql);
        $result->bind_param('i', $id);
        echo $result->execute() === true ? 'Successfully deleted' : 'Failed: '.$con->error;
    } catch (Exception $ex) {
        mysqli_rollback($con);
        echo $ex->getTraceAsString();
    }
}
```

Upvotes: 0
2018/03/14
545
1,918
<issue_start>username_0: Is it possible to create a constraint on a table and specify a value on one or more of the columns? Consider this example:

```
mytable = Table('mytable', meta,
    # per-column anonymous unique constraint
    Column('col1', Integer),
    Column('col2', Integer),
    Column('col3', ENUM('ready', 'pass', 'fail')),
    UniqueConstraint('col1', 'col2', 'col3', name='uix_1')
)
```

But I only want uniqueness when col3 is equal to something like a state of 'ready' (I WANT multiple successes or failures). Something like:

```
UniqueConstraint('col1', 'col2', "col3 == 'ready'", name='uix_1')
```

Is this possible in the sqlalchemy api?<issue_comment>username_1: So from what I understand you want the group (col1, col2, col3) to be unique only if col3 has the value 'ready'? I don't think that is possible using unique constraints. It could be done with a CheckConstraint, assuming your database supports it. You can read up on it [here](http://docs.sqlalchemy.org/en/latest/core/constraints.html#check-constraint)

Upvotes: 1 <issue_comment>username_2: There is a full example on this [link](https://www.johbo.com/2016/creating-a-partial-unique-index-with-sqlalchemy-in-postgresql.html):

```
class ExampleTable(Base):
    __tablename__ = 'example_table'
    __table_args__ = (
        Index(
            'ix_unique_primary_content',   # Index name
            'object_type', 'object_id',    # Columns which are part of the index
            unique=True,
            postgresql_where=Column('is_primary')),  # The condition
    )

    id = Column(Integer, primary_key=True)
    object_type = Column(Unicode(50))
    object_id = Column(Integer)
    is_primary = Column(Boolean)
```

so you can use something like this:

```
Index(
    'uix_1',                         # Index name
    'col1', 'col2', 'col3',          # Columns which are part of the index
    unique=True,
    postgresql_where=text("col3 = 'ready'")),  # The condition (text is sqlalchemy.text)
```

Upvotes: 5 [selected_answer]
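Tying the accepted answer back to the original table, a minimal sketch; the partial unique index is a PostgreSQL-specific feature, and the enum name here is an assumption added so the snippet runs:

```python
import sqlalchemy as sa

metadata = sa.MetaData()

mytable = sa.Table(
    "mytable", metadata,
    sa.Column("col1", sa.Integer),
    sa.Column("col2", sa.Integer),
    sa.Column("col3", sa.Enum("ready", "pass", "fail", name="mytable_status")),
)

# Uniqueness is enforced only for rows where col3 = 'ready';
# any number of duplicate 'pass'/'fail' rows remain allowed.
sa.Index(
    "uix_1",
    mytable.c.col1, mytable.c.col2, mytable.c.col3,
    unique=True,
    postgresql_where=(mytable.c.col3 == "ready"),
)
```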
2018/03/14
1,760
6,618
<issue_start>username_0: I'm trying to come up with an efficient solution to be able to query the entity list and order it correctly. I have created a singly-linked-list type structure in a SQL DB schema. I am using GUIDs as my IDs but for simplicity, I'll use int here. I could solve this problem easily by having a SortOrder column on the DB, but because of other requirements, this is how I have to implement this table. I have a table structure that looks like the following entity model:

```
public class Record
{
    public int ID;
    public string Name;
    public int? ChildID; // References the next record
}
```

My initial thought is to create a partial class like the following:

```
public partial class Record
{
    public int SortOrder
    {
        get
        {
            // query table and loop through the entire list building it from
            // the bottom and keeping count of the integer sort order and return
            // the one specific to this record
        }
    }
}
```

However, this seems very inefficient, since you would have to query the entire list every time and iterate through it to find the SortOrder. Is there anything else I can leverage, like a custom OrderBy function or anything? I'm trying to sort by the order that would be created when iterating and building the list. For instance, the record with ChildID = null is the last one in the list, since it does not have a child. I'll start with that record, then get the next one above it that references the previous as its ChildID, and go until there is no record left in the list that references an ID, which should be when the list is complete and ordered correctly. No two records have the same ChildID.

If I had the following 3 records in a list,

```
ID = 3,  Name = "Apple",  ChildID = 6,
ID = 54, Name = "Orange", ChildID = 3,
ID = 6,  Name = "Banana", ChildID = null
```

Then I would expect to get Orange, Apple, Banana, in that order.<issue_comment>username_1: One way to do it would be to write a method that will return a list in sorted order.

You would first find the record with `ChildID == null`, add it to the results list, and then continue to search for items where `item.ChildID == previousItem.ID`, and then insert them at the beginning of the list:

```
private static IEnumerable<Record> OrderRecords(IReadOnlyCollection<Record> records)
{
    // "Exit fast" checks
    if (records == null) return null;
    if (records.Count < 2) return records.ToList();

    // Get last record and add it to our results
    Record currentRecord = records.Single(r => r.ChildID == null);
    var results = new List<Record> { currentRecord };

    // Keep getting the parent reference to the previous record
    // and insert it at the beginning of the results list
    while ((currentRecord = records.SingleOrDefault(r => r.ChildID == currentRecord.ID)) != null)
    {
        results.Insert(0, currentRecord);
    }

    return results;
}
```

In use, this would look something like:

```
private static void Main()
{
    var records = new List<Record>
    {
        new Record { ID = 3, Name = "Apple", ChildID = 6 },
        new Record { ID = 54, Name = "Orange", ChildID = 3 },
        new Record { ID = 6, Name = "Banana", ChildID = null }
    };

    var sortedRecords = OrderRecords(records);
    Console.WriteLine(string.Join(", ", sortedRecords.Select(r => r.Name)));

    Console.Write("\nPress any key to exit...");
    Console.ReadKey();
}
```

**Output**

[![enter image description here](https://i.stack.imgur.com/Qd1QI.png)](https://i.stack.imgur.com/Qd1QI.png)

Upvotes: 2 <issue_comment>username_2: Given that the record ID order is random, and assuming that the `List` you are ordering is complete, or that you won't run out of memory/time if you have to scan the entire table to order the list, I think the best you can do is compute the depth for a `Record` and cache the results:

I am using the `List` as the table, but you could use the table instead if the list you want to order is incomplete:

```
public partial class Record
{
    static Dictionary<int, int> depth = new Dictionary<int, int>();

    public int Depth(List<Record> dbTable)
    {
        int ans = 0;

        var working = new Queue<int>();
        var cur = this;
        do
        {
            if (depth.TryGetValue(cur.ID, out var curDepth))
            {
                ans += curDepth;
                break;
            }
            else
            {
                working.Enqueue(cur.ID);
                cur = dbTable.FirstOrDefault(r => r.ChildID == cur.ID);
                if (cur != null)
                    ++ans;
            }
        } while (cur != null);

        var workAns = ans;
        while (working.Count > 0)
        {
            var workingID = working.Dequeue();
            depth.Add(workingID, workAns);
            --workAns;
        }

        return ans;
    }
}
```

**Update:** I re-wrote the code to use a specific queue; my first version was recursive, and that was straightforward but risked overflowing the stack, and my second version didn't cache the intermediate results when following the linked list, which wasn't very efficient. Using a queue of the intermediate IDs ensures I only follow a particular chain depth once.

Now that you have a `Depth` method, sorting is easy:

```
var ans = work.OrderBy(w => w.Depth(work));
```

Upvotes: 2 [selected_answer]<issue_comment>username_3: The best algorithm for this task is to prepare a fast lookup data structure (like a `Dictionary`) of `Record` by `ChildID`. Then the ordered result can be produced backwards, starting with `ChildID = null` and using the record `ID` to find the previous record. Since the hash lookup time complexity is O(1), the time complexity of the algorithm is linear O(N) - the fastest possible.

Here is the implementation:

```
static Record[] Ordered(IEnumerable<Record> records)
{
    var recordByNextId = records.ToDictionary(e => e.ChildID.Wrap());
    var result = new Record[recordByNextId.Count];
    int? nextId = null;
    for (int i = result.Length - 1; i >= 0; i--)
        nextId = (result[i] = recordByNextId[nextId]).ID;
    return result;
}
```

The explanation of the `e.ChildID.Wrap()` custom extension method: I wish I could simply use `e.ChildID`, but the BCL `Dictionary` class throws an annoying exception for a `null` key. To overcome that limitation in general, I use a simple wrapper `struct` and a "fluent" helper:

```
public struct ValueWrapper<T> : IEquatable<ValueWrapper<T>>
{
    public readonly T Value;
    public ValueWrapper(T value) => Value = value;
    public bool Equals(ValueWrapper<T> other) => EqualityComparer<T>.Default.Equals(Value, other.Value);
    public override bool Equals(object obj) => obj is ValueWrapper<T> other && Equals(other);
    public override int GetHashCode() => EqualityComparer<T>.Default.GetHashCode(Value);
    public static implicit operator ValueWrapper<T>(T x) => new ValueWrapper<T>(x);
    public static implicit operator T(ValueWrapper<T> x) => x.Value;
}

public static class ValueWrapper
{
    public static ValueWrapper<T> Wrap<T>(this T value) => new ValueWrapper<T>(value);
}
```

Upvotes: 1
2018/03/14
783
2,593
<issue_start>username_0: I am trying to make a function that looks at an image and returns the X pixel value. When I run the code, it throws an error on the `Int1 = CInt(Xdim)` line, saying "Type Mismatch (10080)". If I hard-code the value I am testing into Xdim, it works fine.

```
Function ImgXDim(filename As String) As Integer ' Finds the X dimension in pixels of a loaded image
    Dim objShell As Object
    Dim objFolder As Object
    Dim objFile As Object
    Dim ImgSize As String
    Dim Int1 As Integer
    Dim Xdim As String
    Dim strarray() As String

    Set objShell = CreateObject("Shell.Application")
    Set objFolder = objShell.NameSpace(MacroDir & "\PICS\")
    Set objFile = objFolder.ParseName(filename)
    ImgSize = objFile.ExtendedProperty("Dimensions") ' Returns string of "700 x 923"
    strarray = Split(ImgSize, " x ") ' Split into 2 strings of "700" and "923"
    Xdim = CStr(strarray(0)) ' Force Xdim to be a string of "700"
    Int1 = CInt(Xdim) ' Convert Xdim to an integer
    ImgXDim = Int1 ' Return Integer
End Function
```<issue_comment>username_1: First check if the value can be converted to an integer:

```
If IsNumeric(Trim(Xdim)) Then
    Int1 = CInt(Xdim)
Else
    'for debug purposes
    MsgBox ("XDim non-numeric or empty")
End If
```

Upvotes: 1 <issue_comment>username_2: OK, I couldn't find which character was causing the issue, so I used this loop of code to pull out only the numbers, and it seems to work:

```
For X = 1 To Len(Xdim)
    If IsNumeric(Mid(Xdim, X, 1)) = True Then
        holder = holder & Mid(Xdim, X, 1)
    End If
Next X
```

Upvotes: 0 <issue_comment>username_3: Here is the [WIA](https://msdn.microsoft.com/en-us/library/windows/desktop/ms630368(v=vs.85).aspx) version:

```vb
Function ImgXDim(filename As String) As Long
    Dim imgWIA As New WIA.ImageFile 'Early Binding needs a reference to Windows Image Aquisition Library in VBA-Ide->Tools->References
    'Dim imgWIA As Object 'Late Bound Version
    'Set imgWIA = CreateObject("WIA.ImageFile")

    imgWIA.LoadFile MacroDir & "\PICS\" & filename
    ImgXDim = imgWIA.Width ' use .Height for height
End Function
```

As you see, it is just three lines of code, and it returns a Long, not a string that needs parsing.

Useful functions for [resize](https://www.devhut.net/2017/01/18/vba-resize-image/), [rotate](https://www.devhut.net/2017/05/14/vba-wia-rotate-an-image/) and more. Also useful if you want to display Tiffs in a picture control (page by page) and more.

Upvotes: 0
2018/03/14
534
1,695
<issue_start>username_0: When I try to use the zsh shell with iTerm2, all colors set by zsh disappear, so git diffs, etc. show as all one color. I have selected zsh as the shell successfully, but it seems to get overwritten. I have done `chsh -s /bin/zsh`.
2018/03/14
1,904
7,194
<issue_start>username_0: Just messing around with the language, thinking of how I want to structure some UserDefaults that automatically generate keys based on the hierarchy. That got me wondering... Is it possible to simultaneously define and instantiate a type, like this?

```
let myUserSettings = {
    let formatting = {
        var lastUsedFormat:String
    }
}

let lastUsedFormat = myUserSettings.formatting.lastUsedFormat
```

> Note: I can't use statics because I specifically need instancing, so nested structs/classes with static members will not work for my case.

Here's the closest thing I could come up with, but I hate that I have to create initializers to set the members. I'm hoping for something a little less verbose.

```
class DefaultsScope {
    init(_ userDefaults:UserDefaults){
        self.userDefaults = userDefaults
    }
    let userDefaults:UserDefaults

    func keyForSelf(property:String = #function) -> String {
        return "\(String(reflecting: self)).\(property)"
    }
}

let sharedDefaults = SharedDefaults(UserDefaults(suiteName: "A")!)

class SharedDefaults : DefaultsScope {

    override init(_ userDefaults:UserDefaults){
        formatting = Formatting(userDefaults)
        misc = Misc(userDefaults)
        super.init(userDefaults)
    }

    let formatting:Formatting
    class Formatting:DefaultsScope {

        let maxLastUsedFormats = 5

        fileprivate(set) var lastUsedFormats:[String]{
            get { return userDefaults.stringArray(forKey:keyForSelf()) ?? [] }
            set { userDefaults.set(newValue, forKey:keyForSelf()) }
        }

        func appendFormat(_ format:String) -> [String] {
            var updatedListOfFormats = Array(lastUsedFormats.suffix(maxLastUsedFormats - 1))
            updatedListOfFormats.append(format)
            lastUsedFormats = updatedListOfFormats
            return updatedListOfFormats
        }
    }

    let misc:Misc
    class Misc:DefaultsScope {

        var someBool:Bool{
            get { return userDefaults.bool(forKey:keyForSelf()) }
            set { userDefaults.set(newValue, forKey:keyForSelf()) }
        }
    }
}
```

So is there a simpler way?<issue_comment>username_1: Disclaimer: this is, probably, just an abstract solution that should not be used in real life :)

```
enum x {
    enum y {
        static func success() {
            print("Success")
        }
    }
}

x.y.success()
```

**Update**: Sorry, folks, I can't stop experimenting. This one looks pretty awful :)

```
let x2 = [
    "y2": [
        "success": { print("Success") }
    ]
]

x2["y2"]?["success"]?()
```

**Update 2**: One more try, this time with tuples. And since tuples must have at least two values, I had to add some dummies in there. Also, tuples cannot have mutating functions.

```
let x3 = (
    y3: (
        success: { print("Success") },
        failure: { print("Failure") }
    ),
    z3: 0
)

x3.y3.success()
```

Upvotes: 1 <issue_comment>username_2: You cannot have that kind of structure: you can't access y from outside x, since y is only visible inside the scope of x, and likewise success is only visible inside the scope of y. There is no way that you can access them from outside.

One other alternative is to have a higher-order function like so, which returns a closure which is callable:

```
let x = { { { print("Success") } } }
let y = x()
let success = y()
success()
```

or

```
x()()()
```

The real-world usage of higher-order functions for UserDefaults could be something like this:

```
typealias StringType = (String) -> ((String) -> Void)
typealias IntType = (String) -> ((Int) -> Void)
typealias BoolType = (String) -> ((Bool) -> Void)

typealias StringValue = (String) -> String?
typealias IntValue = (String) -> Int?
typealias BoolValue = (String) -> Bool?

func userDefaults<T>(_ defaults: UserDefaults) -> (String) -> ((T) -> Void) {
    return { key in
        return { value in
            defaults.setValue(value, forKey: key)
        }
    }
}

func getDefaultsValue<T>(_ defaults: UserDefaults) -> (String) -> T? {
    return { key in
        return defaults.value(forKey: key) as? T
    }
}

let setStringDefaults: StringType = userDefaults(.standard)
setStringDefaults("Name")("<NAME>")
setStringDefaults("Address")("Australia")

let setIntDefaults: IntType = userDefaults(.standard)
setIntDefaults("Age")(35)
setIntDefaults("Salary")(2000)

let setBoolDefaults: BoolType = userDefaults(.standard)
setBoolDefaults("Married")(false)
setBoolDefaults("Employed")(true)

let getStringValue: StringValue = getDefaultsValue(.standard)
let name = getStringValue("Name")
let address = getStringValue("Address")

let getIntValue: IntValue = getDefaultsValue(.standard)
let age = getIntValue("Age")
let salary = getIntValue("Salary")

let getBoolValue: BoolValue = getDefaultsValue(.standard)
let married = getBoolValue("Married")
let employed = getBoolValue("Employed")
```

I am not sure if you like the pattern, but it has some good use cases, as you can see from the example: with setStringDefaults you can set string values for string keys, and all of them are type-safe. You can extend this for your use case.

But you could use a struct as well and use imperative code, which could be easier to understand. I see beauty in this as well.

Upvotes: 0 <issue_comment>username_3: How about you try nesting some swift `structs`?

```
struct x {
    struct y {
        static func success() {
            print("success")
        }
    }
}

x.y.success()
```

Upvotes: 0 <issue_comment>username_4: Ok, I think I've figured it out. This first class can go in some common library that you use for all your apps.

```
class SettingsScopeBase {
    private init(){}

    static func getKey(setting:String = #function) -> String {
        return "\(String(reflecting:self)).\(setting)"
    }
}
```

The next part is a pair of classes:

1. The 'Scoping' class where you define which user defaults instance to use (along with anything else you may want to specify for this particular settings instance)
2. The actual hierarchy that defines your settings

Here's the first. I'm setting this up for my shared settings between my application and its extension:

```
class SharedSettingsScope : SettingsScopeBase{
    static let defaults = UserDefaults(suiteName: "group.com.myco.myappgroup")!
}
```

And finally, here's how you 'set up' your hierarchy as well as how you implement the properties' bodies.

```
class SharedSettings:SharedSettingsScope{
    class Formatting:SharedSettingsScope{
        static var groupsOnWhitespaceOnlyLines:Bool{
            get { return defaults.bool(forKey: getKey()) }
            set { defaults.set(newValue, forKey: getKey()) }
        }
    }
}
```

And here's how you use them...

```
let x = SharedSettings.Formatting.groupsOnWhitespaceOnlyLines // x = false
SharedSettings.Formatting.groupsOnWhitespaceOnlyLines = true
let y = SharedSettings.Formatting.groupsOnWhitespaceOnlyLines // y = true
```

I'm going to see if I can refine/optimize it a little more, but this is pretty close to where I want to be. No hard-coded strings, keys defined by the hierarchy where they're used, and only setting the specific UserDefaults instance in one place.

Upvotes: 1 [selected_answer]
2018/03/14
1,619
6,144
<issue_start>username_0: Preface: I am relatively new to mobile app development.

My team and I are developing a mobile app where one of the pieces of intended functionality is a shared calendar between multiple users. We are writing the application using Flutter, which uses the Dart language. The Dart version of the Google API is in beta, as is Flutter, which means that documentation is relatively scarce compared to more established methods of mobile development.

My question is as follows: how do I display the user's Google Calendar on the app page? This breaks down into two parts:

1) How do I retrieve the information from Google? I know this will require the Google Calendar API, which I have added the dependency for. I am unsure what command will return the needed information.

<https://pub.dartlang.org/packages/googleapis>

<https://developers.google.com/calendar/v3/reference/>

Unfortunately Google has not released any examples of how to implement this in Dart.

2) How do I physically display the information on the calendar page?
2018/03/14
1,579
5,865
<issue_start>username_0: I'm making an HttpClient get request, and returning the response to my component. I can't seem to do anything with the response. Here is my component method calling the service:

```
onSubmit() {
    this.userService.updateProfile(this.userForm.value)
        .subscribe(response => {
            console.log(response); // This does not log to the console
        });
}
```

Here is my service:

```
updateProfile(user: User) {
    return this.httpClient.put('/updateUser', user)
        .map(response => {
            console.log(response); // This does not log to the console
            return response;
        });
}
```

I am unable to get the response logged to the console in either my service or component. I don't have any compilation errors or anything that indicates there is an issue.
2018/03/14
446
1,606
<issue_start>username_0: I want to see some example of a fade Animator in Vaadin. Every time I try to animate some layout or component after a button click nothing works, and there is no documentation on how to do it.<issue_comment>username_1: The best way of doing animations is to use CSS. You can have the button click add a class name to the element you want to animate and then define the class something like this (in your case you would add the class with Vaadin's addStyleName() method, not JS):

```js
document.querySelector('#hide-button').addEventListener('click', () => {
  document.querySelector('.your-element').classList.add('hide');
});
```

```css
.your-element {
  transition: opacity 400ms;
  background: blue;
  height: 200px;
  width: 200px;
}

.your-element.hide {
  opacity: 0;
}
```

```html
<button id="hide-button">Hide</button>
<div class="your-element"></div>
```

Upvotes: 2 <issue_comment>username_2: Have you tried the animator addon? <https://vaadin.com/directory/component/animator>

After installing it and compiling the widgetset you can use this:

```
Animator.animate(component, new Css().opacity(0));
Animator.animate(component, new Css().translateX("100px")).delay(1000).duration(2000);
```

Upvotes: 1 <issue_comment>username_3: Try out CompAni (= Component Animator) for Vaadin. [Vaadin Component Animator Addon](https://vaadin.com/directory/component/compani)

It works out of the box and offers a lot of amazing animations for components and layouts. Description and examples can be found here: [Improve your Vaadin application with fancy animations](https://mekaso.rocks/improve-your-vaadin-application-with-fancy-animations)

Upvotes: 0
2018/03/14
768
3,468
<issue_start>username_0: I am trying to build a GStreamer pipeline which interleaves images from multiple cameras into a single data flow which can be passed through a neural network and then split into separate branches for sinking. I am successfully using the `appsrc` plugin and the Basler Pylon 5 - USB 3.0 API to create the interleaved feed.

However, before I go through the work to write the neural network GStreamer element, I want to get the splitting working. Currently, I am thinking of tagging the images with an "ID" indicating which camera each came from. Then I thought I could split the data flow using this tag. However, I have not been able to find any subject matter dealing with this issue exactly. I have seen that you can use the `tee` plugin to branch the pipeline, but I haven't seen it used to split based on tags. Is it possible to use `tee` to do this?

I have seen people use `tee` to split a feed based on the source with something like this:

```
gst-launch-1.0 -vvv \
  tee name=splitter \
  $VSOURCE \
  ! $VIDEO_DECODE \
  ! $VIDEO_SINK splitter. \
  $VSOURCE1 \
  ! $VIDEO_DECODE \
  ! $VIDEO_SINK splitter.
```

However, this does not allow me to have a single path through the neural network element.

If it helps, here is a diagram of the pipeline I envision:

```
cam1 ---\                                 /---> udpsink/appsink
         \                               /
          appsrc-->neural_network-->tee--
         /                               \
cam2 ---/                                 \---> udpsink/appsink
```<issue_comment>username_1: The tee element just forwards the same data to both branches. You should write another element which takes the input and only outputs the data of the stream you are interested in. You should also place a queue element behind each branch to provide a separate thread for each branch.

I called the element that splits up the streams camfilter; it has a property `id`:

```
cam1 ---\                                 /---> queue --> camfilter id=1 --> udpsink/appsink
         \                               /
          appsrc-->neural_network-->tee--
         /                               \
cam2 ---/                                 \---> queue --> camfilter id=2 --> udpsink/appsink
```

Upvotes: 2 <issue_comment>username_2: This was not available when this question was asked. However, since summer 2018, if you want to reduce the workload of implementing your own "merging" code with appsrc and camera frame handling, you can use nnstreamer. This also allows you to replace neural networks more easily.

With the recent addition of neural-network-supporting GStreamer plugins, "nnstreamer" (<https://github.com/nnsuite/nnstreamer>), you can do it without having appsrc in the middle, reducing your implementation workload:

```
cam1 (gst src) --> videoconvert,scale,... --> tensor_converter --\
                                                                  \
    tensor_merge (or tensor_mux depending on the input dimensions) --> tensor_filter (framework=tf_lite, model=abc.tflite) --> tee --> (same from here)
                                                                  /
cam2 (gst src) --> videoconvert,scale,... --> tensor_converter --/
```

Note that it supports pytorch, caffe2, tf, and more as well.

Upvotes: 1
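For completeness, here is a small sketch of the tee-with-queues topology from the first answer, written with the GStreamer Python bindings; `videotestsrc` stands in for the camera/appsrc front end, and `identity` is only a placeholder where the hypothetical `camfilter` element would go (neither that filter element nor its `id` property exists out of the box, so it would have to be written as the answer suggests).

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# tee duplicates the stream; each branch gets its own queue (and thread).
# identity passes buffers through unchanged, standing in for a real
# per-camera filter element.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=100 ! tee name=t "
    "t. ! queue ! identity ! fakesink "
    "t. ! queue ! identity ! fakesink"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until the stream ends or an error occurs.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```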