| column | type | min | max |
| --- | --- | --- | --- |
| date | stringlengths | 10 | 10 |
| nb_tokens | int64 | 60 | 629k |
| text_size | int64 | 234 | 1.02M |
| content | stringlengths | 234 | 1.02M |
2018/03/15
1,079
3,456
<issue_start>username_0: There are many threads on SO that explain different ways to get the maximum value from an array (map, max(by:), reduce, filteredArrayUsingPredicate). I've also considered constructing an `NSPredicate` to use with my `fetchRequest`. But I haven't found anything that gives me a clue about how to accomplish a simple thing I'm trying to do. I have an array of `LiftEvent` `NSManagedObjects` and there can be more than one `LiftEvent` on any given date. I want to get one object per date and I want that object to be the one with the largest `oneRepMax` value on each date. Say I have these `LiftEvent` objects in an array with these properties: ``` LiftEvent1.date = "3/1/18" LiftEvent1.oneRepMax = 200 LiftEvent2.date = "3/2/18" LiftEvent2.oneRepMax = 210 LiftEvent3.date = "3/2/18" LiftEvent3.oneRepMax = 220 LiftEvent4.date = "3/3/18" LiftEvent4.oneRepMax = 205 LiftEvent5.date = "3/3/18" LiftEvent5.oneRepMax = 225 ``` I want to pick out the one with the highest weight value on each date, which would give me these: ``` LiftEvent1.date = "3/1/18" LiftEvent1.oneRepMax = 200 LiftEvent3.date = "3/2/18" LiftEvent3.oneRepMax = 220 LiftEvent5.date = "3/3/18" LiftEvent5.oneRepMax = 225 ``` I fetch the `LiftEvent`s and sort them like so: ``` let liftEvents = dataManager.fetchLiftsEventsOfType(liftEventTypeUuid) var sortedLiftEvents = liftEvents.sorted(by: { $0.date.compare($1.date) == .orderedAscending }) ``` But of course when I do this, I get a single `LiftEvent` with the highest `oneRepMax` value: ``` var maxLiftEvent = sortedLiftEvents.max { a, b in a.oneRepMax < b.oneRepMax } ``` How can I get a single `LiftEvent` from each date, each one having the highest `oneRepMax` value on that date, and ensure that if there is more than one on a date with the same `oneRepMax` value, I only choose one?<issue_comment>username_1: You can group all events in a dictionary keyed by date, and get the maximum for every date. 
Since your dates can have different hour/minute/second values, you need to unique each date by day, and this can be done by computing the number of days between a given date and Jan 1 1970. ``` let groupedEvents = Dictionary(grouping: liftEvents, by: { floor($0.date.timeIntervalSince1970 / 86400) }) ``` This will result in a [Double: [LiftEvent]] dictionary, the key being the number of days since 1970, and the value being the events in that day. Now that you have all the events grouped by day, you can `map` that dictionary to find the interesting events: ``` let maxEventsPerDay = groupedEvents.map { $1.max(by: { $0.oneRepMax < $1.oneRepMax }) } ``` The result will be an array of events that have the max `oneRepMax` per their date. Upvotes: 2 [selected_answer]<issue_comment>username_2: First you need to group your events by Day. You can do this using `Array.reduce(into:)` to create a Dictionary of type `[Date:[LiftEvent]]`, where you can use `Calendar.current.startOfDay` to group all `Date` objects corresponding to a single day. Then you can find the maximum event of each day by `Array.max(by:)` easily. ``` let dailyLiftEvents = liftEvents.reduce(into: [Date:[LiftEvent]](), { dailyLiftEvents, currentLiftEvent in dailyLiftEvents[Calendar.current.startOfDay(for: currentLiftEvent.date), default: [LiftEvent]()].append(currentLiftEvent) }) let dailyMaxLiftEvents = dailyLiftEvents.map({$0.value.max(by: {$0.oneRepMax < $1.oneRepMax})}) ``` Upvotes: 0
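The group-then-max pattern both answers rely on is language-agnostic; as a hedged illustration (in Python, with plain tuples standing in for the `LiftEvent` managed objects), the same two steps look like this:

```python
from collections import defaultdict

# Hypothetical stand-ins for the LiftEvent objects: (date, one_rep_max)
events = [
    ("3/1/18", 200),
    ("3/2/18", 210),
    ("3/2/18", 220),
    ("3/3/18", 205),
    ("3/3/18", 225),
]

# Step 1: group events by their day key, mirroring Dictionary(grouping:by:)
by_day = defaultdict(list)
for date, one_rep_max in events:
    by_day[date].append((date, one_rep_max))

# Step 2: take the event with the largest one_rep_max in each group, mirroring max(by:)
max_per_day = [max(group, key=lambda e: e[1]) for group in by_day.values()]
print(sorted(max_per_day))  # [('3/1/18', 200), ('3/2/18', 220), ('3/3/18', 225)]
```

Note that `max` returns the first maximal element when there is a tie, which also satisfies the "only choose one per date" requirement.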
2018/03/15
583
2,190
<issue_start>username_0: I have a custom Annotation Processor which is being used in a sample project. I have added the following in the pom.xml file of the sample project: ``` maven-compiler-plugin org.apache.maven.plugins com.\*\*\*\*\*\*.CustomAnnotationProcessor -Amyarg=${project.artifactId} ``` I then declare the argument in the application.properties file as follows: ``` @myarg@.someVal=foobar ``` And access it as follows: ``` public class TestClass { @Value("${@myarg@.someVal}") private String testVal; public void testMethod(){ System.out.println(testVal); } } ``` It works fine in IntelliJ, as `testVal` prints out as foobar. However, in Eclipse I get the following error: ``` org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'testClass': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Caused by: java.lang.IllegalArgumentException: Could not resolve placeholder '@myarg@.someVal' in string value "${@myarg@.someVal}" ``` I understand that Eclipse uses its own compiler, while Maven probably uses javac. How do I get this custom annotation processor working for Eclipse as well?<issue_comment>username_1: Please follow the steps below to enable annotation processing. 1. Right-click on the project and select `Properties`. 2. In `Java Compiler -> Annotation Processing`, check `Enable annotation processing`. [![enter image description here](https://i.stack.imgur.com/J69P4.jpg)](https://i.stack.imgur.com/J69P4.jpg) 3. Open `Java Compiler -> Annotation Processing -> Factory Path`. Check `Enable project specific settings`. Add your JAR file to the list. [![enter image description here](https://i.stack.imgur.com/eSSjL.jpg)](https://i.stack.imgur.com/eSSjL.jpg) 4. Clean and build the project. Upvotes: 1 <issue_comment>username_2: I too had this problem. 99% of the solution was as **gyan** described. 
The last 1% was that, for Eclipse, I needed to specify all of the needed JAR files in the "Factory Path". For my project this meant 1. the annotation-definition project jar, 2. the annotation-processing project jar, and 3. javapoet.jar. Upvotes: 0
2018/03/15
791
2,064
<issue_start>username_0: I am trying to create a simple function that counts the lines in a text file and prints the count by using the Unix command `wc` (word count). I don't understand why it does not work; I tried different paths for the `wc` location, but nothing works. Instead, I get this error: ``` �%r : No such file or directory ``` I want to use the `wc` command. Code: ``` void count_lines() { int p; p=fork(); if(p == 0) { char* args[] = {"./wc","1.txt",NULL}; execv("./wc",args); perror(execv); exit(0); } printf("waiting for child\n"); wait(NULL); } ```<issue_comment>username_1: Unless you have a `wc` binary in your current directory, `./wc` is not going to work. Since you want to use the `wc` command, use the path to it: ``` int p; p=fork(); if(p == 0) { char* args[] = {"wc","1.txt",NULL}; execv("/usr/bin/wc",args); perror("execv"); exit(0); } ``` Or you could use `execvp` to let it search for `wc` in `PATH`: ``` int p; p=fork(); if(p == 0) { char* args[] = {"wc","1.txt",NULL}; execvp("wc",args); perror("execvp"); exit(0); } ``` Upvotes: 2 <issue_comment>username_2: You need to correct the two lines below in your code. Always provide the full path to the file location. In case of unsuccessful command execution, you need to check the access permissions to the file location, including permission to execute the file or command: ``` char* args[]={"wc","-l","/full/path/1.txt",NULL}; execv("/usr/bin/wc",args); ``` Normally most Unix/Linux commands are in the directory `/usr/bin/`. To get the full path for a command, just try the following: ``` which command #here command can be wc, ls etc. so try which wc ``` Upvotes: 3 [selected_answer]
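The `which wc` lookup suggested in the accepted answer can also be reproduced programmatically; as a hedged sketch (in Python for illustration — the resolved path varies by system), resolve the full path first and then invoke it, mirroring `execv("/usr/bin/wc", args)`:

```python
import shutil
import subprocess
import tempfile

# Resolve the full path of wc from PATH, like the `which wc` shell command
wc_path = shutil.which("wc")
print(wc_path)  # e.g. /usr/bin/wc, but the location varies by system

# Use the resolved path to count lines in a file
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("one\ntwo\nthree\n")
result = subprocess.run([wc_path, "-l", f.name], capture_output=True, text=True)
print(result.stdout)  # the line count followed by the file name
```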
2018/03/15
973
2,746
<issue_start>username_0: I wrote this for UWP, but this could also apply to Android or iOS, because I only profiled the UWP application using VS2017. **Steps to reproduce the problem:** - Open VS2017 and start a new Xamarin Forms project by selecting the tabbed page or master-detail page template. No need to write a single line of code. **Problem:** * The first snapshot was taken after the application loaded. * The 2nd one was taken after selecting an item in the list and navigating to the ItemDetailsPage. * The 3rd snapshot was taken after navigating back to the ItemsPage. [![enter image description here](https://i.stack.imgur.com/mbnfl.png)](https://i.stack.imgur.com/mbnfl.png) **Expectations:** not to see ItemDetailsPage in the 3rd snapshot, because I am navigating back and this page is popped from the navigation stack, so it should be removed, collected by the GC, or disposed. Here are the 3rd snapshot details: [![enter image description here](https://i.stack.imgur.com/mQeJb.png)](https://i.stack.imgur.com/mQeJb.png) Am I reading this snapshot wrong, or is there something wrong with Xamarin Forms applications? EDIT: The screenshot below also states that there is a "cycle detected". What does that mean? I thought cycles cause memory leaks, don't they? [![enter image description here](https://i.stack.imgur.com/JGx27.png)](https://i.stack.imgur.com/JGx27.png)
2018/03/15
616
2,071
<issue_start>username_0: I've got a Python job that I'm trying to ship in a Docker image. The code is structured in such a way that some modules get imported from a `modules` folder, so I've added it to the Python path. Specifically, the Dockerfile is ``` FROM python:3 RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY . /usr/src/app ENV PYTHONPATH "/usr/src/app" RUN pip3 install -r requirements.txt ``` As you can see, I'm trying to set the environment variable for PYTHONPATH so it would find stuff in the same working directory. The script to run is called `main.py` and when I run it locally (not from Docker) as ``` PYTHONPATH=$PYTHONPATH:$HOME/job-path python3 main.py ``` it runs fine. With that Dockerfile instead, after building the image I get, from a `docker inspect`, that the Env field contains ``` "Env": [ "PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "LANG=C.UTF-8", "PYTHON_VERSION=3.6.4", "PYTHON_PIP_VERSION=9.0.1", "PYTHONPATH=/usr/src/app" ] ``` so it'd look like it's fine? But `docker run` gives me a ``` ModuleNotFoundError: No module named 'modules' ```<issue_comment>username_1: The issue here is probably that you do not have an `__init__.py` in the src folder that you copy to `/usr/src/app`. That is why the `modules` package wasn't recognized. You can either add an `__init__.py` to your src folder or add `modules` to PYTHONPATH ``` ENV PYTHONPATH "/usr/src/app/:/usr/src/app/modules/" ``` Note that if your `modules` folder has subfolders, they should also contain an `__init__.py` Upvotes: 0 <issue_comment>username_2: The syntax is ``` ENV variable1[=value1] variable2[=value2] ... ``` Without an equals sign, you are creating two empty variables; the name of the second is `/usr/src/app`. You want ``` ENV PYTHONPATH="/usr/src/app" ``` with an equals sign between the variable's name and its value. If you want to append to an existing value, you can: ``` ENV PYTHONPATH="/usr/src/app:${PYTHONPATH}" ``` Upvotes: 1
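Both answers hinge on what `PYTHONPATH` actually is from Python's point of view: an `os.pathsep`-separated list of directories that gets prepended to `sys.path`. A hedged sketch of building and inspecting such a value (the paths are illustrative, taken from the thread):

```python
import os

# PYTHONPATH is a colon-separated list on Unix (os.pathsep == ":")
pythonpath = os.pathsep.join(["/usr/src/app", "/usr/src/app/modules"])
print(pythonpath)

# Python splits the variable on os.pathsep when building sys.path entries
entries = pythonpath.split(os.pathsep)
print(entries)
```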
2018/03/15
206
735
<issue_start>username_0: I'm working in the pgAdmin 4 web tool. When using the query tool, I can save and open files, but I can't delete them. I'm saving the files using the following method: ![here is how to save files](https://i.stack.imgur.com/XtSjJ.png) But then there is no way to delete saved queries that I don't need anymore. Here is the file manager: ![here is the file manager](https://i.stack.imgur.com/GKoNf.png) Did I overlook it somehow?<issue_comment>username_1: As pgAdmin 4 can be used as a web application and hosted on web servers, that functionality is not provided in the File manager. Upvotes: 0 <issue_comment>username_2: If not through the File Manager, then how? There must be some way to do some cleaning up. Upvotes: 1
2018/03/15
604
2,458
<issue_start>username_0: I have two textviews and I want to merge them, so that when I click the share button it sends the data from `latitude_textview` and `longitude_textview` together. How can I do it? > > For example, when I press the share button I want to get 21.00000, 21.00000 and share it. > > > ``` shareit = (Button) findViewById(R.id.shareit); mLatitudeTextView = (TextView) findViewById((R.id.latitude_textview)); mLongitudeTextView = (TextView) findViewById((R.id.longitude_textview)); //share location button shareit.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent intent = new Intent(Intent.ACTION_SEND); intent.setType("text/plain"); TextView msg = (TextView) findViewById(R.id.longitude_textview); String finalMsg = String.valueOf(msg.getText().toString().trim()); intent.putExtra(Intent.EXTRA_TEXT, finalMsg); Intent modIntent = Intent.createChooser(intent, "Поделиться.."); startActivity(modIntent); } }); ```<issue_comment>username_1: Just get the text from those textviews and store them in a single string, then share that. ``` String lat = mLatitudeTextView.getText().toString(); String lng = mLongitudeTextView.getText().toString(); String latlng = lat + "\n" + lng; ``` Hope this is what you want. 
Upvotes: 0 <issue_comment>username_2: I think you need to do it like this: ``` mLatitudeTextView = (TextView) findViewById((R.id.latitude_textview)); mLongitudeTextView = (TextView) findViewById((R.id.longitude_textview)); //share location button shareit.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { Intent intent = new Intent(Intent.ACTION_SEND); intent.setType("text/plain"); String lat = mLatitudeTextView.getText().toString(); String lng = mLongitudeTextView.getText().toString(); String finalMsg = "Lat : " + lat + ", Lon : "+ lng; intent.putExtra(Intent.EXTRA_TEXT, finalMsg); Intent modIntent = Intent.createChooser(intent, "Поделиться.."); startActivity(modIntent); } }); ``` You can read more here: [Sending Simple Data](https://developer.android.com/training/sharing/send.html) Upvotes: 2 [selected_answer]
2018/03/15
626
2,580
<issue_start>username_0: I am trying to get the total count of API keys in my API gateway via the SDK. However, I am unsure about the proper use of the parameters that `GetApiKeysRequest` takes. My main objective is to get the count of all API keys that already exist for my account. The code I have so far looks like this: ``` class Program { public static void Main(string[] args) { var awsUserAccessKey = "xxxxx"; var awsUserSecretKey = "<KEY>"; var regionEndpoint = "us-west-2"; var keysInRepository = new GetApiKeysRequest { CustomerId = "", IncludeValues = true||false, Limit=0, NameQuery = "", Position = "" }; var client = new AmazonAPIGatewayClient(awsUserAccessKey, awsUserSecretKey, regionEndpoint); var apiKeys =client.GetApiKeys(keysInRepository); Console.Read(); } } ``` This code throws an error saying that "The security token included in the request is invalid" (an Amazon.APIGateway exception). I am unsure how to set the parameters for this request.<issue_comment>username_1: Because the `AmazonAPIGatewayClient` constructor you are using, as described [here](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/APIGateway/MAPIGatewayctorStringStringString.html), takes three string arguments and the last one is `awsSessionToken`, I think you are confusing it with this [one](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/APIGateway/MAPIGatewayctorStringStringRegionEndpoint.html), which takes a [RegionEndpoint](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/Amazon/TRegionEndpoint.html) as its third argument. Do something like this instead: ``` var client = new AmazonAPIGatewayClient(awsUserAccessKey, awsUserSecretKey, RegionEndpoint.USWest2); ``` Upvotes: 2 <issue_comment>username_2: For someone looking for a solution to similar problems, this worked: ``` var awsUserAccessKey = "xxxx"; var awsUserSecretKey = "yyyyy"; var regionEndpointName = "us-west-2"; var regionEndpoint = RegionEndpoint.GetBySystemName(regionEndpointName); var keysInRepository = new 
GetApiKeysRequest { Limit = 500 }; //define the key and its fields var client = new AmazonAPIGatewayClient(awsUserAccessKey, awsUserSecretKey, regionEndpoint); var apiKeys = client.GetApiKeys(keysInRepository); Console.WriteLine("Current number of api keys:{0}", apiKeys.Items.Count); ``` Upvotes: -1 [selected_answer]
2018/03/15
1,738
6,837
<issue_start>username_0: I always see examples of functional React components defined with arrow function syntax: ``` const foo = () => (...); export default foo; ``` Rather than the more traditional function declaration syntax: ``` export default function foo() { return ...; } ``` Is there a reason to prefer the former over the latter?<issue_comment>username_1: Using an arrow function is way better than using a regular function, not only because the syntax is clean and you will be able to write less code with the arrow function, but also because of: 1. Scope safety: when arrow functions are used consistently, everything is guaranteed to use the same thisObject as the root. If even a single standard function callback is mixed in with a bunch of arrow functions there's a chance the scope will become messed up. 2. Compactness: Arrow functions are easier to read and write. 3. Clarity: When almost everything is an arrow function, any regular function immediately sticks out for defining the scope. A developer can always look up the next-higher function statement to see what this object is. For more details, you can take a look at this question: [When should I use Arrow functions in ECMAScript 6?](https://stackoverflow.com/questions/22939130/when-should-i-use-arrow-functions-in-ecmascript-6) Upvotes: 2 <issue_comment>username_2: I would say that this is a bit of an opinionated choice, really. There are at least several reasons why *I* (personally) see arrow function use for a purely functional component as a pretty bad practice. Here they are: 1. Syntax abuse. When we define a function component, we don't need to pre-bind its context to a specific scope. The context (`this`) is going to be `undefined` anyway in the module namespace. The use of arrow functions is dictated here by purely aesthetic reasons like conciseness. But arrow functions as a language feature have a very specific purpose for existing in the first place, and it is not *coolness* or conciseness. 2. Error stack trace. 
Exceptions thrown in an arrow function will be less descriptive because an arrow function is anonymous by definition. This is probably not a huge problem, since a React project will most likely be configured with proper source map support, but still, the stack trace will be a bit clearer if a named function is used. As noted in the comments, this is not really an issue for functional components, as the name will basically be the name of the variable. 3. Less convenient logging. Consider this very typical pure function component style: ``` const Header = ({ name, branding }) => ( ... ) ``` In the function above it's impossible to throw in a quick `debugger` statement or `console.log`. You will have to temporarily convert it to something like this ``` const Header = function ({ name, branding }) { console.log(name) return ( ... ) } ``` This might be pretty annoying, especially for bigger pure functional components. That being said, this is a very popular choice for many teams, and it is also preferred by default by ESLint, so if you don't see a problem with it, then it is probably okay. 
Upvotes: 5 <issue_comment>username_3: Actually, there is **no difference** between them, I make a [little project](https://codesandbox.io/s/bundle-test-qwru5) on the CodeSandBox and make two simple components, one of them is the `Arrow` component by using the arrow function: ```js import React from 'react'; const MyArrowComponent = () => ( Arrow ----- ); export default MyArrowComponent; ``` And the other is the `Declaration` component by using function declaration: ```js import React from "react"; function MyFunctionComponent() { return ( Declaration ----------- ); } export default MyFunctionComponent; ``` Then I run the `yarn build` command and got the bundle like below: ```js (window.webpackJsonp = window.webpackJsonp || []).push([[0], { 14: function (e, n, t) { "use strict"; t.r(n); var a = t(0), r = t.n(a), l = t(2), c = t.n(l), u = t(3), i = t(4), o = t(6), m = t(5), E = t(7); var p = function () { return r.a.createElement("main", null, r.a.createElement("h2", null, "Declaration")) }, s = function () { return r.a.createElement("main", null, r.a.createElement("h2", null, "Arrow")) }, d = function (e) { function n() { return ( Object(u.a)(this, n), Object(o.a)(this, Object(m.a)(n).apply(this, arguments)) } return Object(E.a)(n, e), Object(i.a)(n, [{ key: "render", value: function () { return r.a.createElement( 'div', null, r.a.createElement('div', null, 'Hi'), r.a.createElement(p, null), r.a.createElement(s, null) ); } }]), n }(r.a.Component); c.a.render(r.a.createElement(d, null), document.getElementById("root")) }, 8: function (e, n, t) { e.exports = t(14) } }, [[8, 1, 2]]]); ``` Pay attention to the definition of the `Arrow` and the `Declaration` component: ```js var p = function () { return r.a.createElement("main", null, r.a.createElement("h2", null, "Declaration")) }, s = function () { return r.a.createElement("main", null, r.a.createElement("h2", null, "Arrow")) } ``` Both of them are defined in the same way, so definitely there is no difference between 
them, and it is fully an opinion based on developers' attitude toward code readability and clean code. Based on ESLint 5.x, our team chose the *arrow function* style to define functional components. Upvotes: 5 <issue_comment>username_4: Function declarations and arrow functions are different in their essence, but in the scope of your question, it's basically a code style preference. I personally prefer function declarations as I find it easier to spot the meaning of that line of code. Whether you use arrow functions or function declarations, try to also think in terms of what makes more sense in the context. Making the code clean and easier to read isn't only about the amount of code you write but about what that code expresses. I tend to use arrow functions for callbacks, for example, `[].map(() => {})` Upvotes: 3 <issue_comment>username_5: A few other points not mentioned in other answers: * With arrow function components, when using React dev tools in Chrome/Firefox, those components come up as `Anonymous`, making debugging harder. These Anonymous components also appear throughout dev tools, including performance flame trees. Named function components display their name in dev tools. * A standard function declaration can be defined on a single line. You don't need to define the `export default` later in the file. This also makes it easier when you want to add/remove the `default` keyword. ``` export default async function MyComponent() { ... } ``` Upvotes: 3
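The "anonymous by definition" point about stack traces and dev tools is not unique to JavaScript (where the name is often inferred from the variable anyway); as a hedged cross-language illustration, Python draws the same distinction between a named function and an anonymous lambda:

```python
# A named function keeps its own name, which shows up in tracebacks and tooling
def render_header():
    return "<h2>Header</h2>"

# An anonymous function only reports a placeholder name
render_footer = lambda: "<h2>Footer</h2>"

print(render_header.__name__)  # render_header
print(render_footer.__name__)  # <lambda>
```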
2018/03/15
997
3,628
<issue_start>username_0: I'd like to build a request for testing middleware, but I don't want POST requests to always assume I'm sending form data. Is there a way to set `request.body` on a request generated from `django.test.RequestFactory`? I.e., I'd like to do something like: ``` from django.test import RequestFactory import json factory = RequestFactory(content_type='application/json') data = {'message':'A test message'} body = json.dumps(data) request = factory.post('/a/test/path/', body) # And have request.body be the encoded version of `body` ``` The code above will fail the test because my middleware needs the data to be passed as the document in `request.body`, not as form data in `request.POST`. However, `RequestFactory` always sends the data as form data. I can do this with `django.test.Client`: ``` from django.test import Client import json client = Client() data = {'message':'A test message'} body = json.dumps(data) response = client.post('/a/test/path/', body, content_type='application/json') ``` I'd like to do the same thing with `django.test.RequestFactory`.<issue_comment>username_1: RequestFactory has built-in support for JSON payloads. You don't need to dump your data first. But you should be passing the content type to `post`, not to the instantiation. ``` factory = RequestFactory() data = {'message':'A test message'} request = factory.post('/a/test/path/', data, content_type='application/json') ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: I tried Jay's solution and it didn't work, but after some research, this did (Django 2.1.2): ``` factory = RequestFactory() request = factory.post('/post/url/') request.data = {'id': 1} ``` Upvotes: 2 <issue_comment>username_3: In later versions of Django (tested on 4.0) this is no longer an issue. On the other hand, getting data into `request.POST` might be. By default, when you pass a content type to a RequestFactory, data goes into `request.body`, and when you don't, data goes into `request.POST`. 
```py request_factory = RequestFactory() # provide content-type request = request_factory.post(f'url', data={'foo': 'bar'}, content_type="application/json") print(request.body) # b'{"foo": "bar"}' # don't provide content type request = request_factory.post(f'url', data={'foo': 'bar'}) print(request.POST) # ``` Upvotes: 1 <issue_comment>username_4: Here's what worked for me in Django 4.1: ``` from django.contrib.sessions.middleware import SessionMiddleware from django.test import TestCase, RequestFactory from customauth import views class RegistrationViewTest(TestCase): def setUp(self): self.factory = RequestFactory() def test_post_request_creates_new_user(self): data = { 'email': '<EMAIL>', 'screen_name': 'new_user', 'password1': '<PASSWORD>', 'password2': '<PASSWORD>', } request = self.factory.post('/any/path/will/do/', data ) middleware = SessionMiddleware(request) middleware.process_request(request) request.session.save() response = views.registration_view(request) self.assertEqual(response.status_code, 302) # ok ``` This test passes. The form was successfully processed in `views.registration_view`. Note: * When I included `content_type='application/json'` in the call to `self.factory.post` (as the accepted answer suggests), `request.POST` had no content in the view. Without that, it worked. I don't know why but would be happy to learn. * I needed to manually added `SessionMiddleware` to `request`. Upvotes: 1
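The difference between the two submission styles discussed above can be illustrated with the standard library alone: form data is urlencoded, while a JSON body stays a raw serialized document (a hedged sketch; `RequestFactory` does the equivalent encoding internally):

```python
import json
from urllib.parse import urlencode

data = {"message": "A test message"}

# What a form-data POST body looks like (what ends up parsed into request.POST)
form_body = urlencode(data)
print(form_body)  # message=A+test+message

# What a JSON POST body looks like (what stays raw in request.body)
json_body = json.dumps(data)
print(json_body)  # {"message": "A test message"}
```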
2018/03/15
532
1,977
<issue_start>username_0: I am trying to create an ECS Fargate type task on ECS through this command `aws ecs register-task-definition --family ${FAMILY} --cli-input-json file://${NAME}-v_${BUILD_NUMBER}.json --region ${REGION}` as mentioned in the docs [here](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_AWSCLI_Fargate.html#AWSCLI_register_task_definition). The task definition file is given below. [![task definition](https://i.stack.imgur.com/An9p0.png)](https://i.stack.imgur.com/An9p0.png) Everything seems to work well with this command and I get a proper response in the CLI. But when I try to do the same in Jenkins, as I have to set up a CI environment, it gives the following error. ``` Parameter validation failed: Unknown parameter in input: "cpu", must be one of: family, taskRoleArn, networkMode, containerDefinitions, volumes Unknown parameter in input: "executionRoleArn", must be one of: family, taskRoleArn, networkMode, containerDefinitions, volumes Unknown parameter in input: "memory", must be one of: family, taskRoleArn, networkMode, containerDefinitions, volumes Unknown parameter in input: "requiresCompatibilities", must be one of: family, taskRoleArn, networkMode, containerDefinitions, volumes ``` If I remove those parameters in the task definition JSON file, it creates an EC2 type task, which I don't need. I want to create only a Fargate launch type task. [![Jenkins commands](https://i.stack.imgur.com/HuJwq.png)](https://i.stack.imgur.com/HuJwq.png)<issue_comment>username_1: From your terminal, type `which aws`, and from Jenkins, execute the same command. My guess is you'll find you are using two separate aws cli binaries (and one of them is too old to support Fargate.) Upvotes: 3 <issue_comment>username_2: Actually, I had two different aws cli's installed, and one was too old to support Fargate. I had an updated CLI installed at the root level, while the jenkins user was using an outdated CLI which does not support Fargate. Upvotes: 2 [selected_answer]
2018/03/15
1,283
4,028
<issue_start>username_0: **My problem** Oracle 'DATE' columns actually store time as well, just with less precision than 'TIMESTAMP' (seconds vs picoseconds). I need my application to interact with this legacy schema as if the Date was a DateTime. Because Rails thinks of this field as a date, it's truncating the time. **Example:** ``` 2.4.1 :003 > m.send_after_ts = Time.now => 2018-03-15 11:45:50 -0600 2.4.1 :004 > m.send_after_ts => Thu, 15 Mar 2018 ``` **Config Data:** The result of `#columns`: ``` #, @table\_name="mail", @virtual=false>, ``` Versions: ``` ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-darwin17] Rails 5.1.5 activerecord (5.1.5) activerecord-oracle_enhanced-adapter (1.8.2) ruby-oci8 (2.2.5.1) ``` I assume there must be a mapping someplace that governs this relationship? How can I make Rails cast this field as a timestamp? **UPDATE** Adding `attribute :send_after_ts, :datetime` to my model allows Rails to treat the field as a DateTime, but causes an exception while trying to write to the db: ``` SQL (4.4ms) INSERT INTO "MAIL" ("SEND_AFTER_TS", "ID") VALUES (:a1, :a2) [["send_after_ts", "2018-03-15 12:58:02.635810"], ["id", 6778767]] ActiveRecord::StatementInvalid: OCIError: ORA-01830: date format picture ends before converting entire input string: INSERT INTO "MAIL" ("SEND_AFTER_TS", "ID") VALUES (:a1, :a2) from (irb):3 ``` I assume this is caused by the extra precision (fractional seconds), but I don't see a way to define that as part of the attribute setting. 
I'm able to get around this for now by writing this field as a string, eg: ``` 2.4.1 :013 > m.send_after_ts = Time.now.to_s => "Mar 15, 2018 12:48" 2.4.1 :014 > m.save => true ``` I'm still looking for a real solution.<issue_comment>username_1: You could use the virtual attribute pattern as your interface: ``` def send_after_ts_datetime send_after_ts.to_datetime end def send_after_ts_datetime=(datetime) self.send_after_ts = datetime.to_s end ``` Now you'll read from and write to the attribute using the `_datetime` methods, but your data will be stored in the original `send_after_ts` column. ``` >> m.send_after_ts_datetime = Time.now #> "2018-03-15 15:15:49 -0600" >> m.send_after_ts_datetime #> Thu, 15 Mar 2018 15:15:49 -0600 ``` Upvotes: 1 <issue_comment>username_2: Old question, but I just had to deal with this myself today, so I thought I'd share my hacky solution. `OracleEnhancedAdapter` extends `ActiveRecord::ConnectionAdapters::AbstractAdapter`. During its initialization, it calls its `initialize_type_map` method where it maps these types. OracleEnhancedAdapter contains an override for this, which in turn calls the superclass method inside `AbstractAdapter`. In here is where it maps the undesired date type to the oracle DATE type: ``` register_class_with_precision m, %r(date)i, Type::Date ``` In my "fix" for this, I created an initializer in rails that overrides this method: ``` ActiveSupport.on_load(:active_record) do ActiveRecord::ConnectionAdapters::OracleEnhancedAdapter.class_eval do # ... some other method customization def initialize_type_map(m) # other code from the AbstractAdapter implementation # actually make the mapping to the DateTime that you really want register_class_with_precision m, %r(date)i, ActiveRecord::Type::DateTime # ... code from the OracleEnhancedAdapter implementation end end end ``` The result is that oracle DATE types are now seen in rails as datetimes. 
Upvotes: 0 <issue_comment>username_3: Add this to your code to force Oracle "DATE" to use DateTime ruby type ```ruby require 'active_record/connection_adapters/oracle_enhanced_adapter' module ActiveRecord module ConnectionAdapters class OracleEnhancedAdapter alias :old_initialize_type_map :initialize_type_map def initialize_type_map(m = type_map) old_initialize_type_map(m) m.register_type "DATE", Type::DateTime.new end end end end ``` Upvotes: 0
2018/03/15
382
1,325
<issue_start>username_0: I am trying to use PyQt4 in PyCharm. My code works perfectly in run mode, but in debug mode, when I try to import PyQt4 I get the following error: *"RuntimeError: the PyQt4.QtCore and PyQt5.QtCore modules both wrap the QObject class"* This happens even with very barebones code: ``` from PyQt4 import QtGui, QtCore print('cheese') ``` (thus, this is different from [previous](https://intellij-support.jetbrains.com/hc/en-us/community/posts/115000551170-PyQt4-and-PyQt5-collisions-in-PyCharm-2017-2-1-when-debugging-QGIS-application) PyQt4 PyCharm problems with Matplotlib) Clearly, the PyCharm debugger is using PyQt5 (this can be seen by calling sys.modules['PyQt5']). How can I 'un-import' PyQt5, or at least prevent the collision? Also: I tried importing differently to include explicit dependencies, but this also gives an error: ``` import PyQt4 as pp pp.QtGui ``` *AttributeError: module 'PyQt4' has no attribute 'QtGui'* Thanks!<issue_comment>username_1: Try going to File > Settings > Project > Project Interpreter. Edit your current interpreter, or create a new one, and remove PyQt5 from the list that shows up. Upvotes: 1 <issue_comment>username_2: You can go to Settings > Build, Execution, Deployment > Debugger > Python Debugger > PyQt compatible: Select PyQt4. Upvotes: 4 [selected_answer]
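As a quick diagnostic for this kind of collision, a small helper can report which Qt bindings are already loaded before your own import runs. This is a sketch using only the standard library; the list of binding names checked is an assumption, extend it as needed:

```python
import sys

# Binding package names to look for (assumed list; add others if relevant).
QT_BINDINGS = ("PyQt4", "PyQt5", "PySide", "PySide2")

def loaded_qt_bindings(modules=None):
    """Return which Qt binding packages are already present in sys.modules."""
    modules = sys.modules if modules is None else modules
    return [name for name in QT_BINDINGS if name in modules]
```

Calling `loaded_qt_bindings()` at the top of the script under the debugger would reveal whether PyQt5 is present before `from PyQt4 import QtGui` ever runs, which is exactly the collision the PyQt-compatible setting avoids.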
2018/03/15
1,016
3,065
<issue_start>username_0: I am trying to get the Cplex basic LP example to work. The code can be found [here](https://www.tu-chemnitz.de/mathematik/discrete/manuals/cplex/doc/getstart/html/ilocplex13.html). I am completely new to C++, but hope to be able to get this running. I am trying to compile it on Linux. I am using the following command to run it: ``` g++ -D IL_STD -I /opt/ibm/ILOG/CPLEX_Studio1271/opl/include ilolpex1.cpp ``` The -D IL_STD was put there to solve an error as found [here](https://www-01.ibm.com/support/docview.wss?uid=swg21399983). The -I ... was put there to specify the location of the header files. I came up with this myself after a lot of trying and googling, so I am in no way sure this is correct. Anyway, when I run it I get undefined reference errors: ``` /tmp/ccl9O1YF.o: In function `populatebyrow(IloModel, IloNumVarArray, IloRangeArray)': ilolpex1.cpp:(.text+0x18f): undefined reference to `IloNumVar::IloNumVar(IloEnv, double, double, IloNumVar::Type, char const*)' ``` I did not make any changes in the file, so I assume the only thing that can be wrong is how the files are linked. I have the feeling it is probably just a simple setting, but after hours of looking I still have no idea how to fix it.<issue_comment>username_1: Obviously, the ilolpex1.cpp file is just a demo of how to *use* IloCplex. What you still need is IloCplex *itself*. This should come either as (a) further source file(s) you have to compile with the demo or as a library you link against. Have a look at your cplex directories, you might find a `lib[...].a` file somewhere there, *possibly* in `/opt/ibm/ILOG/CPLEX_Studio1271/opl/lib`. You can link against it using GCC's (clang's) [-l and -L options](https://www.rapidtables.com/code/linux/gcc/gcc-l.html). Be aware that when using `-l`, you leave out `lib` and `.a` (`-l [...]` with the above (invalid) sample name).
Upvotes: 0 <issue_comment>username_2: The easiest way to compile the ilolpex1.cpp example is to use the Makefile that is included with the installation. For example, you should do the following: ``` $ cd /opt/ibm/ILOG/CPLEX_Studio1271/cplex/examples/x86-64_linux/static_pic $ make ilolpex1 ``` This will produce output, like the following: ``` g++ -O0 -c -m64 -O -fPIC -fno-strict-aliasing -fexceptions -DNDEBUG -DIL_STD -I../../../include -I../../../../concert/include ../../../examples/src/cpp/ilolpex1.cpp -o ilolpex1.o g++ -O0 -m64 -O -fPIC -fno-strict-aliasing -fexceptions -DNDEBUG -DIL_STD -I../../../include -I../../../../concert/include -L../../../lib/x86-64_linux/static_pic -L../../../../concert/lib/x86-64_linux/static_pic -o ilolpex1 ilolpex1.o -lconcert -lilocplex -lcplex -lm -lpthread ``` This will tell you everything you'll need to know if you choose to compile your own application by hand in the future. The details about this are described in the documentation (e.g., [here](https://www.ibm.com/support/knowledgecenter/SSSA5P_12.8.0/ilog.odms.cplex.help/CPLEX/GettingStarted/topics/set_up/GNU_Linux.html)). Upvotes: 3 [selected_answer]
2018/03/15
757
2,374
<issue_start>username_0: I'm working on creating a basic ReactJS Frontend App that has to send data to an API created with php. My react app is hosted on localhost:3000 using XAMPP and the php file is hosted on localhost:8000. When I try to connect to it I get a connection refused error. I use Axios to send a Post request. I know this might be very general but any ideas to how to solve this. or go about programming that App? This is what it currently is. I get a net::ERR_CONNECTION_REFUSED whenever I call it `axios.post('http://localhost:8000/file.php', someData)`
2018/03/15
838
3,252
<issue_start>username_0: I have the following situation: ------------------------------- * The main widget with Row 1,2,3 * In Row 2, I initially set the Widget A What I want: ------------ * replace the Widget A with Widget B once I have a state change in Widget A * handle the replacement in Widget A and not in the main Widget (so I don't want to use callbacks from Widget A or observing a global state and reacting on it in the main Widget) * Replacement means: Widget B is still a child of the main Widget (in my case it stays in the second row and doesn't go fullscreen, as is the case when you use routes/Navigator) My reasoning for what I want: ----------------------------- * in each row of the main widget the user can interact with a sub menu which e.g. in row 2 consists of WidgetA -> [user interaction] -> WidgetB [user interaction] -> WidgetC * I don't want to manage all these different states from the main widget What I tried: ------------- ``` //in Widget A -> in order to switch to Widget B Navigator.push(context, new MaterialPageRoute( builder: (_) => new WidgetB(), )); ``` This doesn't do the job because Widget B doesn't stay in the widget tree of the main Widget. If this is not possible I would like to know what is the Flutter way of achieving what I want :-)<issue_comment>username_1: You need to have some parent that knows of this subnavigation. You can't really replace a view by itself; its parent is what has the reference to that widget in the tree, so the parent is what needs to convey this to Flutter. That said, you can make a custom widget whose job is to listen for child events, and change the child accordingly. ``` App / \ Row Row | WidgetParent | WidgetA ``` Here `WidgetParent` could keep e.g. a `Listenable` and pass a `ChangeNotifier` to `WidgetA`. When `WidgetA` decides `WidgetX` should come to screen it can emit that to the parent.
``` class WidgetParent extends AnimatedWidget { WidgetParent() : super(listenable: ValueNotifier(null)); ValueNotifier get notifier => listenable as ValueNotifier; Widget build(BuildContext context) { return new Center(child: notifier.value ?? WidgetA(notifier)); } } ``` Now child widgets can notify who's next. ``` class WidgetA extends StatelessWidget { WidgetA(this.notifier); final ValueNotifier notifier; @override Widget build(BuildContext context) { return RaisedButton( onPressed: () => notifier.value = WidgetB(notifier), child: Text("This is A, go to B"), ); } } class WidgetB extends StatelessWidget { WidgetB(this.notifier); final ValueNotifier notifier; @override Widget build(BuildContext context) { return RaisedButton( onPressed: () => notifier.value = WidgetA(notifier), child: Text("This is B, go to A"), ); } } ``` Upvotes: 1 <issue_comment>username_2: I am fairly new to Flutter, but one workaround I can think of is to use State with the widget. For example, if you want to display one of two rows depending on what the user does, then you set the height of one row to what you desire and set the height of the second row to zero, and swap them on demand. Of course, you need to empty the contents as well, e.g. setting text to '' etc. This may not be the ideal solution but could work. Upvotes: 0
2018/03/15
636
2,122
<issue_start>username_0: I'm writing an ASP.NET MVC program in C# and I have a date fetched from my database, but the date is set as a decimal type, and I can't change that. I need to know how I can format the decimal to look like `04/15/2017` instead of `20170415.00` This is how that column is declared in my model. ``` public decimal? SIM2_DUE_DATE { get; set; } ``` I'm calling the date from the database. I have over 1000 dates that need to be formatted. I just used that one as an example, so I can't format it specifically.<issue_comment>username_1: You can use math to convert your "date" to the `DateTime` type. First split it into parts like this: ``` var date = 20170415.00M; var year = (int)date / 10000; var month = (int)date / 100 % 100; var day = (int)date % 100; ``` then call a `DateTime` constructor: ``` var dateTime = new DateTime(year, month, day); ``` Upvotes: 3 <issue_comment>username_2: Considering your date as a decimal value ``` decimal dtdec = 20170415.00M; ``` You have a few options ``` var newDate = DateTime.ParseExact(((Int64)dtdec).ToString(), "yyyyMMdd", System.Globalization.CultureInfo.InvariantCulture); Console.WriteLine(newDate.ToShortDateString()); ``` Or ``` Console.WriteLine(DateTime.Parse(dtdec.ToString("####/##/##"))); ``` Or ``` Console.WriteLine(dtdec.ToString("####/##/##")); ``` Upvotes: 0 <issue_comment>username_3: You could do this by using `DateTime.ParseExact`: ``` string dueDate = SIM2_DUE_DATE.Value.ToString("F0"); // remove decimals ("D" is only valid for integral types, and the nullable needs .Value) var parsedDate = DateTime.ParseExact(dueDate, "yyyyMMdd", CultureInfo.InvariantCulture); // parse the date and ensure it's in that specific format var display = parsedDate.ToShortDateString(); // get the culture-specific date representation ``` Notice that this would fail if you kept the decimals Upvotes: 2 [selected_answer]<issue_comment>username_4: ``` decimal decimalDateValue = 20170105.00M; DateTime dateEquivalent = DateTime.MinValue; DateTime.TryParse(decimalDateValue.ToString(), out dateEquivalent); ```
Upvotes: 0
2018/03/15
1,142
4,368
<issue_start>username_0: I have a horizontal `UIStackView` that, by default, looks as follows: [![default stack view](https://i.stack.imgur.com/UJWfc.png)](https://i.stack.imgur.com/UJWfc.png) The view with the heart is initially hidden and then shown at runtime. I would like to reduce the spacing between the heart view and the account name view. The following code does the job, but *only*, when executed in `viewDidLoad`: ``` stackView.setCustomSpacing(8, after: heartView) ``` When changing the custom spacing later on, say on a button press, it doesn't have any effect. Now, the issue here is, that the custom spacing is lost, once the subviews inside the stack view change: when un-/hiding views from the stack view, the custom spacing is reset and cannot be modified. ### Things, I've tried: * verified the spacing is set by printing `stackView.customSpacing(after: heartView)` (which properly returns `8`) * unsuccessfully ran several reload functions: + `stackView.layoutIfNeeded()` + `stackView.layoutSubviews()` + `view.layoutIfNeeded()` + `view.layoutSubviews()` + `viewDidLayoutSubviews()` How can I update the custom spacing of my stack view at runtime?<issue_comment>username_1: You need to make sure the `UIStackView`'s `distribution` property is set to `.fill` or `.fillProportionally`. --- I created the following swift playground and it looks like I am able to use `setCustomSpacing` at runtime with random values and see the effect of that. ``` import UIKit import PlaygroundSupport public class VC: UIViewController { let view1 = UIView() let view2 = UIView() let view3 = UIView() var stackView: UIStackView! 
public init() { super.init(nibName: nil, bundle: nil) } public required init?(coder aDecoder: NSCoder) { fatalError() } public override func viewDidLoad() { super.viewDidLoad() view.backgroundColor = .white view1.backgroundColor = .red view2.backgroundColor = .green view3.backgroundColor = .blue view2.isHidden = true stackView = UIStackView(arrangedSubviews: [view1, view2, view3]) stackView.spacing = 10 stackView.axis = .horizontal stackView.distribution = .fillProportionally let uiSwitch = UISwitch() uiSwitch.addTarget(self, action: #selector(onSwitch), for: .valueChanged) view1.addSubview(uiSwitch) uiSwitch.translatesAutoresizingMaskIntoConstraints = false NSLayoutConstraint.activate([ uiSwitch.centerXAnchor.constraint(equalTo: view1.centerXAnchor), uiSwitch.centerYAnchor.constraint(equalTo: view1.centerYAnchor) ]) view.addSubview(stackView) stackView.translatesAutoresizingMaskIntoConstraints = false NSLayoutConstraint.activate([ stackView.heightAnchor.constraint(equalToConstant: 50), stackView.centerYAnchor.constraint(equalTo: view.centerYAnchor), stackView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 50), stackView.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -50) ]) } @objc public func onSwitch(sender: Any) { view2.isHidden = !view2.isHidden if !view2.isHidden { stackView.setCustomSpacing(CGFloat(arc4random_uniform(40)), after: view2) } } } PlaygroundPage.current.liveView = VC() PlaygroundPage.current.needsIndefiniteExecution = true ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: Another reason `setCustomSpacing` can fail is if you call it *before* adding the arranged subview after which you want to apply the spacing. 
**Won't work:** ```swift headerStackView.setCustomSpacing(50, after: myLabel) headerStackView.addArrangedSubview(myLabel) ``` **Will work:** ```swift headerStackView.addArrangedSubview(myLabel) headerStackView.setCustomSpacing(50, after: myLabel) ``` Upvotes: 3 <issue_comment>username_3: I also noticed that custom spacing values get reset after hiding/unhiding children. I was able to override `updateConstraints()` for my parent view and set the custom spacing as needed. The views then kept their intended spacing. ```swift override func updateConstraints() { super.updateConstraints() stackView.setCustomSpacing(10, after: childView) } ``` Upvotes: 0
2018/03/15
889
3,483
<issue_start>username_0: When I update subject tree with new subjects or change subjects, I do not see it reflected in the case subject dropdown. I can go back into the subject tree and I see my changes are there and saved. How do I get my Subject tree changes to show in the case subject dropdown?
2018/03/15
1,137
4,056
<issue_start>username_0: I have a list of matrices that I want to be able to `cbind` into one matrix, but I run into a problem when they have different sized rows. To fix this I am trying to add empty rows to the bottom of the shorter ones, however the second to last step isn't quite working. ``` ## LIST OF MATRACIES lst = list(as.matrix(data.frame(1:3, 1:3)), as.matrix(data.frame(1:2, 1:2))) ## FIND LONGEST ONE mrow = lapply(lst, function(x) nrow(x)) mrow = max(unlist(lst)) ## CREATE MATRIX LIST TO RBIND tempM = lapply(1:length(lst), function(x) matrix(nrow = mrow - nrow(lst[x][[1]]), ncol = ncol(lst[x][[1]]))) ## ADD ROWS TO SHORTER MATRICES TO MAkE LENGTHS LINE UP ## THIS IS WHERE THINGS GO WRONG lst = lapply(1:length(tempM), function(x) rbind(lst[x][[1]], tempM[x])) ## GOAL TO BE ABLE TO: rlist::list.cbind(lst) ## ERROR: Different number of rows ```
2018/03/15
1,416
3,720
<issue_start>username_0: I have a list in R: ``` my_list <- list(a = 1, b = 2, c = list(d = 4, e = 5)) ``` Suppose I don't know the structure of the list, but I know that somewhere in this list, there is an element named `d`, nested or not. I would like to: 1. Subset that list element, without knowing the structure of the master list that contains it 2. Know the name of its parent list (i.e. element `c`) Is there an easy method / package that can solve this seemingly simple problem?<issue_comment>username_1: Maybe something like the following will do what you want. ``` wanted <- "d" inx <- grep(wanted, names(unlist(my_list)), value = TRUE) unlist(my_list)[inx] #c.d # 4 sub(paste0("(\\w)\\.", wanted), "\\1", inx) #[1] "c" ``` Upvotes: 2 <issue_comment>username_2: I'm implementing the suggestion by @r2evans. I'm sure this can be improved: ``` getParentChild <- function(lst, myN) { myFun <- function(lst, myN) { test <- which(names(lst) == myN) if (length(test) > 0) return(lst[test]) lapply(lst, function(x) { if (is.list(x)) myFun(x, myN) }) } temp <- myFun(lst, myN) temp[!sapply(temp, function(x) is.null(unlist(x)))] } getParentChild(my_list, "d") $c $c$d [1] 4 ``` Here is a more complicated example that illustrates how `getParentChild` shows lineage when there are multiple children/grandchildren.
``` exotic_list <- list(a = 1, b = 2, c = list(d = 4, e = 5), f = list(g = 6, h = list(k = 7, j = 8)), l = list(m = 6, n = list(o = 7, p = 8)), q = list(r = 5, s = 11), t = 12) getParentChild(exotic_list, "n") $l $l$n $l$n$o [1] 7 $l$n$p [1] 8 ``` Upvotes: 2 <issue_comment>username_3: Here's another recursive approach, very similar to @JosephWood's answer, that generalizes the solution such that you can search for multiple elements at the same time, and find *all* the matching elements, if there are multiple: ```r find_all <- function(x, elements) { lists <- vapply(x, is.list, logical(1)) # find sublists # find all elements in sublists out <- lapply(x[lists], find_all, elements) out <- out[!vapply(out, is.null, logical(1))] # output any found elements if (any(elements %in% names(x))) out <- c(out, x[names(x) %in% elements]) if (length(out) == 0) NULL else out } ``` The example problem: ```r my_list <- list(a = 1, b = 2, c = list(d = 4, e = 5)) str(find_all(my_list, "e")) #> List of 1 #> $ c:List of 1 #> ..$ e: num 5 ``` And @JosephWood's exotic example complicted further: ```r exotic_list <- list( a = 1, b = 2, c = list(d = 4, e = 5), f = list(g = 6, h = list(k = 7, j = 8)), l = list(m = 6, n = list(o = 7, p = 8)), q = list(r = 5, s = 11), t = 12, n = 13 ) str(find_all(exotic_list, c("n", "q"))) #> List of 3 #> $ l:List of 1 #> ..$ n:List of 2 #> .. ..$ o: num 7 #> .. 
..$ p: num 8 #> $ q:List of 2 #> ..$ r: num 5 #> ..$ s: num 11 #> $ n: num 13 ``` --- Using the [`purrr` package](http://purrr.tidyverse.org/) we could also get rid of the `vapply`s, making the function a little bit more succinct, and perhaps a bit more readable, too: ```r library(purrr) find_all2 <- function(x, elements) { # find all elements in sublists out <- map(keep(x, is.list), find_all, elements) out <- compact(out) # drop nulls # output any found elements if (any(elements %in% names(x))) out <- c(out, x[names(x) %in% elements]) if (length(out) == 0) NULL else out } identical( find_all(exotic_list, c("n", "q")), find_all2(exotic_list, c("n", "q")) ) #> [1] TRUE ``` Created on 2018-03-15 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0). Upvotes: 3 [selected_answer]
2018/03/15
234
837
<issue_start>username_0: Looking to determine if BBB exists within the XML. I can effectively echo it out as shown below. How can I effectively search the PRODUCT_CODE node(s) to determine if BBB exists or NOT within the group? **XML:** ``` <GROUP><PRODUCT_CODE>AAA</PRODUCT_CODE><PRODUCT_CODE>BBB</PRODUCT_CODE><PRODUCT_CODE>CCC</PRODUCT_CODE></GROUP> ``` **XSLT:** ``` ```<issue_comment>username_1: Test for existence is the result of evaluating an XPath predicate expression in an `if` or `choose`: ``` <xsl:choose><xsl:when test="//PRODUCT_CODE[. = 'BBB']">true</xsl:when><xsl:otherwise>false</xsl:otherwise></xsl:choose> ``` This outputs `true` if any node's text equals `'BBB'`. Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use a predicate on your matching template. The following example only matches the first time (indicated by `[1]`) the text of an element matches the string `'BBB'`: ``` <xsl:template match="PRODUCT_CODE[. = 'BBB'][1]"> ... </xsl:template> ``` If one or more elements with `'BBB'` text exist, it matches *exactly* once. Upvotes: 1
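Outside of XSLT, the same existence check is a one-liner with Python's standard library. This is only an illustrative sketch, assuming `PRODUCT_CODE` elements as named in the question:

```python
import xml.etree.ElementTree as ET

def has_product_code(xml_text, wanted):
    # True if any PRODUCT_CODE element's text equals the wanted value.
    root = ET.fromstring(xml_text)
    return any(el.text == wanted for el in root.iter("PRODUCT_CODE"))
```

`root.iter("PRODUCT_CODE")` walks the whole tree, so nesting depth does not matter, mirroring the `//PRODUCT_CODE` XPath in the accepted answer.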
2018/03/15
1,430
4,187
<issue_start>username_0: I have a DataFrame with >1M rows. I'd like to select all the rows where a certain column contains a certain substring: ``` matching = df['col2'].str.contains('substr', case=True, regex=False) rows = df[matching].col1.drop_duplicates() ``` But this selection is slow and I'd like to speed it up. Let's say I only need the first *n* results. Is there a way to stop `matching` after getting *n* results? I've tried: ``` matching = df['col2'].str.contains('substr', case=True, regex=False).head(n) ``` and: ``` matching = df['col2'].str.contains('substr', case=True, regex=False).sample(n) ``` but they aren't any faster. The second statement is boolean and very fast. How can I speed up the first statement?<issue_comment>username_1: You can speed it up with: ``` matching = df['col2'].head(n).str.contains('substr', case=True, regex=False) rows = df['col1'].head(n)[matching==True] ``` However, this solution would retrieve the matching results within the first `n` rows, not the first `n` matching results. In case you actually want the first `n` matching results you should use: ``` rows = df['col1'][df['col2'].str.contains("substr")==True].head(n) ``` But this option is way slower of course. Inspired by @ScottBoston's answer, you can use the following approach for a **faster complete solution**: ``` rows = df['col1'][pd.Series(['substr' in i for i in df['col2']])==True].head(n) ``` This is faster, though not that much faster than showing the whole results with this option. With this solution you can get the first `n` matching results.
With the **test code** below we can see how fast each solution is, and its results: ``` import pandas as pd import time n = 10 a = ["Result", "from", "first", "column", "for", "this", "matching", "test", "end"] b = ["This", "is", "a", "test", "has substr", "also has substr", "end", "of", "test"] col1 = a*1000000 col2 = b*1000000 df = pd.DataFrame({"col1":col1,"col2":col2}) # Original option start_time = time.time() matching = df['col2'].str.contains('substr', case=True, regex=False) rows = df[matching].col1.drop_duplicates() print("--- %s seconds ---" % (time.time() - start_time)) # Faster option start_time = time.time() matching_fast = df['col2'].head(n).str.contains('substr', case=True, regex=False) rows_fast = df['col1'].head(n)[matching_fast==True] print("--- %s seconds for fast solution ---" % (time.time() - start_time)) # Other option start_time = time.time() rows_other = df['col1'][df['col2'].str.contains("substr")==True].head(n) print("--- %s seconds for other solution ---" % (time.time() - start_time)) # Complete option start_time = time.time() rows_complete = df['col1'][pd.Series(['substr' in i for i in df['col2']])==True].head(n) print("--- %s seconds for complete solution ---" % (time.time() - start_time)) ``` This would output: ``` >>> --- 2.33899998665 seconds --- --- 0.302999973297 seconds for fast solution --- --- 4.56700015068 seconds for other solution --- --- 1.61599993706 seconds for complete solution --- ``` And the resulting Series would be: ``` >>> rows 4 for 5 this Name: col1, dtype: object >>> rows_fast 4 for 5 this Name: col1, dtype: object >>> rows_other 4 for 5 this 13 for 14 this 22 for 23 this 31 for 32 this 40 for 41 this Name: col1, dtype: object >>> rows_complete 4 for 5 this 13 for 14 this 22 for 23 this 31 for 32 this 40 for 41 this Name: col1, dtype: object ``` Upvotes: 1 <issue_comment>username_2: Believe it or not, but the `.str` accessor is slow. You can use list comprehensions with better performance.
``` import numpy as np import pandas as pd df = pd.DataFrame({'col2':np.random.choice(['substring','midstring','nostring','substrate'],100000)}) ``` Test for equality ``` all(df['col2'].str.contains('substr', case=True, regex=False) == pd.Series(['substr' in i for i in df['col2']])) ``` Output: ``` True ``` Timings: ``` %timeit df['col2'].str.contains('substr', case=True, regex=False) 10 loops, best of 3: 37.9 ms per loop ``` versus ``` %timeit pd.Series(['substr' in i for i in df['col2']]) 100 loops, best of 3: 19.1 ms per loop ``` Upvotes: 3 [selected_answer]
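If the goal is to truly stop scanning once *n* matches are found (which none of the vectorized options above do), a lazy generator over the column's values gives that early exit. A sketch, assuming the values are pulled out first with `df['col2'].tolist()`:

```python
from itertools import islice

def first_n_matches(values, substr, n):
    # Values are tested lazily; islice stops the scan as soon as
    # n matching values have been produced.
    matches = (v for v in values if substr in v)
    return list(islice(matches, n))
```

Unlike `str.contains`, this never looks past the row that yields the *n*-th match, so it pays off most when matches occur early in a large frame.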
2018/03/15
704
2,965
<issue_start>username_0: We want to present our data in a graph and thought about using one of the graph DBs. During our vendor investigation process, one of the experts suggested that using a graph DB on a dense graph won't be efficient and we'd be better off with a columnar DB like Cassandra. > > I gave your use case some thought and given your graph is very dense (number of relationships = number of nodes squared) and that you seem to only need a few hop traversals from the particular node along different relationships. I’d actually recommend you also try out a columnar database. > > > Graph databases tend to work well when you have sparse graphs (num of relationships << num of nodes ^ 2) and with deep traversals - from 4-5 hops to hundreds of hops. If I understood your use-case correctly, a columnar database should generally outperform graphs there. > > > Our use case will probably end up with nodes connected to 10s of millions of other nodes with about 30% overlap between different nodes - so in a way, it's probably a dense graph. Overall there will probably be a few billion nodes. Looking in the Neo4j source code I found some reference to an isDense flag on the nodes to differentiate the processing logic - not sure what that does. But I also wonder whether it was done as an edge case patch and won't work well if most of the nodes in the graph are dense. Does anyone have any experience with graph DBs on dense graphs and should it be considered in such cases? All opinions are appreciated!<issue_comment>username_1: I do not have experience with dense graphs using graph databases, but I do not think that a dense graph is a problem. Since you are going to use graph algorithms, I suppose you would benefit from using a graph database (depending on the algorithm's complexity - the more "hops", the more you benefit from constant edge-traversal time).
A good trade-off could be to use one of not native graph databases (like Titan, its follow-up JanusGraph, Mongo Db, ..), which actually uses column based storages (Cassandra, Barkley DB, .. ) as its backend. Upvotes: 0 <issue_comment>username_2: When the use of graph DB comes into mind it shows multiple tables are linked with each other, which is a perfect use case for graph DB. We are handling JansuGraph with a scale of 20B vertices and 15B edges. It's not a large dense graph with a vertex connected with 10s M vertices. But still, we observed the super node case, where a vertex is connected with more number of vertices than expectation. But with our use case while doing traversal (DFS) we always traverse with max N children nodes of a node and a limited depth say M, which is absolutely fine considering the number of joins required in non-graph DBS (columnar, relational, Athena, etc..). The only way (i feel) to get all relations of a node is to do a full DFS or inner joins datasets until no common data found. Excited to know more about other creative solutions. Upvotes: 1
2018/03/15
431
1,518
<issue_start>username_0: I have an `iframe` with `Google Calendar`: ```html ``` **Doubt:** Can I customize `Google Calendar` using `CSS`? If not, can I customize it some other way? I would like to change the font size, color, etc.<issue_comment>username_1: Google Calendar exposes its APIs. They are available here and these are the same APIs used to customize the layout for the Android Calendar app. <https://developers.google.com/calendar/> You can write whatever CSS/layout on top of these APIs. *Might not be related but there is an opensource repo that can help you in case you are only looking for google calendar events.* Check this: <http://sugi.github.io/jquery-gcal-flow/> Upvotes: 3 <issue_comment>username_2: Another option is to use [Styled Calendar](https://styledcalendar.com/?utm_medium=referral&utm_source=stackoverflow&utm_campaign=forum-posts) which is a free service that allows you to style a Google Calendar embed with some UI options. It also allows custom CSS styles once the calendar is embedded. Upvotes: 2 <issue_comment>username_3: Found a bit of a workaround! You can use the `filter` style on the iframe itself ``` iframe{ filter: invert(.9) saturate(0.5) hue-rotate(145deg); } ``` So it goes from this: [![enter image description here](https://i.stack.imgur.com/bNAPI.png)](https://i.stack.imgur.com/bNAPI.png) to this: [![enter image description here](https://i.stack.imgur.com/coLn6.png)](https://i.stack.imgur.com/coLn6.png) Which I believe looks at least somewhat nicer. Upvotes: 2
2018/03/15
569
2,104
<issue_start>username_0: I am looking to write a script in PowerShell, however I want to set a variable and then have it fill the variable value in my script. To be more specific the variable I want to set is an IP address and I want to enter it into a script that pulls 3 files for me, as I manage a large number of computers. Here is an example of one file from the script that I want run: ``` Copy-Item -Path '\\VariableIwantToEnterIPaddress\C$\ComputerZ\log\File.log' -Destination C:\Users\JayZ\Desktop ``` Where I put `VariableIwantToEnterIPaddress` I would place a variable then have it fill as an IP address. Either setting the variable first then running the script. Or having PowerShell prompt me for a value then it run the script. Is this possible, or is there a better method of doing this?
2018/03/15
1,226
3,600
<issue_start>username_0: I don't know much about R, and I have variables in a dataframe that I am trying to calculate some stats for, with the hope of writing them into a csv. I have been using a basic for loop, like this: ```r for(i in x) { mean(my_dataframe[, c(i)], na.rm = TRUE) } ``` where x is colnames(my_dataframe). Not every variable is numeric - but when I add a print to the loop, this works fine - it just prints means when applicable, and NA when not. However, when I try to assign this loop to a value (means <- for....), it produces an empty list. Similarly, when I try to directly write the results to a csv, I get an empty csv. Does anyone know why this is happening/how to fix it?<issue_comment>username_1: This should work for you. You don't need a loop. Just use the summary() function. ``` summary(cars) ``` Upvotes: 2 <issue_comment>username_2: You can use `lapply` or `sapply` for this sort of thing. e.g. `sapply(my_dataframe, mean)` will get you all the means. You can also give it your own function e.g. `sapply(my_dataframe, function(x) sum(x^2 + 2)/4 - 9)` If all variables are not numeric you can use `summarise_if` from `dplyr` to get the results just for the numeric columns. ``` require(dplyr) my_dataframe %>% summarise_if(is.numeric, mean) ``` Without `dplyr`, you could do ``` sapply(my_dataframe[sapply(my_dataframe, is.numeric)], mean) ``` Upvotes: 0 <issue_comment>username_3: The for loop executes the code inside, but it doesn't put any results together. To do that, you need to create an object to hold the results and explicitly assign each one: ``` my_means = rep(NA, ncol(my_dataframe)) for(i in seq_along(x)) { my_means[i] = mean(my_dataframe[, x[i]], na.rm = TRUE) } ``` Note that I have also changed your loop to use `i = 1, 2, 3, ...` instead of each name. `sapply`, as shown in another answer, is a nice shortcut that does the loop and combines the results for you, so you don't need to worry about pre-allocating the result object.
It's also smart enough to iterate over columns of a data frame by default. ``` my_means_2 = sapply(my_dataframe, mean, na.rm = T) ``` Upvotes: 0 <issue_comment>username_4: Please give a reproducible example the next time you post a question. `Input` is how I imagine your data looks. Input: ``` library(nycflights13) library(tidyverse) input <- flights %>% select(origin, air_time, carrier, arr_delay) input # A tibble: 336,776 x 4 origin air_time carrier arr_delay 1 EWR 227. UA 11. 2 LGA 227. UA 20. 3 JFK 160. AA 33. 4 JFK 183. B6 -18. 5 LGA 116. DL -25. 6 EWR 150. UA 12. 7 EWR 158. B6 19. 8 LGA 53. EV -14. 9 JFK 140. B6 -8. 10 LGA 138. AA 8. # ... with 336,766 more rows ``` The way I see it, there are 2 ways to do it: 1. Use `summarise_all()` `summarise_all()` will summarise all your columns, including those that are not numeric. Method: ``` input %>% summarise_all(funs(mean(., na.rm = TRUE))) # A tibble: 1 x 4 origin air_time carrier arr_delay 1 NA 151. NA 6.90 Warning messages: 1: In mean.default(origin, na.rm = TRUE) : argument is not numeric or logical: returning NA 2: In mean.default(carrier, na.rm = TRUE) : argument is not numeric or logical: returning NA ``` You will get a result and a warning if you were to use this method. 2. Use `summarise_if()` to summarise only numeric columns. You can avoid getting any errors this way. Method: ``` input %>% summarise_if(is.numeric, funs(mean(., na.rm = TRUE))) # A tibble: 1 x 2 air_time arr_delay 1 151. 6.90 ``` You can then create NA columns for the others. Upvotes: 0
2018/03/15
1,213
3,592
<issue_start>username_0: I am trying to read a CSV file using the pandas read_csv function but I keep getting an OSError. I have tried many different fixes but none of them seem to work. I have fixed a previous file-not-found error but unexpectedly got this error. Here is the code I used. Please note I am using Jupyter Notebook as part of Anaconda. ``` import pandas as pd import os curDir = os.getcwd() #print(curDir) melbourne_file_path = '..\Downloads\melb_data.csv' melbourne_data = pd.read_csv(melbourne_file_path) print(melbourne_data.describe()) ``` This is the output/error I get: ``` OSError Traceback (most recent call last) in () 6 7 melbourne_file_path = '..\Downloads\melb_data.csv' ----> 8 melbourne_data = pd.read_csv(melbourne_file_path) 9 print(melbourne_data.describe()) ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in parser_f(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, escapechar, comment, encoding, dialect, tupleize_cols, error_bad_lines, warn_bad_lines, skipfooter, skip_footer, doublequote, delim_whitespace, as_recarray, compact_ints, use_unsigned, low_memory, buffer_lines, memory_map, float_precision) 707 skip_blank_lines=skip_blank_lines) 708 --> 709 return _read(filepath_or_buffer, kwds) 710 711 parser_f.__name__ = name ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _read(filepath_or_buffer, kwds) 447 448 # Create the parser.
--> 449 parser = TextFileReader(filepath_or_buffer, **kwds) 450 451 if chunksize or iterator: ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, f, engine, **kwds) 816 self.options['has_index_names'] = kwds['has_index_names'] 817 --> 818 self._make_engine(self.engine) 819 820 def close(self): ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in _make_engine(self, engine) 1047 def _make_engine(self, engine='c'): 1048 if engine == 'c': -> 1049 self._engine = CParserWrapper(self.f, **self.options) 1050 else: 1051 if engine == 'python': ~\Anaconda3\lib\site-packages\pandas\io\parsers.py in __init__(self, src, **kwds) 1693 kwds['allow_leading_cols'] = self.index_col is not False 1694 -> 1695 self._reader = parsers.TextReader(src, **kwds) 1696 1697 # XXX pandas/_libs/parsers.pyx in pandas._libs.parsers ```<issue_comment>username_1: Try: ``` melbourne_file_path = '../Downloads/melb_data.csv' ``` or ``` melbourne_file_path = '..\\Downloads\\melb_data.csv' ``` Upvotes: 1 <issue_comment>username_2: If using Spyder 3.7, make sure that `#!/usr/bin/env python3` appears on line 1. Sometimes, a `temp.py` file is created when the correct Anaconda environment is not activated. The `temp.py` file may not have included `#!/usr/bin/env python3`. The following two images show two files. The first one is missing the content of line 1 and the second includes the fix. [without fix](https://i.stack.imgur.com/NAHWN.png) [with fix](https://i.stack.imgur.com/N962D.png) 1) Ensure the `conda` environment is activated using source activate 2) Ensure `#!/usr/bin/env python3` appears on line 1 of the new Python file Upvotes: 0
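The path fixes suggested above all come down to not letting Python treat backslashes in the path as escape characters. A minimal sketch of the options (the `melb_data.csv` location is the asker's; the variable names are just for illustration):

```python
import os

# Option 1: forward slashes work on Windows as well as Unix.
path_a = "../Downloads/melb_data.csv"

# Option 2: doubled backslashes avoid accidental escape sequences like "\m".
path_b = "..\\Downloads\\melb_data.csv"

# Option 3: a raw string leaves backslashes untouched.
path_c = r"..\Downloads\melb_data.csv"

# Option 4: os.path.join picks the right separator for the platform.
path_d = os.path.join("..", "Downloads", "melb_data.csv")

print(path_a, path_b, path_c, path_d)
```

Any of these strings can then be passed to `pd.read_csv(...)` exactly as in the question.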
2018/03/15
2,157
8,734
<issue_start>username_0: I've recently been learning asynchronous programming and I think I've mastered it. Asynchronous programming is simply just allowing our program to multitask. The confusion comes with the `await` and `async` side of programming; it seemed to confuse me a little more. Could somebody help answer some of my concerns? I don't see the `async` keyword as much, just something you chuck on a method to let Visual Studio know that the method may `await` something and for you to allow it to warn you. If it has some other special meaning that actually affects something, could someone explain? Moving onto `await`, after talking to a friend I was told I had 1 major thing wrong: `await` doesn't block the current method, it simply executes the code left in that method and does the asynchronous operation in its own time. Now, I'm not sure how often this happens, but let's say you have some code like this. ``` Console.WriteLine("Started checking a players data."); var player = await GetPlayerAsync(); foreach (var uPlayer in Players.Values) { uPlayer.SendMessage("Checking another players data"); } if (player.Username == "SomeUsername") { ExecuteSomeOperation(); } Console.WriteLine("Finished checking a players data."); ``` As you can see, I run some asynchronous code on `GetPlayerAsync`, what happens if we get deeper into the scope and we need to access player, but it hasn't returned the player yet? If it doesn't block the method, how does it know that player isn't null, does it do some magic and wait for us if we got to that situation, or do we just forbid ourselves from writing methods this way and handle it ourselves?<issue_comment>username_1: > > `await` doesn't block the current method > > > Correct. > > it simply executes the code left in that method and does the asynchronous operation in its own time. > > > No, not at all. It schedules the rest of the method to run when the asynchronous operation has finished. It does not run the rest of the method immediately.
It's not *allowed* to run any of the rest of the code in the method until the awaited operation is complete. It just doesn't block the current thread in the process, the current thread is returned back to the caller, and can go off to do whatever it wants to do. The rest of the method will be scheduled by the synchronization context (or the thread pool, if none exists) when the asynchronous operation finishes. Upvotes: 2 <issue_comment>username_2: In this code, you will not use Await because `GetPlayerAsync()` runs some asynchronous code. You can consider it from the perspective that Async and Await are different in that "Async" is waiting while "Await" operates asynchronously. Try to use `Task< T >` as return data. Upvotes: -1 <issue_comment>username_3: > > I had 1 major thing wrong, await doesn't block the current method, it simply executes the code left in that method and does the asynchronous operation in its own time. > > > But it *does* block the method, in the sense that a method that calls `await` won't continue until the results are in. It just doesn't block the thread that the method is running on. > > ... and we need to access player, but it hasn't returned the player yet? > > > That simply won't happen. async/await is ideal for doing all kinds of I/O (file, network, database, UI) without wasting a lot of threads. Threads are expensive. But as a programmer you can write (and think) as if it were all happening synchronously. Upvotes: 1 <issue_comment>username_4: > > I've recently been learning asynchronous programming and I think I've mastered it. > > > I was one of the designers of the feature and I don't feel like I've even come close to mastering it, and you are asking beginner level questions and have some very, very wrong ideas, so there's some hubris going on here I suspect. > > Asynchronous programming is simply just allowing our program to multitask. > > > Suppose you asked "why are some substances hard and some soft?" 
and I answered "substances are made of arrangements of atoms, and some atom arrangements are hard and some are soft". Though that is undoubtedly true, I hope you would push back on this unhelpful non-explanation. Similarly, you've just replaced the vague word "asynchronous" with another vague word "multitask". This is an explanation that explains nothing, since you haven't clearly defined what it means to multitask. Asynchronous workflows are undoubtedly about executing multiple tasks. **That's why the fundamental unit of work in a workflow is the `Task` monad.** An asynchronous workflow is the composition of multiple tasks by constructing a graph of dependency relationships among them. But that says nothing about *how that workflow is actually realized in software*. This is a complex and deep subject. > > I don't see the async keyword as much, just something you chuck on a method to let Visual Studio know that the method may await something and for you to allow it to warn you. > > > That's basically correct, though don't think of it as telling Visual Studio; VS doesn't care. It's the C# compiler that you're telling. > > If it has some other special meaning that actually affects something, could someone explain? > > > It just makes `await` a keyword inside the method, and puts restrictions on the return type, and changes the meaning of `return` to "signal that the task associated with this invocation is complete", and a few other housekeeping details. > > await doesn't block the current method > > > Of course it does. Why would you suppose that it does not? It doesn't block *the thread*, but it surely blocks *the method*. > > it simply executes the code left in that method and does the asynchronous operation in its own time. > > > **ABSOLUTELY NOT.** This is completely backwards. `await` does the *opposite* of that. Await means *if the task is not complete then return to your caller, and sign up the remainder of this method as the continuation of the task*. 
> > As you can see, I run some asynchronous code on GetPlayerAsync, what happens if we get deeper into the scope and we need to access player, but it hasn't returned the player yet? > > > That doesn't ever happen. If the value assigned to `player` is not available when the `await` executes then the await **returns**, and the remainder of the method is **resumed** when the value is available (or when the task completes exceptionally.) Remember, await means **asynchronously wait**, that's why we called it "await". **An await is a point in an asynchronous workflow where the workflow cannot proceed until the awaited task is complete**. That is the *opposite* of how you are describing await. Again, remember what an asynchronous workflow is: it is a collection of tasks where those tasks have dependencies upon each other. **We express that one task has a dependency upon the completion of another task by placing an `await` at the point of the dependency**. Let's look at your workflow in more detail: ``` var player = await GetPlayerAsync(); foreach (var uPlayer in Players.Values) ... if (player.Username == "SomeUsername") ... ``` The await means "the remainder of this workflow cannot continue until the player is obtained". Is that actually correct? If you want the `foreach` to *not* execute until the player is fetched, then this is correct. But the foreach doesn't depend on the player, so we could rewrite this like this: ``` Task<Player> playerTask = GetPlayerAsync(); foreach (var uPlayer in Players.Values) ... Player player = await playerTask; if (player.Username == "SomeUsername") ... ``` See, we have *moved the point of dependency to later in the workflow*. We start the "get a player" task, then we do the foreach, and *then* we check to see if the player is available *right before we need it*. If you have the belief that `await` somehow "takes a call and makes it asynchronous", this should dispel that belief. `await` takes a *task* and *returns if it is not complete*.
If it is complete, then it extracts the value of that task and continues. The "get a player" operation is *already* asynchronous, `await` does not make it so. > > If it doesn't block the method, how does it know that player isn't null > > > It *does* block the method, or more accurately, it *suspends* the method. **The method suspends and does not resume until the task is complete and the value is extracted**. It doesn't block *the thread*. It returns, so that the caller can keep on doing work in a *different* workflow. When the task is complete, the continuation will be scheduled onto the current context and the method will resume. Upvotes: 5 [selected_answer]
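The suspend-without-blocking behaviour described in the accepted answer can be sketched with Python's `asyncio`, which follows the same task/continuation model (this is an analogy to the C# code, not a translation; the coroutine names are made up for illustration):

```python
import asyncio

events = []

async def get_player():
    # Stands in for GetPlayerAsync(): the await point suspends this
    # coroutine without blocking the event loop's thread.
    events.append("get_player started")
    await asyncio.sleep(0.01)
    events.append("get_player finished")
    return "SomeUsername"

async def check_players():
    # Start the task; it does not run until we yield to the loop.
    task = asyncio.ensure_future(get_player())
    # This runs before the task finishes -- the "foreach" analogue.
    events.append("doing other work")
    # Suspend here until the result actually exists.
    player = await task
    events.append(f"got {player}")

asyncio.run(check_players())
print(events)
```

The recorded order shows that `check_players` is suspended at the `await` and only resumes once the task has completed, exactly the dependency-point behaviour the answer describes.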
2018/03/15
837
2,683
<issue_start>username_0: I have installed Docker according to the [manual](https://docs.docker.com/install/linux/docker-ce/ubuntu/#upgrade-docker-ce) The command below adds a repository with Ubuntu containers: ``` $ sudo add-apt-repository \ "deb [arch=amd64] https://download.docker.com/linux/ubuntu \ $(lsb_release -cs) \ stable" ``` The whole container list: ``` apt-cache madison docker-ce docker-ce | 17.12.1~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.12.0~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.09.1~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.09.0~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.06.2~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.06.1~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.06.0~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.03.2~ce-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.03.1~ce-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages docker-ce | 17.03.0~ce-0~ubuntu-xenial | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages ``` To test the system, the `hello-world` container was used ``` sudo docker run hello-world ``` But where did `hello-world` come from? I don't see it on the list.
How can I see the list of other 'hidden' containers?<issue_comment>username_1: Use the [docker images](https://docs.docker.com/engine/reference/commandline/images/#description) command to see the list of images present locally: ``` $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE redis latest 33c26d72bd74 4 weeks ago 107MB hello-world latest f2a91732366c 3 months ago 1.85kB ``` The command `apt-cache madison docker-ce` lists the available Ubuntu packages for all the different versions of the Docker binary and tools. Upvotes: 0 <issue_comment>username_2: When you run the below command, Docker will search your local machine for the hello-world image. Since you don't have one, it will fetch it from Docker Hub and run it. `$ sudo docker run hello-world` After running the hello-world container, if you run the below command, it will list the Docker images available on your local machine ``` $ docker images ``` Upvotes: 1
2018/03/15
499
2,046
<issue_start>username_0: I'm trying to make a winapp that fills a document template file using the C# form and creates a new .docx file. Where should I put the .docx file and how should I use it? I placed my template inside the Debug folder and load it like: ``` dox.LoadFromFile("template.docx"); ``` I'm having a problem when using the executable because it doesn't seem to find the template.docx<issue_comment>username_1: It is possible to have files copied into the Output Directory. This is a simple matter of setting the File up [accordingly](https://msdn.microsoft.com/en-us/library/0c6xyb66(v=vs.100).aspx). However, having data like this in the program directory is frowned upon. As of Windows XP, you are unlikely to get write access to those files anymore. So any form of changes would be blocked unless your program runs with full Administrative Rights. The intended place for such files is the [SpecialFolders](https://msdn.microsoft.com/en-us/library/system.environment.specialfolder.aspx). On those you are guaranteed write rights to some degree. While you could still store the template in the program directory to copy it there if needed, it might be better to copy it to the default user as part of the setup process. Of course Visual Studio loves to run code under rather odd user accounts. So I am not sure how far that works for testing. Upvotes: 2 [selected_answer]<issue_comment>username_2: You can store your Word document directly in your assembly (just copy-paste the file into your project). Then you just copy it to the Windows temp folder before doing your own business. Just don't forget to delete the file in the temp folder when you are done because Windows won't do it for you. ``` Dim fileLocation As String = Path.GetTempFileName() Using newFile As Stream = New FileStream(fileLocation, FileMode.Create) Assembly.GetExecutingAssembly().GetManifestResourceStream("YourAssemblyName.yourfile.docx").CopyTo(newFile) End Using ...
System.IO.File.Delete(fileLocation) ``` Upvotes: 0
2018/03/15
535
1,791
<issue_start>username_0: I want to submit my form using ajax; the following is my JavaScript: ``` $.ajax({ type: "POST", headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') }, url: "http://localhost/shago/register/submit", data: user_firstname, dataType: "text", success: function(resultData) { alert("Save Complete") } }); ``` I have included the meta tag in the form like ``` ``` and in the controller I have just returned a message, but I am getting the above error: POST <http://localhost/shago/register/submit> 419 (unknown status). Can you please help me, and let me know if you need any other inputs? I know it is mostly caused by the CSRF token (I have declared the submit route in web.php and also in the api.php file)<issue_comment>username_1: Try this ``` $.ajax({ type: "POST", headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') }, url: "http://localhost/shago/register/submit", data: { // change data to this object _token : $('meta[name="csrf-token"]').attr('content'), user_firstname: user_firstname }, dataType: "text", success: function(resultData) { alert("Save Complete") } }); ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: You can add the below code to your master file ``` $.ajaxSetup({ headers: { 'X-CSRF-Token' : $('meta[name=_token]').attr('content') } }); ``` Upvotes: 0 <issue_comment>username_3: 1) Use the meta tag in the head section
2018/03/15
597
1,903
<issue_start>username_0: I have an Excel file that has a column with names. I am trying to write a macro so that when I search the column I obtain every record that has that name. The problem I am having is that in a single cell there may be 1, 2 or even more names (i.e. I'm searching for "DAVE", but in a single cell there may be "DAVE" "ANDY" "FRANK"). Currently my code is picking up all the records that have ONLY "DAVE", but if a cell has "DAVE" and any other names it does not pick it up. Is there a way to code my macro such that it will pick up these that I'm missing? Currently this is my if statement: ``` If WsG.Cells(G_Row, 2).Value = "DAVE" Then WsG.Rows(G_Row).EntireRow.Copy Destination:=Ws2.Rows(S2_Row) S2_Row = S2_Row + 1 End If ``` So I'm looking in WsG, column 2 for "DAVE". Then, trying to copy the row over to a new worksheet.
2018/03/15
290
1,270
<issue_start>username_0: I have to make a simple catalog with product image upload with Boilerplate. Has anyone done something like this? Are there public examples of catalog source code made with AspnetBoilerplate on the internet?<issue_comment>username_1: Ready-to-use solution? Unlikely. AspnetBoilerPlate is for coders. So start coding. It should be fairly straightforward to hook up an existing .NET Core e-commerce solution with AspNetBoilerplate features using the starter template as a reference. Alternatively, you can start with the AspNetBoilerplate starter template and add the necessary e-commerce features like catalog listing using existing e-commerce source code as a reference. [SimplCommerce](https://github.com/simplcommerce/SimplCommerce) is a good simple e-commerce solution also built with .NET Core and EntityFrameworkCore. Upvotes: 0 <issue_comment>username_2: Check out the below websites. * <https://www.nopcommerce.com/> * <https://www.wix.com/> But [AspnetBoilerplate](https://aspnetboilerplate.com/) is a developer-friendly framework and fully open to customizations. On the other hand, you need to write code and get your hands dirty with it. For the long-term benefits, I would go with [AspnetBoilerplate](https://aspnetboilerplate.com/). Upvotes: 1
2018/03/15
967
3,022
<issue_start>username_0: Is it possible to pad an array with incrementing numbers? For example ``` $myArr = ["red", "green", "blue"]; $type = "colour"; ``` I want to somehow merge these and add a sort order so I end up with the following ``` Array ( [red] => Array ( [type] => "colour" [sort] => 1 ) [green] => Array ( [type] => "colour" [sort] => 2 ) [blue] => Array ( [type] => "colour" [sort] => 3 ) ) ``` So far I have only managed: ``` $additional_data = array_pad([], count($myArr), ['type_id' => $type_id]); $data = array_combine($myArr, $additional_data); ``` which is yielding: ``` Array ( [red] => Array ( [type] => "colour" ) [green] => Array ( [type] => "colour" ) [blue] => Array ( [type] => "colour" ) ) ``` I know I can do it by iterating through colours, but wondered if it could be done without a loop. Thanks<issue_comment>username_1: I figured it out. Pretty simple really. ``` $myArr = ["red", "green", "blue"]; $type = "colour"; $sort = 0; $additional_data = array_pad([], count($myArr), ['type_id' => $type_id, 'sort_order' => ++$sort]); $data = array_combine($myArr, $additional_data); ``` Upvotes: 0 <issue_comment>username_2: I did this, hope it can help ``` $myArr = ["red", "green", "blue"]; $type = "colour"; $x = array(); for($i=0;$i<count($myArr);$i++){ $x[$myArr[$i]] = array('type' => $type, 'sort' => $i+1); } print_r($x); ``` Upvotes: 1 <issue_comment>username_3: The version which you posted to solve this problem always has 1 as the sort_order. ``` $myArr = ["red", "green", "blue"]; $type_id = "colour"; $sort = 0; $additional_data = array_pad([], count($myArr), ['type_id' => $type_id, 'sort_order' => ++$sort]); $data = array_combine($myArr, $additional_data); print_r($data); ``` outputs... ``` Array ( [red] => Array ( [type_id] => colour [sort_order] => 1 ) [green] => Array ( [type_id] => colour [sort_order] => 1 ) [blue] => Array ( [type_id] => colour [sort_order] => 1 ) ) ``` You could then process the result with array_walk to correct the values...
``` $myArr = ["red", "green", "blue"]; $type_id = "colour"; $sort = 0; $additional_data = array_pad([], count($myArr), ['type_id' => $type_id, 'sort_order' => ++$sort]); $data = array_combine($myArr, $additional_data); $sort = 1; array_walk($data, function (&$item, $key) use(&$sort) { $item['sort_order'] = $sort++; }); print_r($data); ``` Which corrects it to. ``` Array ( [red] => Array ( [type_id] => colour [sort_order] => 1 ) [green] => Array ( [type_id] => colour [sort_order] => 2 ) [blue] => Array ( [type_id] => colour [sort_order] => 3 ) ) ``` Upvotes: 2 [selected_answer]
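For comparison, the "pad with an incrementing sort order" shaping that `array_walk` achieves above can be written without any manual counter bookkeeping in Python using `enumerate` (an analogy only; the PHP answers stand on their own):

```python
my_arr = ["red", "green", "blue"]
type_id = "colour"

# enumerate(..., start=1) supplies the incrementing sort_order directly,
# so no shared counter variable is needed.
data = {
    colour: {"type_id": type_id, "sort_order": i}
    for i, colour in enumerate(my_arr, start=1)
}
print(data)
```

This mirrors the PHP fix's end result: one entry per colour, each carrying the type and a 1-based sort order.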
2018/03/15
1,297
4,669
<issue_start>username_0: I am trying to add three new columns to my table - Scanned, Actions, and HowToFix. However, I am getting an error that says: > > "Computed column 'Scanned' in table 'TEST\_concat' is not allowed to > be used in another computed-column definition." > > > What can I do about this? I need to use the 'Scanned' computed-column for my other two columns. Below is my code. Thank you in advance! ``` SELECT TOP (4000) [filer_id] ,[filerName] ,[ResouceName] ,[UniqueIdentity] ,[DirID] ,[DirsCount] ,[PermCount] ,[RowID] ,[ResourceType] ,[ResourceFlags] ,[Hresult] ,[Details] ,[fsid] ,[Protocol] ,[vExport] ,[filerType] ,[Error Messages] FROM [master].[dbo].[TEST_concat] ALTER TABLE dbo.TEST_concat ADD [Scanned] AS CASE WHEN [DirID] > 0 AND [DirsCount] is NULL THEN 'in Shares table, but not SortedDirectoryTree properties' WHEN [DirID] > 0 AND [DirsCount] > 0 THEN 'YES' WHEN [DirID] is null and [DirsCount] is null THEN 'NO' ELSE ' ' END ALTER TABLE dbo.TEST_concat ADD [Actions] AS CASE WHEN [protocol] = 'CIFS' AND [Error Messages] LIKE 'monitor type mismatch' AND [Scanned] = 'Yes' THEN 'Needs action' WHEN [protocol] = 'NFS' AND [DirsCount] = 0 AND [Scanned] = 'in Shares table, but not SortedDirectoryTree properties' THEN 'Needs action' WHEN [Error Messages] LIKE 'Pruned Different Security Type' or [Error Messages] LIKE 'mismatch' THEN 'Needs action' WHEN [Error Messages] LIKE 'access denied' THEN 'Needs action' WHEN [protocol] = 'CIFS' AND [Scanned] = 'in Shares table, but not SortedDirectoryTree properties' THEN 'Needs action' WHEN [Error Messages] LIKE 'the inherited' or [Error Messages] LIKE 'the path' or [Error Messages] LIKE 'missing inheritance' THEN 'Needs action' WHEN [Error Messages] LIKE 'Access is Denied. 
(1)' AND [DirsCount] < 3 THEN 'Needs action' WHEN [protocol] = 'CIFS' AND [protocol] = 'NFS' THEN 'Needs action' ELSE ' ' END ALTER TABLE dbo.TEST_concat ADD [HowToFix] AS CASE WHEN [protocol] = 'CIFS' AND [Error Messages] LIKE 'monitor type mismatch' AND [Scanned] = 'Yes' THEN 'Backend problems. Security is not visible' WHEN [protocol] = 'NFS' AND [DirsCount] = 0 AND [Scanned] = 'in Shares table, but not SortedDirectoryTree properties' THEN 'Backend problems. Security is not visible' WHEN [Error Messages] LIKE 'Pruned Different Security Type' or [Error Messages] LIKE 'mismatch' THEN 'Change to NFS' WHEN [protocol] = 'CIFS' AND [Scanned] = 'in Shares table, but not SortedDirectoryTree properties' THEN 'Backend problems. Security is not visible' WHEN [Error Messages] LIKE 'access denied' THEN 'Give permission to Varonis' WHEN [protocol] = 'CIFS' AND [Scanned] = 'in Shares table, but not SortedDirectoryTree properties' THEN 'Backend problems. Security is not visible' WHEN [Error Messages] LIKE 'the inherited' or [Error Messages] LIKE 'the path' or [Error Messages] LIKE 'missing inheritance' THEN 'Ignore Error' WHEN [Error Messages] LIKE 'Access is Denied. (1)' AND [DirsCount] < 3 THEN 'Give permission to Varonis' ELSE ' ' END ```<issue_comment>username_1: Uhh ... You don't, but your workaround options are: * Reproduce the computation of the first computed column when calculating the second one. * Create a view that SELECTS your table plus the first computed column, and create your second computed column based on the first. * Create a stored procedure where the first computed column is in a subquery, the second one is in the main query, and return all rows in your main query. BTW, looking at your T-SQL you may want to consider a lookup table with all of these actions/howtofix, and in queries you can JOIN on that lookup table and return the correct action. Upvotes: 2 <issue_comment>username_2: Use the full expression instead of `[Scanned]`.
For example, instead of ``` ADD [Actions] AS CASE WHEN [protocol] = 'CIFS' AND [Error Messages] LIKE 'monitor type mismatch' AND [Scanned] = 'Yes' THEN 'Needs action' ``` code: ``` ADD [Actions] AS CASE WHEN [protocol] = 'CIFS' AND [Error Messages] LIKE 'monitor type mismatch' AND CASE WHEN [DirID] > 0 AND [DirsCount] is NULL THEN 'in Shares table, but not SortedDirectoryTree properties' WHEN [DirID] > 0 AND [DirsCount] > 0 THEN 'YES' WHEN [DirID] is null and [DirsCount] is null THEN 'NO' ELSE ' ' END = 'Yes' THEN 'Needs action' ``` Hardly convenient, but that's SQL. Upvotes: 2
2018/03/15
1,236
4,047
<issue_start>username_0: I'm very new to VBA and was hoping to get some clarification on a project. I've tried solving it with formulas but I need to still be able to enter information into cells and not have them filled with a lookup formula. How I'm looking for it to perform is that if an object requires it to be shipped then the serial numbers and identifiers are copied and pasted in another table in the next blank row automatically. The information is divided into two tables: [![table layout](https://i.stack.imgur.com/B99fU.jpg)](https://i.stack.imgur.com/B99fU.jpg) What I thought I needed was a segment in VBA that went like this: ``` Sub CopyCat() If Range("J2") Like "*yes*" then Range("G2:I2").copy Range("A2:A10").end(xlup).offset(1).pasteSpecial xlpastevalues If Range("J3") Like "*yes*" then Range("G3:I3").copy Range("A2:A10").end(xlup).offset(1).pasteSpecial xlpastevalues End If End If End Sub ``` It does exactly what I ask it to do when it is only the first statement; when I add the second one to check if the next row satisfies the conditions and it does, then it places it in the same resulting cell as the first statement. If both are true I need them both to be displayed in table 1. I'd love to take this as a learning opportunity so any information or direction you can point me in would be great! Thank you so much in advance!<issue_comment>username_1: I think `Range("A2:A10").end(xlup)` is equivalent to `Range("A2").end(xlup)` so will not change, but you don't want the A2 reference, you want to work up from the bottom. You will hit problems if you are going beyond A9. (Plus not sure you want nested Ifs.)
``` If Range("J2") Like "*yes*" Then Range("G2:I2").Copy Range("A10").End(xlUp).Offset(1).PasteSpecial xlPasteValues End If If Range("J3") Like "*yes*" Then Range("G3:I3").Copy Range("A10").End(xlUp).Offset(1).PasteSpecial xlPasteValues End If ``` Or to add a loop and circumvent the copy/paste you could use something like this: ``` Sub CopyCat() Dim r As Long For r = 2 To Range("J" & Rows.Count).End(xlUp).Row If Range("J" & r) Like "*yes*" Then Range("A10").End(xlUp).Offset(1).Resize(, 3).Value = Range("G" & r).Resize(, 3).Value End If Next r End Sub ``` Upvotes: 1 <issue_comment>username_2: You can also do this without VBA. In `A2`, you can use this formula entered as an array formula with `CTRL+SHIFT+ENTER`: ``` =INDEX($G$2:$G$4,SMALL(IF($J$2:$J$4="yes",ROW($J$2:$J$4)-ROW($J$2)+1),ROWS(J$2:J2))) ``` And in `B2`, you can put this and drag down/over from `B2:D3`: ``` =INDEX(H$2:H$4,MATCH($A2,$G$2:$G$4,0)) ``` [![enter image description here](https://i.stack.imgur.com/O46hp.png)](https://i.stack.imgur.com/O46hp.png) Finally, to hide the errors that show when there are no more matches, you can simply wrap both above formulas in `IFERROR([formula above],"")`. 
Upvotes: 1 <issue_comment>username_3: With autofilter ``` Sub copyRange() Dim wb As Workbook Dim wsSource As Worksheet Dim lastRow As Long Dim filterRange As Range Set wb = ThisWorkbook Set wsSource = wb.Worksheets("Sheet2") 'change to sheet name containing delivery info With wsSource lastRow = .Cells(.Rows.Count, "G").End(xlUp).Row Set filterRange = .Range("G1:K" & lastRow) Dim copyRange As Range Set copyRange = .Range("G2:K" & lastRow) End With Dim lastRowTarget As Long, nextTargetRow As Long With filterRange .AutoFilter .AutoFilter Field:=4, Criteria1:="yes" 'change field to whichever is the field in the range containing your company names lastRowTarget = wsSource.Cells(wsSource.Rows.Count, "A").End(xlUp).Row nextRowTarget = lastRowTarget + 1 Union(wsSource.Range("G2:I" & lastRow).SpecialCells(xlCellTypeVisible), wsSource.Range("K2:K" & lastRow).SpecialCells(xlCellTypeVisible)).Copy wsSource.Range("A" & nextRowTarget) .AutoFilter End With End Sub ``` Upvotes: 0
2018/03/15
924
2,887
<issue_start>username_0: I am trying to make a div element to look like this: ![enter image description here](https://i.stack.imgur.com/k1KUQ.png) So how can I apply the skew transform property so that only one side(the bottom one here) gets tilted?
2018/03/15
1,339
4,582
<issue_start>username_0: Say I have the following four records (assume there are more): ``` record 1 record 2 record 3 record 4 area California Texas California California food Lobster Lamb Rabbit Bagels popular Bagels Elk Rabbit Rabbit ``` Right now I am adding new records into the database by manually choosing an `area` and a `food`. The `popular` field is then automatically populated by finding the most common `food` for that specific `area` at the time of entry. For example, if the above four records were the ONLY records in the entire database, then if I were to add another record with `area: California` and `food: Bagels` then `popular` (for the record I just added) would automatically be assigned the value `Bagels` because there would be two `Bagels` in the `food` field, for the area `California`. The above is already working, which I achieve the above with the following code: ``` popular = list(ModelData.objects.filter(area=area).values_list('food', flat=True)) Counter(popular).most_common() ``` The problem is, the above still leaves existing records in the database with the incorrect `popular` value; because `popular` is only calculated at the time of entry, and never 're-calculated' when more entries are added. Using the above example (adding a new record with `area: California` and `food: Bagels`), there would still be two `California` records (`record 3` and `record 4`) which would have an incorrect `popular` value - they should be updated to `Bagels` when `Bagels` became the new, most popular food for `California`. How would one go about updating the `popular` values for each `area`, to be the new, most popular `food`, whenever it changes (or periodically, i.e. hourly / daily)? 
I have tried searching online, unsurprisingly without any luck, as I realise it is quite a confusing question.
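One way to handle the "recalculate `popular` for every existing record" part is to recompute the winner per `area` in a single pass and write it back. Below is a plain-Python sketch of that step using `collections.Counter` (which the question already uses); in the real code the rows would be read and written through the Django ORM, e.g. with `values_list()` to fetch and `update()` to write back:

```python
from collections import Counter

# Plain-Python sketch of the periodic "recalculate popular per area" step.
# These dicts stand in for ModelData rows; the data mirrors the question.
records = [
    {"area": "California", "food": "Lobster", "popular": None},
    {"area": "Texas", "food": "Lamb", "popular": None},
    {"area": "California", "food": "Rabbit", "popular": None},
    {"area": "California", "food": "Bagels", "popular": None},
    {"area": "California", "food": "Bagels", "popular": None},
]

def recompute_popular(records):
    # One Counter of foods per area, built in a single pass over the rows.
    counts = {}
    for r in records:
        counts.setdefault(r["area"], Counter())[r["food"]] += 1
    # most_common(1) picks exactly one winner even when foods are tied.
    winners = {area: c.most_common(1)[0][0] for area, c in counts.items()}
    # Write the winner back onto every row of that area.
    for r in records:
        r["popular"] = winners[r["area"]]

recompute_popular(records)
print(records[0]["popular"])  # Bagels (2 of the 4 California rows)
```

Running this hourly/daily (or after each insert) keeps every row's `popular` consistent, instead of only computing it at entry time.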
2018/03/15
353
1,212
<issue_start>username_0: I got a `Bad Request:400` error while sending base 64 PDF bytes to the DocuSign SOAP API (createAndSendEnvelope). Please help me with this issue. Below is the code for converting the PDF file into base 64 bytes: ``` File fFile = new File(pDFFileName); byte[] bBytes = FileUtils.readFileToByteArray(fFile); byte[] bBytesEnc = Base64.getEncoder().encode(bBytes); documents.setPDFBytes(bBytesEnc); ```<issue_comment>username_1: I don't think you need to do Base64 of the byte array. I have the below code without base64 and it's working fine for me: ``` File f = new File(pDFFileName); FileInputStream fs = new FileInputStream(f); byte[] pdfBytes = new byte[(int) f.length()]; fs.read(pdfBytes); fs.close(); Document doc = new Document(); doc.setPDFBytes(pdfBytes); ``` Upvotes: 1 <issue_comment>username_2: ``` byte[] fileBytes = System.IO.File.ReadAllBytes(@"C:\Users\%username%\Desktop\File.PDF"); // Add a document to the envelope Document doc = new Document(); doc.DocumentBase64 = System.Convert.ToBase64String(fileBytes); doc.Name = "TestFile.pdf"; doc.DocumentId = "3"; ``` I do it like that and it's working for me. Upvotes: 0
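For reference, the read-and-base64-encode step itself is easy to sanity-check in isolation. A Python sketch of the round trip, where the byte string merely stands in for a real PDF file's contents:

```python
import base64

# Stand-in for FileUtils.readFileToByteArray(fFile); with a real file you
# would use open(pdf_path, "rb").read().
pdf_bytes = b"%PDF-1.4 minimal stand-in content"

# Encode to the base64 text a SOAP/REST payload would carry...
encoded = base64.b64encode(pdf_bytes).decode("ascii")

# ...and verify the round trip gives back the original bytes unchanged.
assert base64.b64decode(encoded) == pdf_bytes
print(encoded[:4])  # JVBE, the base64 of the "%PDF" magic bytes
```

If the encoded text does not start with `JVBE` (the encoding of `%PDF`), the bytes being sent are not a PDF, which is one common source of a 400 response.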
2018/03/15
2,054
4,791
<issue_start>username_0: I'd like to create a column in my dataframe that checks whether the values in one column are the dictionary values of *another* column which comprises the dictionary keys, like so: ``` In [3]: df = pd.DataFrame({'Model': ['Corolla', 'Civic', 'Accord', 'F-150'], 'Make': ['Toyota', 'Honda', 'Toyota', 'Ford']}) dic = {'Prius':'Toyota', 'Corolla':'Toyota', 'Civic':'Honda', 'Accord':'Honda', 'Odyssey':'Honda', 'F-150':'Ford', 'F-250':'Ford', 'F-350':'Ford'} df Out [3]: Model Make 0 Corolla Toyota 1 Civic Honda 2 Accord Toyota 3 F-150 Ford ``` And after applying a function, or whatever it takes, I'd like to see: ``` Out [10]: Model Make match 0 Corolla Toyota TRUE 1 Civic Honda TRUE 2 Accord Toyota FALSE 3 F-150 Ford TRUE ``` Thanks in advance! Edit: I tried making a function that is passed a tuple which would be the two columns, but I don't think I'm passing the arguments correctly: ``` def is_match(make, model): try: has_item = dic[make] == model except KeyError: has_item = False return(has_item) df[['Model', 'Make']].apply(is_match) results in: TypeError: ("is_match() missing 1 required positional argument: 'model'", 'occurred at index Model') ```<issue_comment>username_1: You can using `map` ``` df.assign(match=df.Model.map(dic).eq(df.Make)) Out[129]: Make Model match 0 Toyota Corolla True 1 Honda Civic True 2 Toyota Accord False 3 Ford F-150 True ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Comprehension ============= ``` df.assign(match=[dic.get(md, '') == mk for mk, md in df.values]) Make Model match 0 Toyota Corolla True 1 Honda Civic True 2 Toyota Accord False 3 Ford F-150 True ``` --- `dict.items` and `in` ===================== ``` items = dic.items() df.assign(match=[t[::-1] in items for t in map(tuple, df.values)]) Make Model match 0 Toyota Corolla True 1 Honda Civic True 2 Toyota Accord False 3 Ford F-150 True ``` --- `isin` ====== ``` df.assign(match=pd.Series(list(map(tuple, df.values[:, ::-1]))).isin(dic.items())) 
Make Model match 0 Toyota Corolla True 1 Honda Civic True 2 Toyota Accord False 3 Ford F-150 True ``` --- Numpy Structured Arrays ======================= ``` dtype = [('Make', ' ``` --- Timing Comparison ================= Conclusions ----------- @wen's method is an order of magnitude better! Functions --------- ``` def wen(df, dic): return df.assign(match=df.Model.map(dic).eq(df.Make)) def maxu(df, dic): return df.assign(match=df[['Make', 'Model']].sum(axis=1).isin(set([v+k for k, v in dic.items()]))) def pir1(df, dic): return df.assign(match=[dic.get(md, '') == mk for mk, md in df.values]) def pir2(df, dic): items = dic.items() return df.assign(match=[t[::-1] in items for t in map(tuple, df.values)]) def pir3(df, dic): return df.assign(match=pd.Series(list(map(tuple, df.values[:, ::-1]))).isin(dic.items())) def pir4(df, dic): dtype = [('Make', ' ``` Back Test --------- ``` res = pd.DataFrame( np.nan, [10, 30, 100, 300, 1000, 3000, 10000, 30000], 'wen maxu pir1 pir2 pir3 pir4'.split() ) for i in res.index: m = dict(dic.items()) d = pd.concat([df] * i, ignore_index=True) for j in res.columns: stmt = f'{j}(d, m)' setp = f'from __main__ import {j}, m, d' res.at[i, j] = timeit(stmt, setp, number=200) ``` Results ======= ``` res.plot(loglog=True) ``` [![enter image description here](https://i.stack.imgur.com/HcIAv.png)](https://i.stack.imgur.com/HcIAv.png) ``` res.div(res.min(1), 0) wen maxu pir1 pir2 pir3 pir4 10 2.041111 2.799885 1.000000 1.032221 1.432887 1.174196 30 1.544264 2.417550 1.000000 1.043218 1.336503 1.003284 100 1.037501 1.843029 1.000000 1.066310 1.319942 1.191763 300 1.000000 2.373917 1.726667 2.009198 2.193276 2.424844 1000 1.000000 3.962928 3.764808 3.932539 4.099261 4.971527 3000 1.000000 6.250289 6.311701 6.740862 6.258989 7.791234 10000 1.000000 9.014925 10.110949 10.964482 10.347168 13.407998 30000 1.000000 10.410604 11.682759 13.113974 11.877862 16.000993 ``` Upvotes: 2 <issue_comment>username_3: yet another option: ``` In [38]: df['match'] =
df[['Make','Model']] \ .sum(axis=1) \ .isin(set([v+k for k,v in dic.items()])) In [39]: df Out[39]: Make Model match 0 Toyota Corolla True 1 Honda Civic True 2 Toyota Accord False 3 Ford F-150 True ``` Upvotes: 2
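Stripped of pandas, the accepted `map`-then-compare idea reduces to a per-row dictionary lookup. A plain-Python rendering of the same check on the question's data:

```python
# Pure-Python equivalent of df.Model.map(dic).eq(df.Make): look each Model up
# in the dict (missing keys give None) and compare against Make.
dic = {"Prius": "Toyota", "Corolla": "Toyota", "Civic": "Honda",
       "Accord": "Honda", "Odyssey": "Honda",
       "F-150": "Ford", "F-250": "Ford", "F-350": "Ford"}
models = ["Corolla", "Civic", "Accord", "F-150"]
makes = ["Toyota", "Honda", "Toyota", "Ford"]

match = [dic.get(model) == make for model, make in zip(models, makes)]
print(match)  # [True, True, False, True]
```

This makes it clear why "Accord / Toyota" comes out False: the dict maps Accord to Honda, so the lookup disagrees with the Make column.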
2018/03/15
741
2,574
<issue_start>username_0: So I want to create a simple result page that lets users download their results using the given code. This is the script: ```html php $name = $_POST['logincode']; $filename = $name.'/'.$name.'pdf'; header('Location: ./'$filename''); ? ``` The principle is when the user writes into the input field, for example (1234) and hits enter, it should redirect him to: ``` ./1234/1234.pdf ``` I don't know where the mistake is in my code.<issue_comment>username_1: You are missing a **“.”** before pdf aren’t you? And also wrong **header('Location: ./'$filename'');** Try this :) ``` php $name = $_POST['logincode']; $filename = $name.'/'.$name.'.pdf'; header('Location: ./'.$filename); ? ``` Upvotes: 0 <issue_comment>username_2: **It's very insecure code!** Major changes below: * **write test for user input data** `TODO by you` * change order PHP block code first, form (HTML) code next `in snippet` * add test is post request\_method before any $\_POST['...']; `in snippet` * add .(dot) before filename extension i $filename `in snippet` ```html php if ($_SERVER['REQUEST_METHOD'] === 'POST') { $name = $_POST['logincode']; $filename = $name.'/'.$name.'.pdf'; header('Location: ./'$filename''); } ? ``` Upvotes: 0 <issue_comment>username_3: Few issues, * Your header should be before anything else as @showdev mentioned in a comment. * You're missing a `.` between filename and extension * You also have a syntax error in the header trailing `''` * And you should exit redirect headers. 
**You should also be checking your variables as you go, plus check the file exists, so you can show errors.** ``` php // check post request if ($_SERVER['REQUEST_METHOD'] === 'POST') { $errors = []; // check logincode is set and a number if (!isset($_POST['logincode']) || !is_numeric($_POST['logincode'])) { $errors['logincode'] = 'Invalid login code'; } else { $name = $_POST['logincode']; // check file is found if (!file_exists($name.'/'.$name.'.pdf')) { $errors['logincode'] = 'Your results are not ready.'; } // now check for empty errors if (empty($errors)) { exit(header('Location: ./'.$name.'/'.$name.'.pdf')); } } } ? = (!empty($errors['logincode']) ? $errors['logincode'] : '') ? ``` Upvotes: 2 <issue_comment>username_4: ``` php if($_POST){ $name = $_POST['logincode']; $filename = $name.'/'.$name.'.pdf'; header('Location: ./'.$filename.''); } ? ``` Upvotes: 0
2018/03/15
590
2,142
<issue_start>username_0: I can't update the SDK tools and I don't know why. If I use the command "sdkmanager --update" or "android update sdk" it gives me the error in the title: ``` Warning: An error occured during installation: Failed to move away or delete existing target file: C:\androidSDK\sdk\tools ``` Big thanks if you can help me!<issue_comment>username_1: Here is how I solved this problem: First, I renamed `AndroidSDK\tools` to `AndroidSDK\tool`. Then, I set the Windows environment variable Path from `AndroidSDK\tools` to `AndroidSDK\tool` (maybe not necessary). Run `sdkmanager --update` in `tool\bin`. Ignore the warning and wait until the update is done. There will be a new folder named tools in `AndroidSDK`. Copy all the files in the tools folder and paste them into the tool folder, overwriting all the files that have the same name. Finally, delete the tools folder and rename tool to tools. Problem solved. Upvotes: 2 <issue_comment>username_2: 1) Rename the `C:\Users\%username%\AppData\Local\Android\Sdk\tools` folder to `tool` 2) Navigate to `cd C:\Users\%username%\AppData\Local\Android\Sdk\tool\bin` in cmd 3) Run `sdkmanager --update --verbose`, which will create a new tools folder 4) Run `flutter doctor --android-licenses` 5) Accept the Android licenses by entering `y` 6) The `flutter doctor` command will show the following: ``` Kishore Kumar@Kishore MINGW64 ~$ flutter doctor Doctor summary (to see all details, run flutter doctor -v): [√] Flutter (Channel beta, v0.7.3, on Microsoft Windows [Version 10.0.17134.228], locale en-US) [√] Android toolchain - develop for Android devices (Android SDK 25.0.2) [√] Android Studio (version 3.1) X Flutter plugin not installed; this adds Flutter specific functionality. X Dart plugin not installed; this adds Dart specific functionality. [√] Connected devices (1 available) • No issues found! ``` Upvotes: 2 <issue_comment>username_3: OK, maybe this will save someone some time and nerves.
I just closed Android Studio, opened it again via `Run as administrator`, and retried the update; this time it was successful without any errors. Upvotes: 0
2018/03/15
868
2,400
<issue_start>username_0: I'm looking for a method to have a variable number of nested for loops instead of the following code. For example if the variable **n** represents the number of nested for loops and **n = 3**, my code would be: ``` p = [] for i in range(26): for j in range(26): for k in range(26): p.append([i,j,k]) ``` Instead, if **n = 2** my code would be: ``` p = [] for i in range(26): for j in range(26): p.append([i,j]) ``` I understand this can be done using recursion but I'm a bit confused as to how I'd go about doing this.<issue_comment>username_1: Something like this should work: ``` import itertools n=3 fixed=26 p = list(itertools.product(range(fixed), repeat=n)) ``` This solution uses the optimized functions of `itertools`, so it should be quite fast. Mind that `itertools.product` returns an iterator, so one needs to transform it to get an array. Upvotes: 1 <issue_comment>username_2: It's important for one to develop the skills to reason about these problems. In this case, Python includes `itertools.product` but what happens the next time you need to write a behaviour specific to your program? Will there be another magical built-in function? Maybe someone else will have published a 3rd party library to solve your problem? Below, we design `product` as a simple recursive function that accepts 1 or more lists. 
``` def product (first, *rest): if not rest: for x in first: yield (x,) else: for p in product (*rest): for x in first: yield (x, *p) for p in product (range(2), range(2), range(2)): print ('x: %d, y: %d z: %d' % p) # x: 0, y: 0 z: 0 # x: 1, y: 0 z: 0 # x: 0, y: 1 z: 0 # x: 1, y: 1 z: 0 # x: 0, y: 0 z: 1 # x: 1, y: 0 z: 1 # x: 0, y: 1 z: 1 # x: 1, y: 1 z: 1 ``` Assuming you want a more conventional iteration ordering, you can accomplish do so by using an auxiliary `loop` helper ``` def product (first, *rest): def loop (acc, first, *rest): if not rest: for x in first: yield (*acc, x) else: for x in first: yield from loop ((*acc, x), *rest) return loop ((), first, *rest) for p in product (range(2), range(2), range(2)): print ('x: %d, y: %d z: %d' % p) # x: 0, y: 0 z: 0 # x: 0, y: 0 z: 1 # x: 0, y: 1 z: 0 # x: 0, y: 1 z: 1 # x: 1, y: 0 z: 0 # x: 1, y: 0 z: 1 # x: 1, y: 1 z: 0 # x: 1, y: 1 z: 1 ``` Upvotes: 3 [selected_answer]
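As a sanity check on the second (conventional-ordering) version, its output can be compared against `itertools.product`, which uses the same leftmost-slowest ordering. A runnable sketch with small values:

```python
import itertools

def product(first, *rest):
    # Recursive generator from the answer above: leftmost iterable varies slowest.
    def loop(acc, first, *rest):
        if not rest:
            for x in first:
                yield (*acc, x)
        else:
            for x in first:
                yield from loop((*acc, x), *rest)
    return loop((), first, *rest)

n, size = 3, 4  # n plays the role of the number of nested loops
ours = list(product(*([range(size)] * n)))
theirs = list(itertools.product(range(size), repeat=n))
assert ours == theirs  # identical tuples in identical order
print(len(ours))  # 64, i.e. size ** n
```

So for the original problem, `n` nested loops over `range(26)` is simply `product(*([range(26)] * n))`, and the hand-rolled version can always be cross-checked against the library one.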
2018/03/15
1,048
4,464
<issue_start>username_0: I have two table posts\_table post\_id | user\_id | status\_message | date\_time comments\_table comments\_id | user\_id | post\_id | comments\_message | date\_time My code is this i want show every post and show all comments message in this post ``` router.get('/', (req, res, next) => { connection.query('SELECT * FROM photos_status', (err, result) => { if(err){ console.error(err); }else{ if(result.length >0){ for(var i = 0; i < result . length ;i++){ var Temp = []; var post_id =result[i]. post_id; connection.query('SELECT * FROM comments WHERE post_id = ?', [post_id], function (error, results) { if (error) { res.json({ status:false, message:'there are some error with query' }) }else{ res.status(200).json({ result , results }); } }) } } } }); }); ``` I want select data from database and show like this ``` [ { "post_id":"1", "user_id":"2", "status_x":"demo ..." " comments"[ { "user_id":"1", "post_id":"1", "comments_message":"demo..", }, { "user_id":"2", "post_id":"1", "comments_message":"demo..", } ] } ] ```<issue_comment>username_1: I think this will give you some Idea: ``` var allPosts = [] for(var i = 0; i < result.length ;i++){ singlepost={} singlepost['post_id'] = result[i].post_id; singlepost['user_id'] = result[i].user_id ; singlepost['status_x'] = result[i].status_message; singlepost['comments'] = []; connection.query('SELECT * FROM comments WHERE post_id = ?', [post_id], function (error, results) { if (error) { res.json({ status:false, message:'there are some error with query' }) }else{ res.status(200).json({ //loop over comments create an comment object comment={} //same as we did above for singlepost add values and push to array for each element singlepost['comments'].push(comment) result , results }); } } } ``` for Reference : [How to push JSON object in to array using javascript](https://stackoverflow.com/questions/43361864/how-to-push-json-object-in-to-array-using-javascript) [Dynamically Add Variable Name Value Pairs to JSON 
Object](https://stackoverflow.com/questions/4071499/dynamically-add-variable-name-value-pairs-to-json-object) Upvotes: 1 <issue_comment>username_2: ``` if(result.length >0){ var allPosts = [] for(var i = 0; i < result.length ;i++){ var result1; singlepost={} singlepost['post_id'] = result[i].post_id; singlepost['user_id'] = result[i].user_id ; singlepost['post_message'] = result[i].post_messagecol; singlepost['comments']=[]; var post_id =result[i]. post_id; connection.query('SELECT * FROM comment WHERE post_id = ?', [post_id], function (error, results) { if (error) { res.json({ status:false, message:'there are some error with query' }) }else{ singlepost['comments'].push(results) console.log(results); } }); allPosts.push(singlepost); } res.json({ allPosts }); } } }); ``` but the output is ``` { "allPosts": [ { "post_id": 1, "user_id": 1, "post_message": "hi", "comments": [ ] } ] } ``` while `console.log(results);` does print the comments results. Upvotes: 0
2018/03/15
768
2,847
<issue_start>username_0: I'd like to create a component that I can instantiate multiple times pointing at (loading its data from) different vuex-namespaces. The component will gets most of its data from Vuex. So say I have a Person component, I could instantiate many copies of the Person component, based on different paths into vuex. The best way I've figured out how to do this is to pass a vuex path as a prop, but then I don't know how to use mapGetters and friends, because they require a namespace at the time the .vue file is instantiated. I'd appreciate insight into the best "Vue way" to structure this. Here's the closest approach I've figured out at the moment. Person.vue: ``` person {{name}} is {{age}} years old export default { props: ['vuexNamespaceToLoadFrom'], // FIXME: how do I do a mapGetters with a dynamic namespace that's // set through props??? // I can't do the following since props aren't in scope yet :-( ...mapGetters(this.vuexNamespaceToLoadFrom, [ 'name', 'age']) } ``` Instantiating a few Person multiple-use components that load their properties from different vuex-namespaces: ``` ```<issue_comment>username_1: How about this? **Person component:** ``` export default (vuexNamespace) => ({ ...mapGetters(vuexNamespace, ['name', 'age']) }) ``` **Parent component:** ``` export default { components: { Person: Person('someNamespace') } } ``` Didn't test it out, but I think it should work :) Upvotes: 0 <issue_comment>username_2: To expand the problem definition a little, this ```js export default { props: ['vuexNamespaceToLoadFrom'], ... computed: { ...mapGetters(this.vuexNamespaceToLoadFrom, [ 'name', 'age']) } } ``` is a declarative object used by Vue to create instances of components, so instance properties can't be used directly in the helpers like `mapGetters`. 
However, this discussion [Generating computed properties on the fly](https://forum.vuejs.org/t/generating-computed-properties-on-the-fly/14833/4) shows a way to defer the evaluation of the instance property. Essentially, the body of the computed get() will not be evaluated until the instance is fully mounted, so references to `this.$store` and `this[namespaceProp]` will work here. Adapting it to your scenario, **helper function**

```js
function mapWithRuntimeNamespace({namespaceProp} = {}, props = []) {
  return props.reduce((obj, prop) => {
    const computedProp = {
      get() {
        return this.$store.getters[this[namespaceProp] + '/' + prop]
      }
    }
    obj[prop] = computedProp
    return obj
  }, {})
}
```

**usage**

```js
export default {
  props: ['vuexNamespaceToLoadFrom'],
  ...
  computed: {
    ...mapWithRuntimeNamespace(
      { namespaceProp: 'vuexNamespaceToLoadFrom' },
      ['name', 'age']
    )
  }
}
```

Upvotes: 3 [selected_answer]
2018/03/15
1,808
6,935
<issue_start>username_0: I'm totally new to TensorFlow and Python, so please excuse me for posting such a basic question, but I'm a bit overwhelmed with learning both things at once. **EDIT: I found a solution myself and posted it below, however, more efficient solutions are welcome** **Short version of the question: How can I extract every weight and bias at any point from a neural network using TensorFlow and store it into a Python array with the shape [layers][neurons-previous-layer][neurons-current-layer]. The goal is NOT to store on the hdd but in variables with the same shape and type as the one explained below the last code snippet. I'd also like to know which is the most efficient way to do so.** The task I want to perform is to create a neural network with pre-trained weights and biases (not obtained from TensorFlow, but from a totally different source), refine the training with TensorFlow and then return the refined weights to the program. I've investigated how to create NNs in TensorFlow as well as worked out a way to initialize the weights of the network using previously created lists in Python, based on some TensorFlow tutorials and some unrelated questions from StackOverflow. So, my question is, given a trained network in TensorFlow, how can I extract every weight and bias to variables (my network has around 2.8 million weights and biases) **in the fastest possible way?** (keep in mind that this operation is going to be repeated over and over) To clarify the question, here's some code: First of all, the entire network creation and training process (except the network layout) is based on this post: [Autoencoder Example](https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/autoencoder.py).
The relevant parts of the code for this example are the following (I cut the output part because it is not necessary to explain the way I create the network):

```
num_hidden_1 = 256  # 1st layer num features
num_hidden_2 = 128  # 2nd layer num features (the latent dim)
num_input = 784     # MNIST data input (img shape: 28*28)

X = tf.placeholder("float", [None, num_input])

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([num_input, num_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([num_hidden_1, num_hidden_2])),
}
biases = {
    'encoder_b1': tf.Variable(tf.random_normal([num_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([num_hidden_2])),
}

# Building the encoder
def encoder(x):
    # Encoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']), biases['encoder_b1']))
    # Encoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']), biases['encoder_b2']))
    return layer_2
```

The code I created to generate my neural network is the following:

```
def create_network(layout, funciones, pesos, biases):
    capas = [tf.placeholder("float", [None, layout[0]])]
    for i in range(0, len(pesos)):
        # There's an element already, so capas[i] is the previous layer
        capas.append(tf.layers.dense(capas[i], layout[i+1], activation=funciones[i],
                                     kernel_initializer=tf.constant_initializer(pesos[i], dtype=tf.float32),
                                     bias_initializer=tf.constant_initializer(biases[i], dtype=tf.float32)))
    return capas
```

Code explanation: Pesos and biases are three-dimensional lists containing weights ("pesos") and biases in a format TensorFlow can accept. Capas is an array containing all the layers of the net. "funciones[]" is, at the moment, a testing global variable to adjust the activation function of the NN; it should be part of "layout", but I'm just testing for now. Layout is an array containing the size of every layer in the network.
Layout[0] contains the number of neurons on the input layer, layout[1] on the first hidden layer, and so on. Pesos and Biases format: the first level has as many elements as network layers-1 (the input doesn't need weights or biases). On the second level it has as many elements as neurons present in the PREVIOUS layer, and the third level has as many elements as neurons on this list. The 2nd and 3rd levels have the same shape as the one generated in the examples by `tf.random_normal([num_input, num_hidden_1])` **My question is: Assuming I use the same structure to execute my NN as the one used in the example provided in the URL above, how can I store the trained weights and biases back in two three-dimensional lists with the same exact structure as the ones provided in my code, and which is the fastest way to do so** SIDE QUESTION: Is there a more efficient way to build/execute my NN than the one mentioned above? Good references for this one are accepted as answers too.<issue_comment>username_1: How about using `tf.trainable_variables()`? This returns a list of all the trainable parameters and since it's a tensorflow model, I would assume it's optimized. You can access specific weights from this list by tensor name: `variable = [weight for weight in tf.trainable_variables() if weight.name == name_my_var]` Upvotes: 2
I had to manually name the layers, otherwise it placed weights and biases on the previous layer (I checked the graph and it was connected correctly at the end, but the script couldn't extract them properly)

```
import os
import tensorflow as tf

def extraer_pesos(red, session):
    pesos = []
    biases = []
    for i in range(1, len(red)):
        pesos.append(session.run(tf.get_default_graph().get_tensor_by_name(
            os.path.split(red[i].name)[0] + '/kernel:0')).tolist())
        biases.append(session.run(tf.get_default_graph().get_tensor_by_name(
            os.path.split(red[i].name)[0] + '/bias:0')).tolist())
    return pesos, biases


def create_network(layout, funciones, pesos, biases):
    capas = [tf.placeholder("float", [None, layout[0]])]
    for i in range(0, len(pesos)):
        # There's an element already, so capas[i] is the previous layer
        capas.append(tf.layers.dense(capas[i], layout[i+1], activation=funciones[i],
                                     kernel_initializer=tf.constant_initializer(pesos[i], dtype=tf.float32),
                                     bias_initializer=tf.constant_initializer(biases[i], dtype=tf.float32),
                                     name="layer"+str(i)))
    return capas
```

Keep in mind that the variables have to be initialized. If you're extracting them after training the network there shouldn't be any problem. Upvotes: 1 [selected_answer]
2018/03/15
1,428
5,947
<issue_start>username_0: I've set up a collection view with self sizing cells using auto layout. However, when I rotate the device the collection view still uses the portrait width and gets cut off. I am using flow layout and not the sizeForItemAt method. Here is my main view controller: ``` import UIKit class HomeController: UIViewController, UICollectionViewDataSource, UICollectionViewDelegate { override func viewDidLoad() { super.viewDidLoad() } override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) let layout = UICollectionViewFlowLayout() layout.estimatedItemSize = UICollectionViewFlowLayoutAutomaticSize layout.minimumLineSpacing = 0 layout.scrollDirection = .vertical let homeCollectionView = UICollectionView(frame: self.view.frame, collectionViewLayout: layout) homeCollectionView.dataSource = self homeCollectionView.delegate = self homeCollectionView.backgroundColor = .orange homeCollectionView.register(CellTypeOne.self, forCellWithReuseIdentifier: "CellType1") self.view.addSubview(homeCollectionView) homeCollectionView.translatesAutoresizingMaskIntoConstraints = false homeCollectionView.leadingAnchor .constraint(equalTo: self.view.leadingAnchor).isActive = true homeCollectionView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor).isActive = true homeCollectionView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor).isActive = true homeCollectionView.topAnchor.constraint(equalTo: self.view.topAnchor).isActive = true } func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { return contents.count } func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell { // code for specific type of cells } let contents: [ContentType] = [] ``` Also, here is one of my cells: ``` import UIKit class CellType1: UICollectionViewCell { override init(frame: CGRect) { super.init(frame: frame) setupViews() } required init?(coder aDecoder: 
NSCoder) { fatalError("init(coder:) has not been implemented") } var content: Content? { didSet { sectionTextView.text = content?.sectionView } } let sectionTextView: UITextView = { let textView = UITextView() textView.contentInset = UIEdgeInsets(top: 5, left: 5, bottom: 5, right: 5) // textView.backgroundColor = .orange textView.translatesAutoresizingMaskIntoConstraints = false textView.textContainerInset = UIEdgeInsetsMake(10, 0, 20, 0) textView.text = "This is a Section text" textView.font = UIFont.systemFont(ofSize: 20, weight: .heavy) textView.textColor = UIColor(red: 76/255, green: 76/255, blue: 77/255, alpha: 1.0) textView.isEditable = false textView.isScrollEnabled = false return textView }() func setupViews() { addSubview(sectionTextView) let widthAnchorConstraint = sectionTextView.widthAnchor.constraint(equalToConstant: UIScreen.main.bounds.size.width - 16) widthAnchorConstraint.identifier = "sectionTextView Width Anchor Constraint" widthAnchorConstraint.priority = UILayoutPriority(rawValue: 1000) widthAnchorConstraint.isActive = true let topAnchorConstraint = sectionTextView.topAnchor.constraint(equalTo: topAnchor) topAnchorConstraint.identifier = "sectionTextView Top Anchor Constraint" topAnchorConstraint.isActive = true let leftAnchorConstraint = sectionTextView.leftAnchor.constraint(equalTo: leftAnchor) leftAnchorConstraint.identifier = "sectionTextView Left Anchor Constraint" leftAnchorConstraint.isActive = true let rightAnchorConstraint = sectionTextView.rightAnchor.constraint(equalTo: rightAnchor) rightAnchorConstraint.identifier = "sectionTextView Right Anchor Constraint" rightAnchorConstraint.priority = UILayoutPriority(rawValue: 999) rightAnchorConstraint.isActive = true let bottomAnchorConstraint = sectionTextView.bottomAnchor.constraint(equalTo: bottomAnchor) bottomAnchorConstraint.identifier = "sectionTextView Bottom Anchor Constraint" bottomAnchorConstraint.isActive = true } } ``` I already had tried using Auto Layout as Sh\_Khan's 
suggested. I updated the code above. However, when I rotate, the collection view gets updated but the cells still stay the size of a portrait-width device. I added a screenshot. [![Landscape screenshot](https://i.stack.imgur.com/mcwjd.png)](https://i.stack.imgur.com/mcwjd.png) I removed the content with red marking. Could anyone point me in the right direction, seeing as how invalidateLayout() doesn't work on flow layout. Thanks in advance<issue_comment>username_1: The problem is that giving the collection a frame when creating it makes it keep that frame even after rotation, so you can try to use Auto Layout instead:

```
self.view.addSubview(homeCollectionView)
homeCollectionView.translatesAutoresizingMaskIntoConstraints = false
homeCollectionView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor).isActive = true
homeCollectionView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor).isActive = true
homeCollectionView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor).isActive = true
homeCollectionView.topAnchor.constraint(equalTo: self.view.topAnchor).isActive = true
```

Upvotes: 2 <issue_comment>username_2: You can achieve this without using Auto Layout as well. The view controller tells you when its view gets a new frame (when rotated, e.g.) through `viewWillLayoutSubviews` or `viewDidLayoutSubviews`, which is a good place to set all your frame/position related properties.

```
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    homeCollectionView.frame = view.frame
    // Set frames of other views that may have changed
}
```

Upvotes: 1
2018/03/15
885
2,708
<issue_start>username_0: I'm using the following code (from stack overflow, thank you to the community!) to plot the proportions of different groups over time. ``` library(tidyverse) df %>% mutate(date = as.POSIXct(date)) %>% #convert date to date group_by(group, date) %>% #group summarise(prop = sum(outcome=="1")/n()) %>% #calculate proportion ggplot()+ geom_line(aes(x = date, y = prop, color = group))+ theme_classic()+ geom_point(aes(x = date, y = prop, color = group)) ``` The sample data frame used is this: ``` date <- c("2000-05-01", "2000-05-01", "2000-05-01", "2000-05-02", "2000-05-02", "2000-05-02", "2000-05-02", "2000-05-03", "2000-05-03", "2000-05-03", "2000-05-04", "2000-05-04") outcome <- c("1", "0", "0", "0","1","1","0", "1","1","0", "1","0") group <- c("1", "2", "3", "2", "1", "1", "2", "3", "2", "1", "1", "3") df <- as.data.frame(cbind(date, outcome, group)) ``` Printing the line chart, I want to change the automatically assigned colors for each group but can't figure out how. Can someone help me with this? Many thanks in advance! EDIT: I also want to include manual labels for the Groups and change the legend title.<issue_comment>username_1: Use the `scale_color_*()` functions from ggplot2. If you want to specify specific colors, use `scale_color_manual()`. You can provide named vectors to the `values =` parameter to set specific color values for that color group and to the `labels =` parameter to set custom text in the legend: ``` ggplot(...) + scale_color_manual(values = c('1' = 'yellow', '2' = 'orange', '3' = 'black'), labels = c('1' = 'fish', '2' = 'cats', '3' = 'dogs')) ``` The `scale_color_brewer()` function lets you choose from set palettes designed and tested on discrete data. If you don't need specific colors, these palettes often make your plots clearer than they would be if you picked colors arbitrarily. To modify the legend, use the `guides()` function: ``` ggplot(...) 
+ guides(color = guide_legend(title = 'Animal Kinds:'), shape = guide_legend(title = 'Food Type:')) ``` This lets you control all your legends in one place. You can specify titles and pretty much every visual parameter for how the legend will appear. See `?guide_legend` for a list of all the parameters you can change in `guide_legend()` Upvotes: 3 [selected_answer]<issue_comment>username_2: Referring to your comment about changing the title: the easiest way to modify the title is to use ``` ggplot(...) + scale_color_manual(values = c('1' = 'green', '2' = 'yellow', '3' = 'red'), labels = c('1' = 'A', '2' = 'B', '3' = 'C'), name = "Legend Title") ``` Upvotes: 0
2018/03/15
590
2,250
<issue_start>username_0: I have an Intent that opens a web page in FireFox on a Samsung tablet but it keeps opening a new tab each time. I've been using putExtra() with an EXTRA_APPLICATION_ID and it previously did the trick and opened the page reusing the same browser tab, but now it's broken. I'm not sure if it makes a difference, but the code below is run with runOnUiThread() as there is other processing that touches views. Any ideas would be appreciated.

```
String pageName = "http://google.com";
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(pageName));
intent.putExtra(Browser.EXTRA_APPLICATION_ID, getApplication().getPackageName());
startActivity(intent);
```
2018/03/15
1,366
3,218
<issue_start>username_0: I need to process a csv file but one of the fields contains line breaks. How can I replace all line breaks that are not after the double quote character (") with space? Any solution with awk, perl, sed etc is acceptable. The file that is in the form: ``` 497,50,2008-08-02T16:56:53Z,469,4,"foo bar foo bar" 518,153,2008-08-02T17:42:28Z,469,2,"foo bar bar" ``` The desired output is: ``` 497,50,2008-08-02T16:56:53Z,469,4,"foo bar foo bar" 518,153,2008-08-02T17:42:28Z,469,2,"foo bar bar" ```<issue_comment>username_1: I understood your question to be a request to replace intra-field newlines (even if they occur immediately after a `"`, such as in a field containing `␊foo` or `foo "bar"␊baz`). The following achieves that: ``` use Text::CSV_XS qw( ); my $qfn_in = ...; my $qfn_out = ...; open(my $fh_in, '<', $qfn_in) or die("Can't open \"$qfn_in\": $!\n"); open(my $fh_out, '>', $qfn_out) or die("Can't create \"$qfn_out\": $!\n"); my $csv = Text::CSV_XS->new({ binary => 1, auto_diag => 2 }); while ( my $row = $csv->getline($fh_in) ) { s/\n/ /g for @$row; $csv->say($fh_out, $row); } ``` I think it would make more sense to use the following: ``` for (@$row) { s/^\s+//; # Remove leading whitespace. s/\s+\z//; # Remove trailing whitespace. s/\s+/ /g; # Replaces whitespace with a single space. } ``` Upvotes: 2 <issue_comment>username_2: You can try this sed but the question is'nt clear enough to know what to do with a line like ``` 497,50,2008-08-02T16:56:53Z,469,4,"truc biz",test sed ':A;/[^"]$/{N;bA};y/\n/ /' infile ``` Upvotes: 0 <issue_comment>username_3: Its fairly easy to match the fields in csv. The framework is the stuff between *quoted/non-quoted fields* and is either delimiter or end of record tokens. So the framework is matched as well to validate the fields. After doing that, it's just a matter of replacing linebreaks in quoted fields. That can be done in a call back. 
The regex `((?:^|,|\r?\n)[^\S\r\n]*)(?:("[^"\\]*(?:\\[\S\s][^"\\]*)*"[^\S\r\n]*(?=$|,|\r?\n))|([^,\r\n]*(?=$|,|\r?\n)))` Here it is in Perl, all in one package.

```
use strict;
use warnings;

$/ = undef;

sub RmvNLs {
    my ($delim, $quote, $non_quote) = @_;
    if ( defined $non_quote ) {
        return $delim . $non_quote;
    }
    $quote =~ s/\s*\r?\n/ /g;
    return $delim . $quote;
}

my $csv = <DATA>;
$csv =~ s/
     (                              # (1 start), Delimiter (BOS, comma or newline)
          (?: ^ | , | \r? \n )
          [^\S\r\n]*                # Leading optional horizontal whitespaces
     )                              # (1 end)
     (?:
          (                         # (2 start), Quoted string field
               "                    # Quoted string
               [^"\\]*
               (?: \\ [\S\s] [^"\\]* )*
               "
               [^\S\r\n]*           # Trailing optional horizontal whitespaces
               (?= $ | , | \r? \n ) # Delimiter ahead (EOS, comma or newline)
          )                         # (2 end)
       |                            # OR
          (                         # (3 start), Non quoted field
               [^,\r\n]*            # Not comma or newline
               (?= $ | , | \r? \n ) # Delimiter ahead (EOS, comma or newline)
          )                         # (3 end)
     )
 /RmvNLs($1,$2,$3)/xeg;

print $csv;

__DATA__
497,50,2008-08-02T16:56:53Z,469,4,"foo bar
foo bar"
518,153,2008-08-02T17:42:28Z,469,2,"foo bar
bar"
```

Output

```
497,50,2008-08-02T16:56:53Z,469,4,"foo bar foo bar"
518,153,2008-08-02T17:42:28Z,469,2,"foo bar bar"
```

Upvotes: 0
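The same parse-each-field, collapse-newlines idea from the `Text::CSV_XS` answer can also be sketched with Python's standard `csv` module (the function name and the whitespace-collapsing rule here are illustrative choices, not from the answers above; note that minimal quoting drops the now-unneeded double quotes on output):

```python
import csv
import io

def collapse_field_newlines(csv_text):
    """Parse CSV text, collapse whitespace runs (including newlines) inside
    each field to a single space, and re-emit the rows."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    # csv.reader handles quoted fields that span multiple lines
    for row in csv.reader(io.StringIO(csv_text)):
        writer.writerow([" ".join(field.split()) for field in row])
    return out.getvalue()

raw = ('497,50,2008-08-02T16:56:53Z,469,4,"foo bar\nfoo bar"\n'
       '518,153,2008-08-02T17:42:28Z,469,2,"foo bar\nbar"\n')
print(collapse_field_newlines(raw))
```

Letting the CSV parser find the field boundaries first avoids the main pitfall of a plain text substitution: record-separating newlines are never touched.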
2018/03/15
497
1,559
<issue_start>username_0: Let's say I have a string of names. It holds ["<NAME>","Patrick", "Star", "Sand<NAME>"] How would I access each letter of the array using a for loop? So let's say I wanted to access the "J" in <NAME> all the way to the "s" in Sandy Cheeks? **I can only use the library iostream and string**<issue_comment>username_1: Since C++11 you can achieve this easily by using a range-based for loop like so:

```
std::string arr[] = {"<NAME>","Patrick", "Star", "Sandy Cheeks"};
for(const std::string& str : arr) {
    for(const char& chr : str) {
        //use chr
    }
}
```

Upvotes: 2 <issue_comment>username_2: You can use a more explicit method:

```
static const std::string people[] = {"<NAME>", "Patrick", "Star", "<NAME>"};
size_t name_quantity = sizeof(people) / sizeof(people[0]);
for (size_t name_index = 0; name_index < name_quantity; ++name_index) {
    const std::string& name(people[name_index]);
    const size_t name_length = name.length();
    for (size_t letter_index = 0; letter_index < name_length; ++letter_index) {
        char letter = name[letter_index];
        Process_Letter(letter);
    }
}
```

Upvotes: 0 <issue_comment>username_3: You don't have a string of names, you have an array of strings. In C++11 you can use a range-based for loop:

```
int main(int argc, char **argv) {
    std::cout << "-HW-" << std::endl;
    std::string names[] = { "<NAME>", "Patrick", "Star", "<NAME>" };
    for (const auto& x : names) {
        std::cout << x << std::endl;
    }
    return 0;
}
```

Upvotes: 0
2018/03/15
819
2,674
<issue_start>username_0: I am trying to store a db result into an object. Can someone suggest how to store it?

```
using (SQLiteConnection connection = new SQLiteConnection(pluginManager.ConnectionString))
{
    string sql = "SELECT * from contacts ;";
    connection.Open();
    SQLiteCommand command = new SQLiteCommand(sql, connection);
    SQLiteDataReader dr;
    dr = command.ExecuteReader();
    var dt = new DataTable();
    dt.Load(dr);
    int rows = dt.Rows.Count;
    var contactData = new object[rows, 42];
    dr.Close();
    connection.Close();
}
```

I want to dump the dr/dt result into contactData without using any loops. I have done it through a collection as below as well, but it is taking time.

```
List contacts = new List();
foreach (var contact in contacts)
{
    contactData[counter, 0] = contact.first_name;
    contactData[counter, 1] = contact.last_name;
    contactData[counter, 2] = contact.email;
    contactData[counter, 3] = contact.alternate_email;
    contactData[counter, 4] = contact.title;
    contactData[counter, 5] = contact.department;
    contactData[counter, 6] = contact.contact_number;
    contactData[counter, 7] = contact.fax_number;
    contactData[counter, 8] = contact.website;
    contactData[counter, 9] = contact.company_name;
    contactData[counter, 10] = contact.revenue;
    contactData[counter, 11] = contact.team_size;
    contactData[counter, 12] = contact.bed_count;
}
```

So I don't want to use a loop at all.
2018/03/15
1,369
2,945
<issue_start>username_0: I have a 3x1 point vector representing the start point of some line, and a 3x1 point vector representing the end of some line. I would like to sample an arbitrary amount of points along the line connected by these two points. np.linspace does exactly what I need but in 1 dimension. Is there a similar functionality that can be extended to 3 dimensions? Thanks<issue_comment>username_1: My interpolation suggestion: ``` In [664]: p1=np.array([0,1,2]) In [665]: p2=np.array([10,9,8]) In [666]: l1 = np.linspace(0,1,11) In [667]: l1 Out[667]: array([0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. ]) In [668]: p1+(p2-p1)*l1[:,None] Out[668]: array([[ 0. , 1. , 2. ], [ 1. , 1.8, 2.6], [ 2. , 2.6, 3.2], [ 3. , 3.4, 3.8], [ 4. , 4.2, 4.4], [ 5. , 5. , 5. ], [ 6. , 5.8, 5.6], [ 7. , 6.6, 6.2], [ 8. , 7.4, 6.8], [ 9. , 8.2, 7.4], [10. , 9. , 8. ]]) ``` Equivalent with 3 linspace calls ``` In [671]: np.stack([np.linspace(i,j,11) for i,j in zip(p1,p2)],axis=1) Out[671]: array([[ 0. , 1. , 2. ], [ 1. , 1.8, 2.6], [ 2. , 2.6, 3.2], [ 3. , 3.4, 3.8], [ 4. , 4.2, 4.4], [ 5. , 5. , 5. ], [ 6. , 5.8, 5.6], [ 7. , 6.6, 6.2], [ 8. , 7.4, 6.8], [ 9. , 8.2, 7.4], [10. , 9. , 8. ]]) ``` A variation on this is: ``` np.c_[tuple(slice(i,j,11j) for i,j in zip(p1,p2))] ``` Really the same calculation, just different syntax. --- `outer` can be used instead: ``` p1+np.outer(l1,(p2-p1)) ``` But even that uses broadcasting. `p1` is (3,) and the `outer` is (11,3), the result is (11,3). --- @Brad's approach handles end points differently ``` In [686]: np.append(p1[:, None], np.repeat((p2 - p1) / 10, [10, 10, 10]).reshape ...: (3, -1).cumsum(axis=1), axis=1) Out[686]: array([[ 0. , 1. , 2. , 3. , 4. , 5. , 6. , 7. , 8. , 9. , 10. ], [ 1. , 0.8, 1.6, 2.4, 3.2, 4. , 4.8, 5.6, 6.4, 7.2, 8. ], [ 2. , 0.6, 1.2, 1.8, 2.4, 3. , 3.6, 4.2, 4.8, 5.4, 6. 
]]) In [687]: _.shape Out[687]: (3, 11) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Not sure if np.linspace has changed in the 4 years since this question was asked, but you can pass array-like values as start and stop, and the results are the same as username_1's answer. Example (using random points): ``` import numpy as np startpts = np.array([0, 0, 0]) endpts = np.array([12, 3, 8]) out = np.linspace(start=startpts, stop=endpts, num=10) ``` returns the same thing as: ``` out = startpts+(endpts-startpts)*np.linspace(0,1,10)[:,np.newaxis] ``` And it can also be expanded to take in multiple pairs of points: ``` startpts = np.array([[0, 0, 0],[1, 2, 0],[2,3,4]]) endpts = np.array([[12,3, 8],[13,5, 8],[14,4,5]]) out = np.linspace(start=startpts, stop=endpts, num=10, axis=1) ``` Upvotes: 1
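As a footnote, the broadcasting expression `p1+(p2-p1)*l1[:,None]` is just per-coordinate linear interpolation; a dependency-free sketch of the same computation (pure Python, the function name is an illustrative choice) makes that explicit:

```python
def sample_segment(p1, p2, num):
    """Return `num` points evenly spaced from p1 to p2, endpoints included."""
    points = []
    for i in range(num):
        t = i / (num - 1)          # fraction along the segment, 0.0 .. 1.0
        points.append([a + (b - a) * t for a, b in zip(p1, p2)])
    return points

pts = sample_segment([0, 1, 2], [10, 9, 8], 11)
print(pts[0], pts[5], pts[-1])   # [0.0, 1.0, 2.0] [5.0, 5.0, 5.0] [10.0, 9.0, 8.0]
```

The numpy versions above compute exactly this, only vectorized over all coordinates and sample fractions at once.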
2018/03/15
653
1,851
<issue_start>username_0: I have an array with coins, e.g. (`coins=[10, 50, 100]`) and an array of the quantities of each coin (e.g. `quantity=[1, 2, 1]`). I want to generate a new list `l`, which shows all coins I have (e.g. `l=[10, 50, 50, 100]`). So far I have this, which does the trick: ``` coins=[i for s in [b*[a] for a,b in zip(coins,quantity)] for i in s] ``` Is there a better way to solve this in python than to create a list of lists and then flatten it?<issue_comment>username_1: What I can figure out is:

```
for i in range(len(quantity)):
    for j in range(quantity[i]):
        print(coins[i])
```

Upvotes: 0 <issue_comment>username_2: You can leverage itertools chain and repeat:

```
from itertools import repeat, chain

coins = [10, 50, 100]
quantity = [1, 2, 1]

result = list(chain.from_iterable(map(repeat, coins, quantity)))  # thx @ miradulo
print(result)
```

Output:

```
[10, 50, 50, 100]
```

Advantage: it does not build internal lists that then get flattened but generates the values as iterables. Upvotes: 3 [selected_answer]<issue_comment>username_3: Assuming the final order is not important, you could use a `collections.Counter()`: ``` >>> from collections import Counter >>> c = Counter(dict(zip(coins, quantity))) >>> list(c.elements()) [10, 100, 50, 50] >>> c Counter({50: 2, 10: 1, 100: 1}) ``` Upvotes: 2 <issue_comment>username_4: You could use `sum` to add the lists together: ``` L = sum(([c] * q for c, q in zip(coins, quantity)), []) ``` `numpy` also has a function that does exactly what you need: ``` numpy.repeat(coins, quantity) ``` EDIT: as @Chris_Rands points out `sum` is not optimal for long lists of lists. It seems to be the fastest solution for `quantity=[100, 200, 100]` but for `quantity=[1000, 2000, 1000]` `numpy` is much faster. Both are faster than the OP solution. Upvotes: 1
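For completeness, the standard-library approaches above all produce the same multiset, which is easy to check side by side (the variable names here are just for the comparison; `Counter.elements()` does not guarantee the original interleaving, hence the sort):

```python
from collections import Counter
from itertools import chain, repeat

coins = [10, 50, 100]
quantity = [1, 2, 1]

via_chain = list(chain.from_iterable(map(repeat, coins, quantity)))
via_sum = sum(([c] * q for c, q in zip(coins, quantity)), [])
via_counter = sorted(Counter(dict(zip(coins, quantity))).elements())

print(via_chain)   # [10, 50, 50, 100]
```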
2018/03/15
652
2,010
<issue_start>username_0: I have some simple PowerShell code to insert a value into a table: ``` Invoke-SqlCmd -ServerInstance myserver -Query 'insert into Database.dbo.tbl values (1)' ``` and if I save it as a file, I can call it in a CmdExec job step. However, can anyone tell me why I can't run the command, especially as it is only one line: ``` powershell.exe -ExecutionPolicy Bypass -Command 'Invoke-Sqlcmd -ServerInstance myserver -Query ''insert into DBADatabase.dbo.tbl values (1)''' ``` Can anyone tell me why running as a command will not work, yet the same code as a file can work? Any help appreciated. Thanks
2018/03/15
1,793
5,570
<issue_start>username_0: I have some functions like push, pop, delete etc. for a singly linked list and implemented the following function to get user input: ``` void user_input(){ char input[10]; while(fgets(input, 9, stdin)){ if(strncmp(input, "add", 3) == 0){ int x; printf("Number to add: "); scanf("%d", &x); push(x); printf("%d added.\n", x); } else if(strncmp(input, "del", 3) == 0){ int x; printf("Number to delete: "); scanf("%d", &x); delete(x); printf("%d deleted.\n", x); } else if(strncmp(input, "q", 1) == 0){ printf("Program terminated.\n"); break; } // and some other if else statements... } ``` So I can input a string like "add", then strncmp will compare it and I get another prompt asking me to enter the number I want to add and stores in x using scanf. Something like this: ``` add Number to add: 5 5 added. ``` However I am looking for a way to be able to enter something like this: ``` add 5 del 2 etc. ``` Basically a string and int value in one line separated by space, instead of writing "add" first, pressing enter and writing the number. I tried using sscanf but had no luck yet.<issue_comment>username_1: You could use `scanf()` like ``` char str[10]; int c; if( scanf("%9s %d", str, &c)!=2 ) { perror("Something went wrong."); } ``` The width for `%s` is one less than the size of `str`. The extra character is for the `\0`. `scanf()` returns the number of successful assignments that it made which in this case should be `2`. Now if you enter an input like ``` del 2 ``` `str` will have `"del"` and `c` will have `2`. Upvotes: 2 <issue_comment>username_2: It is much easier to step past input errors if you use `fgets` and then `sscanf`. You simply read another input string instead of messing around unblocking the input when invalid data was entered. 
``` #include <stdio.h> #include <string.h> void push(int x) { } void delete(int x) { } void triple(int x, int y, int z) { } int main(void) { char input[100]; char oper[20]; int x, y, z; // romantic expansion for more than one argument int res; int err; while(1) { err = 0; if(fgets(input, sizeof input, stdin) != NULL) { res = sscanf(input, "%19s%d%d%d", oper, &x, &y, &z); if(res == 2 && strcmp(oper, "add") == 0){ push(x); printf("%d added.\n", x); } else if(res == 2 && strcmp(oper, "del") == 0) { delete(x); printf("%d deleted.\n", x); } else if(res == 4 && strcmp(oper, "trip") == 0) { // fantasy triple(x, y, z); printf("%d %d %d tripled.\n", x, y, z); } else if(res == 1 && strcmp(oper, "q") == 0){ printf("Program terminated.\n"); break; } // some other if else statements... else { err = 1; } } else { err = 1; } if(err) { printf("Bad operation.\n"); } } return 0; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: A good strategy for what you are trying to do would be: 1. Read the input line by line. 2. Process each line. 3. Continue until there is no input or the user chose to quit. --- Here's the core outline. ``` // Make LINE_SIZE big enough for your needs #define LINE_SIZE 100 void processLine(char line[]); int main() { char line[LINE_SIZE]; while ( fgets(line, LINE_SIZE, stdin) != NULL ) { processLine(line); } } void processLine(char line[]) { } ``` In `processLine`, your first job is to pull the command. Do the needful based on the command. ``` void processLine(char line[]) { char command[20]; int num = 0; // Read the command and gather the number of characters read // This allows you to read more data from (line + num) int n = sscanf(line, "%19s%n", command, &num); if ( n != 1 ) { // Problem (%n does not count toward the sscanf return value) exit(0); } // The command is to quit, exit. if ( isQuitCommand(command) ) { exit(0); } char* commandData = line + num; if ( isAddCommand(command) ) { processAdd(commandData); } else if ( isDeleteCommand(command) ) { processDelete(commandData); } else { ...
} } ``` --- Here's a version of the program with stubs used for a couple of functions. ``` #include <stdio.h> #include <stdlib.h> #include <string.h> // Make LINE_SIZE big enough for your needs #define LINE_SIZE 100 void processLine(char line[]); int isQuitCommand(char command[]); int isAddCommand(char command[]); int isDeleteCommand(char command[]); void processAdd(char commandData[]); void processDelete(char commandData[]); int main() { char line[LINE_SIZE]; while ( fgets(line, LINE_SIZE, stdin) != NULL ) { processLine(line); } } void processLine(char line[]) { char command[20]; int num = 0; // Read the command and gather the number of characters read // This allows you to read more data from (line + num) int n = sscanf(line, "%19s%n", command, &num); if ( n != 1 ) { // Problem (%n does not count toward the sscanf return value) exit(0); } // The command is to quit, exit. if ( isQuitCommand(command) ) { exit(0); } char* commandData = line + num; if ( isAddCommand(command) ) { processAdd(commandData); } else if ( isDeleteCommand(command) ) { processDelete(commandData); } else { // ... } } int isQuitCommand(char command[]) { return (command[0] == 'q'); } int isAddCommand(char command[]) { return (strcmp(command, "add") == 0); } int isDeleteCommand(char command[]) { return (strcmp(command, "del") == 0); } void processAdd(char commandData[]) { // Add code to process commandData } void processDelete(char commandData[]) { // Add code to process commandData } ``` Upvotes: 1
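The same split-then-dispatch structure is easy to prototype in a few lines; here is a minimal Python sketch (function name and return strings are hypothetical, not from the thread) of parsing a line like `add 5` into a command word plus argument:

```python
def process_line(line):
    """Split a command line like 'add 5' into a command word and arguments."""
    parts = line.split()
    if not parts:
        return "bad operation"
    cmd, args = parts[0], parts[1:]
    if cmd == "add" and len(args) == 1:
        return f"{int(args[0])} added"
    if cmd == "del" and len(args) == 1:
        return f"{int(args[0])} deleted"
    if cmd == "q":
        return "terminated"
    return "bad operation"

print(process_line("add 5"))  # 5 added
```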
2018/03/15
698
3,072
<issue_start>username_0: In Windows UWP, what is the difference between the `ItemInvoked` and `SelectionChanged` events for a `NavigationView`? The [API reference](https://learn.microsoft.com/en-gb/uwp/api/windows.ui.xaml.controls.navigationview) states > > **ItemInvoked** > > Occurs when an item in the menu receives an interaction such as a click or tap. > > > **SelectionChanged** > > Occurs when the currently selected item changes. > > > It seems to me that `SelectionChanged` can detect when navigation occurs even by some method other than clicking a `NavigationView.MenuItem`, so would be the better, more encompassing option to use? Or are there different use cases for each?<issue_comment>username_1: The main difference would be that the `SelectionChanged` event is executed only once, but if you click the selected item repeatedly, it is not fired. `ItemInvoked` on the other hand will execute each time an item is clicked even if it is selected already. Also - `SelectionChanged` event will execute when you manually set `SelectedItem` in code. Upvotes: 3 [selected_answer]<issue_comment>username_2: Another thing that might be useful to note is that `ItemInvoked` fires before `SelectionChanged` when you click on a `NavigationView` item Upvotes: 2 <issue_comment>username_3: I was recently struggling with this question and having a problem when using the SelectionChanged Event. Successive clicks on the same Menu Item yielded no result. Then I found this blog post (<https://blogs.msdn.microsoft.com/appconsult/2018/05/06/using-the-navigationview-in-your-uwp-applications/>) which suggested using the `ItemInvoked` Event for the same reason given in answer #1. This did not work for me as posted in the Blog but after changing the code to use the `InvokedItemContainer` Tag property the solution worked just fine. I have included the test code I used to verify the solution. 
``` private void NvTopLevelNav_ItemInvoked(NavigationView sender, NavigationViewItemInvokedEventArgs args) { if (args.IsSettingsInvoked) { contentFrame.Navigate(typeof(SettingsPage)); } else { string navTo = args.InvokedItemContainer.Tag.ToString(); if ( navTo != null) { switch (navTo) { case "Nav_Home": contentFrame.Navigate(typeof(HomePage)); break; case "Nav_Shop": contentFrame.Navigate(typeof(ShopPage)); break; case "Nav_ShopCart": contentFrame.Navigate(typeof(CartPage)); break; case "Nav_Message": contentFrame.Navigate(typeof(MessagePage)); break; case "Nav_Print": contentFrame.Navigate(typeof(PrintPage)); break; } } } } ``` Upvotes: 1
2018/03/15
788
2,766
<issue_start>username_0: I'm trying to change "+" on "-" by changing the class name and using "ng-click" event, but it changes all "+" elements on "-" in all divs. [![enter image description here](https://i.stack.imgur.com/ovVpN.png)](https://i.stack.imgur.com/ovVpN.png) [![enter image description here](https://i.stack.imgur.com/qU3eZ.png)](https://i.stack.imgur.com/qU3eZ.png) Any Idea how to change it only in that div that I clicked? Here is my code ``` #### {{band.artist}} {{band.track}} {{band.collection}} {{band.genre}} ### {{band.artist}} - {{band.track}} **Collection:** {{band.collection}} **Track Count:** {{band.trackAmount}} **Price:** {{band.collPrice}} USD **Duration:** {{band.trackDuration | date:'mm:ss'}} min **Track Price:**{{band.collPrice}} USD ``` JS ``` //class name change function $scope.class = "glyphicon glyphicon-plus" //name of class; $scope.changeClass = function(){ if ($scope.class === "glyphicon glyphicon-plus") //changing one class name on another $scope.class = "glyphicon glyphicon-minus"; else $scope.class = "glyphicon glyphicon-plus"; ```<issue_comment>username_1: The issue here is that all of your rows are binding to the same variable, which is why the icon changes for every row. Instead, you'll need to track a separate variable for each row. It looks like you already have a good candidate, `band.index`, to track which icon was chosen. JS ``` var classes = []; $scope.getClass = function (index) { // Return the current class or a reasonable default return classes[index] || 'glyphicon glyphicon-plus'; }; $scope.changeClass = function (index) { if (classes[index] === "glyphicon glyphicon-plus") //changing one class name on another classes[index] = "glyphicon glyphicon-minus"; else classes[index] = "glyphicon glyphicon-plus"; }; ``` HTML ``` ![]() {{band.artist}} {{band.track}} {{band.collection}} {{band.genre}} ``` Upvotes: 1 <issue_comment>username_2: Another way of doing the same ... 
Add a class property to your objects ``` var plusClass = "glyphicon glyphicon-plus"; var minusClass = "glyphicon glyphicon-minus"; $scope.bands = [ {... , "genre":"aaa", "class": plusClass}, {.... , "genre":"bbb", "class": plusClass} ]; ``` Assign the band.class property to the html class attribute. Pass the index as a parameter of the function ``` ![]() {{band.artist}} {{band.track}} {{band.collection}} {{band.genre}} ``` Set it on the method ``` $scope.changeClass = function(index) { var actualClass = $scope.bands[index].class; $scope.bands[index].class = (actualClass === plusClass) ? minusClass : plusClass; } ``` Upvotes: 1 [selected_answer]
2018/03/15
1,206
3,544
<issue_start>username_0: * Why receive expression is sometimes called selective receive? * What is the "save queue"? * How the after section works?<issue_comment>username_1: There is a special "save queue" involved in the procedure that when you first encounter the receive expression you may ignore its presence. Optionally, there may be an after-section in the expression that complicates the procedure a little. The receive expression is best explained with a flowchart: ``` receive pattern1 -> expressions1; pattern2 -> expressions2; pattern3 -> expressions3 after Time -> expressionsTimeout end ``` [![Erlang receive](https://i.stack.imgur.com/IMasJ.jpg)](https://i.stack.imgur.com/IMasJ.jpg) Upvotes: 2 <issue_comment>username_2: > > Why receive expression is sometimes called selective receive? > > > ``` -module(my). %-export([test/0, myand/2]). -compile(export_all). -include_lib("eunit/include/eunit.hrl"). start() -> spawn(my, go, []). go() -> receive {xyz, X} -> io:format("I received X=~w~n", [X]) end. ``` In the erlang shell: ``` 1> c(my). my.erl:3: Warning: export_all flag enabled - all functions will be exported {ok,my} 2> Pid = my:start(). <0.79.0> 3> Pid ! {hello, world}. {hello,world} 4> Pid ! {xyz, 10}. I received X=10 {xyz,10} ``` Note how there was no output for the first message that was sent, but there was output for the second message that was sent. The receive was selective: it did not receive all messages, it received only messages matching the specified pattern. > > What is the "save queue"? > > > ``` -module(my). %-export([test/0, myand/2]). -compile(export_all). -include_lib("eunit/include/eunit.hrl"). start() -> spawn(my, go, []). go() -> receive {xyz, X} -> io:format("I received X=~w~n", [X]) end, io:format("What happened to the message that didn't match?"), receive Any -> io:format("It was saved rather than discarded.~n"), io:format("Here it is: ~w~n", [Any]) end. ``` In the erlang shell: ``` 1> c(my). 
my.erl:3: Warning: export_all flag enabled - all functions will be exported {ok,my} 2> Pid = my:start(). <0.79.0> 3> Pid ! {hello, world}. {hello,world} 4> Pid ! {xyz, 10}. I received X=10 What happened to the message that didn't match?{xyz,10} It was saved rather than discarded. Here it is: {hello,world} ``` > > How the after section works? > > > ``` -module(my). %-export([test/0, myand/2]). -compile(export_all). -include_lib("eunit/include/eunit.hrl"). start() -> spawn(my, go, []). go() -> receive {xyz, X} -> io:format("I received X=~w~n", [X]) after 10000 -> io:format("I'm not going to wait all day for a match. Bye.") end. ``` In the erlang shell: ``` 1> c(my). my.erl:3: Warning: export_all flag enabled - all functions will be exported {ok,my} 2> Pid = my:start(). <0.79.0> 3> Pid ! {hello, world}. {hello,world} I'm not going to wait all day. Bye.4> ``` Another example: ``` -module(my). %-export([test/0, myand/2]). -compile(export_all). -include_lib("eunit/include/eunit.hrl"). sleep(X) -> receive after X * 1000 -> io:format("I just slept for ~w seconds.~n", [X]) end. ``` In the erlang shell: ``` 1> c(my). my.erl:3: Warning: export_all flag enabled - all functions will be exported {ok,my} 2> my:sleep(5). I just slept for 5 seconds. ok ``` Upvotes: 1
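To make the mailbox behaviour above concrete, here is a rough Python simulation of the save-queue idea (an illustration of the semantics only, not how the BEAM actually implements it): messages are scanned in arrival order, non-matching ones are moved to a save queue, and they are restored to the front of the mailbox once a match is found.

```python
from collections import deque

def selective_receive(mailbox, match):
    """Return the first message satisfying match; keep non-matching
    messages (the 'save queue') in the mailbox in their original order."""
    saved = deque()
    while mailbox:
        msg = mailbox.popleft()
        if match(msg):
            # Restore the saved messages to the front, order preserved.
            mailbox.extendleft(reversed(saved))
            return msg
        saved.append(msg)
    mailbox.extendleft(reversed(saved))
    return None  # a real receive would block (or time out via 'after')

mb = deque([("hello", "world"), ("xyz", 10)])
print(selective_receive(mb, lambda m: m[0] == "xyz"))  # ('xyz', 10)
print(list(mb))  # [('hello', 'world')] -- the unmatched message was saved
```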
2018/03/15
1,056
3,991
<issue_start>username_0: Please pardon my idiocy I don't think my coffee has kicked in yet :( I am trying to loop through two lists and create an assignment based on a value for both lists. eg. List 1 is a list of folders I want to create List 2 is a list of hard drives with free space I want to spread the folder creation across drives but the list counts are not equal. So I want to let's say for the first iteration in the folder list try drive 1 and if it has space and does not already have the folder create it. Then move to folder two and drive two - then folder three and drive three (if the total number of drives is 3) I want to move back to drive one for the next folder until all folders are created. Can anyone help me with a sample of how to get there ?<issue_comment>username_1: Here's a rough outline of an algorithm. It will match the first folder to the first drive, the second folder to the second drive, and so on. Unless there's not enough space. Then it will try subsequent drives (in a loop). Once space is found, it will resume iterating over drives after the last one found. For example: folder\_1 -> drive\_a folder\_2 -> drive\_b folder\_3 -> (skip drive\_c) -> drive\_d folder\_4 -> drive\_e etc ``` // You'll need to associate required space with each folder public class FolderThing { public string Name {get; set;} public int Size {get; set;} } // Each drive will need to track available free space public class DriveThing { public string Name {get; set;} public int AvailableSpace {get; set;} } // Collection of folders, populate as required. var foldersToAllocate = new List<FolderThing>(); // Collection of drives, populate as required. var drivesToUse = new List<DriveThing>(); var numberOfDrives = drivesToUse.Count(); int nextDrive = 0; foreach (var f in foldersToAllocate) { // Track whether space could be allocated ... bool spaceFound = false; int tryCount = 0; // For loop, to exit early on success. // Try every available drive once, until space is found.
for (tryCount = 0; tryCount < numberOfDrives; tryCount++) { var candidate = drivesToUse[(nextDrive + tryCount) % numberOfDrives]; if (candidate.AvailableSpace > f.Size) { // do the allocate thing // update space used for drive spaceFound = true; break; } } // Increment to the next drive after the previous one. nextDrive = (nextDrive + tryCount + 1) % numberOfDrives; if (spaceFound == false) { throw new Exception("Could not find enough free space"); } } ``` Upvotes: 0 <issue_comment>username_2: A question I often consider is "how can I move the tricky mechanisms off to a type of their own". In your case you want to have an enumeration of a list that starts over when it is done, but another way to think of that is to have a counter that wraps around when it gets to the top of a range. So let's implement that: ``` struct WrappingCounter { private int current; private int max; public WrappingCounter(int max) : this(0, max) { if (max <= 0) throw new ArgumentException(); } private WrappingCounter(int current, int max) { this.current = current % max; this.max = max; } public static implicit operator int(WrappingCounter c) { return c.current; } public static WrappingCounter operator ++(WrappingCounter c) { return new WrappingCounter(c.current + 1, c.max); } } ``` This code is also interesting in that it is an object lesson in how to write a correct `++` operator in C#. Note that `++` **returns the incremented value**, unlike in C++. It does not mutate `c`; it produces a *new* object that represents the incremented `c`. The compiler will take care of assigning it at the appropriate time. Now that we have isolated the ugly mechanisms to their own type, the main algorithm now becomes much easier to read: ``` var dirs = new List<string> { "dir1", "dir2", "dir3" }; var drives = new List<string> { "drive1", "drive2" }; var c = new WrappingCounter(drives.Count); foreach(var dir in dirs) { var drive = drives[c]; c++; Console.WriteLine(dir + " " + drive); } ``` Upvotes: 3 [selected_answer]
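The wrap-around indexing both answers rely on is just modular arithmetic; a quick Python sketch of the same round-robin pairing (ignoring the free-space check, names invented for illustration) is:

```python
def round_robin_assign(folders, drives):
    """Assign each folder to the next drive in turn, wrapping with modulo."""
    assignments = []
    for i, folder in enumerate(folders):
        # i % len(drives) cycles 0, 1, ..., len(drives)-1, 0, 1, ...
        assignments.append((folder, drives[i % len(drives)]))
    return assignments

pairs = round_robin_assign(["dir1", "dir2", "dir3"], ["drive1", "drive2"])
print(pairs)  # [('dir1', 'drive1'), ('dir2', 'drive2'), ('dir3', 'drive1')]
```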
2018/03/15
320
1,458
<issue_start>username_0: I have a server setup using TFS 2018 that currently builds our software and triggers a release to deploy the binaries to my target system. I want to be able to only deploy the binaries to specific machines within a Deployment group based on the Capabilities each target has defined. I can see the capabilities for each target within my deployment group but no way to reference them. I don't see an option to set Demands for the Deployment Group Phase in my Release Definition, only for a Build Agent. How can I set Demands for my Deployment group?<issue_comment>username_1: Deployment groups don't have demands, but they *do* have tags. You can tag the machines in your deployment groups however you want, then specify that a Deployment Group phase should only run on a certain set of tags. Upvotes: 3 [selected_answer]<issue_comment>username_2: Since deployment groups don't have demands, but tags, you could try to use an agent phase on your release instead of a deployment group. Release/build agents are the same, they share the same code base. When you use an agent phase on a release pipeline, you can still use deployment tasks like to deploy IIS apps, they just will use WinRM, where you will be capable of remote deployments. For instance, your deployment agent could be installed on server 1 and your target IIS in another server. With deployment groups, you only can deploy where the agent is installed. Upvotes: 0
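Conceptually, tag-based targeting as described in the accepted answer is just set containment — a target runs a phase when its tag set covers every required tag. A hypothetical Python sketch (machine names and tags invented for illustration):

```python
def targets_for_phase(targets, required_tags):
    """Select deployment targets whose tag set covers every required tag."""
    required = set(required_tags)
    return [name for name, tags in targets.items() if required <= set(tags)]

machines = {
    "web-01": ["web", "prod"],
    "db-01": ["db", "prod"],
    "web-02": ["web", "qa"],
}
print(targets_for_phase(machines, ["web", "prod"]))  # ['web-01']
```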
2018/03/15
468
1,996
<issue_start>username_0: I'm having trouble with adding a new SSL certificate to my webService request. ``` var client = new RestClient(tokenUrl); string certif = String.Format("{0}/client.cer", CertifPath); string key = String.Format("{0}/client.key", CertifPath); if (File.Exists(certif) && File.Exists(key)) { X509Certificate2 cert = new X509Certificate2(certif, key); X509CertificateCollection collection1 = new X509CertificateCollection(); collection1.Add(cert); client.ClientCertificates = collection1; } ``` I'm getting as a response : 400 no required ssl certificate was sent nginx !!!!. In Addition : When i use PostMan Or SoapUI .I a must add a third secret key(passphrase) to be able to get response. [ex :Add certificate via postman](https://i.stack.imgur.com/blvkv.png) My Question is How can i add this third parameter(secret key) in my request c# ?. There is another way to implement certificate to my request ???<issue_comment>username_1: Deployment groups don't have demands, but they *do* have tags. You can tag the machines in your deployment groups however you want, then specify that a Deployment Group phase should only run on a certain set of tags. Upvotes: 3 [selected_answer]<issue_comment>username_2: Since deployment groups don't have demands, but tags, you could try to use an agent phase on your release instead of a deployment group. Release/build agents are the same, they share the same code base. When you use an agent phase on a release pipeline, you can still use deployment tasks like to deploy IIS apps, they just will use WinRM, where you will be capable of remote deployments. For instance, your deployment agent could be installed on server 1 and your target IIS in another server. With deployment groups, you only can deploy where the agent is installed. Upvotes: 0
2018/03/15
1,838
4,441
<issue_start>username_0: I need to eliminate rows from a data frame based on the repetition of values in a given column, but only those that are consecutive. For example, for the following data frame: ``` df = data.frame(x=c(1,1,1,2,2,4,2,2,1)) df$y <- c(10,11,30,12,49,13,12,49,30) df$z <- c(1,2,3,4,5,6,7,8,9) x y z 1 10 1 1 11 2 1 30 3 2 12 4 2 49 5 4 13 6 2 12 7 2 49 8 1 30 9 ``` I would need to eliminate rows with consecutive repeated values in the x column, keep the last repeated row, and maintain the structure of the data frame: ``` x y z 1 30 3 2 49 5 4 13 6 2 49 8 1 30 9 ``` Following directions from `help` and some other posts, I have tried using the `duplicated` function: ``` df[ !duplicated(x,fromLast=TRUE), ] # which gives me this: x y z 1 1 10 1 6 4 13 6 7 2 12 7 9 1 30 9 NA NA NA NA NA.1 NA NA NA NA.2 NA NA NA NA.3 NA NA NA NA.4 NA NA NA NA.5 NA NA NA NA.6 NA NA NA NA.7 NA NA NA NA.8 NA NA NA ``` Not sure why I get the NA rows at the end (wasn't happening with a similar table I was testing), but works only partially on the values. I have also tried using the `data.table` package as follows: ``` library(data.table) dt <- as.data.table(df) setkey(dt, x) dt[J(unique(x)), mult ='last'] ``` Works great, but it eliminates ALL duplicates from the data frame, not just those that are consecutive, giving something like this: ``` x y z 1 30 9 2 49 8 4 13 6 ``` Please, forgive if cross-posting. I tried some of the suggestions but none worked for eliminating only those that are consecutive. I would appreciate any help. Thanks<issue_comment>username_1: How about: ``` df[cumsum(rle(df$x)$lengths),] ``` Explanation: ``` rle(df$x) ``` gives you the run lengths and values of *consecutive* duplicates in the `x` variable. Then: ``` rle(df$x)$lengths ``` extracts the lengths. Finally: ``` cumsum(rle(df$x)$lengths) ``` gives the row indices which you can select using `[`. 
**EDIT** for fun here's a `microbenchmark` of the answers given so far with `rle` being mine, `consec` being what I think is the most fundamentally direct answer, given by @username_3, and would be the answer I would "accept", and `dp` being the `dplyr` answer given by @Nik. ``` #> Unit: microseconds #> expr min lq mean median uq max #> rle 134.389 145.4220 162.6967 154.4180 172.8370 375.109 #> consec 111.411 118.9235 136.1893 123.6285 145.5765 314.249 #> dp 20478.898 20968.8010 23536.1306 21167.1200 22360.8605 179301.213 ``` `rle` performs better than I thought it would. Upvotes: 3 <issue_comment>username_2: A cheap solution with `dplyr` that I could think of: Method: ``` library(dplyr) df %>% mutate(id = lag(x, 1), decision = if_else(x != id, 1, 0), final = lead(decision, 1, default = 1)) %>% filter(final == 1) %>% select(-id, -decision, -final) ``` Output: ``` x y z 1 1 30 3 2 2 49 5 3 4 13 6 4 2 49 8 5 1 30 9 ``` This will even work if your data has the same x value at the bottom New Input: ``` df2 <- df %>% add_row(x = 1, y = 10, z = 12) df2 x y z 1 1 10 1 2 1 11 2 3 1 30 3 4 2 12 4 5 2 49 5 6 4 13 6 7 2 12 7 8 2 49 8 9 1 30 9 10 1 10 12 ``` Use same method: ``` df2 %>% mutate(id = lag(x, 1), decision = if_else(x != id, 1, 0), final = lead(decision, 1, default = 1)) %>% filter(final == 1) %>% select(-id, -decision, -final) ``` New Output: ``` x y z 1 1 30 3 2 2 49 5 3 4 13 6 4 2 49 8 5 1 10 12 ``` Upvotes: 2 <issue_comment>username_3: You just need to check in there is no duplicate following a number, i.e x[i+1] != x[i] and note the last value will always be present. ``` df[c(df$x[-1] != df$x[-nrow(df)],TRUE),] x y z 3 1 30 3 5 2 49 5 6 4 13 6 8 2 49 8 9 1 30 9 ``` Upvotes: 4 [selected_answer]<issue_comment>username_4: Here is a `data.table` solution. 
The trick is to create a shifted version of `x` with the `shift` function and compare it with `x` ``` library(data.table) dattab <- as.data.table(df) dattab[x != shift(x = x, n = 1, fill = -999, type = "lead")] # edited to add closing ) ``` This way you compare each value of x with its immediately following value and throw out where they match. Make sure to set fill to something that is not in `x` in order for correct handling of the last value. Upvotes: 1
2018/03/15
806
3,061
<issue_start>username_0: I am trying to remove the `structured data` that Woocommerce adds to the product pages. I did some research and found that `WC_Structured_Data::generate_product_data()` generates the structured data markup. It's hooked in the `woocommerce_single_product_summary` action hook in the `woocommerce/templates/content-single-product.php` template file. I tried by adding the following code to the `functions.php` ``` remove_action( 'woocommerce_single_product_summary', 'WC_Structured_Data::generate_product_data()', 60 ); ``` So structured data wouldn't be added by Woocommerce, **but it doesn't work**… Am I doing something wrong? Is there another way to do what I am trying to achieve?<issue_comment>username_1: Instead, you can use dedicated filter hook `'woocommerce_structured_data_product'` that is located in [`WC_Structured_Data`](https://docs.woocommerce.com/wc-apidocs/class-WC_Structured_Data.html) for [`generate_product_data()` method](https://docs.woocommerce.com/wc-apidocs/source-class-WC_Structured_Data.html#265) nulling the structured data output in single product pages: ``` add_filter( 'woocommerce_structured_data_product', 'structured_data_product_nulled', 10, 2 ); function structured_data_product_nulled( $markup, $product ){ if( is_product() ) { $markup = ''; } return $markup; } ``` *Code goes in function.php file of your active child theme (or active theme)*. Tested and works. Upvotes: 3 [selected_answer]<issue_comment>username_2: I suspect people want to remove the default tabs and they come here after they see the Woocommerce `content-single-product.php` template. In this template you see that `generate_product_data()` is hooked with priority 60. After inspecting the hooks that run on `woocommerce_single_product_summary`. You can easily remove the tabs with: ``` remove_action( 'woocommerce_single_product_summary', 'woocommerce_output_product_data_tabs', 60 ); ``` I think Woocommerce forgot to mention this add\_action. 
Upvotes: 0 <issue_comment>username_3: This is how you can remove hooks associated with instantiated object method. You have to find the variable that holds the new Object instance. In this case the main WooCommerce object is accessible as `$GLOBALS['woocommerce']` and it has public property `$structured_data` which holds an instance of the WC\_Structured\_Data object. Hence, to remove the hook in the question you can write this code: ``` remove_action( 'woocommerce_before_main_content', array( $GLOBALS['woocommerce']->structured_data, 'generate_website_data' ), 30 ); ``` Upvotes: 0 <issue_comment>username_4: Add to functions.php: ``` add_action('wp_loaded', function() { remove_action('woocommerce_single_product_summary', [$GLOBALS['woocommerce']->structured_data, 'generate_product_data'], 60); }); ``` Unhooks WC\_Structured\_Data::generate\_product\_data(). Will not waste resources on generating product data first for no reason and then "nulling" that same generated data a moment later using a filter. Upvotes: 0
2018/03/15
702
2,601
<issue_start>username_0: I have a constructor whose parameters are both `int`: `Berries( int a, int b )` and I need to put a `double` in the place of "a" in the code: `Berries( 23.45, 6)`. I tried with cast, but it does not work. Can you help me please ?<issue_comment>username_1: Instead, you can use dedicated filter hook `'woocommerce_structured_data_product'` that is located in [`WC_Structured_Data`](https://docs.woocommerce.com/wc-apidocs/class-WC_Structured_Data.html) for [`generate_product_data()` method](https://docs.woocommerce.com/wc-apidocs/source-class-WC_Structured_Data.html#265) nulling the structured data output in single product pages: ``` add_filter( 'woocommerce_structured_data_product', 'structured_data_product_nulled', 10, 2 ); function structured_data_product_nulled( $markup, $product ){ if( is_product() ) { $markup = ''; } return $markup; } ``` *Code goes in function.php file of your active child theme (or active theme)*. Tested and works. Upvotes: 3 [selected_answer]<issue_comment>username_2: I suspect people want to remove the default tabs and they come here after they see the Woocommerce `content-single-product.php` template. In this template you see that `generate_product_data()` is hooked with priority 60. After inspecting the hooks that run on `woocommerce_single_product_summary`. You can easily remove the tabs with: ``` remove_action( 'woocommerce_single_product_summary', 'woocommerce_output_product_data_tabs', 60 ); ``` I think Woocommerce forgot to mention this add\_action. Upvotes: 0 <issue_comment>username_3: This is how you can remove hooks associated with instantiated object method. You have to find the variable that holds the new Object instance. In this case the main WooCommerce object is accessible as `$GLOBALS['woocommerce']` and it has public property `$structured_data` which holds an instance of the WC\_Structured\_Data object. 
Hence, to remove the hook in the question you can write this code: ``` remove_action( 'woocommerce_before_main_content', array( $GLOBALS['woocommerce']->structured_data, 'generate_website_data' ), 30 ); ``` Upvotes: 0 <issue_comment>username_4: Add to functions.php: ``` add_action('wp_loaded', function() { remove_action('woocommerce_single_product_summary', [$GLOBALS['woocommerce']->structured_data, 'generate_product_data'], 60); }); ``` Unhooks WC\_Structured\_Data::generate\_product\_data(). Will not waste resources on generating product data first for no reason and then "nulling" that same generated data a moment later using a filter. Upvotes: 0
2018/03/15
510
1,784
<issue_start>username_0: I need help with this script (i am a complete js newbie) i need the script to store the text that a user types in into a js variable and then print it, this is what i have so far but it doesn't seem to be working: ``` Insult: function othername() { var input = document.getElementById("userInput").value; alert(input); } print(forminput) ``` Thanks for your help!<issue_comment>username_1: There are a couple issues with the script. 1. The `onclick` property is saying to call a function named `formInput`, but that function does not exist. Your script has a function called `othername`, so you want to use that instead. 2. The function `othername` tries to select an element with the id of `userInput`, but the input in your HTML does not have an `id` property. 3. Inside the `script` tag you are calling a function `print` which does not exist. Here is a jsfiddle with your script updated and working: <https://jsfiddle.net/username_1/c4yethp5/3/> And here is the updated code ``` Insult: function othername() { var input = document.getElementById("userInput").value; alert(input); } ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: To get a value from a HTML element the simplest way is to give the HTML element an `id`. ``` ``` when a user clicks the button: ``` // Get the button var btn = document.getElementById("printbutton"); // When the user clicks on the button, read the text at that moment and print it btn.onclick = function () { var mytext = document.getElementById("textinput").value; alert(mytext); } ``` To get an output with this text there are several options: In chrome: ``` console.log(mytext); ``` Get a popup: ``` alert(mytext); ``` jsfiddle: <https://jsfiddle.net/3ogj1ogs/3/> Upvotes: 0
2018/03/15
629
2,231
<issue_start>username_0: I have to parse a range of row keys by date in Google Cloud BigTable using a regex filter, where each date in the rowkey is stored as a reverse-timestamp according to the Unix Epoch (00:00:00 on January 1st, 1970). For example, given `Date d = "2018-03-09T10:48:00.000Z"`, this date in MS since the Unix Epoch is `d.valueOf() = 1520592480000`. In JavaScript, the maximum allowable integer is `Number.MAX_SAFE_INTEGER = 9007199254740991`, and so we calculate *d*'s reverse date *r* by taking the difference: `var r = Number.MAX_SAFE_INTEGER - d` where `r = 9005678662260991`. Given two reverse dates *r1* and *r2*, how can I write a regular expression in RE2 to get all date strings within this range? I'm trying to find all dates within a range using reverse-timestamp rowkeys in BigTable using NodeJS (limited documentation), so if there are any easier solutions than this that you're aware of I'll also accept those! Thanks<issue_comment>username_1: There's not a good way to handle integer ranges using pure regular expressions, only character ranges: <https://stackoverflow.com/a/7861720/643848> <https://github.com/google/re2/wiki/Syntax> Moreover, Bigtable will be much more efficient if you can express your queries to take advantage of lexicographical row keys for filtering/scanning, i.e. if you can set up your schema such that you can express the range you want using a simpler query like a RowRange (<https://cloud.google.com/bigtable/docs/reference/data/rpc/google.bigtable.v2#rowrange>). You may also want to consider just issuing a broader query and then doing more processing and filtering client-side. Upvotes: 0 <issue_comment>username_2: The workaround for the Javascript maximum integer is making the lookup less natural and efficient. I would suggest expressing the rows in a way that are easily understandable by both Javascript and Bigtable, while maintaining an natural order and properties for filtering. 
If you need reversal to prevent hotspotting, you can try reversing only part of the timestamp (10's of seconds?) or adding a salt into the key (you can check the "Salting" section here <https://cloud.google.com/bigtable/docs/schema-design-time-series>). Upvotes: 1
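The reverse-timestamp arithmetic discussed above can be sketched language-agnostically; Python is used here only for illustration, and the helper name `reverse_key` is made up. Zero-padding keeps the keys lexicographically ordered, which is what makes plain row ranges (rather than regex filters) work:

```python
# Sketch of the reverse-timestamp row-key idea, in Python for clarity.
# JS_MAX_SAFE_INTEGER mirrors JavaScript's Number.MAX_SAFE_INTEGER (2**53 - 1).
JS_MAX_SAFE_INTEGER = 2**53 - 1

def reverse_key(ms_since_epoch):
    """Zero-pad the reversed timestamp so keys sort lexicographically."""
    return str(JS_MAX_SAFE_INTEGER - ms_since_epoch).zfill(16)

d = 1520592480000                  # 2018-03-09T10:48:00.000Z in milliseconds
r = reverse_key(d)
assert r == "9005678662260991"     # matches the value computed in the question

# Newer events produce lexicographically *smaller* keys, so a simple
# key range (start_key <= k < end_key) scans a time window without regex.
older = reverse_key(d - 60_000)    # one minute earlier
assert r < older
```

This is why the answer suggests a `RowRange` over a regex filter: with fixed-width reversed keys, a date range maps directly onto a contiguous, lexicographic key range.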
2018/03/15
1,444
5,454
<issue_start>username_0: What I'm trying to do is when a user (via a touchscreen) clicks on an editable QEditLine I want it to show the Matchbox-Keyboard for user input. When it is not clicked do not show the keyboard. I've gone through the C documentation, and a few C examples, but I'm lost as too make the jump to Python. I see people mentioning setting the "focus" can someone explain this too me? ``` import sys import os from PyQt5.QtWidgets import QApplication, QFileDialog, QSlider, QComboBox, QCheckBox, QWidget, QMainWindow, QPushButton, QLabel, QGridLayout, QGroupBox, QRadioButton, QMessageBox, QLineEdit from PyQt5.QtGui import QIcon, QPixmap from PyQt5.QtCore import pyqtSlot, Qt class App(QMainWindow): def __init__(self): super().__init__() self.title = 'GUI TESTS' self.left = 0 self.top = 0 self.width = 800 self.height = 400 self.statusBarMessage = "GUI TEST" self.currentSprite = 'TEST.png' self.btn1Active = False self.btn2Active = False self.btn3Active = False self.btn4Active = False self.btn5Active = False self.btn6Active = False self.btn7Active = False self.btn8Active = False self.saveLocationDir = "" self.initUI() def initUI(self): self.setWindowTitle(self.title) self.setGeometry(self.left, self.top, self.width, self.height) self.statusBar().showMessage(self.statusBarMessage) self.userNameLabel = QLabel(self) self.userNameLabel.move(0,125) self.userNameLabel.setText("What is your name?") self.userNameLabel.resize(120,20) self.nameInput = QLineEdit(self) self.nameInput.move(0,145) self.nameInput.resize(200,32) self.nameInput.setEchoMode(0) @pyqtSlot() def showKeyboard(self): command = "matchbox-keyboard" os.system(command) ```<issue_comment>username_1: ``` import sys import os from PyQt5.QtWidgets import QApplication, QFileDialog, QSlider, QComboBox, QCheckBox, QWidget, QMainWindow, QPushButton, QLabel, QGridLayout, QGroupBox, QRadioButton, QMessageBox, QLineEdit from PyQt5.QtGui import QIcon, QPixmap from PyQt5.QtCore import pyqtSlot, Qt class 
App(QMainWindow): def __init__(self): super().__init__() self.title = 'GUI TESTS' self.left = 0 self.top = 0 self.width = 800 self.height = 400 self.statusBarMessage = "GUI TEST" self.currentSprite = 'TEST.png' self.btn1Active = False self.btn2Active = False self.btn3Active = False self.btn4Active = False self.btn5Active = False self.btn6Active = False self.btn7Active = False self.btn8Active = False self.saveLocationDir = "" self.initUI() def initUI(self): self.setWindowTitle(self.title) self.setGeometry(self.left, self.top, self.width, self.height) self.statusBar().showMessage(self.statusBarMessage) self.userNameLabel = QLabel(self) self.userNameLabel.move(0,125) self.userNameLabel.setText("What is your name?") self.userNameLabel.resize(120,20) self.nameInput = QLineEdit(self) self.nameInput.move(0,145) self.nameInput.resize(200,32) self.nameInput.setEchoMode(0) self.nameInput.mousePressEvent=self.showKeyboard @pyqtSlot() def showKeyboard(self,event): if event.button() == QtCore.Qt.LeftButton: QtWidgets.QLineEdit.mousePressEvent(self, event) command = "matchbox-keyboard" os.system(command) ``` You can override mousePressEvent and achieve that functionality Upvotes: 0 <issue_comment>username_2: It is not recommended to override the events method by assigning a function `self.nameInput.mousePressEvent = self.showKeyboard` since the tasks of the `mousePressEvent` of the `QLineEdit` are lost and could cause unexpected events. Also, `mousePressEvent` is not the appropriate event since you can press the `QLineEdit` many times and it would be called back to the keyboard. 
A better option is to launch it in `focusInEvent` and delete it in `focusOutEvent`: ``` import sys import subprocess from PyQt5.QtWidgets import * from PyQt5.QtGui import * from PyQt5.QtCore import * class MatchBoxLineEdit(QLineEdit): def focusInEvent(self, e): try: subprocess.Popen(["matchbox-keyboard"]) except FileNotFoundError: pass def focusOutEvent(self,e): subprocess.Popen(["killall","matchbox-keyboard"]) class App(QMainWindow): def __init__(self): super().__init__() self.initUI() def initUI(self): self.setWindowTitle('GUI TESTS') widget = QWidget() self.setCentralWidget(widget) lay = QVBoxLayout(widget) self.userNameLabel = QLabel("What is your name?") self.nameInput = MatchBoxLineEdit() lay.addWidget(self.userNameLabel) lay.addWidget(self.nameInput) self.setGeometry( QStyle.alignedRect( Qt.LeftToRight, Qt.AlignCenter,self.sizeHint(), qApp.desktop().availableGeometry() ) ) if __name__ == '__main__': app = QApplication(sys.argv) w = App() w.show() sys.exit(app.exec_()) ``` Upvotes: 2 [selected_answer]
2018/03/15
373
1,093
<issue_start>username_0: I am trying to make a border that connects a horizontal line from the right. The sketch below is how it should look like, and I need ideas on how to create this. Thank you! I would greatly appreciate it on anyone who can help me. [![enter image description here](https://i.stack.imgur.com/XKvt3.png)](https://i.stack.imgur.com/XKvt3.png)<issue_comment>username_1: ``` .test { width: 100%; height: 50px; background-color: #F0F; position: relative; } .test:before { width: 100%; height: 1px; left: 0; top: 50%; content: ""; background-color: #eee; position: absolute; } ``` Upvotes: 0 <issue_comment>username_2: Here is a working example. ```css #example { position: relative; } #example:before { content: ""; display: block; width: 100%; height: 0px; position: absolute; top: 50%; left: 0px; border-bottom: solid 1px #999; } #example span { position: relative; display: inline-block; padding: 5px; color: #999; background: #FFF; border: solid 1px #999; } ``` ```html LATEST PRODUCTS ``` Upvotes: 1
2018/03/15
756
2,269
<issue_start>username_0: I have an object with a bunch of properties, say ``` {a:1, b:2, c:3, d:"hello", e:"world"} ``` and another one with subset of those properties ``` {a:4, b:5, c:6} ``` What would be the most streamlined way with the least amount of code to assign values of properties of the second object to the first? I know I can assign individual properties one by one, but is there a better, maybe ES6 way?<issue_comment>username_1: You could use [`Object.assign`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign), which assigns all properties of the second object to the first object. ```js var object = { a: 1, b: 2, c: 3, d: "hello", e: "world" }, newValues = { a: 4, b: 5, c: 6 }; Object.assign(object, newValues); console.log(object); ``` Upvotes: 2 <issue_comment>username_2: ``` const obj = Object.assign({}, obj1, obj2); console.log(obj); // {a:4, b:5, c:6, d:"hello", e:"world"} console.log(obj1); // {a:1, b:2, c:3, d:"hello", e:"world"} ``` This creates a new object and leaves obj1 unchanged. ``` const obj = Object.assign(obj1, obj2); console.log(obj); // {a:4, b:5, c:6, d:"hello", e:"world"} console.log(obj1); // {a:4, b:5, c:6, d:"hello", e:"world"} ``` This mutates obj1 in place; obj is just a reference to the same (modified) obj1. Upvotes: -1 <issue_comment>username_3: As of ES2018, you can use the [object spread syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) to do this: ```js let obj1 = {a:1, b:2, c:3, d:"hello", e:"world"}; let obj2 = {a:4, b:5, c:6}; obj1 = {...obj1, ...obj2}; console.log(obj1); ``` Note that this is very much new syntax, and is only supported by the most up to date browsers. No version of Edge, for example, supports it. You will need to transpile it for older browsers. This assumes that you want the properties in `obj2` to override those in `obj1`. If you don't, change it to `obj1 = {...obj2, ...obj1};`. 
Upvotes: 3 [selected_answer]<issue_comment>username_4: Good afternoon. Yes, there is a better way with ES6: it's called the spread operator, if that's what you were looking for. <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax> Upvotes: -1
2018/03/15
773
2,299
<issue_start>username_0: I have a string that I want to remove gaps from. I have found the beginning and the end position of the gaps, but I don't want to print to the screen. Instead, I want to save it as a list. Here's the code I have: ``` S1= "AAAA--TTT--GCCTT--GGG" import re t=[] p = re.compile("-+") for m in p.finditer(s1): t.append(m.end()-m.start()) ## This produces an error. I want to save the output as a list ```
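For what it's worth, the immediate error in the snippet above is the case mismatch between `S1` and `s1`; with a consistent name, the `finditer` approach works. A minimal Python sketch (names follow the question):

```python
import re

s1 = "AAAA--TTT--GCCTT--GGG"

# Collect (start, end) spans of each gap run instead of printing them.
gaps = [m.span() for m in re.finditer("-+", s1)]
assert gaps == [(4, 6), (9, 11), (16, 18)]

# Gap lengths, as in the original snippet (m.end() - m.start()):
lengths = [end - start for start, end in gaps]
assert lengths == [2, 2, 2]

# Removing the gaps themselves is a one-liner:
assert s1.replace("-", "") == "AAAATTTGCCTTGGG"
```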
2018/03/15
559
1,953
<issue_start>username_0: I want to delete all the words in a textbox from starting to the first comma appearance. Example - > > Ram bought rice, bread and tomatoes. > > > The above sentence should become - > > bread and tomatoes. > > > I know how to replace texts in VB.Net but I can't find a solution for this issue.<issue_comment>username_1: Try this: ``` Dim s As String = "Ram bought rice, bread and tomatoes." Dim index As Integer = s.IndexOf(",") + 1 Dim substring As String = s.Substring(index , s.Length - index) ``` Upvotes: 1 <issue_comment>username_2: This is an example how to do it. ``` Module Example Sub Main() Dim str As String = "foo, bar baz" Dim index As Integer = str.IndexOf(",") + 1 Dim length As Integer = str.Length - index str = str.Substring(index, length) Console.WriteLine(str) Console.ReadLine() End Sub End Module ``` Upvotes: 1 <issue_comment>username_3: The other answers are nice, but adding the 2nd parameter gives an un-needed risk. Try this for safer code ``` Dim MyString As String = "Ram bought rice, bread and tomatoes." Dim index As Integer = MyString.IndexOf(",") + 1 Dim MyNewString As String = MyString.Substring(index) ``` This just uses the first parameter, as the second calculation to calculate the length is not needed. Upvotes: 0 <issue_comment>username_4: I know I'm a little late to the party, but here is how I would do it using a function. ``` Private Function Foo(ByVal str As String) As String If str.Contains(",") Then Return str.Split(",")(1).Trim() End If ''return nothing as the user only wants the 2nd half of the string Return Nothing End Function ``` `Trim()` will remove any leading spaces. You can call the function like so, ``` Dim value As String = "Ram bought rice, bread and tomatoes." textBox1.Text = Foo(value) ``` Output: `bread and tomatoes.` Upvotes: 1
2018/03/15
651
2,426
<issue_start>username_0: I am trying to filter out values from a pandas data-frame and then generate a column with those values. To further clarify myself here is an example ``` print (temp.head()) Index Work-Assigned Location A R NL B df MB A NL C SL NL D RC MB A RC AB ``` Now what I want to do is to filter out all the R and SL values from this data-frame and create another data-frame with just those values and the index. Something like this: ``` print (result.head()) Index R/SL A R B C SL D ``` I tried pivoting the data with Work-Assigned as the value, as you see certain value in the index column is repeated, but that didn't work.
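One way to get the desired `R/SL` column is `Series.isin` plus `Series.where`. The frame below is a simplified stand-in for the data above (assumed values), so treat this as a sketch rather than a drop-in answer:

```python
import pandas as pd

# Simplified stand-in for the frame in the question (assumed values).
temp = pd.DataFrame({"Index": ["A", "B", "C", "D"],
                     "Work-Assigned": ["R", "df", "SL", "RC"]})

# Keep R/SL where present; blank out everything else.
wanted = temp["Work-Assigned"].where(temp["Work-Assigned"].isin(["R", "SL"]), "")
result = pd.DataFrame({"Index": temp["Index"], "R/SL": wanted})

assert result["R/SL"].tolist() == ["R", "", "SL", ""]
```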
2018/03/15
1,249
4,533
<issue_start>username_0: I've got a program that will read from a file and add each line of the file as an element of an array. What I'm trying to do now though is to figure out how to edit certain items in the array. The problem is my input file looks like this ``` number of array items id, artist name, date, location id, artist name, date, location ``` so for example ``` 2 34, <NAME>, 1990, Seattle 21, jane doe, 1945, Tampa ``` so if I call `artArray[0]` I get `34, <NAME>, 1990, Seattle` but I'm trying to figure out how to just update `Seattle` when the id of `34` is entered so maybe split each element in the array by comma's? or use a multidimensional array instead? ``` import java.io.File; import java.io.FileNotFoundException; import java.util.Scanner; public class ArtworkManagement { /** * @param args */ public static void main(String[] args) { String arraySize; try { String filename = "artwork.txt"; File file = new File(filename); Scanner sc = new Scanner(file); // gets the size of the array from the first line of the file, removes white space. arraySize = sc.nextLine().trim(); int size = Integer.parseInt(arraySize); // converts String to Integer. Object[] artArray = new Object[size]; // creates an array with the size set from the input file. System.out.println("first line: " + size); for (int i = 0; i < artArray.length; i++) { String line = sc.nextLine().trim(); // line.split(","); artArray[i] = line; } System.out.println(artArray[0]); sc.close(); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } ```<issue_comment>username_1: You are almost there, but [split](https://docs.oracle.com/javase/7/docs/api/java/lang/String.html#split(java.lang.String)) returns an array, so you need an array of arrays. You can change this line ``` Object[] artArray = new Object[size]; ``` By this one, you also can use `String` instead of `Object` since this is indeed an string. 
``` String[][] artArray = new String[size][]; ``` Then you can add the array to the array of arrays with ``` artArray[i] = line.split(","); ``` And finally access to it using two indexes: ``` artArray[indexOfTheArray][indexOfTheWord] ``` Also if you want to print the array use: ``` System.out.println(Arrays.toString(artArray[0])); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: This behavior is explicitly documented in **String.split(String regex)** (emphasis mine): > > This method works as if by invoking the two-argument split method with > the given expression and a limit argument of zero. Trailing empty > strings are therefore not included in the resulting array. > > > If you want those trailing empty strings included, you need to use **String.split(String regex, int limit)** with a negative value for the second parameter (**limit**): ``` String[] array = values.split(",", -1); ``` so ``` 1. array[0] equals to id 2. array[1] equals to artist name 3. array[2] equals to date 4. array[3] equals to location ``` Upvotes: 0 <issue_comment>username_3: ``` String[][] artArray = new String[size][]; System.out.println("first line: " + size); for (int i = 0; i < artArray.length; i++) { String line = sc.nextLine().trim(); artArray[i] = line.split(","); } ``` Upvotes: 0 <issue_comment>username_4: I think you need to use a better data structure to store the file content so that you can process it easily later. Here's my recommendation: ``` List<String> headers = new ArrayList<>(); List<List<String>> data = new ArrayList<>(); List<String> lines = Files.lines(Paths.get(file)).collect(Collectors.toList()); //Store the header row in the headers list. Arrays.stream(lines.stream().findFirst().get().split(",")).forEach(s -> headers.add(s)); //Store the remaining lines as words separated by comma lines.stream().skip(1).forEach(s -> data.add(Arrays.asList(s.split(",")))); ``` Now if you want to update the city (last column) in the first line (row), all you have to do is: ``` data.get(0).set(data.get(0).size() - 1, "Atlanta"); //Replace Seattle with Atlanta ``` Upvotes: 0
2018/03/15
911
3,288
<issue_start>username_0: I'm creating LoginActivity-> OtherActivity-> Fragment 1-> Fragment\_2. When I click Back Button I'm back in Fragment\_1 when again I click BackButton I'm in LoginActivity why? I want come back to MainActivity after the click BackButton in Fragment \_1 and after next click BackButton (in MainActivity) I want come back to home screen How can I do this ? In MainActivity i have: ``` @Override public void onBackPressed() { if (getFragmentManager().getBackStackEntryCount() > 0 ) { getFragmentManager().popBackStack(); } else { super.onBackPressed(); } } ```
2018/03/15
1,174
3,768
<issue_start>username_0: I am trying to create a program that creates a character by randomly assigning it a race, class, and stats. However I want each stat to have a unique value. So if `strength` is 8, then none of the other stats can be 8. How would I go about doing this? Do I need to delete list entries as choices are made? **My code** ``` import random races = ["Human", "Dwarf", "Elf"] classes = ["Fighter", "Wizard", "Rogue"] stats = [8, 10, 11, 12, 14, 15] Strength = 0 Dexterity = 0 Constution = 0 Intelligence = 0 Wisdom = 0 Charisma = 0 Strength = random.choice(stats) Dexterity = random.choice(stats) Constution = random.choice(stats) Intelligence = random.choice(stats) Wisdom = random.choice(stats) Charisma = random.choice(stats) race = random.choice(races) clse = random.choice(classes) ```<issue_comment>username_1: Like you said in your question, you could delete entries from the list: ``` Strength = random.choice(stats) del stats[stats.index(Strength)] ``` Like it is said in the answer above me, a good solution would also be to use `random.shuffle`. Plus a hint: when you write, for example, `Strength = random.choice(stats)`, you declare the variable Strength at that moment and give it a value, so you don't have to declare it above and can delete all those `Strength = 0` lines. Upvotes: 0 <issue_comment>username_2: Create a random permutation of the `stats` list and then assign the `Strength`, `Dexterity` etc. in order of the permutation. This has the added benefit that you won't need to reset the list before creating a new character. ``` from random import shuffle # ... stats = [8, 10, 11, 12, 14, 15] shuffle(stats) Strength = stats[0] Dexterity = stats[1] Constution = stats[2] Intelligence = stats[3] Wisdom = stats[4] Charisma = stats[5] ``` As a side note, there is no need to assign a default of `0` to `Strength`, etc. because they will be changed immediately afterward. 
Upvotes: 2 <issue_comment>username_3: You can store the value of random.choice in a temp. variable and use ``` stats.remove(temp) ``` Upvotes: -1 <issue_comment>username_4: What you showed us work for 1 character. But what if we want to create hundreds of them. In Python everything is a class and in this case a strongly recommend using it. Look at this simple example: ``` import random from textwrap import dedent races = ["Human", "Dwarf", "Elf"] classes = ["Fighter", "Wizard", "Rogue"] stats = [ 8, 10, 11, 12, 14, 15] class char: def __init__(self, races, classes, stats): random.shuffle(stats) # <--- this solves your problem self.race = random.choice(races) self.cls = random.choice(classes) self.stats = dict(zip(['Strength','Dexterity','Constution', 'Intelligence','Wisdom','Charisma'],stats)) def __str__(self): s = dedent('''\ Your character is a.... Race: {} Class: {} Stats: {}''').format(self.race, self.cls, self.stats) return s char1 = char(races,classes,stats) # Creates char1 based on char class char2 = char(races,classes,stats) # Creates char2 ... print(char1) # printing a class will call the __str__ function if it exists print() print(char2) ``` A class can hold variables and other functions and in this case we create three variables (race, cls and stats) and we add a print function to easy print what we wnat. Returned this when I ran it: ``` Your character is a.... Race: Elf Class: Rogue Stats: {'Dexterity': 11, 'Charisma': 15, 'Constution': 12, 'Wisdom': 10, 'Intelligence': 14, 'Strength': 8} Your character is a.... Race: Dwarf Class: Fighter Stats: {'Dexterity': 14, 'Charisma': 8, 'Constution': 12, 'Wisdom': 11, 'Intelligence': 15, 'Strength': 10} ``` Upvotes: 0
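As a complement to the shuffle-based answers above, `random.sample` draws without replacement in a single call and leaves the source list untouched (this is a sketch, not part of any answer above):

```python
import random

stats = [8, 10, 11, 12, 14, 15]

# random.sample draws without replacement, so all six values are distinct
# and the original list is left unmodified (unlike shuffle/remove approaches).
rolled = random.sample(stats, k=len(stats))
Strength, Dexterity, Constitution, Intelligence, Wisdom, Charisma = rolled

assert sorted(rolled) == sorted(stats)   # same values, no repeats
assert stats == [8, 10, 11, 12, 14, 15]  # source list unmodified
```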
2018/03/15
482
1,757
<issue_start>username_0: What is the best practice for an input field that takes an IP-Address? ie: Numbers and dots. I've initially chosen `type='number'`, However, I noticed that as soon as I type a **second** dot `.` followed by the next number, my submit button gets disabled. *ex: 123.124.4 (boom! button disabled)* Is the above due to a decimal consideration when it comes to `type=number` ? ``` ``` Is there a way out of this without going with ?<issue_comment>username_1: In HTML, you can verify input values with a pattern. This way, you can make something with a regular expression as following : ``` ``` Upvotes: 3 <issue_comment>username_2: Yes, there is a way to take an IP address via HTML using input without using text type, like a type number. The options would require four input boxes, one for each octet. Since there is no HTML input type that matches IPv4 address exactly and if you want to stick to a single input field type text is relatively simple with minlength, maxlength, and pattern attributes. Bare bones HTML: ``` ``` **For security reasons** ensure you always validate submitted data server side, not just client side. You can use same regex pattern. **\* Update 3/2/2019 \*** Added three IPv6 address regex formats. Only accepts IPv6 addresses in long form: ``` ``` Accepts long and medium form IPv6 addresses (medium allows leading zeros): ``` ``` Accepts long and medium form IPv6 addresses (medium without leading zeros): ``` ``` Upvotes: 5 <issue_comment>username_3: this work for me ``` 9 8 ⎸ 1 ``` Upvotes: 0 <issue_comment>username_4: This worked for me, using the answer from @username_2. In his answer it was possible to add IP values like 02.09.08.07 ``` ``` Upvotes: 2
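Since the regex patterns were lost from the HTML snippets above, here is one possible IPv4 pattern of the kind a `pattern` attribute could hold, checked in Python (the exact pattern used in the answer may differ):

```python
import re

# One plausible IPv4 regex; each octet is 0-255 with no leading zeros.
IPV4 = re.compile(
    r"^((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)\.){3}"
    r"(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)$")

assert IPV4.match("192.168.0.1")
assert IPV4.match("255.255.255.255")
assert not IPV4.match("02.09.08.07")   # leading zeros rejected
assert not IPV4.match("256.1.1.1")     # octet out of range
assert not IPV4.match("1.2.3")         # too few octets
```

As the answer notes, any client-side pattern like this must still be re-validated server side.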
2018/03/15
935
3,013
<issue_start>username_0: Following the [PEP 3101](https://www.python.org/dev/peps/pep-3101/#explicit-conversion-flag), I try to use as often as possible the format() method. In the Explicit Conversion Flag chapter, it is possible to read: > > !r - convert the value to a string using > repr(). > > > Indeed: ``` >>> "{0!r:10}".format('Hello') "'Hello' " ``` But: ``` >>> "{0.__repr__():10}".format('Hello') Traceback (most recent call last): File "", line 1, in AttributeError: 'str' object has no attribute '\_\_repr\_\_()' ``` I don't understand why this exception is raised, because a string have always the **repr**() method !!?? ``` >>> type('Hello') >>> dir('Hello') ['\_\_add\_\_', '\_\_class\_\_', '\_\_contains\_\_', '\_\_delattr\_\_', '\_\_doc\_\_', '\_\_eq\_\_', '\_\_format\_\_', '\_\_ge\_\_', '\_\_getattribute\_\_', '\_\_getitem\_\_', '\_\_getnewargs\_\_', '\_\_getslice\_\_', '\_\_gt\_\_', '\_\_hash\_\_', '\_\_init\_\_', '\_\_le\_\_', '\_\_len\_\_', '\_\_lt\_\_', '\_\_mod\_\_', '\_\_mul\_\_', '\_\_ne\_\_', '\_\_new\_\_', '\_\_reduce\_\_', '\_\_reduce\_ex\_\_', '\_\_repr\_\_', '\_\_rmod\_\_', '\_\_rmul\_\_', '\_\_setattr\_\_', '\_\_sizeof\_\_', '\_\_str\_\_', '\_\_subclasshook\_\_', '\_formatter\_field\_name\_split', '\_formatter\_parser', 'capitalize', 'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs', 'find', 'format', 'index', 'isalnum', 'isalpha', 'isdigit', 'islower', 'isspace', 'istitle', 'isupper', 'join', 'ljust', 'lower', 'lstrip', 'partition', 'replace', 'rfind', 'rindex', 'rjust', 'rpartition', 'rsplit', 'rstrip', 'split', 'splitlines', 'startswith', 'strip', 'swapcase', 'title', 'translate', 'upper', 'zfill'] ``` I think I miss something ...
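The reason for the AttributeError in the question: a `str.format` replacement field may access attributes (`{0.attr}`) and items (`{0[key]}`), but it cannot *call* anything, so `__repr__()` including the parentheses is looked up as a literal attribute name. A small sketch:

```python
# The parentheses are not call syntax inside a format field: "__repr__()"
# is treated as an attribute name, which does not exist on str.
try:
    "{0.__repr__():10}".format('Hello')
except AttributeError as e:
    assert "__repr__()" in str(e)

# The !r conversion flag is the supported way to get repr() output;
# strings left-align by default, so the result is padded to width 10.
assert "{0!r:10}".format('Hello') == "'Hello'   "
```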
2018/03/15
441
1,683
<issue_start>username_0: There is an example for EnableDynamoDBAutoscaling in <https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.HowTo.SDK.html>. However, using this example results in one deprecated warning for the line: ``` static AWSApplicationAutoScalingClient aaClient = new AWSApplicationAutoScalingClient(); ``` How to avoid this deprecated warning?
2018/03/15
3,397
6,292
<issue_start>username_0: I'm trying to deploy my Django project on Heroku, but something is going wrong, and I'm not able to found out what is happening, and what to do in order to solve this. I read in other posts that the problem may be something related with Gunicorn, but I can't solve it. ``` (env) ignacio@ignacio:~/atletico$ heroku logs 2018-03-15T18:04:33.770218+00:00 app[web.1]: [2018-03-15 18:04:33 +0000] [4] [INFO] Reason: Worker failed to boot. 2018-03-15T18:04:47.082000+00:00 heroku[web.1]: Starting process with command `gunicorn mysite.wsgi --log-file -` 2018-03-15T18:04:49.411412+00:00 heroku[web.1]: Process exited with status 3 2018-03-15T18:04:49.139027+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [4] [INFO] Starting gunicorn 19.7.1 2018-03-15T18:04:49.139593+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [4] [INFO] Listening at: http://0.0.0.0:25381 (4) 2018-03-15T18:04:49.139786+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [4] [INFO] Using worker: sync 2018-03-15T18:04:49.149373+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [8] [ERROR] Exception in worker process 2018-03-15T18:04:49.149376+00:00 app[web.1]: Traceback (most recent call last): 2018-03-15T18:04:49.149391+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker 2018-03-15T18:04:49.149394+00:00 app[web.1]: worker.init_process() 2018-03-15T18:04:49.143828+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [8] [INFO] Booting worker with pid: 8 2018-03-15T18:04:49.149396+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/workers/base.py", line 126, in init_process 2018-03-15T18:04:49.149398+00:00 app[web.1]: self.load_wsgi() 2018-03-15T18:04:49.149400+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi 2018-03-15T18:04:49.149401+00:00 app[web.1]: self.wsgi = self.app.wsgi() 2018-03-15T18:04:49.149408+00:00 app[web.1]: File 
"/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi 2018-03-15T18:04:49.149411+00:00 app[web.1]: self.callable = self.load() 2018-03-15T18:04:49.149413+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 65, in load 2018-03-15T18:04:49.149416+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp 2018-03-15T18:04:49.149418+00:00 app[web.1]: return util.import_app(self.app_uri) 2018-03-15T18:04:49.149414+00:00 app[web.1]: return self.load_wsgiapp() 2018-03-15T18:04:49.149422+00:00 app[web.1]: __import__(module) 2018-03-15T18:04:49.149420+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/util.py", line 352, in import_app 2018-03-15T18:04:49.149513+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [8] [INFO] Worker exiting (pid: 8) 2018-03-15T18:04:49.154811+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [9] [INFO] Booting worker with pid: 9 2018-03-15T18:04:49.149424+00:00 app[web.1]: ModuleNotFoundError: No module named 'mysite.wsgi' 2018-03-15T18:04:49.163852+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [9] [ERROR] Exception in worker process 2018-03-15T18:04:49.163856+00:00 app[web.1]: Traceback (most recent call last): 2018-03-15T18:04:49.163860+00:00 app[web.1]: worker.init_process() 2018-03-15T18:04:49.163858+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/arbiter.py", line 578, in spawn_worker 2018-03-15T18:04:49.163862+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/workers/base.py", line 126, in init_process 2018-03-15T18:04:49.163863+00:00 app[web.1]: self.load_wsgi() 2018-03-15T18:04:49.163865+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/workers/base.py", line 135, in load_wsgi 2018-03-15T18:04:49.163867+00:00 app[web.1]: self.wsgi = self.app.wsgi() 
2018-03-15T18:04:49.163869+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/base.py", line 67, in wsgi 2018-03-15T18:04:49.163871+00:00 app[web.1]: self.callable = self.load() 2018-03-15T18:04:49.163872+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 65, in load 2018-03-15T18:04:49.163876+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/app/wsgiapp.py", line 52, in load_wsgiapp 2018-03-15T18:04:49.163878+00:00 app[web.1]: return util.import_app(self.app_uri) 2018-03-15T18:04:49.163874+00:00 app[web.1]: return self.load_wsgiapp() 2018-03-15T18:04:49.163881+00:00 app[web.1]: __import__(module) 2018-03-15T18:04:49.163880+00:00 app[web.1]: File "/app/.heroku/python/lib/python3.6/site-packages/gunicorn/util.py", line 352, in import_app 2018-03-15T18:04:49.164035+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [9] [INFO] Worker exiting (pid: 9) 2018-03-15T18:04:49.308012+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [4] [INFO] Shutting down: Master 2018-03-15T18:04:49.163889+00:00 app[web.1]: ModuleNotFoundError: No module named 'mysite.wsgi' 2018-03-15T18:04:49.308186+00:00 app[web.1]: [2018-03-15 18:04:49 +0000] [4] [INFO] Reason: Worker failed to boot. 2018-03-15T18:04:49.754383+00:00 heroku[web.1]: State changed from starting to crashed ``` **wsgi.py** ``` ` """ WSGI config for mysite project. It exposes the WSGI callable as a module-level variable named application. 
For more information on this file, see https://docs.djangoproject.com/en/2.0/howto/deployment/wsgi/ """ import os from django.core.wsgi import get_wsgi_application os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings") application = get_wsgi_application() ```<issue_comment>username_1: Your gunicorn start command looks strange: ``` Starting process with command `gunicorn mysite.wsgi --log-file -` ``` The command should specify the module name and variable name separated with a colon, like this: ``` gunicorn mysite.wsgi:application ``` Upvotes: 0 <issue_comment>username_2: **SOLUTION:** The problem was the location of my Procfile file: it has to be on the same path as manage.py. That's all! Upvotes: 1
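username_2's fix is easy to miss, so here is the layout it implies. Heroku only picks up a `Procfile` that sits in the root of the repository it deploys — the same directory as `manage.py`. An illustrative layout (the `mysite` name comes from the question; the rest of the tree is an assumption):

```
myrepo/
├── Procfile            # contains: web: gunicorn mysite.wsgi --log-file -
├── manage.py
├── requirements.txt
└── mysite/
    ├── settings.py
    └── wsgi.py         # exposes the `application` callable gunicorn loads
```

With that layout, gunicorn starts in the repository root, so the `mysite` package is on the import path and `mysite.wsgi` resolves; if the `mysite` package lives one directory deeper than the directory gunicorn starts in, the import fails exactly as in the logs (`ModuleNotFoundError: No module named 'mysite.wsgi'`).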
2018/03/15
463
1,267
<issue_start>username_0: Given the following DataFrame how can I retrieve only the values where IS\_TESTED has both True and False values. ``` d = pd.DataFrame({"ID":[700,700,701,702,702,703],"IS_TESTED":[True,False,True,False,True,True],"TEST_NAME":["A","B","A","A","B","A"]}) ``` [![enter image description here](https://i.stack.imgur.com/9BO8S.png)](https://i.stack.imgur.com/9BO8S.png) In the following example, my desired result should be: ``` 700 True A 700 False B 702 False A 702 True B ``` since 701 & 703 occurs only once. [![enter image description here](https://i.stack.imgur.com/zqFCP.png)](https://i.stack.imgur.com/zqFCP.png)<issue_comment>username_1: Use groupby and nunique ``` d[d.groupby('ID').IS_TESTED.transform('nunique') > 1] ID. IS_TESTED TEST_NAME 0 700 True A 1 700 False B 3 702 False A 4 702 True B ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Or, you can use `groupby` with `filter`: ``` d.groupby('ID').filter(lambda x: x.IS_TESTED.nunique() > 1) ``` Output: ``` ID IS_TESTED TEST_NAME 0 700 True A 1 700 False B 3 702 False A 4 702 True B ``` Upvotes: 2
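The two approaches above give identical results; a minimal runnable check using the question's data:

```python
import pandas as pd

d = pd.DataFrame({
    "ID": [700, 700, 701, 702, 702, 703],
    "IS_TESTED": [True, False, True, False, True, True],
    "TEST_NAME": ["A", "B", "A", "A", "B", "A"],
})

# transform('nunique') labels every row with its group's distinct count,
# so the boolean mask lines up with the original index.
by_transform = d[d.groupby("ID").IS_TESTED.transform("nunique") > 1]

# filter() drops whole groups whose IS_TESTED values are all identical.
by_filter = d.groupby("ID").filter(lambda x: x.IS_TESTED.nunique() > 1)

print(by_transform["ID"].tolist())  # [700, 700, 702, 702]
print(by_transform.equals(by_filter))  # True
```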
2018/03/15
1,428
6,125
<issue_start>username_0: I have successfully set up an Entity Event Listener in Symfony 3.4. I registered the service like this, in the Resources/config/services.yml of a separate bundle: ``` services: resource.listener: class: Company\ResourceManagementBundle\EventListener\Entity\ResourceUpdateListener arguments: ["@security.token_storage"] tags: - { name: doctrine.event_listener, event: preUpdate, method: preUpdate } - { name: doctrine.event_listener, event: postUpdate, method: postUpdate } ``` I also added the necessary code in the Entity: ``` /** * @ORM\EntityListeners( * {"Company\ResourceManagementBundle\EventListener\Entity\ResourceUpdateListener"} * ) */ class Resource implements UserInterface { ``` Then in my Event Listener, I created a constructor with Token Storage as the only parameter, type-hinted with the TokenStorageInterface. Here is my event Listener: ``` namespace Company\ResourceManagementBundle\EventListener\Entity; use Company\ResourceManagementBundle\Service\UserNoteLogger; use Doctrine\ORM\Event\LifecycleEventArgs; use Doctrine\ORM\Event\PreUpdateEventArgs; use Symfony\Component\Security\Csrf\TokenStorage\TokenStorageInterface; class ResourceUpdateListener { protected $fields; private $token_storage; function __construct(TokenStorageInterface $token_storage) { $this->token_storage = $token_storage; } public function preUpdate(Resource $resource, PreUpdateEventArgs $args) { $entity = $args->getEntity(); if ($entity instanceof Resource) { $changes = $args->getEntityChangeSet(); } return $this->setFields($entity, $args); } public function postUpdate(Resource $resource, LifecycleEventArgs $args) { $entity = $args->getEntity(); $hasChanged = false; if ($entity instanceof Resource) { foreach ($this->fields as $field => $detail) { if($detail[0] == null) { //continue; } else { $hasChanged = true; } } if ($hasChanged == true) { $userNoteLog = new UserNoteLogger($args->getEntityManager(), $this->token_storage); $comment = "The resource, " . 
$resource->getFullName() . ", was changed by the user, " . $this->token_storage->getToken()->getUser()->getFullName(); $userNoteLog->logNote($comment, $resource); } } } public function setFields($entity, LifecycleEventArgs $eventArgs) { $this->fields = array_diff_key( $eventArgs->getEntityChangeSet(), [ 'modified'=>0 ] ); return true; } } ``` This is the error I receive: > > Type error: Argument 1 passed to Company\ResourceManagementBundle\EventListener\Entity\ResourceUpdateListener::\_\_construct() must implement interface Symfony\Component\Security\Csrf\TokenStorage\TokenStorageInterface, none given, called in /var/www/sites/sentient02/vendor/doctrine/doctrine-bundle/Mapping/ContainerAwareEntityListenerResolver.php on line 83 > > > This error does not go away, as long as the Token Storage parameter exists in the constructor. If you look at the **EventListener** code above, I am trying to log information every time the Entity changes during the update, and this information needs to know the name of the logged in user.<issue_comment>username_1: Try putting `autowire: true` in your `services.yml`: ``` services: # default configuration for services in *this* file _defaults: autowire: true # Automatically injects dependencies in your services. autoconfigure: true # Automatically registers your services as commands, event subscribers, etc. public: false # Allows optimizing the container by removing unused services; this also means # fetching services directly from the container via $container->get() won't work. # The best practice is to be explicit about your dependencies anyway. ``` Take a look at the docs <https://symfony.com/doc/current/service_container/3.3-di-changes.html>. Upvotes: 0 <issue_comment>username_2: The service definition seems OK, but by default, the EntityListener annotation only support empty constructor. 
See the [Doctrine documentation](http://doctrine-orm.readthedocs.io/en/latest/reference/events.html#entity-listeners-class): > > An Entity Listener could be any class, by default it should be a class > with a no-arg constructor. > > > A little clarification here: * a **Doctrine Event Listener** is called for all entities (it generally requires a test like `$entity instanceof MyClass` before doing any action) * a **Doctrine Entity Listener** is used for simple cases, and is only called for one entity class. In your code, it seems that you wrote a common doctrine listener, but used it as an entity listener. Furthermore, you already declare your service as a common doctrine event listener with the `doctrine.event_listener` tag (an EntityListener must have the `doctrine.orm.entity_listener` tag). In short, if you just delete the `@ORM\EntityListeners` annotation, it should be ok. Note that to get the changes when updating an entity, you can use the onFlush event. Here is an example with the very useful `unitOfWork`, used to get an array of all fields that will change in the scheduled updates. ``` /** * @param OnFlushEventArgs $args */ public function onFlush(OnFlushEventArgs $args) { $entityManager = $args->getEntityManager(); $unitOfWork = $entityManager->getUnitOfWork(); // Get all entities scheduled for update in the unit of work. foreach ($unitOfWork->getScheduledEntityUpdates() as $entity) { if ($entity instanceof Resource) { dump( $unitOfWork->getEntityChangeSet($entity) ); // This array contains only the fields and values that will be updated. // TODO: your stuff. } } } ``` Upvotes: 2
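For completeness: if the `@ORM\EntityListeners` annotation were kept instead, the service registration would need the other tag the answer mentions. A sketch against the question's `services.yml` (same class and argument; this is an illustration of DoctrineBundle's `doctrine.orm.entity_listener` tag, not a tested config):

```yaml
services:
    resource.listener:
        class: Company\ResourceManagementBundle\EventListener\Entity\ResourceUpdateListener
        arguments: ["@security.token_storage"]
        tags:
            - { name: doctrine.orm.entity_listener }
```

With that tag, DoctrineBundle's container-aware entity listener resolver fetches the service from the container instead of calling a no-arg constructor, which is what the `ContainerAwareEntityListenerResolver` in the stack trace is for.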
2018/03/15
1,958
8,617
<issue_start>username_0: We sync calendars for our clients using Google Calendar API. `Event.description` has always been in plain text and is supposed to be plain text, right? But recently we started running into cases of HTML formatted text that our code is not prepared to handle. There is actually a very long thread about new Google apps causing this: <https://productforums.google.com/forum/#!topic/calendar/r3OC4cL53NQ> It's actually pretty funny. A Google Cloud team sent me an invitation today and the event description is HTML garbage in my Mac iCal. This is all because Google Calendar apps started writing HTML to the field that is treated as plain text by anyone using Google Calendar API. Even the original iCalendar standard allowed only plain text as part of an event description. HTML markup, such as font attributes (bold, underline) and layout, was not allowed in the text description field.<issue_comment>username_1: In Google's version 3 of the Calendar API documentation, it says the following: "Description of the event. Can contain HTML. Optional." Source: <https://developers.google.com/calendar/v3/reference/events/insert> Upvotes: 1 <issue_comment>username_2: I thought I'd chime in here. Originally, google incorporated the "Internet Calendar Standard" (ICAL -- not iCal, that's Mac only), which was an open standard for calendar databases. It began when programmers wanted an alternative to Microsoft's VCAL standard (short for VISUAL CALENDAR STANDARD -- a reference to their programming utilities, which natively handled these types of items if the header data was included). While they were attempting to write something different, they ended up with something very similar, and the two originally worked well together in a one-way communication mode. Some programmers got brave and programmed two-way or multi-directional modes that were edit capable from Microsoft Outlook or the other platform that was using ICAL.
The problems they dealt with were figuring out how to format data, since VCAL could use HTML (you could copy from any document you created in Office programs or use a mail merge operation to create many events) and in ICAL you had only plain text; the other problem was how to deal with recurring or repeating events, since Exchange treated them as one piece of data with a modifier that denoted how they were duplicated. Microsoft was focused on memory space, since that was costly when the programming was created, and open source was focused on expedience and portability. Most of this HAS NOT CHANGED. Google still uses the ICAL standard as its primary guideline on how to do calendars, since it's portable and the programming will be similar across platforms. However, with more people using more powerful systems, with more than enough power to handle some visual formatting, Google has "allowed" HTML to be added in the data fields. What they have not yet done is set up a section of the API to handle it for you. You have to build your own data handlers for dealing with HTML, and displaying it. In short: YOU HAVE TO PARSE THE HTML FOR YOURSELF. Many visual platforms DO IN FACT have an HTML implementation. The best practice, then, is to grab just a bit of the text, and find out if there is HTML present in it, parse the HTML if you need to, then display. Another option is to find the HTML, then remove it. There are repositories on GITHUB and in SourceForge that have methods for removing the HTML from text, turning it into PLAIN TEXT. I have experienced the same confusion, though, admittedly with different overall problems to solve. I've done it for the last 20 years. It was a lot of reading and finding the right people to augment my own capability for a particular issue, but ultimately, the interconnectivity and secure access markets have begun to work together in providing the linkup between platforms.
Early in the Google life cycle, it all had to be done with powershell and WinAMP apache, using javascript to drop information into temporary files that were then picked up by the opposite programming interface. It meant programming watchdogs (applications that checked if there were changes to files etc) into each system, then having each platform check to see if it was the start or the end of the update chain. It was daunting, and clunky, and it was leveraged over internal servers with little power to spare, but it ran. Eventually, places that dealt with the APIs of either or both would have to buy into large sets of servers to handle the processing loads of different operations, but most of them still stayed limited to small sets. Only one or two moved on to "Bigger and Better Business" structures. Now the big ones are all hybridized, the small ones usually have a single server for internal operations, and are all cloud for their communications. That move was considerably more difficult. Upvotes: 2 <issue_comment>username_3: ##### For what it is worth. I have been able to update the Google Calendar API (Google Events) description field with very basic HTML: HTML headings to `######` Line Breaks Line Breaks Horizontal rule `---` I suspect others will work as well (my event js below). ``` let remarks = '* Coffee * Tea * Milk --- h2 tag! ------- ' let event = { "description": ' --- Super limited HTML!: ' + remarks, "colorId": colorID, "summary": SOtitle, "end": { "dateTime": end, "timeZone": $rootScope.app.timezone }, "attendees": attendeesArrray, "sendUpdates": "all", "start": { "dateTime": start, 'timeZone': timezone }, }; ``` [![enter image description here](https://i.stack.imgur.com/Va6cF.png)](https://i.stack.imgur.com/Va6cF.png) Upvotes: 2 <issue_comment>username_2: Yes, the latest update has "Allowed" a few HTML tags. You'll have to conform your HTML from other linked calendars that talk to each other, and most don't have an easy way to do this. 
For simple text formatting, however, you shouldn't have to worry anymore. For anything beyond that, I suggest the following: Use an attachment. If a message on one server contains HTML, you could collect it to a file attachment and paste a link in the description box pointing to where you would need to edit the information. That way you'd never have to worry about the two talking to each other, so long as your link was active. It would open in the browser. If going to other people, it will be in files contained in the attachments, and you can handle that easily. Otherwise, the only way is to test it out and see how much you can mix it up. I haven't pushed it yet, but I may. I'm just getting back into it a little. Upvotes: 0 <issue_comment>username_4: Like many regular users, I've been using the Apple iOS calendar app for years to access my Google calendar, and every time I edit a note [here's a new fact: I'm usually seeing it in all-day appointments] in the Google calendar using their web interface, it's been converting everything to HTML. The iOS calendar then displays it verbatim, which is pretty dumb to see on that phone screen, already pretty small to be reading a paragraph of notes. This has been going on for years. What a mess. I've never tried the Google Calendar API, but it seems like the wrong way for me to be going, as this handshake should be resolved by the Google and/or Apple folks. I don't really see why leaving plain text alone would be such a bad thing. If a user embeds HTML in a Google calendar note, then sure, leave it that way, but they shouldn't be messing up perfectly readable plain text. I'll see if I can figure out how to report this to Google, but I'll admit most things have worked so well, I've not needed to know how to do this. Update: While I was composing the support request to Google, I think I figured out why this is happening, but it's still a problem. 
I think I copied/pasted a snippet of (bold) formatted text from an email, into the existing plain text in the event note field in my browser. I couldn't see the formatting, but that's probably what triggered Google to convert my otherwise plain text to HTML formatted gibberish. They clearly need a "plain text" checkbox for us, next to the note field in the calendar web interface. And, of course... Apple also needs to support HTML formatting in their Calendar app's note field. I just left them a bug/request report on this subject. Upvotes: 0
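username_2's practical advice above — check whether a description contains HTML and, if so, strip it back to plain text — can be done with just the standard library. A minimal sketch (Python here, since the thread is language-agnostic; the cheap `"<" in text` check is a simplification that treats any angle bracket as markup):

```python
from html.parser import HTMLParser

class TagStripper(HTMLParser):
    """Keeps only the text content of an HTML fragment, dropping all tags."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Called with the text between tags; entities like &amp; are
        # already decoded because convert_charrefs defaults to True.
        self.chunks.append(data)

def description_to_plain_text(description):
    # Plain-text descriptions pass through untouched.
    if "<" not in description:
        return description
    stripper = TagStripper()
    stripper.feed(description)
    return "".join(stripper.chunks)

print(description_to_plain_text("Agenda:<br><b>1.</b> coffee &amp; tea"))  # Agenda:1. coffee & tea
print(description_to_plain_text("plain text stays plain"))
```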
2018/03/15
788
2,835
<issue_start>username_0: I'm currently working on some code, where the user is able to search through a dictionary with an input and keywords to get the corresponding line number in return. The keywords are actually in the line. Using this code: ``` usr_in = input('enter text to search: ') print('That text is found at line(s) {}'.format ( [v for k,v in my_dict.items() if usr_in in k])) ``` Now I also want to have another user input right after this one, where the user can type in the number of the line and in return the whole line shows up as an output. I've tried to alter the code above, but without results. The altered version: ``` usr_in_line = input('Please enter the line number: ') print('Corresponding Line {}'.format( [k for v,k in my_dict.items() if usr_in_line in v])) ``` The whole code can be found here: [Is it possible to save the print-output in a dict with python?](https://stackoverflow.com/questions/49287058/is-it-possible-to-save-the-print-output-in-a-dict-with-python/49287082?noredirect=1#comment85615605_49287082)<issue_comment>username_1: Try using default parameters in a function controlling the lookup Optionally, if you're doing input from the command line, check out argparse Those things together should give you a default value for input, and an optional flag that can be used in command line, to print out the entire line. Honestly, defaults are the best way to provide behaviour on a function like this Upvotes: 0 <issue_comment>username_2: A suggestion I have is to rebuild your dictionary from the previous question but in the loop, you'd add two sets of key-value pairs. One set where the key is the sentence and the value is the line number and another set where the key is the line number and the value is the sentence. There wouldn't be any risk in doing so because the keys are different types: `str` and `int`. This code comes from your previous question and I've added a couple of lines.
``` my_dict={} filepath = 'myfile.txt' with open(filepath) as fp: line = fp.readline() cnt = 1 while line: # print("Line {}: {}".format(cnt, line.strip())) s = str(line.strip()) # Added my_dict[cnt] = s # Added my_dict[s] = cnt line = fp.readline() cnt += 1 ``` You can now not only ask for the word search, but also ask for the line: ``` usr_in = input('enter text to search: ') print('That text is found at line(s) {}'.format([v for k,v in my_dict.items() if usr_in in k])) line_in = int(input('enter line: ')) print('The line is: {}'.format(my_dict[line_in])) ``` Note that I didn't do any error checking where you specify a line number that may not exist. I'll leave that to you to figure out, but catching a possible `KeyError` exception may be useful. Upvotes: 2 [selected_answer]
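The accepted idea can be exercised end to end. A runnable sketch, with an in-memory list standing in for `myfile.txt` and plain arguments standing in for `input()` (the `isinstance` guard is an addition — without it, `text in k` would raise a `TypeError` when `k` is one of the integer keys):

```python
lines = ["alpha beta", "gamma delta", "beta gamma"]

# Two kinds of keys in one dict: int -> line text, str -> line number.
my_dict = {}
for cnt, s in enumerate(lines, start=1):
    my_dict[cnt] = s   # line number -> text
    my_dict[s] = cnt   # text -> line number

def search(text):
    # Line numbers of every line whose text contains the search term.
    return [v for k, v in my_dict.items() if isinstance(k, str) and text in k]

def line_at(number):
    # .get() guards against line numbers that do not exist (cf. the KeyError note).
    return my_dict.get(number, "<no such line>")

print(search("beta"))  # [1, 3]
print(line_at(2))      # gamma delta
print(line_at(99))     # <no such line>
```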
2018/03/15
762
2,717
<issue_start>username_0: As shown below, I have a struct that has a few bitfields. I also have a union that contains a volatile 64-bit value and this struct. Are changes to the struct, accessed through the union, also volatile? i.e. since the struct and the volatile int share the same memory location, do accesses to the struct also result in accessing memory for each access, or will the compiler store them in registers? ``` typedef struct { uint64 my_info_1:12; uint64 my_info_2:16; uint64 reserved: 36; } my_info_s; typedef union { my_info_s my_info_struct; volatile unsigned long long my_info_vol; //64 bit } my_info_u; my_info_u my_info; //Are these volatile accesses? my_info.my_info_1 = 4; my_info.my_info_2 = 8; //This is the motivation- update a bunch of bitfields, but set them in one shot. atomic_set(atomic_location, my_info.my_info_vol); ``` I cannot use locks/mutexes to achieve this because of some real-time constraints.
2018/03/15
2,590
7,891
<issue_start>username_0: Having used this page as a reference: <http://executeautomation.com/blog/running-chrome-in-headless-mode-with-selenium-c/> I have tried to get Chrome working in headless mode (with a view to running under SpecFlow/xUnit). This has failed spectacularly and I would like to know if anyone has a solution/fix. I'm running VS 2015 on Windows 7 with a .NET 4.7 console application and Google Chrome Version 65.0.3325.162 (Official Build) (64-bit)... Here's my packages.config: ``` xml version="1.0" encoding="utf-8"? ``` And here's my Class1.cs: ``` using OpenQA.Selenium.Chrome; using Xunit; namespace xUnitSpecFlowChrome { public class Class1 { [Fact] public void GoTest() { var options = new ChromeOptions(); options.AddArgument("--headless"); options.AddArgument("start-maximized"); options.AddArgument("--disable-gpu"); options.AddArgument("--disable-extensions"); var driver = new ChromeDriver(options); driver.Navigate().GoToUrl("http://www.daringfireball.net"); var title = driver.Title; } } } ``` And here's what I see, when running that test, in the output: ``` ------ Run test started ------ [xUnit.net 00:00:00.5727852] Starting: xUnitSpecFlowChrome [xUnit.net 00:00:03.3212989] xUnitSpecFlowChrome.Class1.GoTest [FAIL] [xUnit.net 00:00:03.3239354] System.InvalidOperationException : session not created exception from tab crashed (Session info: headless chrome=65.0.3325.162) (Driver info: chromedriver=2.36.540470 (e522d04694c7ebea4ba8821272dbef4f9b818c91),platform=Windows NT 6.1.7601 SP1 x86_64) (SessionNotCreated) [xUnit.net 00:00:03.3264983] Stack Trace: [xUnit.net 00:00:03.3276332] at OpenQA.Selenium.Remote.RemoteWebDriver.UnpackAndThrowOnError(Response errorResponse) [xUnit.net 00:00:03.3280606] at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters) [xUnit.net 00:00:03.3284914] at OpenQA.Selenium.Remote.RemoteWebDriver.StartSession(ICapabilities desiredCapabilities) [xUnit.net 00:00:03.3289079] at 
OpenQA.Selenium.Remote.RemoteWebDriver..ctor(ICommandExecutor commandExecutor, ICapabilities desiredCapabilities) [xUnit.net 00:00:03.3292786] at OpenQA.Selenium.Chrome.ChromeDriver..ctor(ChromeDriverService service, ChromeOptions options, TimeSpan commandTimeout) [xUnit.net 00:00:03.3296566] at OpenQA.Selenium.Chrome.ChromeDriver..ctor(ChromeOptions options) [xUnit.net 00:00:03.3300055] C:\git\xUnitSpecFlowChrome\xUnitSpecFlowChrome\Class1.cs(14,0): at xUnitSpecFlowChrome.Class1.GoTest() [xUnit.net 00:00:03.3573096] Finished: xUnitSpecFlowChrome ========== Run test finished: 1 run (0:00:04.757) ========== ``` **UPDATE** Having changed nothing of apparent consequence, I am now seeing this in the output pane: ``` ------ Run test started ------ [xUnit.net 00:00:00.3376673] Starting: xUnitSpecFlowChrome [xUnit.net 00:01:00.8905380] xUnitSpecFlowChrome.Class1.GoTest [FAIL] [xUnit.net 00:01:00.9024429] OpenQA.Selenium.WebDriverException : The HTTP request to the remote WebDriver server for URL http://localhost:23698/session timed out after 60 seconds. 
[xUnit.net 00:01:00.9038433] ---- System.Net.WebException : The operation has timed out [xUnit.net 00:01:00.9185440] Stack Trace: [xUnit.net 00:01:00.9199720] at OpenQA.Selenium.Remote.HttpCommandExecutor.MakeHttpRequest(HttpRequestInfo requestInfo) [xUnit.net 00:01:00.9212636] at OpenQA.Selenium.Remote.HttpCommandExecutor.Execute(Command commandToExecute) [xUnit.net 00:01:00.9224907] at OpenQA.Selenium.Remote.DriverServiceCommandExecutor.Execute(Command commandToExecute) [xUnit.net 00:01:00.9237665] at OpenQA.Selenium.Remote.RemoteWebDriver.Execute(String driverCommandToExecute, Dictionary`2 parameters) [xUnit.net 00:01:00.9250241] at OpenQA.Selenium.Remote.RemoteWebDriver.StartSession(ICapabilities desiredCapabilities) [xUnit.net 00:01:00.9262697] at OpenQA.Selenium.Remote.RemoteWebDriver..ctor(ICommandExecutor commandExecutor, ICapabilities desiredCapabilities) [xUnit.net 00:01:00.9275900] at OpenQA.Selenium.Chrome.ChromeDriver..ctor(ChromeDriverService service, ChromeOptions options, TimeSpan commandTimeout) [xUnit.net 00:01:00.9289251] at OpenQA.Selenium.Chrome.ChromeDriver..ctor(ChromeOptions options) [xUnit.net 00:01:00.9302787] C:\git\xUnitSpecFlowChrome\xUnitSpecFlowChrome\Class1.cs(16,0): at xUnitSpecFlowChrome.Class1.GoTest() [xUnit.net 00:01:00.9315103] ----- Inner Stack Trace ----- [xUnit.net 00:01:00.9327543] at System.Net.HttpWebRequest.GetResponse() [xUnit.net 00:01:00.9339867] at OpenQA.Selenium.Remote.HttpCommandExecutor.MakeHttpRequest(HttpRequestInfo requestInfo) [xUnit.net 00:01:00.9698773] Finished: xUnitSpecFlowChrome ========== Run test finished: 1 run (0:01:01.085) ========== ```<issue_comment>username_1: The error **System.InvalidOperationException : session not created exception** indicates that a session could not be created. 
To configure **Headless Chrome** you have to pass the following arguments through an instance of **ChromeOptions**: ``` var options = new ChromeOptions(); options.AddArgument("--headless"); options.AddArgument("start-maximized"); options.AddArgument("--disable-gpu"); options.AddArgument("--disable-extensions"); var driver = new ChromeDriver(options); ``` Additional Considerations -------------------------- Ensure the following: * *Clean* your *Project Workspace* through your *IDE* and *Rebuild* your project with required dependencies only. * Use the [*CCleaner*](https://www.ccleaner.com/ccleaner) tool to wipe off all the OS chores before and after the execution of your *Test Suite*. * If your base *Chrome* version is too old, then uninstall it through [*Revo Uninstaller*](https://www.revouninstaller.com/revo_uninstaller_free_download.html) and install a recent GA and released version of Chrome. * Execute your `@Test`. --- Update ------ As you are still seeing the error as **from tab crashed**, this issue relates to a **lack of memory** within the system/docker\_image. You can find a detailed discussion in [**UnknownError: session deleted because of page crash from tab crashed**](https://github.com/elgalu/docker-selenium/issues/20) Upvotes: 0 <issue_comment>username_2: Genuinely can't believe this was it: ``` options.AddArgument("no-sandbox"); ``` As accidentally found in: <https://stackoverflow.com/a/39299877/71376> Don't know why this option isn't documented or flagged up more - but I hope this helps others. Upvotes: 3 [selected_answer]<issue_comment>username_3: The posted answers did not work for me - the test works in Chrome, but not with the `--headless` option argument; instead I got a timeout error. This code worked with the latest version... Window size is important for headless mode. Default sizes are `1280x800` for headed mode and `1366x768` in headless mode.
``` public void GoTest() { ChromeOptions options = new ChromeOptions(); options.AddArgument("--window-size=1920,1080"); options.AddArgument("--disable-gpu"); options.AddArgument("--disable-extensions"); options.AddArgument("--proxy-server='direct://'"); options.AddArgument("--proxy-bypass-list=*"); options.AddArgument("--start-maximized"); options.AddArgument("--headless"); options.AddArgument("no-sandbox"); var _driver = new ChromeDriver(options); _driver.Navigate().GoToUrl("https://www.google.com"); object html = _driver.ExecuteScript("return document.body.parentElement.outerHTML"); _driver.Close(); } ``` Upvotes: 0
2018/03/15
1,229
4,467
<issue_start>username_0: I am getting an error code that is auto-generated when I use webpack and babel in a NodeJs React application: ``` Uncaught ReferenceError: Invalid left-hand side in assignment ``` Here is the offending line in the code auto-generated by babel/webpack: ``` "development" = 'development'; // bundle.js line 17933 ``` If I manually delete the above line from the auto-generated code then the error goes away. But that is not a good solution for obvious reasons. I'm using webpack 4, as shown in this excerpt from **package.json**: ``` "babel-loader": "^7.1.4", "babel-preset-stage-0": "^6.24.1", "webpack": "^4.1.1", "webpack-cli": "^2.0.12" ``` Here is my **webpack.config.js**: ``` module.exports = { entry: './source/main.js', output: { path: __dirname, filename: 'public/javascripts/bundle.js' }, devtool: "inline-source-map", module: { rules: [ { test: /.js?$/, exclude: /node_modules/, use: { loader: "babel-loader", options: { presets: ['react', 'stage-0'] } } } ] } }; ``` I build **bundle.js** like this: ``` node_modules/.bin/webpack --mode development ``` The error goes away if I keep everything the same but use **webpack 3.11.0** and uninstall **webpack-cli** and build **bundle.js** like this: ``` node_modules/.bin/webpack ``` **NOTE**: *The latest webpack 3.X versions support the webpack 4.X syntax shown in my **webpack.config.js**.* Details: ``` $ node --version v9.8.0 $ npm --version 5.7.1 ```<issue_comment>username_1: The error **System.InvalidOperationException : session not created exception** indicates that a session could not be created. 
To configure **Headless Chrome** you have to pass the following arguments through an instance of **ChromeOptions** : ``` var options = new ChromeOptions(); options.AddArgument("--headless"); options.AddArgument("start-maximized"); options.AddArgument("--disable-gpu"); options.AddArgument("--disable-extensions"); var driver = new ChromeDriver(options); ``` Additional Considerations -------------------------- Ensure the following : * *Clean* your *Project Workspace* through your *IDE* and *Rebuild* your project with required dependencies only. * Use the [*CCleaner*](https://www.ccleaner.com/ccleaner) tool to wipe off all the OS chores before and after the execution of your *Test Suite*. * If your base *Chrome* version is too old, then uninstall it through [*Revo Uninstaller*](https://www.revouninstaller.com/revo_uninstaller_free_download.html) and install a recent GA released version of Chrome. * Execute your `@Test`. --- Update ------ As you are still seeing the error **from tab crashed**, this issue relates to a **lack of memory** within the system/docker\_image. You can find a detailed discussion in [**UnknownError: session deleted because of page crash from tab crashed**](https://github.com/elgalu/docker-selenium/issues/20) Upvotes: 0 <issue_comment>username_2: Genuinely can't believe this was it: ``` options.AddArgument("no-sandbox"); ``` As accidentally found in: <https://stackoverflow.com/a/39299877/71376> Don't know why this option isn't documented or flagged up more - but I hope this helps others. Upvotes: 3 [selected_answer]<issue_comment>username_3: The posted answers did not work for me - the test works in Chrome, but with the `--headless` argument it kept timing out. This code worked with the latest version... Window size is important for headless mode. Default sizes are `1280x800` for headed mode and `1366x768` in headless mode.
``` public void GoTest() { ChromeOptions options = new ChromeOptions(); options.AddArgument("--window-size=1920,1080"); options.AddArgument("--disable-gpu"); options.AddArgument("--disable-extensions"); options.AddArgument("--proxy-server='direct://'"); options.AddArgument("--proxy-bypass-list=*"); options.AddArgument("--start-maximized"); options.AddArgument("--headless"); options.AddArgument("no-sandbox"); var _driver = new ChromeDriver(options); _driver.Navigate().GoToUrl("https://www.google.com"); object html = _driver.ExecuteScript("return document.body.parentElement.outerHTML"); _driver.Close(); } ``` Upvotes: 0
2018/03/15
577
1,635
<issue_start>username_0: I'm writing a simple game and I have a problem with generating values. I need to generate a number between 1 and 500, but only in the form of 10, 20, 30, 40, 50 etc. I have written such a function, but the numbers adjust once for an average of 100 page refreshments. How can I do this? ```js console.log(generatePosition()); function generatePosition() { var x = Math.floor(Math.random() * 500) + 1; var y = Math.floor(Math.random() * 500) + 1; if (x % 10 == 0 && x % 10 == 0) { var cords = [x, y]; } return cords; } ```<issue_comment>username_1: The simplest solution would probably be to generate a random number between 1 and 50, and then multiply it by 10. As a result you will get only random numbers between 10 and 500, just as you wanted :-) So, basically like this: ``` const randomInt = function (min, max) { return Math.floor(Math.random() * (max - min + 1)) + min; }; const randomNumberDivisibleBy10 = randomInt(1, 50) * 10; console.log(randomNumberDivisibleBy10); // => Something such as 10, 20, 30, …, 500 ``` So, your function would look like this: ``` const generatePosition = function () { const x = randomInt(1, 50) * 10, y = randomInt(1, 50) * 10; const coordinates = [ x, y ]; return coordinates; }; ``` Et voilà :-) Upvotes: 2 <issue_comment>username_2: ``` function generatePosition() { var x = Math.floor(Math.random() * 500) + 1; var y = Math.floor(Math.random() * 500) + 1; var x = x - (x % 10); var y = y - (y % 10); var cords = [x, y]; return cords; } ``` You can use this :) Upvotes: -1
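The scale-by-10 idea from the first answer generalizes to any step size. Below is a hedged, framework-free sketch; the names `randomInt`, `randomMultiple`, and `generatePosition` are invented for illustration:

```javascript
// Random integer in [min, max], inclusive on both ends.
function randomInt(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Random multiple of `step` in [step, max]: draw an integer in
// [1, max / step] and scale it, instead of rejection-sampling
// like the question's original generatePosition() did.
function randomMultiple(step, max) {
  return randomInt(1, Math.floor(max / step)) * step;
}

// A coordinate pair where each value is one of 10, 20, ..., 500.
function generatePosition() {
  return [randomMultiple(10, 500), randomMultiple(10, 500)];
}
```

Because the step is a parameter, the same helper also covers grids of 5, 25, and so on without touching the sampling logic.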
2018/03/15
1,931
8,015
<issue_start>username_0: I'm still a bit new to rxjs in Angular 5 and it a bit hard to formulate my question. Still I hope for some tips. I often end up with the same setup: * Multiple Components to display the same data * A single Service to access the data Now I have 2 options when receiving data via Observables: a) Subscribe to an observable to get data once, and subscribe again to get updates b) Subscribe to an observable and always get updates when data changes a) ist straight forward, but with b) I often have troubles and wondering if this is the right way to use Observables. One issue is, that unsubscribe gets important in some cases, and missing the unsubscribe leads to serious garbage being executed with each update of the observable. On the other hand, with option a) I might miss some updates in one component when another component is updating the underlying data. Is there any best practice to avoid all these pitfalls?<issue_comment>username_1: Another option is to use the observable to retrieve the data, then let Angular's change detection handle the rest. With Angular's change detection, it will update the UI as the data changes ... no need to subscribe again to get updates. For example, I have this type of UI: [![enter image description here](https://i.stack.imgur.com/emsyt.png)](https://i.stack.imgur.com/emsyt.png) I retrieve the data using Http and an observable. But then I leverage Angular's change detection to handle any updates. 
Here is a piece of my service: ``` @Injectable() export class MovieService { private moviesUrl = 'api/movies'; private movies: IMovie[]; currentMovie: IMovie | null; constructor(private http: HttpClient) { } getMovies(): Observable<IMovie[]> { if (this.movies) { return of(this.movies); } return this.http.get<IMovie[]>(this.moviesUrl) .pipe( tap(data => console.log(JSON.stringify(data))), tap(data => this.movies = data), catchError(this.handleError) ); } // more stuff here } ``` And here is the *full* code (except the imports) for the detail component shown on the right above: ``` export class MovieDetailComponent implements OnInit { pageTitle: string = 'Movie Detail'; errorMessage: string; get movie(): IMovie | null { return this.movieService.currentMovie; } constructor(private movieService: MovieService) { } ngOnInit(): void { } } ``` You can see the complete example (with editing) here: <https://github.com/username_1/MovieHunter-communication/tree/master/MH-Take5> Upvotes: 1 <issue_comment>username_2: It sounds like the concept you are trying to figure out is how to regulate subscription management for RxJS when using Angular. There are three main options that come to mind for this: 1. Automatically create and delete subscriptions using the `async` pipe. If you want to make UI changes based strictly on data emitted from an observable, then the `async` pipe handily creates a subscription to the given observable when the component is created and removes that subscription when the component is destroyed. This is arguably the cleanest way to use subscriptions. As an example: ``` @Component({ selector: 'my-component', template: ` {{ value$ | async }} ` }) export class MyComponent { public value$: Observable<string> = this.myValueService .getValues() .map(value => `Value: ${value}`); constructor(private myValueService: MyValueService) {} } ``` 2. Manage subscriptions in components by creating class-level `Subscription` objects in the `ngOnInit` method and then unsubscribing in the `ngOnDestroy` method.
This is the convention that I tend toward when I need access to the subscriptions within the component code. Having `ngOnInit` and `ngOnDestroy` methods in every component that uses subscriptions adds boilerplate but is generally necessary if you need subscriptions in your component code. For example: ``` @Component({ selector: 'my-component', template: ` ` }) export class MyComponent implements OnInit, OnDestroy { private mySub: Subscription; constructor(private myValueService: MyValueService) {} public ngOnInit() { this.mySub = this.myValueService.getValue().subscribe((value) => { console.log(value); // Do something with value }); } public ngOnDestroy() { this.mySub.unsubscribe(); } } ``` 3. Limit subscription life by using a limiting operation, such as `first()`. This is what is done by default when you initiate a subscription to `HttpClient` observables. This has the benefit of requiring little code, but it can also lead to cases where the subscription is not cleaned up (e.g., if the observable never emits). If everything that I want to do with an observable can be done in the view, then I virtually always use **option 1**. This covers most cases in my experience. You can always use intermediate observables to produce an observable that you can subscribe to in the view if you need to. Intermediate observables don't introduce memory leak concerns. Upvotes: 2 <issue_comment>username_3: When passing data between components, I find the RxJS `BehaviorSubject` very useful. You can also use a regular RxJS `Subject` for sharing data via a service, but here’s why I prefer a BehaviorSubject. 1. It will always return the current value on subscription - there is no need to call onNext(). 2. It has a getValue() function to extract the last value as raw data. 3. It ensures that the component always receives the most recent data. 4. You can get an observable from the BehaviorSubject using its `asObservable()` method. 5.
[Refer this for more](https://stackoverflow.com/questions/43348463/what-is-the-difference-between-subject-and-behaviorsubject) **Example** In a service, we will create a private BehaviorSubject that will hold the current value of the message. We define a currentMessage variable to handle this data stream as an observable that will be used by other components. Lastly, we create the function that calls next on the `BehaviorSubject` to change its value. The parent, child, and sibling components all receive the same treatment. We inject the SharedService in the components, then subscribe to the currentMessage observable and set its value equal to the message variable. Now if we create a function in any one of these components that changes the value of the message, the updated value is automatically broadcast to all other components. **shared.service.ts** ``` import { Injectable } from '@angular/core'; import { BehaviorSubject } from 'rxjs/BehaviorSubject'; @Injectable() export class SharedService { private messageSource = new BehaviorSubject("default message"); currentMessage = this.messageSource.asObservable(); constructor() { } changeMessage(message: string) { this.messageSource.next(message) } } ``` **parent.component.ts** ``` import { Component, OnInit } from '@angular/core'; import { SharedService } from "../shared.service"; @Component({ selector: 'app-sibling', template: ` {{message}} New Message `, styleUrls: ['./sibling.component.css'] }) export class SiblingComponent implements OnInit { message: string; constructor(private service: SharedService) { } ngOnInit() { this.service.currentMessage.subscribe(message => this.message = message) } newMessage() { this.service.changeMessage("Hello from Sibling") } } ``` **sibling.component.ts** ``` import { Component, OnInit } from '@angular/core'; import { SharedService } from "../shared.service"; @Component({ selector: 'app-sibling', template: ` {{message}} New Message `, styleUrls: ['./sibling.component.css'] }) export
class SiblingComponent implements OnInit { message: string; constructor(private service: SharedService) { } ngOnInit() { this.service.currentMessage.subscribe(message => this.message = message) } newMessage() { this.service.changeMessage("Hello from Sibling"); } } ``` Upvotes: 1
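The replay-the-latest-value behavior that makes `BehaviorSubject` convenient here does not depend on RxJS. A minimal, dependency-free sketch of the same contract, with class and method names invented for illustration:

```javascript
// A tiny stand-in for RxJS BehaviorSubject: holds a current value,
// notifies listeners on next(), and replays the current value to
// every new subscriber (the key difference from a plain Subject).
class TinyBehaviorSubject {
  constructor(initialValue) {
    this.value = initialValue;
    this.listeners = [];
  }

  // Register a listener, immediately hand it the current value,
  // and return an unsubscribe function.
  subscribe(listener) {
    this.listeners.push(listener);
    listener(this.value);
    return () => {
      this.listeners = this.listeners.filter(l => l !== listener);
    };
  }

  next(value) {
    this.value = value;
    this.listeners.forEach(l => l(value));
  }

  getValue() {
    return this.value;
  }
}
```

The returned unsubscribe function plays the role of the `Subscription.unsubscribe()` cleanup discussed in the earlier answers.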
2018/03/15
486
1,603
<issue_start>username_0: If I had an input field like so for example: ``` ``` And the user enters 'abc' for example, how can I change this immediately to '123'? I want this to happen as soon as soon as the user enters 'abc'. How can I do this?<issue_comment>username_1: Hope this will help you, use keyup event to get entered value ``` $("#inputBox").on("keyup", function(){ if($(this).val() == "abc") $(this).val("123"); }) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You could do something like this: HTML: ``` ``` JS/jQuery: ``` $(document).on('keydown', '#inputBox', function(){ $(this).val("Your new value"); return false; }); ``` <https://jsfiddle.net/j0ug8y3e/> There is a 'change' event on jQuery, but it's only called when the input has lost it's focus, so using a keydown, keypressed or keyup is better. Upvotes: 1 <issue_comment>username_3: ``` $( "#inputBox" ).keyup(function() { if($(this).val() == "abc") { $(this).val("123"); } }); ``` Upvotes: 0 <issue_comment>username_4: This is the full solution, it will convert each letter to its numerical value as you type: ``` const input = $('#inputBox'); input.keyup(event => { input.val(processContents(input.val())); }); function processContents(value) { let contents = value.split(''); contents.forEach((char, index) => { var charPos = char.toLowerCase().charCodeAt(0); console.log(charPos); if (charPos >= 97 && charPos <= 122) { contents[index] = charPos - 96; } }); return contents.join(''); } ``` Upvotes: 0
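The letter-to-number mapping in the last answer is easiest to reason about as a pure function, separate from the jQuery event wiring. A hedged sketch; the name `lettersToPositions` is invented, and the behavior assumes ASCII letters only:

```javascript
// Replace each ASCII letter with its 1-based alphabet position
// (case-insensitive); every other character passes through unchanged.
function lettersToPositions(value) {
  return value
    .split('')
    .map(ch => {
      const code = ch.toLowerCase().charCodeAt(0);
      return code >= 97 && code <= 122 ? String(code - 96) : ch;
    })
    .join('');
}
```

Wired to a `keyup` handler as in the accepted answer, this turns "abc" into "123" as the user types.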
2018/03/15
492
1,657
<issue_start>username_0: Why is Live Reload, Hot Reload, and Remote Debugger all unavailable? I've tried 1. Reloading JS Bundle 2. Restarting the simulator 3. Restarting packager 4. Restarting packager and clearing cache 5. Resetting the simulator 6. Restarting the computer Prior to this, everything was working fine. I'm relatively new to the React Native development environment. Is there a place to look (a command or a log file or something) that may have more information figure out what the issue is? For reference, my app was created using the `create-react-native-app` command. [![enter image description here](https://i.stack.imgur.com/700WM.png)](https://i.stack.imgur.com/700WM.png)<issue_comment>username_1: I solved my own problem. For some reason the `.expo/settings.json` had `dev: true` even though this wasn't reflecting anywhere in the UI. Diffing file trees of the current project with older versions solved my problem. ``` 4c4 < "dev": false, --- > "dev": true, ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: In order to solve this issue, go to `.expo/settings.json` and change `dev` to true: [![enter image description here](https://i.stack.imgur.com/HOoU1.png)](https://i.stack.imgur.com/HOoU1.png) Credits: <https://github.com/expo/expo/issues/1210#issuecomment-402347285> Upvotes: 1 <issue_comment>username_3: You don't need to change `.expo/settings.json` manually. In the `Expo DevTools` window, which is normally running at `http://localhost:19002`, just toggle `production mode` off.[![enter image description here](https://i.stack.imgur.com/yKsxY.png)](https://i.stack.imgur.com/yKsxY.png) Upvotes: 1
2018/03/15
588
1,817
<issue_start>username_0: For some reason after updating to the latest version of Windows 10 build, Query.h file fails to compile with my C++ code! We use Query for our FPS time-step counter which is vital for any type of game engine, however since the transmission from version 10.0.15063.0 to 10.0.16299.0, we get the following error messages when attempting to build the vs project: 1>c:\program files (x86)\windows kits\10\include\10.0.16299.0\um\query.h(262): error C2059: syntax error: '||' 1>c:\program files (x86)\windows kits\10\include\10.0.16299.0\um\query.h(262): error C2238: unexpected token(s) preceding ';' Is anyone else getting the same issue? It's fundamental for us to use Query.h as it keeps our frame count ticking at a constant speed via high-res counters! I hope someone can point me to the right direction here. Thanks. William.<issue_comment>username_1: I solved my own problem. For some reason the `.expo/settings.json` had `dev: true` even though this wasn't reflecting anywhere in the UI. Diffing file trees of the current project with older versions solved my problem. ``` 4c4 < "dev": false, --- > "dev": true, ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: In order to solve this issue, go to `.expo/settings.json` and change `dev` to true: [![enter image description here](https://i.stack.imgur.com/HOoU1.png)](https://i.stack.imgur.com/HOoU1.png) Credits: <https://github.com/expo/expo/issues/1210#issuecomment-402347285> Upvotes: 1 <issue_comment>username_3: You don't need to change `.expo/settings.json` manually. In the `Expo DevTools` window, which is normally running at `http://localhost:19002`, just toggle `production mode` off.[![enter image description here](https://i.stack.imgur.com/yKsxY.png)](https://i.stack.imgur.com/yKsxY.png) Upvotes: 1
2018/03/15
955
2,347
<issue_start>username_0: I have an input file. The first string of the third column will always be the fourth column. I want to remove that fourth-column string from the start of the third column. The input format is ``` 7 6502549 TA T 7 6502822 GAAGAA G 17 8347553 TAA T 17 8354321 CG C 17 8363312 CCTT C 17 8366613 AAT A 17 8366623 CCTTT C ``` The desired output is ``` 7 6502549 A T 7 6502822 AAGAA G 17 8347553 AA T 17 8354321 G C 17 8363312 CTT C 17 8366613 AT A 17 8366623 CTTT C ```<issue_comment>username_1: `awk` to the rescue! If it doesn't match, no replacement takes place... ``` $ awk '{sub("^"$4,"",$3)}1' file | column -t 7 6502549 A T 7 6502822 AAGAA G 17 8347553 AA T 17 8354321 G C 17 8363312 CTT C 17 8366613 AT A 17 8366623 CTTT C ``` Upvotes: 0 <issue_comment>username_2: If your data is valid and the first character of the 3rd column is always the 4th column, you can just remove the first character of the 3rd column with `substr`: ``` awk '{print $1,$2,substr($3,2),$4}' in.txt ``` Upvotes: 0 <issue_comment>username_3: **awk** ``` awk -v OFS="\t" '{sub($4,"",$3)}1' f1 7 6502549 A T 7 6502822 AAGAA G 17 8347553 AA T 17 8354321 G C 17 8363312 CTT C 17 8366613 AT A 17 8366623 CTTT C ``` `sub($4,"",$3)` : `sub` will substitute the first occurrence of the fourth field in the third field with an empty string Upvotes: 2 [selected_answer]<issue_comment>username_4: In case the solution does not have to be either sed or awk, use cut: ``` cut --complement -c 17 ``` Output for sample input: ``` 7 6502549 A T 7 6502822 AAGAA G 17 8347553 AA T 17 8354321 G C 17 8363312 CTT C 17 8366613 AT A 17 8366623 CTTT C ``` Explanation: Cut the 17th character `-c 17`. But invert the selection to print `--complement`. *(Note: This answer is intentionally slightly provocative. It would not hurt to trigger some elaboration by OP on special cases, e.g.
when there is no identity or for empty lines or lines which do not match the very static pattern which is applicable to all shown sample input lines.)* Upvotes: 0
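For comparison outside awk and cut, the same transformation is a few lines of plain JavaScript - strip the fourth field from the front of the third field when it matches. The function name is invented for this sketch:

```javascript
// Transform one whitespace-separated line: if field 3 starts with
// field 4, drop that prefix (mirrors awk's sub("^"$4, "", $3)).
function stripPrefixField(line) {
  const fields = line.trim().split(/\s+/);
  if (fields.length >= 4 && fields[2].startsWith(fields[3])) {
    fields[2] = fields[2].slice(fields[3].length);
  }
  return fields.join(' ');
}
```

Like the awk version, a line whose third field does not begin with the fourth field passes through unchanged.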
2018/03/15
637
2,255
<issue_start>username_0: I have "n" different classes ``` .classOne .classTwo .classThree .classFour .... .classN ``` I want to bind `.classTwo` & unbind the rest except `.classOne`. How should I do that? ``` $("#container").off("click"); /*except classOne*/ $("#container").on("click.ClassTwo",function... ``` **HTML** ``` [Show alert] [Click dont show anything] [Show alert] [Show alert] [Show alert] ``` After clicking on this button, the click event deactivates for every element except `classOne` ...(if you click it, it still shows an alert) and at the same time `.classTwo` gets the capacity to show an alert after clicking ``` [] ``` So, at the end the arrangement would be: ``` [Show alert] [Show alert] [Click dont show anything] [Click dont show anything] [Click dont show anything].... ```<issue_comment>username_1: If I understand what you are saying, they have a function bound to them already and then when you click on the button you want to unbind all and then rebind all except `.classTwo`, which you can do like this: ``` $('.button').on('click', function() { $('div').off('click'); $('div').not('.classTwo').on('click', function() { // show alert }) }) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Add your classes to an array so they're easier to manage, then join that array & use it as your jQuery selector. ``` classList = ['One','Two','Three'...]; // remove 'One' from the classList if needed. var index = classList.indexOf('One'); if (index > -1) { classList.splice(index, 1); } classes = classList.join(' '); // the jquery call, notice that we're using a string created by the .join() function called on our array of classes. $('#container').on('click',classes,function(e){ /* ... */ }); // you can also loop one array and push its items to a second one, might not be needed, but could help your work.
classes = []; for(var thisClass in classList){ classes.push('class'+classList[thisClass]); } ``` further reading: [How do I remove a particular element from an array in JavaScript?](https://stackoverflow.com/questions/5767325/how-do-i-remove-a-particular-element-from-an-array-in-javascript) Upvotes: 1
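The splice-based removal used in the second answer is a general pattern worth isolating. This is a hedged sketch with an invented name, `removeItem`, that removes the first matching element in place:

```javascript
// Remove the first occurrence of `item` from `arr` in place,
// returning the array for convenience. If `item` is absent,
// the array is left untouched.
function removeItem(arr, item) {
  const index = arr.indexOf(item);
  if (index > -1) {
    arr.splice(index, 1);
  }
  return arr;
}
```

With the class list held in such an array, rebuilding the jQuery selector after each change is just a `join` away.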
2018/03/15
490
1,788
<issue_start>username_0: I have 2 view controllers (**VC1** & **VC2**). Both of them are made **programmatically** without InterfaceBuilder. I'm trying to **segue from VC1 to VC2** through code. I know there are methods like: * performSegue(withIdentifier: String, sender: Any?) * prepare(for: UIStoryboardSegue, sender: Any?) But I don't have VC2 identifier and I don't have Storyboard. So how to segue through code? Just 5 days with Swift. Thank you in advance.<issue_comment>username_1: Segues are specific to Storyboards (hence the class name – UIStoryboardSegue). If you want to perform a transition from VC1 to VC2 programmatically you have two options: 1. Present VC2 modally in VC1: ``` let vc2 = VC2() self.present(vc2, animated: true) // to return from VC2 to VC1 call this somewhere in VC2 self.dismiss(animated: true) ``` 2. Wrap VC1 in a UINavigationController and then push VC2 to it when needed: ``` let vc2 = VC2() self.navigationController?.pushViewController(vc2, animated: true) // to return from VC2 to VC1 call this somewhere in VC2 self.navigationController?.popViewController(animated: true) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Segues are a storyboard thing. You can't use segues without storyboards. The simple answer to your question is that you can't do that. If you don't use storyboards, you can't use `UIStoryboardSegue`s. That's why they're called storyboard segues. Segues are provided by Storyboards. Stop fighting the tools and start using Storyboards. (I would not hire somebody who said "I don't use Storyboards. I write all my interfaces in code" any more than I would hire somebody who eschews high level languages and does everything in assembly language. Yes it's possible, but why would you do that?) Upvotes: -1
2018/03/15
1,505
5,658
<issue_start>username_0: is there any difference between chained: ``` r.db('catbox').table("bw_mobile").filter( r.row("value")("appVersion")("major").le(2) ).filter( r.row("value")("appVersion")("minor").le(2) ).filter( r.row("value")("appVersion")("patch").le(10) ) ``` nested: ``` r.db('catbox').table("bw_mobile").filter( r.row("value")("appVersion")("major").le(2).and( r.row("value")("appVersion")("minor").le(2).and( r.row("value")("appVersion")("patch").le(10) ) ) ) ``` or lambda functions ``` r.db('catbox').table("bw_mobile").filter( r.js("(function (session) { return session.value.appVersion.major < 0 || ( session.value.appVersion.major == 0 && session.value.appVersion.minor < 0 ) || ( session.value.appVersion.major == 0 && session.value.appVersion.minor == 0 && session.value.appVersion.patch < 71 ) ; })") ) ``` TY!<issue_comment>username_1: I believe that the second case (single `filter` with multiple `and` expressions) is the most efficient and the most convenient to use. I would take the following ideas into account: `r.filter`, as it's [documented](https://rethinkdb.com/api/javascript/filter/), *always* creates a new selection, a stream or an array regardless the results of the predicate function passed to `r.filter`. I'm not sure how selections are implemented in RethinkDB (I believe they are stream-like), but arrays chaining may be an expensive operation allocating intermediate arrays. Compare this to `Array.prototype.filter` which creates a new array as its result. Streams are lazy thus each element is computed (or not) lazily as well, hence making a smaller memory foot-print. Compare this with iterators/streams and generators in other languages (`Iterator`/`Stream` in Java, `IEnumerator` and `yield return` in .NET/C#, iterators and generator functions in JavaScript, `yield` in Python, pipes `|` in shell commands. etc) where you can combine iterators/generators. In any case you have intermediate filters. 
A single expression can replace a bunch of chained filter operations. Note the `r.and` operation in your expression has one very important feature: it is a short-circuit evaluation operation. If the left-hand operand of the AND operation is `false`, the operation does not even need to evaluate the right-hand expression to get the result, which is then always `false`. You cannot do such a thing with `r.filter`. Compare this with the SQL WHERE clause that can be specified once per query (all false cases can be simply discarded by the `AND` operator). Also, from a pragmatic perspective, you can create a factory method that can have a convenient name and returns parameterized ReQL expressions, which can even be assigned to constants since ReQL expressions are immutable and safe to re-use: ```js const maxVersionIs = (major, minor, patch) => r.row("value")("appVersion")("major").le(major) .and(r.row("value")("appVersion")("minor").le(minor)) .and(r.row("value")("appVersion")("patch").le(patch)); const versionPriorToMilestone = maxVersionIs(2, 2, 10); ... .filter(maxVersionIs(major, minor, patch)) ... .filter(versionPriorToMilestone) ``` ReQL expressions: RethinkDB queries are actually expression trees that are much easier to parse and directly convert to an execution plan than to execute JavaScript scripts. The official documentation even [recommends](https://www.rethinkdb.com/api/javascript/js/) avoiding the use of `r.js` for better performance. I guess the cost here is JavaScript runtime setup, isolated script execution and checking for the script timeout. Additionally, scripts are more error-prone, whereas expression trees can be more or less inspected at compilation time. But, for the sake of completeness, `r.js` can be more powerful even with those costs, because ReQL is a limited set of operations. From my personal experience: I had to implement a sort of permission-checking subsystem based on RethinkDB, and I needed a bitwise AND operation in RethinkDB.
Unfortunately, RethinkDB as of 2.3 does not support bitwise operations, so I had to use `r.js`: `` r.js(`(function (user) { return !!(user.permissions & ${permissions}); })`) ``. A future release of RethinkDB will support [bitwise operations](https://github.com/rethinkdb/rethinkdb/pull/6534), so `r.getField('permissions').bitAnd(permissions)` should someday work faster and be combinable with other expressions to fit in a single `filter`. Upvotes: 3 [selected_answer]<issue_comment>username_2: Because I ran into a similar issue last night and there's not a lot of RethinkDB content here, I'll share my solution. ```js export interface LooseObjectInterface { [key: string]: any; }; const tldQuery: LooseObjectInterface = await r.table(tldDatabase) .orderBy({ index: r.asc("name") }) // alphabetical sort .filter((row: any) => { return row("collection") .contains(suppliedCollection[0]) .or(row("collection").contains(suppliedCollection[1])) .or(row("collection").contains(suppliedCollection[2])) .or(row("collection").contains(suppliedCollection[3])) // ^ these extra "or"s silently fail, huzzah! }) .pluck("name") // we just want the names .run(databaseConnection); ``` In my code, `suppliedCollection` is an array of strings. Each item in my table could have any number of strings in `suppliedCollection`, from one to four. Based on these strings in one item's collection, I want to find other items that contain these same strings. Thankfully, RethinkDB silently fails if `suppliedCollection[n]` is `undefined`.
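Underneath the chained, nested, and `r.js` variants in the question, the predicate is just a lexicographic comparison on (major, minor, patch). A plain JavaScript sketch of that comparison, outside ReQL and with invented names:

```javascript
// Compare two {major, minor, patch} versions lexicographically:
// negative if a < b, zero if equal, positive if a > b.
// Each || falls through to the next component only on a tie.
function compareVersions(a, b) {
  return (a.major - b.major) || (a.minor - b.minor) || (a.patch - b.patch);
}

// True when `version` is strictly below `bound`, mirroring the
// r.js predicate in the question (e.g. bound 0.0.71).
function isBelow(version, bound) {
  return compareVersions(version, bound) < 0;
}
```

The `||` chain here plays the same short-circuiting role as the nested `and`/`or` expressions in the ReQL versions.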
2018/03/15
708
2,362
<issue_start>username_0: I am using html-pdf in Electron to generate a PDF from HTML. It worked when I tested with "npm run start" - I got the PDF. But when I package the Electron app into a .dmg file with electron-builder, I get a "spawn ENOTDIR" error when calling pdf.create() ``` var pdf = require('html-pdf'); var options = { format: 'Letter' }; //resultFilePath = /Users/myname/Documents/result.pdf pdf.create(htmlContent, options).toFile(resultFilePath, function(err, res) { }); ``` [![javascript debugger with error message](https://i.stack.imgur.com/mPPrS.png)](https://i.stack.imgur.com/mPPrS.png) I have no idea now. Does anyone have the same problem? Any help would be greatly appreciated.<issue_comment>username_1: html-pdf may be having trouble finding the phantom binary after it has been packaged. When not packaged, the binary can be found (at least on my machine) in `node_modules/phantomjs-prebuilt/bin/phantomjs` Try setting the phantomJS binary location explicitly via an html-pdf option. ``` var pdf = require('html-pdf'); var options = { format: 'Letter', phantomPath: '/path/to/phantomJSbinary' }; //resultFilePath = /Users/myname/Documents/result.pdf pdf.create(htmlContent, options).toFile(resultFilePath, function(err, res) { }); ``` You may also need to set options.script to point to a copy of `pdf_a4_portrait.js` from the html-pdf module. Other people have had a similar problem. See <https://discuss.atom.io/t/asar-limitations-on-node-api-spawn-a-child/28235/2> Upvotes: 1 <issue_comment>username_2: For anyone who meets this problem when printing in Electron: opening the print content in a visible window is a good solution. I followed zen's answer in [How to print a DIV in ElectronJS](https://stackoverflow.com/questions/37627064/how-to-print-a-div-in-electronjs) Works well on Windows and MacOS.
Upvotes: -1 <issue_comment>username_3: ``` const pdf = require('html-pdf'); ``` Install PhantomJS with `npm i phantomjs-prebuilt` and, then, in the options json object, ``` const options = { phantomPath: `${phantomPath}` }; ``` after that, use the options object like this to create the PDF: ``` pdf.create(html, options).toFile(`${fileName}.pdf`, function (err, res) { if (err) return console.log(err); console.log(res); }); ``` Use phantomPath like this. Hope this works. It worked for me. Upvotes: 0
2018/03/15
1,837
5,276
<issue_start>username_0: I have a data frame that looks like this: ``` line = c(1, 2, NA, 4 ,5, NA, 7) group = c("1.0 Group A", "2.0 Group B", "3.0 Group C", "4.0 Group D", "5.0 Group E", "6.0 Group F", "7.0 Group G") df <- data.frame(line, group) view(df) line group 1 1 1.0 Group A 2 2 2.0 Group B 3 NA 3.0 Group C 4 4 4.0 Group D 5 5 5.0 Group E 6 NA 6.0 Group F 7 7 7.0 Group G ``` What I want to do is to find all the NA values in the "line" column and place a row underneath each such row in the "group" column saying "Not Applicable". So that the new data frame should look like: ``` view(df) line group 1 1 1.0 Group A 2 2 2.0 Group B 3 NA 3.0 Group C 4 NA Not Applicable 5 4 4.0 Group D 6 5 5.0 Group E 7 NA 6.0 Group F 8 NA Not Applicable 9 7 7.0 Group G ``` I am thinking about using an ifelse statement or using case\_when from dplyr. But I don't know how to work it out. Does anyone have any suggestions? Thank you!<issue_comment>username_1: Here's a base R method: split the data by the cumulative NA count, add in the new lines, recombine. ``` df$group = as.character(df$group) split_df = split(df, cumsum(is.na(df$line))) split_df[-1] = lapply(split_df[-1], function(d) rbind(d[1, ], data.frame(line = NA, group = "Not applicable"), d[-1, ])) do.call(rbind, split_df) # line group # 0.1 1 1.0 Group A # 0.2 2 2.0 Group B # 1.3 NA 3.0 Group C # 1.1 NA Not applicable # 1.4 4 4.0 Group D # 1.5 5 5.0 Group E # 2.6 NA 6.0 Group F # 2.1 NA Not applicable # 2.7 7 7.0 Group G ``` Note that I converted `group` to `character` to make adding new values easy, and I put `NA`s in the `line` column - you can't just have blanks in a numeric vector, every element needs to either be a number or `NA`. Upvotes: 2 <issue_comment>username_2: A solution can be achieved using `dplyr`. The approach is simple. Add a column representing `row number`. Take out the rows whose line value is `NA`. Replace `group` with `Not Applicable` and increment the `row number` column by 0.5. Bind those two data frames.
```
library(dplyr)

df %>%
  mutate(rownum = row_number()) %>%
  bind_rows(., filter(., is.na(line)) %>%
                 mutate(group = "Not Applicable", rownum = rownum + .5)) %>%
  arrange(rownum) %>%
  select(-rownum)

#   line          group
# 1    1    1.0 Group A
# 2    2    2.0 Group B
# 3   NA    3.0 Group C
# 4   NA Not Applicable
# 5    4    4.0 Group D
# 6    5    5.0 Group E
# 7   NA    6.0 Group F
# 8   NA Not Applicable
# 9    7    7.0 Group G
```

The limitations mentioned by @Gregor are valid: a numeric column can hold `NA` but not a blank.

Upvotes: 2 <issue_comment>username_3: Create a separate data frame, `ds_blank`, then use a union query to stack, and order the result by a temp variable called `index`.

```
library(magrittr)

na_index <- which(is.na(df$line))

ds_blank <- tibble::tibble(
  index = na_index + .5,
  line  = rep(NA_real_, length(na_index)),
  group = rep("Not Applicable", length(na_index))
)

df <- df %>%
  tibble::rowid_to_column("index") %>%
  dplyr::union(ds_blank) %>%
  dplyr::arrange(index) %>%
  dplyr::select(-index)
```

### Result

```
> df
  line          group
1    1    1.0 Group A
2    2    2.0 Group B
3   NA    3.0 Group C
4   NA Not Applicable
5    4    4.0 Group D
6    5    5.0 Group E
7   NA    6.0 Group F
8   NA Not Applicable
9    7    7.0 Group G
```

I wanted to try [`tibble::add_row()`](http://tibble.tidyverse.org/reference/add_row.html), but that apparently doesn't allow multiple rows to be inserted if you specify a position.

### Secondary approach

...using @Gregor's tip to use a for loop. Notice the `na_index` is reverse sorted now.

```
na_index <- sort(which(is.na(df$line)), decreasing = TRUE)

for (i in na_index) {
  df <- df %>%
    tibble::add_row(
      line  = NA_integer_,
      group = "Not Applicable",
      .after = i
    )
}
```

Upvotes: 3 [selected_answer]<issue_comment>username_4: I feel like the `tidyr::uncount` function might also be what you're looking for. Just mark the `line == NA` rows with a 2 in a new column, which we'll call `n`, and then `uncount` will duplicate each row based on the value in `n`.
By mutating an `ifelse` that sets `n == 2` for the NA rows, we duplicate each NA row directly below where it was, rather than at the bottom of the df where an `arrange` would then be needed. Finally, the `dplyr::mutate_at` sets `group` to "Not Applicable" whenever both the current row's `line` and the previous row's `line` (i.e. `lag(line)`) are NA, which singles out exactly these duplicated rows. I'd like to think this method is pretty scalable too!

```
library(tidyverse)

df %>%
  modify_if(is.factor, as.character) %>%
  mutate(n = ifelse(is.na(line), 2, 1)) %>%
  uncount(n) %>%
  mutate_at(vars(group),
            ~ifelse(is.na(line) & is.na(lag(line)), "Not Applicable", .))

#   line          group
# 1    1    1.0 Group A
# 2    2    2.0 Group B
# 3   NA    3.0 Group C
# 4   NA Not Applicable
# 5    4    4.0 Group D
# 6    5    5.0 Group E
# 7   NA    6.0 Group F
# 8   NA Not Applicable
# 9    7    7.0 Group G
```

Upvotes: 1
2018/03/15
<issue_start>username_0: I am using the WPF MVVM design pattern. I need to raise a PreviewKeyDown event from a TextBox that was created by an ItemsControl. I am able to add items to the SourceCollection, but unable to trigger the PreviewKeyDown event using interaction triggers. Any ideas on what I might be missing in the XAML are appreciated :)

Here's my code:

MainWindow.xaml

```
```

MainWindowViewModel.cs

```
public class MainWindowViewModel
{
    MainWindowModel model = new MainWindowModel();

    private ObservableCollection<MainWindowModel> _SourceCollection;
    public ObservableCollection<MainWindowModel> SourceCollection
    {
        get { return _SourceCollection; }
        set { _SourceCollection = value; }
    }

    public MainWindowViewModel()
    {
        SourceCollection = new ObservableCollection<MainWindowModel>();
        for (int i = 0; i < 4; i++)
        {
            model = new MainWindowModel();
            model.CollectionText = "This is line " + i;
            if (i % 2 == 0)
            {
                model.ForegroundColor = Brushes.Blue;
            }
            else
            {
                model.ForegroundColor = Brushes.Green;
            }
            SourceCollection.Add(model);
        }
    }

    public RelayCommand<KeyEventArgs> KeyDownAction
    {
        get { return new RelayCommand<KeyEventArgs>(KeyDownMethod); }
    }

    private void KeyDownMethod(KeyEventArgs e)
    {
        // some code here
    }
}
```<issue_comment>username_1: You should have the `ICommand` definitions below in your `MainWindowModel` (the item class) rather than in `MainWindowViewModel`:

```
public RelayCommand<KeyEventArgs> KeyDownAction
{
    get { return new RelayCommand<KeyEventArgs>(KeyDownMethod); }
}

private void KeyDownMethod(KeyEventArgs e)
{
    // some code here
}
```

Because the DataContext of your ItemsControl's item template is the item, not the main view model.

Upvotes: 0 <issue_comment>username_2: The "Binding" for your Command in the "InvokeCommandAction" is incorrect. It is NOT on an individual collection item, but at the ViewModel level. Change it to:

```
```

This way you're pointing to the command defined in the ViewModel.

Upvotes: 2 [selected_answer]
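*(Editor's sketch: the XAML snippets in the question and the accepted answer did not survive the page formatting, so the poster's original markup is unknown. Below is a hypothetical reconstruction, not the accepted answer's actual code, of the usual fix: inside an item template, a `RelativeSource` binding walks up to the ItemsControl's `DataContext` so the command resolves against the window-level view model rather than the current item. All element names are assumptions taken from the C# above; the `i:` prefix is assumed to map to a behaviors namespace such as `http://schemas.microsoft.com/xaml/behaviors`, and `PassEventArgsToCommand` is available on `InvokeCommandAction` in the Microsoft.Xaml.Behaviors package, which the `RelayCommand<KeyEventArgs>` signature would need.)*

```xml
<!-- Hypothetical reconstruction - not the poster's original markup. -->
<ItemsControl ItemsSource="{Binding SourceCollection}">
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <TextBox Text="{Binding CollectionText}"
                     Foreground="{Binding ForegroundColor}">
                <i:Interaction.Triggers>
                    <i:EventTrigger EventName="PreviewKeyDown">
                        <!-- Walk up to the ItemsControl so the binding resolves
                             against MainWindowViewModel, not the current item. -->
                        <i:InvokeCommandAction
                            Command="{Binding DataContext.KeyDownAction,
                                      RelativeSource={RelativeSource AncestorType=ItemsControl}}"
                            PassEventArgsToCommand="True" />
                    </i:EventTrigger>
                </i:Interaction.Triggers>
            </TextBox>
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>
```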