<issue_start>username_0: So I was wondering if there was a much more elegant solution to the one I have implemented right now for merging values of an ordered dict. I have an ordered dict that looks like this ``` 'fields': OrderedDict([ ("Sample Code", "Vendor Sample ID"), ("Donor ID", "Vendor Subject ID"), ("Format", "Material Format"), ("Sample Type", "Sample Type"), ("Age", "Age"), ("Gender", "Gender"), ("Ethnicity/ Race", "Race"), ]), ``` If I pass in a parameter as a list, like so ``` [2,3] or [2,4,5] ``` is there an elegant way to merge the values together under a new key, so that ``` [2,3], "Random_Key" ``` would return ``` 'fields': OrderedDict([ ("Sample Code", "Vendor Sample ID"), ("Donor ID", "Vendor Subject ID"), **("Random Key", "Material Format Sample Type"),** ("Age", "Age"), ("Gender", "Gender"), ("Ethnicity/ Race", "Race"), ]), ``` while also deleting the merged keys from the dictionary?<issue_comment>username_1: You could optimize this by sorting the indices in descending order and using `dict.pop(key, None)` to retrieve and remove each key/value pair at once, but I decided against it and instead append the values in the order they occur in `indices`. ``` from collections import OrderedDict from pprint import pprint def mergeEm(d,indices,key): """Merges the values at the indices given by 'indices' on OrderedDict d into a list. Appends this list under 'key' to the dict. Deletes the keys used to build the list.""" if not all(x < len(d) for x in indices): raise IndexError("Index out of bounds") vals = [] # stores the values to be removed, in order allkeys = list(d.keys()) for i in indices: vals.append(d[allkeys[i]]) # append to temporary list d[key] = vals # add to dict, use ' '.join(vals) to combine into one string for i in indices: # remove the keys at all given indices d.pop(allkeys[i],None) pprint(d) fields= OrderedDict([ ("Sample Code", "Vendor Sample ID"), ("Donor ID", "Vendor Subject ID"), ("Format", "Material Format"), ("Sample Type", "Sample Type"), ("Age", "Age"), ("Gender", "Gender"), ("Ethnicity/ Race", "Race"), ("Sample Type", "Sample Type"), ("Organ", "Organ"), ("Pathological Diagnosis", "Diagnosis"), ("Detailed Pathological Diagnosis", "Detailed Diagnosis"), ("Clinical Diagnosis/Cause of Death", "Detailed Diagnosis option 2"), ("Dissection", "Dissection"), ("Quantity (g, ml, or ug)", "Quantity"), ("HIV", "HIV"), ("HEP B", "HEP B") ]) pprint(fields) mergeEm(fields, [5,4,2], "tata") ``` Output: ``` OrderedDict([('Sample Code', 'Vendor Sample ID'), ('Donor ID', 'Vendor Subject ID'), ('Format', 'Material Format'), ('Sample Type', 'Sample Type'), ('Age', 'Age'), ('Gender', 'Gender'), ('Ethnicity/ Race', 'Race'), ('Organ', 'Organ'), ('Pathological Diagnosis', 'Diagnosis'), ('Detailed Pathological Diagnosis', 'Detailed Diagnosis'), ('Clinical Diagnosis/Cause of Death', 'Detailed Diagnosis option 2'), ('Dissection', 'Dissection'), ('Quantity (g, ml, or ug)', 'Quantity'), ('HIV', 'HIV'), ('HEP B', 'HEP B')]) OrderedDict([('Sample Code', 'Vendor Sample ID'), ('Donor ID', 'Vendor Subject ID'), ('Sample Type', 'Sample Type'), ('Ethnicity/ Race', 'Race'), ('Organ', 'Organ'), ('Pathological Diagnosis', 'Diagnosis'), ('Detailed Pathological Diagnosis', 'Detailed Diagnosis'), ('Clinical Diagnosis/Cause of Death', 'Detailed Diagnosis option 2'), ('Dissection', 'Dissection'), ('Quantity (g, ml, or ug)', 'Quantity'), ('HIV', 'HIV'), ('HEP B', 'HEP B'), ('tata', ['Gender', 'Age', 'Material Format'])]) ``` Upvotes: 0 <issue_comment>username_2: Not sure there's an elegant way.
`OrderedDict` has a `move_to_end` method to move keys to the start or end, but not to an arbitrary position. I'd try to be as efficient as possible and minimize loops * get a list of the keys * find the index of the key you want to merge with the following one * remove the next key from the dictionary * create a list of `d`'s items * alter this list with the new value at the stored index * rebuild an `OrderedDict` from it like this (I removed some keys because it shortens the example): ``` from collections import OrderedDict d = OrderedDict([ ("Sample Code", "Vendor Sample ID"), ("Donor ID", "Vendor Subject ID"), ("Format", "Material Format"), ("Sample Type", "Sample Type"), ("Age", "Age"), ("Gender", "Gender"), ]) lk = list(d.keys()) index = lk.index("Sample Type") v = d.pop(lk[index+1]) t = list(d.items()) t[index] = ("new key",t[index][1]+" "+v) d = OrderedDict(t) print(d) ``` result: > > OrderedDict([('Sample Code', 'Vendor Sample ID'), ('Donor ID', 'Vendor Subject ID'), ('Format', 'Material Format'), ('new key', 'Sample Type Age'), ('Gender', 'Gender')]) > > > Upvotes: 1 <issue_comment>username_3: This can also be done nicely with a generator. The generator yields each key/item pair that doesn't have to be squashed; for the ones that do, it saves the values until the last squashed index is reached, then yields them joined under the new key. From the generator a new OrderedDict can be constructed. ``` from collections import OrderedDict def squashDict(d, ind, new_key): """ Takes an OrderedDict d, and yields its key item pairs, except the ones at an index in indices (ind); these items are merged and yielded at the last position of indices (ind) with a new key (new_key) """ if not all(x < len(d) for x in ind): raise IndexError("Index out of bounds") vals = [] for n, (k, i) in enumerate(d.items()): if n in ind: vals += [i] if n == ind[-1]: yield (new_key, " ".join(vals)) else: yield (k, i) d = OrderedDict([ ("Sample Code", "Vendor Sample ID"), ("Donor ID", "Vendor Subject ID"), ("Format", "Material Format"), ("Sample Type", "Sample Type"), ("Age", "Age"), ("Gender", "Gender"), ]) t = OrderedDict(squashDict(d, [2, 3], "Random")) print(t) ``` Upvotes: 2 [selected_answer]
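For completeness, a compact variant of the same idea that splices the merged pair into a plain item list before rebuilding the OrderedDict — a minimal sketch; the function name `merge_keys` and the space separator are my own choices, not from the thread:

```
from collections import OrderedDict

def merge_keys(d, indices, new_key, sep=" "):
    """Merge the values at the given positions under new_key, placed where the first merged key was."""
    items = list(d.items())
    merged = sep.join(items[i][1] for i in indices)
    kept = [(k, v) for i, (k, v) in enumerate(items) if i not in indices]
    kept.insert(min(indices), (new_key, merged))  # all removed indices are >= min, so this lands correctly
    return OrderedDict(kept)

d = OrderedDict([("a", "1"), ("b", "2"), ("c", "3"), ("d", "4")])
print(merge_keys(d, [1, 2], "bc"))
# OrderedDict([('a', '1'), ('bc', '2 3'), ('d', '4')])
```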
<issue_start>username_0: I have an administrative panel that lists all users on my system. I would like to be able to delete the account of a saved user in `firebase.auth()`. I know Firebase provides an SDK for this, but I do not have a Node server; my site is hosted directly on Firebase Hosting. Is there any other way to do this?<issue_comment>username_1: You do have a node server. It's free. You can install node on your laptop or, even better, you can stand one up online. I use the free tier of [cloud9 IDE](https://c9.io/login) (I do not work for them and I'm not trying to sell anything). Easiest of all, use Firebase's web console to delete a user, unless you have a need to build out a more elaborate system. Upvotes: 0 <issue_comment>username_2: Create a Cloud Function (serverless) that deletes users using the Node.js Admin SDK. Then you can hit that endpoint whenever you want to delete a user: <https://firebase.google.com/docs/hosting/functions> Upvotes: 2 [selected_answer]
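A minimal sketch of the selected answer's Cloud Function idea — the function name, the callable style, and the admin custom-claim check are my assumptions, not from the thread:

```
// functions/index.js -- hedged sketch: deletes an account via the Admin SDK.
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.deleteUser = functions.https.onCall(async (data, context) => {
  // Only allow signed-in callers carrying an (assumed) admin custom claim.
  if (!context.auth || context.auth.token.admin !== true) {
    throw new functions.https.HttpsError('permission-denied', 'Admins only.');
  }
  await admin.auth().deleteUser(data.uid); // removes the account from firebase.auth()
  return { deleted: data.uid };
});
```

The admin panel would then invoke it with something like `firebase.functions().httpsCallable('deleteUser')({ uid })`.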
<issue_start>username_0: I followed the <https://github.com/graphaware/neo4j-uuid> link to generate a UUID for each and every neo4j node which gets created from a Spring Boot application. Here is the list of steps I followed as per the link: 1. Added the `graphaware-uuid-3.3.3.52.16.jar` file to the `\plugins` folder of Neo4jDB. In my case `C:\Users\Naveen\AppData\Roaming\Neo4j Desktop\Application\neo4jDatabases\database-***\installation-3.3.2\plugins` 2. Added the following configuration to the \conf\neo4j.conf file > > com.graphaware.runtime.enabled=true > com.graphaware.module.UIDM.1=com.graphaware.module.uuid.UuidBootstrapper > com.graphaware.module.UUID.uuidGeneratorClass=com.graphaware.module.uuid.generator.SequenceIdGenerator > > > 3. Created a model class in the Spring Boot application ``` @NodeEntity public class Skill { @GraphId private Long graphId; @Property(name = "uuid") private Long uuid; @Property(name = "skillName") private String skillName; //...getters and setters } ``` 4. Created a Spring Neo4j Data repository interface ``` public interface SkillRepository extends GraphRepository<Skill> { } ``` 5. Started the Neo4j DB, loaded the Spring context and tested the configuration: ``` public Skill createSkill() { Skill skill = new Skill(); skill.setSkillName("Java"); skill = skillRepository.save(skill); return skill; } ``` Issue: The node is getting created in the Neo4j DB with the `graphId` property populated automatically, but the `uuid` property is not populated. The returned Skill object holds a null value for the `uuid` property. I checked the [Graphaware Framework and UUID not starting on Neo4j GrapheneDB](https://stackoverflow.com/questions/42553024/graphaware-framework-and-uuid-not-starting-on-neo4j-graphenedb) and [GraphAware UUID not generating](https://stackoverflow.com/questions/41640781/graphaware-uuid-not-generating) links but couldn't find any solution for my problem. Please help me figure out what I am doing wrong or whether I am missing anything. Or suggest any alternate `uuid` generation solution. Version details of libraries and tools used: `Java 1.8.0_131` `Neo4J 3.3.2 Enterprise` `graphaware-uuid-3.3.3.52.16.jar` `Spring boot 1.5.10`
<issue_start>username_0: What is the fastest way to search a list whether or not it has an element that begins with a specified string, and then return the index of the element if it's found. Something like: ``` mylist=['one','two','three','four'] mystring='thr' ``` It should return `2`.<issue_comment>username_1: You can't get better than O(n) complexity here, but generally speaking if you are after pure speed then don't even use Python. The canonical Python solution I would propose is to use a memory efficient generator and call `next` on it once. ``` >>> mylist = ['one','two','three','four'] >>> mystring = 'thr' >>> next(index for index, item in enumerate(mylist) if item.startswith('thr')) 2 ``` By default, this will give you a `StopIteration` exception if the condition is never satisfied. You can provide a second argument to `next` if you want a fallback-value. Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` mystring='thr' [n for (n,item) in enumerate(mylist) if item.startswith(mystring)][0] Out: 2 ``` Upvotes: -1 <issue_comment>username_3: `indices = [i for i, s in enumerate(mylist) if s.startswith('thr')]` Enumerate is slightly faster Upvotes: 0 <issue_comment>username_4: If you are going to do more than a single search, you can organize the list in a way to get better than O(n) execution time for each search. Obviously if you're only doing a single search the overhead of reorganizing the list will be prohibitive. ``` import bisect mylist.sort() n = bisect.bisect_left(mylist, mystring) if n >= len(mylist) or not mylist[n].startswith(mystring): print('not found') ``` If you need to preserve the original order it's only a little more complicated. ``` mysorted = sorted((s,i) for i,s in enumerate(mylist)) n = bisect.bisect_left(mysorted, (mystring, 0)) if n >= len(mysorted) or not mysorted[n][0].startswith(mystring): print('not found') else: n = mysorted[n][1] ``` Upvotes: 0 <issue_comment>username_5: Just running a counter should do you fine ``` i = 0 mylist = ['one','two','three','four'] mystring = 'thr' for x in mylist: if mystring in x: i = i + 1 print (i) else: i = i + 1 ``` Although this will print '3' and not '2'. I hope this helps. Upvotes: 0
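The selected answer mentions passing a fallback to `next`; concretely, a minimal sketch with a default so no `StopIteration` escapes:

```
mylist = ['one', 'two', 'three', 'four']
mystring = 'thr'

# The -1 default is my own choice of "not found" sentinel.
idx = next((i for i, s in enumerate(mylist) if s.startswith(mystring)), -1)
print(idx)  # 2
```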
<issue_start>username_0: I hope I can make sense with this question: is it possible to alter a text label within an Angular .html layout depending on some criteria in your query? For example below, possibly I'd like to interactively alter the "placeholder="Service Date"" to something other than 'Service Date', depending on some criteria in my query in the corresponding .ts file, some indicator field. Effectively: if indicator='x', then placeholder="something else". Is that possible to accomplish? ``` ```<issue_comment>username_1: By setting the placeholder as a binding you can then use an expression that can be evaluated either in the component or the view itself ``` <input [placeholder]="placeholderExpression" name="serviceDate" value="{{claim.serviceDate | date}}" disabled="disabled"> ``` Where `placeholderExpression` can be a variable, a ternary expression or anything that can be evaluated. Upvotes: 1 <issue_comment>username_2: It is good practice to move (wrap) some expression logic into a **method** of the class and use it in the template. It helps to keep the template clean and easy to test. **Template:** ``` ``` **Component class:** ``` class SomeComponent { getPlaceholder(): string { return this.indicator == 'x' ? "placeHolder1" : "placeHolder2"; } } ``` Upvotes: 0
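Pulling the second answer together into a runnable sketch — the component, selector, and field names are illustrative, not from the thread:

```
// claim-form.component.ts -- minimal sketch; `indicator` is assumed to come from the query result.
import { Component } from '@angular/core';

@Component({
  selector: 'app-claim-form',
  template: `<input [placeholder]="getPlaceholder()" name="serviceDate" />`,
})
export class ClaimFormComponent {
  indicator = 'x'; // would be set from your query

  getPlaceholder(): string {
    return this.indicator === 'x' ? 'Something else' : 'Service Date';
  }
}
```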
<issue_start>username_0: I am using Anonymous Auth in my firebase webapp and today my uid changed. It just so happened to be 30 days since my anonymous account was created, so I am assuming that all anonymous account sessions expired after 30 days. [related](https://github.com/firebase/firebase-simple-login/blob/master/docs/v1/providers/anonymous.md#optional-settings) Is there a way to have anonymous account sessions never expire?
<issue_start>username_0: I have a .js file in my Angular's assets folder with this format: ``` ;(function($){ $.someFunction = function() {...}; })(jQuery) ``` **Question**: Is it possible to call functions from my .js file from a TypeScript file? Note: I'm avoiding exporting JavaScript functions in a TypeScript file; I want to use them directly from the file.<issue_comment>username_1: As toskv said, that's jQuery. In order to import it you'll need to do the following (assuming you have jquery installed): Run this in the command line to install the jquery types ``` npm install @types/jquery --save ``` Add this to the file in which you wish to use the lib ``` import * as $ from 'jquery'; ``` Upvotes: 2 <issue_comment>username_2: Assuming that `$` is defined in your JavaScript code, then yes, but you will need to declare to the TypeScript compiler that that variable actually exists, which you can do by using the `declare` keyword: ``` declare const $: any; ``` (Though this doesn't give you proper typings, so you might want to fetch the jQuery declaration file, which you can find [here](https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/jquery)) Upvotes: 1
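Putting the second answer to work — a minimal sketch; it assumes the assets script is loaded globally (e.g. via a script tag or the CLI scripts array) and that it attaches `someFunction` to `$`:

```
// some-helper.ts -- call a global defined by a plain .js asset, no import needed.
declare const $: any; // tell the compiler the global exists

export function callLegacyHelper(): void {
  $.someFunction(); // implemented in the assets .js file
}
```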
<issue_start>username_0: I could use some help figuring out what's the problem with running the development server for one of my django projects. When I run `python manage.py runserver`, I get an error that ends with this: `OperationalError: could not connect to server: Connection refused Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5432? could not connect to server: Connection refused Is the server running on host "localhost" (127.0.0.1) and accepting TCP/IP connections on port 5432?` Never before has anything like this happened to me while working with Django, so I will be very thankful for every possible solution.<issue_comment>username_1: Sounds like you are using a PostgreSQL database and it (the database) is not answering Upvotes: 2 <issue_comment>username_2: In `settings.py`, search for this code and make sure it is correct: ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), } } ``` And then start the postgresql database service: ``` systemctl start postgresql systemctl enable postgresql systemctl status postgresql # Check if it is running correctly. ``` **UPDATE**: If you are familiar with databases, then it's better to set up a database server of your own and use credentials to log in. ``` DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': 'databasename', 'USER': 'databaseuser', 'PASSWORD': '<PASSWORD>', 'HOST' : 'localhost', 'PORT': '', } } ``` Upvotes: 0
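Since the refused connection is PostgreSQL on port 5432, the matching `DATABASES` entry would look like this — a sketch; the database name and credentials are placeholders:

```
# settings.py -- PostgreSQL backend matching the host/port in the error message.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',  # uses psycopg2
        'NAME': 'mydatabase',
        'USER': 'myuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
```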
<issue_start>username_0: I tried to copy and paste an example from the NetworkX package documentation. This is the example: ``` >>> G = nx.path_graph(5) >>> path = nx.all_pairs_shortest_path(G) >>> print(path[0][4]) [0, 1, 2, 3, 4] ``` Unfortunately, instead of the expected output, I get the following error message: ``` 'generator' object has no attribute '__getitem__' ```<issue_comment>username_1: Looks like path is a generator: convert it into a dictionary and it works: ``` path = dict(nx.all_pairs_shortest_path(G)) ``` Upvotes: 1 <issue_comment>username_2: So your error is due to the fact that in networkx 2.x many of the methods that used to return dicts now return generators. Among them is `all_pairs_shortest_path`. You're using this new version of networkx, but looking at an out-of-date tutorial. So the error message you saw comes from the fact that you have a generator `path` and you're trying to access `path[0]`, which doesn't make sense to Python. The easiest fix here is to simply follow the answer provided by username_1 and say ``` path = dict(nx.all_pairs_shortest_path(G)) ``` In general, when using code that was written for networkx 1.x while you are using version 2.x, you should consult the [migration guide](https://networkx.github.io/documentation/stable/release/migration_guide_from_1.x_to_2.0.html) (though in your case it's not particularly useful). Upvotes: 3 [selected_answer]
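End to end, the fixed snippet — plus `shortest_path` for a single pair, which avoids materializing every path (works on networkx 2.x):

```
import networkx as nx

G = nx.path_graph(5)
path = dict(nx.all_pairs_shortest_path(G))  # materialize the generator once
print(path[0][4])                 # [0, 1, 2, 3, 4]
print(nx.shortest_path(G, 0, 4))  # same result without building all pairs
```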
<issue_start>username_0: I have recently started using Node.js/Express. I understand an A-Sync call must complete before it can move on. In my code there are three different endpoints I need to hit. As I have taken A-Sync into account, I have tried to code it so it hits them in the order they are presented. However it hits the first endpoint, then the third and then the second. I understand it must be an issue in my code; however, I've been at this for hours. Where have I gone wrong? Why does it leave the second endpoint till last? ``` app.get("/start/:origin/:destination", function ( reqt, resp ) { var origin = reqt.params.origin var destination = reqt.params.destination var url = 'http://localhost:5000/maps/' + origin + '/' + destination var rate; var overallDis; var aRoadDis; var data; http.get(url, res => { res.setEncoding('utf8') res.on('data', function(body){ data = JSON.parse(body) overallDis = data["distance"] aRoadDis = data["ARoads"] }) }) var driver; http.get("http://localhost:4000/lowestRate/", res => { res.setEncoding('utf8') res.on('data', function(body){ driver = JSON.parse(body) rate = driver.rate console.log(rate) }) }) var totalPrice = 0 http.get("http://localhost:6000/surge/:" + rate + "/:" + overallDis + "/:" + aRoadDis, res => { // console.log(overallDis) // console.log(aRoadDis) // console.log(rate) res.setEncoding('utf8') res.on('data', function(body){ console.log(body) totalPrice += parseInt(body) }) console.log(totalPrice) }) }) ```<issue_comment>username_1: > > I understand an A-Sync call must complete before it can move on. > > > This is actually not the case. When you make your HTTP request it will make that request and move on. In your case it will continue on to make the next two HTTP requests. After it gets a response it will fire the corresponding [callback function](https://developer.mozilla.org/en-US/docs/Glossary/Callback_function). Your callbacks will fire in the order that you get responses to your HTTP requests. Here's a nice link for learning about the JavaScript event loop. <https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop> Hope that helps! PS: If you'd like to wait for one request to finish before moving on to the rest I would suggest [Promises](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise).
``` app.get("/start/:origin/:destination", function ( reqt, resp ) { const origin = reqt.params.origin const destination = reqt.params.destination const url = 'http://localhost:5000/maps/' + origin + '/' + destination let totalPrice = 0 const firstPromise = new Promise((resolve, reject) => { http.get(url, res => { res.setEncoding('utf8') res.on('data', function(body){ const data = JSON.parse(body) resolve({ overallDis: data["distance"], aRoadDis: data["ARoads"] }); }) }) }); const secondPromise = new Promise((resolve, reject) => { http.get("http://localhost:4000/lowestRate/", res => { res.setEncoding('utf8') res.on('data', function(body){ const driver = JSON.parse(body) const rate = driver.rate console.log(rate) resolve(rate); }) }) }); Promise.all([firstPromise, secondPromise]).then((values) => { // This will fire after both promises have called resolve() const overallDis = values[0].overallDis; const aRoadDis = values[0].aRoadDis; const rate = values[1]; http.get("http://localhost:6000/surge/:" + rate + "/:" + overallDis + "/:" + aRoadDis, res => { // console.log(overallDis) // console.log(aRoadDis) // console.log(rate) res.setEncoding('utf8') res.on('data', function(body){ console.log(body) totalPrice += parseInt(body) }) console.log(totalPrice) }) }); }) ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: As is mentioned in the other answers, your interpretation of async is the wrong way around: synchronous calls block the execution of following code, whereas asynchronous calls do not. If you want to go through your operations in order, but they actually are asynchronous, the easiest method is using callbacks. This is doable for smaller call stacks, but it is not called callback-hell for nothing. The best way would be to wrap the async calls in Promises and then use the async/await structure to order them in a synchronous fashion. This could look something like this. ``` async function handler(req, res) { let op_1_result = await new Promise(function(resolve, reject) { ... do your async operation and finally call resolve(response); }); ... do your other operations in the same fashion let op_n_result = await new Promise(function(resolve, reject) { ... do your async operation and finally call resolve(response); }); return op_n_result; } ``` Upvotes: -1
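For reference, the same three calls written with a small promise wrapper and `async`/`await` — a sketch that assumes the response shapes from the question (`distance`/`ARoads` JSON from the maps service, a bare number from the surge service):

```
const http = require('http');

// Collect the whole response body before parsing ('data' can fire more than once).
function getBody(url) {
  return new Promise((resolve, reject) => {
    http.get(url, res => {
      let body = '';
      res.setEncoding('utf8');
      res.on('data', chunk => { body += chunk; });
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
}

app.get('/start/:origin/:destination', async (reqt, resp) => {
  const { origin, destination } = reqt.params;
  const maps = JSON.parse(await getBody(`http://localhost:5000/maps/${origin}/${destination}`));
  const driver = JSON.parse(await getBody('http://localhost:4000/lowestRate/'));
  // The ':'-prefixed surge URL mirrors the question's route; each await runs in order.
  const surge = await getBody(`http://localhost:6000/surge/:${driver.rate}/:${maps.distance}/:${maps.ARoads}`);
  resp.send(String(parseInt(surge, 10)));
});
```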
<issue_start>username_0: I need to append a datetime object to my POSIXct element. Some sample data: ``` my_chr<-c('2017-02-19 06:00','2017-03-10 06:00','2017-04-15 06:00') myPSX<-as.POSIXct(my_chr,format='%Y-%m-%d %H:%M',tz='UTC') PSXappend<-as.POSIXct('2017-08-09 06:00',format='%Y-%m-%d %H:%M',tz='UTC') ``` But somehow if I try `c()` it changes the timezone. If I try to coerce it together with `as.POSIXct` it drops the datetime object I need to append.<issue_comment>username_1: In this case you could append a value by indexing, which will neither change the time zone nor the class of `myPSX`: ``` myPSX[length(myPSX) + 1] <- PSXappend ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Since I have to run this on quite a large dataset, I ran some benchmarks to compare the different possibilities. Actually @username_1's solution is quite fast. However, using `attr(dttm,'tzone')<-'UTC'` is slightly faster. ``` myfun1<-function(){ myPSX[length(myPSX) + 1] <- PSXappend } myfun2<-function(){ dttm<-c(myPSX,PSXappend) attr(dttm,'tzone')<-'UTC' } library(lubridate) myfun3<-function(){ dttm<-c(myPSX,PSXappend) with_tz(dttm, "UTC") } myfun4<-function(){ dttm<-as.POSIXct(c(my_chr,'2017-08-09 06:00'),format='%Y-%m-%d %H:%M',tz='UTC') } microbenchmark::microbenchmark(myfun1(),myfun2(),myfun3(),myfun4()) Unit: microseconds expr min lq mean median uq max neval myfun1() 12.642 15.210 17.92005 16.9875 17.7780 59.654 100 myfun2() 11.852 13.827 16.39909 14.4200 15.8025 43.062 100 myfun3() 26.864 29.432 121.86874 30.8150 33.1850 5852.844 100 myfun4() 31.605 34.766 61.66142 36.3460 40.2970 2182.323 100 ``` Upvotes: 0
<issue_start>username_0: I'm trying to connect to amazon athena using JDBC. As I didn't find the *AthenaDriver* repository on maven, I created it myself on my [github](https://github.com/raphael-psr/maven-repository). Basically what I'm doing is this: pom.xml: ``` <repositories> <repository> <id>mvn-rep</id> <name>maven repository</name> <url>https://github.com/raphael-psr/maven-repository/raw/master/</url> </repository> </repositories> <dependency> <groupId>com.amazonaws.athena.jdbc</groupId> <artifactId>AthenaJDBC41</artifactId> <version>1.1.0</version> </dependency> ``` java: ``` Class.forName("com.amazonaws.athena.jdbc.AthenaDriver"); Properties properties = new Properties(); properties.setProperty("user", user); properties.setProperty("password", <PASSWORD>); properties.setProperty("aws_credentials_provider_class", "amazon.AmazonCredentialsProvider"); Connection connection = DriverManager.getConnection("jdbc:awsathena://athena." + region + ".amazonaws.com:443", properties); ``` An exception is raised: > > java.sql.SQLException: No suitable driver found for jdbc:amazonaws://athena.us-east-1.amazonaws.com:443 > > > Anyone know what it might be?<issue_comment>username_1: 1. Maybe you would like to download the JDBC driver from: <https://s3.amazonaws.com/athena-downloads/drivers/AthenaJDBC41-1.1.0.jar> 2. You can install it in your own maven repository: ``` mvn install:install-file -Dfile=/home/users/User01/Documents/AthenaJDBC41-1.1.0.jar -DgroupId=com.amazonaws.athena.jdbc -DartifactId=athenaJDBC -Dpackaging=jar ``` 3. Then you can reference it in your pom.xml: [![enter image description here](https://i.stack.imgur.com/kaQHk.png)](https://i.stack.imgur.com/kaQHk.png) Let me know if that helps you somehow. PS: Not sure why code formatting is not working on my computer. I've tried three different browsers and I'm experiencing the same issue. Upvotes: 1 <issue_comment>username_2: Download the jar from <https://s3.amazonaws.com/athena-downloads/drivers/AthenaJDBC41-1.1.0.jar> Add the jar to your own maven repository ``` mvn install:install-file -Dfile=/home/sumit/Downloads/AthenaJDBC41-1.1.0.jar -DgroupId=com.amazonaws.athena.jdbc -DartifactId=athenaJDBC -Dversion=1.1.0 -Dpackaging=jar ``` Change the **-Dfile** value to your downloaded jar path. If required, update **-Dversion**. Add the dependency to your pom.xml ``` <dependency> <groupId>com.amazonaws.athena.jdbc</groupId> <artifactId>athenaJDBC</artifactId> <version>1.1.0</version> </dependency> ``` Upvotes: 1 <issue_comment>username_3: I have just come across this problem myself using version `2.0.2` of the `JDBC42` driver while developing a Spark job in Clojure. Despite those differences I think the answer will translate, and I got it from [Spark Unable to find JDBC Driver](https://stackoverflow.com/questions/29552799/spark-unable-to-find-jdbc-driver). I believe you need to set the `driver` property in your `Properties` object to the Athena Driver class, i.e. ``` properties.setProperty("driver", "com.simba.athena.jdbc.Driver"); ``` which is correct for the `2.0.2` version. In your case it should be ``` properties.setProperty("driver", "com.amazonaws.athena.jdbc.AthenaDriver"); ``` I don't know why the `Class.forName` isn't sufficient. I also had to build my own Maven package to include. Upvotes: 0
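A minimal end-to-end sketch of the connection flow for the 1.1.0 driver — the credential values and bucket are placeholders, and the `s3_staging_dir` property is my recollection of what this driver generation expects, so treat it as an assumption:

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class AthenaSmokeTest {
    public static void main(String[] args) throws Exception {
        Class.forName("com.amazonaws.athena.jdbc.AthenaDriver"); // capital C: class.forName does not compile
        Properties props = new Properties();
        props.setProperty("user", "AKIA...");     // AWS access key id (placeholder)
        props.setProperty("password", "secret");  // AWS secret key (placeholder)
        props.setProperty("s3_staging_dir", "s3://my-bucket/athena-results/"); // where Athena writes results
        try (Connection conn = DriverManager.getConnection(
                "jdbc:awsathena://athena.us-east-1.amazonaws.com:443", props)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```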
<issue_start>username_0: I am trying to use the Django package: [Django Filter](https://github.com/carltongibson/django-filter) I installed it via pip, made sure I was running supported versions of Python (3.6) and Django (2.0), but whenever I try to run my application, I get the following error: ``` class Table1(models.Model, django_filters.FilterSet): NameError: name 'django_filters' is not defined ``` Here's a sample of my code, with the names changed to protect my work. models.py: ``` from django.db import models from django.contrib.postgres.search import SearchVectorField, SearchQuery from django_filters import FilterSet class Table1(models.Model, django_filters.FilterSet): field1 = models.IntegerField(db_column='field1', blank=True, null=True) field2 = models.NullBooleanField(db_column='field2') field3= models.IntegerField(db_column='field3', blank=True, null=True) field4= models.TextField(db_column='field4', blank=True, null=False, primary_key=True) #def __str__(self): # return self.sid class Meta: managed = False db_table = 'Table1' unique_together = (('field1', 'field2', 'field3', 'field4'),) ``` filters.py: ``` from .models import Table1 import django_filters class Table1Filter(django_filters.FilterSet): class Meta: model = Table1 fields = ['field1', 'field2', 'field3', 'field4'] ``` views.py: ``` from django.shortcuts import render from django_tables2 import RequestConfig from django_tables2.export.export import TableExport from django.contrib.postgres.search import SearchQuery, SearchRank from django.template import RequestContext from django.views.generic import * from .models import * from .tables import * from .forms import * from .filters import Table1Filter def table1(request): filter = Table1Filter(request.GET, queryset=Table1.objects.all()) return render(request, 'table1.html', {'filter': filter}) ``` I wrote some basic filtering stuff manually and then realized that Django Filter(s) was a thing and figured I shouldn't reinvent the wheel. The goal with this is to display data from an existing database and allow the end user to filter it. If there's a better way to do this, I am all ears. Thanks for your input, and for taking the time to read this!<issue_comment>username_1: Perhaps because you haven't imported `django_filters` in your models.py file. ``` import django_filters # instead of: from django_filters import FilterSet ``` or use it the other way around. Upvotes: 2 <issue_comment>username_2: Your problem is probably in circular imports... in models.py you import `from django_filters import FilterSet`, then in `filters.py` you import one of the models from `models.py` and at the same time `django_filters`. This may be causing problems. I guess you don't need to import that library and FilterSet in your models.py Upvotes: 1 [selected_answer]
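Following the selected answer's hint to its conclusion: the model itself shouldn't inherit from `FilterSet` at all — the filter already lives in filters.py, as the question's own `Table1Filter` shows. A minimal sketch of the corrected models.py (fields abbreviated):

```
# models.py -- plain model; no django_filters import needed here.
from django.db import models

class Table1(models.Model):
    field1 = models.IntegerField(db_column='field1', blank=True, null=True)
    field4 = models.TextField(db_column='field4', blank=True, null=False, primary_key=True)

    class Meta:
        managed = False
        db_table = 'Table1'
```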
<issue_start>username_0: Say I have a text file `123.txt` ``` one,two,three four,five,six ``` My goal is to capitalize the first character of each line by using `Get-Culture`. This is my attempt: ``` $str = gc C:\Users\Administrator\Desktop\123.txt #Split each line into an array $array = $str.split("`n") for($i=0; $i -lt $array.Count; $i++) { #Returns O and F: $text = (Get-Culture).TextInfo.ToTitleCase($array[$i].Substring(0,1)) #Supposed to replace the first letter of each array with $text $array[$i].Replace($array[$i].Substring(0,1), $text) >> .\Desktop\finish.txt } ``` Result: `One,twO,three Four,Five,six` I understand that `.Replace()` replaces every occurrence within the current array element, which is why I made sure that it's replacing ONLY the first character of the array with `$array[$i].Substring(0,1)`, but this doesn't work.
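A minimal PowerShell sketch of the stated goal — title-case only the first character of each line via `Get-Culture` and leave the rest untouched — with the file paths taken from the question:

```
# Read each line, capitalize its first character, keep the remainder as-is.
Get-Content C:\Users\Administrator\Desktop\123.txt | ForEach-Object {
    if ($_.Length -gt 0) {
        (Get-Culture).TextInfo.ToTitleCase($_.Substring(0, 1)) + $_.Substring(1)
    } else {
        $_  # preserve empty lines
    }
} | Set-Content C:\Users\Administrator\Desktop\finish.txt
```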
<issue_start>username_0: I've been researching for a while how to replace an item if it already exists inside a state array, but I couldn't find an appropriate solution <https://codesandbox.io/s/4xl24j7r69> ``` import React from 'react'; class App extends React.Component { constructor(props) { super(props); this.state = { data: [ { "userId": 1, "id": 1, "title": "One", "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto" }, { "userId": 1, "id": 2, "title": "Two", "body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla" }, ] } } add = () => { this.setState(prevState => ({ data: [...prevState.data, { "userId": 1, "id": 2, "title": "Two New", "body": "new data", }// this item already exists, so I want to replace it with new data ] })) }; render() { return ( {this.state.data.map((data) => * {data.title} )} Replace ); } } export default App; ``` I tried to make it like this but it doesn't work ``` add = () => { this.setState(prevState => ({ data: prevState.data.filter(item => this.state.data.indexOf(item.id) !== -1).slice(0, 5), // this is not working data: [...prevState.data, { "userId": 1, "id": 2, "title": "Two New", "body": "new data" } ] })) }; ``` How can I update the item if it already exists?<issue_comment>username_1: You could find the item and simply update it with Object.assign ``` let newUser1 = { "userId": 1, "id": 2, "title": "Two New", "body": "new data" } this.setState(prevState => { let newData = prevState.data; let user = newData.find(d => d.id === newUser1.id); Object.assign(user, newUser1); return {data: newData}; }) ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You have to create a deep copy recursively and then update the state. **Note:** The `slice` function will not create a deep copy, so you have to create a new object instead. (I use the spread operator; there are other choices, such as `Object.assign()`.) Here is the tested code based on your question: ```js add = () => { this.setState(prevState => { const idx = prevState.data.findIndex(item => item.id === 2); const nextData = prevState.data.slice(); const nextItem = { ...nextData[idx] }; nextItem.title = "Wow! New Two"; nextData[idx] = nextItem; return { data: nextData, }; }); }; ``` Here is the [codesandbox link](https://codesandbox.io/s/307p9vpy46). Upvotes: 1
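A common immutable pattern that covers both cases — replace via `map` when the id already exists, append otherwise; a sketch reusing the item shape from the question:

```
add = () => {
  const newItem = { userId: 1, id: 2, title: "Two New", body: "new data" };
  this.setState(prevState => ({
    data: prevState.data.some(item => item.id === newItem.id)
      ? prevState.data.map(item => (item.id === newItem.id ? newItem : item)) // replace in place
      : [...prevState.data, newItem] // append when it's genuinely new
  }));
};
```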
<issue_start>username_0: I need a program that asks the user to introduce up to 10 names (to end, the user could type "fim" [that is "end" in Portuguese]). My current problem is how to terminate the program if the user reaches 10 names. Here is my main function: ``` public static void main(String[] args) { Scanner keyboard = new Scanner (System.in); System.out.println("Introduza até 10 nomes completos com até 120 caracteres e pelo menos dois nomes com pelo menos 4 caracteres: "); String nome = keyboard.next(); for(int i = 0; i < 10; i++) { while(!nome.equalsIgnoreCase("fim") && i<10) { nome = keyboard.next(); } } keyboard.close(); } ```<issue_comment>username_1: You're running into an infinite loop with the `while` as is. You want to change it to an `if` statement that just checks for `fim` and calls `break;` if that happens. So it should end as: ``` for(int i = 0; i < 10; i++) { //This will run 10 times nome = keyboard.next(); if(nome.equalsIgnoreCase("fim")) { //This will verify if last input was "fim" break; //This breaks the for-loop } } ``` Or if you really want to use a `while` loop inside the `for` one (not recommended though) you need to increase `i` inside it: ``` for(int i = 0; i < 10; i++) { while(!nome.equalsIgnoreCase("fim") && i<10) { nome = keyboard.next(); i++; } } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You might want to give this code a try (I have added explanations in comments on the relevant lines): ``` public static void main(String[] args) { Scanner keyboard = new Scanner (System.in); System.out.println("Introduza até 10 nomes completos com até 120 caracteres e pelo menos dois nomes com pelo menos 4 caracteres: "); String nome = keyboard.next(); int i = 0; // Here I introduce a counter, to increment it after each input given while(!nome.equalsIgnoreCase("fim") && i!=10) { // stop performing while-loop when nome is equal to "fim" // or if i==10 (if any of these conditions is false, entire condition is false) nome = keyboard.nextLine(); i++; // increment counter after input } keyboard.close(); System.out.println("End of input"); // Just to confirm that you exited while-loop } ``` Upvotes: 1 <issue_comment>username_3: I am not a big fan of `break`, so adding to username_1's excellent answer, you can use a `do-while` loop instead: ``` String nome; int i = 0; do { nome = keyboard.next(); i++; } while(!nome.equalsIgnoreCase("fim") && i<10); ``` Also right now you're overwriting all previously entered names. So you either have to handle them directly inside the loop, or collect them in some kind of container, e.g. a list. I would rewrite the loop as such: ``` String nome; int i = 0; while(i<10 && !(nome = keyboard.next()).equalsIgnoreCase("fim")) { i++; // Either handle nome here directly, or add it to a list for later handling. } ``` Upvotes: 2
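Picking up username_3's point about the names being overwritten — a small self-contained sketch that collects them in a list while honoring both exit conditions (`nextLine` is used here, my choice, since the prompt asks for full names with spaces):

```
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

public class Nomes {
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        List<String> nomes = new ArrayList<>();
        while (nomes.size() < 10) {                   // stop at 10 names...
            String nome = keyboard.nextLine();
            if (nome.equalsIgnoreCase("fim")) break;  // ...or when the user types "fim"
            nomes.add(nome);
        }
        keyboard.close();
        System.out.println(nomes);
    }
}
```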
<issue_start>username_0: I am using Spring JPA and a postgres database. I have created the following table: ``` CREATE TABLE public.book ( id bigint NOT NULL DEFAULT nextval('book_id_seq'::regclass), title character varying(60) NOT NULL, year date, publisher character varying(30), author bigint[] NOT NULL, cathegory smallint[], CONSTRAINT book_primary_key PRIMARY KEY (id) ) ``` Here is the DAO: ``` @Data @NoArgsConstructor @Entity @Table(name = "book", schema = "public") public class BookDao { @Id @GeneratedValue(strategy = GenerationType.AUTO, generator = "auto_gen_book") @SequenceGenerator(name = "auto_gen_book", sequenceName = "book_id_seq", allocationSize = 1) private Long id; private String title; private LocalDateTime year; private String publisher; private long[] author; private short[] cathegory; public BookDao(String title, long[] author, short[] cathegory) { this.title = title; this.author = author; this.cathegory = cathegory; } } ``` Here is a very simple controller: ``` @RestController @RequestMapping("/book") public class BookController { @Autowired BookService bookService; @RequestMapping("/{id}") public String getUserSurnameFromId(@PathVariable long id) { return bookService.getBookById(id).toString(); } @RequestMapping("/all") public List<BookDao> getAllBooks() { return bookService.getAllBooks(); } @RequestMapping("/save/{title}/{author}/{category}") public BookDao saveByTitleAuthorCategory(@PathVariable String title, @PathVariable long author, @PathVariable short category) { long[] authors = {author}; short[] categories = {category}; BookDao bookDao = new BookDao(title, authors, categories); return bookService.save(bookDao); } } ``` In the controller a service is used, but the only thing it does is invoke repository methods. No logic is yet implemented.
The problem occurs when I connect to `http://localhost:8090/book/all` to list all records; I get the following error: ``` 2018-03-15 22:40:33.291 ERROR 8022 --- [nio-8090-exec-6] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.springframework.orm.jpa.JpaSystemException: could not deserialize; nested exception is org.hibernate.type.SerializationException: could not deserialize] with root cause java.io.EOFException: null at java.base/java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2754) ~[na:na] at java.base/java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3249) ~[na:na] at java.base/java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:866) ~[na:na] at java.base/java.io.ObjectInputStream.(ObjectInputStream.java:342) ~[na:na] at org.hibernate.internal.util.SerializationHelper$CustomObjectInputStream.(SerializationHelper.java:309) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.internal.util.SerializationHelper$CustomObjectInputStream.(SerializationHelper.java:299) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.internal.util.SerializationHelper.doDeserialize(SerializationHelper.java:218) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.internal.util.SerializationHelper.deserialize(SerializationHelper.java:287) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.fromBytes(SerializableTypeDescriptor.java:139) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.wrap(SerializableTypeDescriptor.java:114) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.descriptor.java.SerializableTypeDescriptor.wrap(SerializableTypeDescriptor.java:28) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.descriptor.sql.VarbinaryTypeDescriptor$2.doExtract(VarbinaryTypeDescriptor.java:60) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.descriptor.sql.BasicExtractor.extract(BasicExtractor.java:47) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:261) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:257) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.AbstractStandardBasicType.nullSafeGet(AbstractStandardBasicType.java:247) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.type.AbstractStandardBasicType.hydrate(AbstractStandardBasicType.java:333) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.persister.entity.AbstractEntityPersister.hydrate(AbstractEntityPersister.java:2854) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.loadFromResultSet(Loader.java:1747) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.instanceNotYetLoaded(Loader.java:1673) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.getRow(Loader.java:1562) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.getRowFromResultSet(Loader.java:732) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.processResultSet(Loader.java:991)
~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.doQuery(Loader.java:949) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:341) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.doList(Loader.java:2692) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.doList(Loader.java:2675) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2507) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.Loader.list(Loader.java:2502) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:502) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.hql.internal.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:392) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.engine.query.spi.HQLQueryPlan.performList(HQLQueryPlan.java:216) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.internal.SessionImpl.list(SessionImpl.java:1490) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1445) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1414) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.query.Query.getResultList(Query.java:146) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.hibernate.query.criteria.internal.compile.CriteriaQueryTypeQueryAdapter.getResultList(CriteriaQueryTypeQueryAdapter.java:72) ~[hibernate-core-5.2.13.Final.jar:5.2.13.Final] at org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll(SimpleJpaRepository.java:307) ~[spring-data-jpa-2.0.4.RELEASE.jar:2.0.4.RELEASE] at org.springframework.data.jpa.repository.support.SimpleJpaRepository.findAll(SimpleJpaRepository.java:74) ~[spring-data-jpa-2.0.4.RELEASE.jar:2.0.4.RELEASE] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:564) ~[na:na] at org.springframework.data.repository.core.support.RepositoryComposition$RepositoryFragments.invoke(RepositoryComposition.java:377) ~[spring-data-commons-2.0.4.RELEASE.jar:2.0.4.RELEASE] at org.springframework.data.repository.core.support.RepositoryComposition.invoke(RepositoryComposition.java:200) ~[spring-data-commons-2.0.4.RELEASE.jar:2.0.4.RELEASE] at org.springframework.data.repository.core.support.RepositoryFactorySupport$ImplementationMethodExecutionInterceptor.invoke(RepositoryFactorySupport.java:629) ~[spring-data-commons-2.0.4.RELEASE.jar:2.0.4.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.doInvoke(RepositoryFactorySupport.java:593) ~[spring-data-commons-2.0.4.RELEASE.jar:2.0.4.RELEASE] at 
org.springframework.data.repository.core.support.RepositoryFactorySupport$QueryExecutorMethodInterceptor.invoke(RepositoryFactorySupport.java:578) ~[spring-data-commons-2.0.4.RELEASE.jar:2.0.4.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.data.projection.DefaultMethodInvokingMethodInterceptor.invoke(DefaultMethodInvokingMethodInterceptor.java:59) ~[spring-data-commons-2.0.4.RELEASE.jar:2.0.4.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294) ~[spring-tx-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98) ~[spring-tx-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139) ~[spring-tx-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.data.jpa.repository.support.CrudMethodMetadataPostProcessor$CrudMethodMetadataPopulatingMethodInterceptor.invoke(CrudMethodMetadataPostProcessor.java:135) ~[spring-data-jpa-2.0.4.RELEASE.jar:2.0.4.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:92) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.data.repository.core.support.SurroundingTransactionDetectorMethodInterceptor.invoke(SurroundingTransactionDetectorMethodInterceptor.java:61) ~[spring-data-commons-2.0.4.RELEASE.jar:2.0.4.RELEASE] at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) ~[spring-aop-5.0.4.RELEASE.jar:5.0.4.RELEASE] at com.sun.proxy.$Proxy112.findAll(Unknown Source) ~[na:na] at ibdb.service.implementations.BookServiceImpl.getAllBooks(BookServiceImpl.java:24) ~[classes/:na] at ibdb.controller.BookController.getAllBooks(BookController.java:28) ~[classes/:na] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:564) ~[na:na] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at 
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:870) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:776) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:978) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:870) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:635) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:855) ~[spring-webmvc-5.0.4.RELEASE.jar:5.0.4.RELEASE] at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) ~[tomcat-embed-websocket-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:158) ~[spring-boot-actuator-2.0.0.RC2.jar:2.0.0.RC2] at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.filterAndRecordMetrics(WebMvcMetricsFilter.java:126) ~[spring-boot-actuator-2.0.0.RC2.jar:2.0.0.RC2] at org.springframework.boot.actuate.metrics.web.servlet.WebMvcMetricsFilter.doFilterInternal(WebMvcMetricsFilter.java:111) ~[spring-boot-actuator-2.0.0.RC2.jar:2.0.0.RC2] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.springframework.boot.actuate.web.trace.servlet.HttpTraceFilter.doFilterInternal(HttpTraceFilter.java:84) 
~[spring-boot-actuator-2.0.0.RC2.jar:2.0.0.RC2] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:109) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.4.RELEASE.jar:5.0.4.RELEASE] at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) ~[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:496) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) 
[tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459) [tomcat-embed-core-8.5.28.jar:8.5.28] at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-8.5.28.jar:8.5.28] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167) [na:na] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641) [na:na] at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.28.jar:8.5.28] at java.base/java.lang.Thread.run(Thread.java:844) [na:na] ``` In addition to that, I am using Spring JPA to represent the following table: ``` CREATE TABLE public."user" ( id bigint NOT NULL DEFAULT nextval('user_id_seq'::regclass), surname character varying(30) NOT NULL, name character varying(30) NOT NULL, date_of_birth date, place_of_birth character varying(30), CONSTRAINT user_primary_key PRIMARY KEY (id) ) ``` This one causes no error at all, although everything is practically the same. For me, the main difference is the use of array types in the postgres column definitions, but after long hours of thinking and researching I couldn't find any other clue. I will be thankful for any help.<issue_comment>username_1: The issue here is that Postgres array types are not supported by JPA by default. You will need to create a usertype, like the answer to this question [How to use Spring Data / JPA to insert into a Postgres Array type column?](https://stackoverflow.com/questions/39119164/how-to-use-spring-data-jpa-to-insert-into-a-postgres-array-type-column) In addition, you may want to check your model, because these arrays (assuming they are id's of authors and categories) violate the 1st normal form. See <https://en.wikipedia.org/wiki/First_normal_form> Normally for a mapping like this, you would have many-to-many with an associative table in the middle. Upvotes: 0 <issue_comment>username_2: Just like @username_1 has pointed out, Postgres array types are not supported by JPA. Specific mapping implementations can be added manually, but @vlad-mihalcea has already provided a sample implementation. You can get it from Maven Central: ``` <dependency> <groupId>com.vladmihalcea</groupId> <artifactId>hibernate-types-52</artifactId> <version>${hibernate-types.version}</version> </dependency> ``` More details can be found here: <https://vladmihalcea.com/how-to-map-java-and-sql-arrays-with-jpa-and-hibernate/> After resolving the dependency, an implementation for the specific mapping needs to be added. Let us take Postgres `bigint[]` as an example. It can be mapped to, for example, `Long[]`.
Firstly, we need to add a descriptor for the desired type:

```
import com.vladmihalcea.hibernate.type.array.internal.AbstractArrayTypeDescriptor;

public class LongArrayTypeDescriptor extends AbstractArrayTypeDescriptor<Long[]> {

    public static final LongArrayTypeDescriptor INSTANCE = new LongArrayTypeDescriptor();

    public LongArrayTypeDescriptor() {
        super(Long[].class);
    }

    protected String getSqlArrayType() {
        return "bigint";
    }
}
```

After that, the actual mapping class:

```
import com.vladmihalcea.hibernate.type.array.internal.ArraySqlTypeDescriptor;
import org.hibernate.type.AbstractSingleColumnStandardBasicType;
import org.hibernate.usertype.DynamicParameterizedType;

import java.util.Properties;

public class LongArrayType
        extends AbstractSingleColumnStandardBasicType<Long[]>
        implements DynamicParameterizedType {

    public static final LongArrayType INSTANCE = new LongArrayType();

    public LongArrayType() {
        super(ArraySqlTypeDescriptor.INSTANCE, LongArrayTypeDescriptor.INSTANCE);
    }

    public String getName() {
        return "long-array";
    }

    @Override
    protected boolean registerUnderJavaType() {
        return true;
    }

    @Override
    public void setParameterValues(Properties parameters) {
        ((LongArrayTypeDescriptor) getJavaTypeDescriptor()).setParameterValues(parameters);
    }
}
```

That being done, all that is left to do is to take our mappings into account in the Spring configuration. I keep my data model configuration separately, in annotation-based form:

```
import ibdb.model.mappers.LongArrayType;
import ibdb.model.mappers.ShortArrayType;
import org.hibernate.annotations.TypeDef;
import org.hibernate.annotations.TypeDefs;

import javax.persistence.MappedSuperclass;

@TypeDefs({
    @TypeDef(
        name = "long-array",
        typeClass = LongArrayType.class
    ),
    @TypeDef(
        name = "short-array",
        typeClass = ShortArrayType.class
    )
})
@MappedSuperclass
public class DaoConfig {
}
```

Now everything is ready to use. An example column annotated with the newly added mapping in a DAO definition looks like the following:

```
@Type( type = "long-array" )
@Column( columnDefinition = "bigint[]" )
private Long[] author;
```

Upvotes: 2 [selected_answer]
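To show where the registered type ends up in practice, here is a minimal sketch of an entity built on the pieces above; the entity, table, and field names are hypothetical, not from the thread:

```java
import org.hibernate.annotations.Type;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

// Hypothetical entity: extends DaoConfig so the @TypeDefs above are in scope.
@Entity
@Table(name = "book")
public class Book extends DaoConfig {

    @Id
    @GeneratedValue
    private Long id;

    // Stored as a Postgres bigint[] column, read back as Long[].
    @Type(type = "long-array")
    @Column(name = "author_ids", columnDefinition = "bigint[]")
    private Long[] authorIds;

    public Long[] getAuthorIds() { return authorIds; }
    public void setAuthorIds(Long[] authorIds) { this.authorIds = authorIds; }
}
```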
2018/03/15
2,020
8,055
<issue_start>username_0: Is there any way to show the RDLC Local ReportViewer control in an ASP.NET Core web page? To show a ReportViewer on a traditional WebForms application, the below code works.

```
```

I have already tried and tested the below components. The results are given below.

1. [ReportViewerForMvc](https://github.com/armanio123/ReportViewerForMvc) - Works for MVC, but not compatible with ASPNET Core.
2. [MvcReportViewer](https://github.com/ilich/MvcReportViewer) - Works for MVC, but not compatible with ASPNET Core (See this issue: <https://github.com/ilich/MvcReportViewer/issues/121>).
3. [MvcReportViewer](https://github.com/alanjuden/MvcReportViewer) - Does not use the Microsoft viewer control, thus supports ASPNET Core, but does not work with Local Reports (needs a report server URL).
4. [ngx-ssrs-reportviewer](https://www.npmjs.com/package/ngx-ssrs-reportviewer) npm package - A wrapper over Remote Reports, does not support Local reports (needs a report server URL).

Q1. What is the best approach to use in an ASP.NET Core application?<issue_comment>username_1: If the question is how to use the Microsoft ReportViewer in an ASP.NET Core project, regardless of implementation details, my solution is to bypass the actual ReportViewer control and render reports directly to PDF or Excel. It works in .NET Core 1.1. The NuGet package we use is [Microsoft.ReportViewer.2012.Runtime by Fornax](https://www.nuget.org/packages/Microsoft.ReportViewer.2012.Runtime/).

```
using System.IO;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Reporting.WebForms;

namespace WebApplication3.Controllers
{
    public class ReportController : Controller
    {
        private readonly IHostingEnvironment environment = null;

        public ReportController(IHostingEnvironment environment)
        {
            this.environment = environment;
        }

        public IActionResult Report()
        {
            string mimeType;
            string encoding;
            string filenameExtension;
            string[] streams;
            Warning[] warnings;

            var rv = new ReportViewer();
            rv.ProcessingMode = ProcessingMode.Local;
            rv.LocalReport.ReportPath = Path.Combine(environment.ContentRootPath, "Reports", "Report1.rdlc");
            rv.LocalReport.Refresh();

            var bytes = rv.LocalReport.Render("PDF", null, out mimeType, out encoding,
                out filenameExtension, out streams, out warnings);

            return File(bytes, mimeType);
        }
    }
}
```

Upvotes: 2 <issue_comment>username_2: Microsoft is not implementing or bringing the RDLC report viewer into ASP.NET Core. Instead they are purchasing a product to fill the void.

Full link to news - <https://blogs.msdn.microsoft.com/sqlrsteamblog/2018/04/02/microsoft-acquires-report-rendering-technology-from-forerunner-software/>

Link to original issue - <https://github.com/aspnet/Home/issues/1528>

Here is the essence.

"Microsoft acquires report rendering technology from Forerunner Software

We're pleased to announce that we've acquired technology from Forerunner Software to accelerate our investments in Reporting Services. This technology includes, among other things, client-side rendering of Reporting Services (*.rdl) reports, responsive UI widgets for viewing reports, and a JavaScript SDK for integrating reports into other apps – a testament to what our partners can achieve building on our open platform. This is great news for you, as we see opportunities to apply this technology to multiple points of feedback we've heard from you:

You're looking for cloud Software-as-a-Service (SaaS) or Platform-as-a-Service (PaaS) that can run SSRS reports. As you might've seen in our Spring '18 Release Notes, we're actively working on bringing SSRS reports to the Power BI cloud service, and we're building on client-side rendering to make that possible.

You want to view SSRS reports on your phone, perhaps using the Power BI app. We believe this technology will help us deliver better, more responsive UI for supplying report parameter values, navigating within reports, and possibly even viewing report content.

***You love the Report Viewer control… but it's an ASP.NET Web Forms control. You need something you can integrate into your ASP.NET Core/MVC app or non-ASP.NET app. With this technology, we hope to deliver a client-side/JavaScript-based Report Viewer you can integrate into any modern app.***

These are large undertakings and we don't yet have timeframes to share, but stay tuned over the coming months as we always strive to share our progress with you and hear your feedback as early and often as we can.

Forerunner Software will continue to support existing customers for a limited period of time."

Upvotes: 5 [selected_answer]<issue_comment>username_3:

```
public List _dataSourceList = new List();
public string _dataSourceName { get; set; }
public string _reportPath = CommonUtil.Report_path; // set your report path in app.config file
public Dictionary Parameters = new Dictionary();

public IActionResult PDFPrint_Load()
{
    string mimtype = "";
    int extension = 1;
    LocalReport localReport = new LocalReport(_reportPath);
    localReport.AddDataSource(_dataSourceName, _dataSourceList);
    if (Parameters != null && Parameters.Count > 0) // if you use parameters in the report
    {
        List reportparameter = new List();
        foreach (var record in Parameters)
        {
            reportparameter.Add(new ReportParameter());
        }
    }
    var result = localReport.Execute(RenderType.Pdf, extension, parameters: Parameters, mimtype);
    byte[] bytes = result.MainStream;
    string fileName = "Report.pdf";
    return File(bytes, "application/pdf", fileName);
}
```

Upvotes: 0 <issue_comment>username_4: Found an npm package, **ng2-pdfjs-viewer**. Though it is not quite the MS report viewer, if you are willing to use PDFJS, the documentation of the package has an example along similar lines: use LocalReport on the server to generate the PDF and ng2-pdfjs-viewer to display it in the browser (<https://www.npmjs.com/package/ng2-pdfjs-viewer>).

```html
Open Pdf
```

```js
export class MyComponent implements OnInit {
  @ViewChild('pdfViewer') pdfViewer
  ...

  private downloadFile(url: string): any {
    return this.http.get(url, { responseType: ResponseContentType.Blob }).map(
      (res) => {
        return new Blob([res.blob()], { type: "application/pdf" });
      });
  }

  public openPdf() {
    let url = "http://localhost:4200/api/GetMyPdf";
    this.downloadFile(url).subscribe(
      (res) => {
        this.pdfViewer.pdfSrc = res; // pdfSrc can be Blob or Uint8Array
        this.pdfViewer.refresh();    // Ask pdf viewer to load/refresh pdf
      }
    );
  }
}
```

```
[HttpGet]
[Route("MyReport")]
public IActionResult GetReport()
{
    var reportViewer = new ReportViewer { ProcessingMode = ProcessingMode.Local };
    reportViewer.LocalReport.ReportPath = "Reports/MyReport.rdlc";
    reportViewer.LocalReport.DataSources.Add(new ReportDataSource("NameOfDataSource1", reportObjectList1));
    reportViewer.LocalReport.DataSources.Add(new ReportDataSource("NameOfDataSource2", reportObjectList1));

    Warning[] warnings;
    string[] streamids;
    string mimeType;
    string encoding;
    string extension;
    var bytes = reportViewer.LocalReport.Render("application/pdf", null, out mimeType,
        out encoding, out extension, out streamids, out warnings);

    return File(bytes, "application/pdf");
}
```

Upvotes: 2 <issue_comment>username_5: In order for James' solution to work, it requires that you reference the full .NET Framework. This is all well and good for ASP.NET Core 1 and 2; however - as everyone should be aware by now - ASP.NET Core 3 will **NOT** allow you to reference the full .NET Framework. Currently, it's only possible to use SSRS hosted server (RDL) reports with .NET Core. For client **RDLC** reports, currently only the paid Syncfusion solution works (I've tested the trial version). username_4's solution is entirely invalid with ASP.NET Core 3 (which again - only allows you to reference .NET Core, not the .NET Framework). Upvotes: 3
2018/03/15
1,886
7,629
<issue_start>username_0: I have two tabs: one tab has the CKEditor and the other tab has the preview, so I would like to extract the text from the CKEditor into the preview part (the preview part is nothing but a div).

```
Editor Preview
```

I have tried doing this but with no luck.

**Script**

```
var editor = CKEDITOR.replace('textbox', {
    allowedContent: true,
    removePlugins: "about"
});

$('#preview').click(function(){
    // alert("test");
    var test = CKEDITOR.instances.yourEditorInstance.editable().getText();
    alert(test);
});
```

So when I click on the preview tab I get nothing.
2018/03/15
515
1,755
<issue_start>username_0: When using `var years = new Date().setFullYear(new Date().getFullYear())` I am getting a 13-digit number in return (`1521150199880`). The entire code block is as follows:

```
function changeYear(value) {
    var years = new Date().setFullYear(new Date().getFullYear());
    alert(years)

    if (value < 0) {
        year1 = year1 - 1;
        document.getElementById("barText").innerHTML = years;
    } else if(value > 0) {
        years = years + 1;
        document.getElementById("barText").innerHTML = years;
    } else {
        alert("Error!");
    }
}
```

The program subtracts the year by one if the `value` input of the function is negative, and the opposite in any other case (except when the `value` input is == 0, obviously). Anyone see the problem? I have also tried exchanging `.getFullYear()` with `.getYear()` without luck :/<issue_comment>username_1: The value of a Date object is the number of milliseconds since midnight 1 Jan 1970 UTC. Replace this line

```
var years = new Date().setFullYear(new Date().getFullYear());
```

with this

```
var years = new Date(new Date().setFullYear(new Date().getFullYear()));
```

Upvotes: 1 <issue_comment>username_2: setFullYear() returns *the number of milliseconds between 1 January 1970 00:00:00 UTC and the updated date*. For your code to work, replace this line

```
var years = new Date().setFullYear(new Date().getFullYear());
```

with this

```
var years = new Date(new Date().setFullYear(new Date().getFullYear())).getFullYear(); //as you want year from here
```

I don't know why you are doing this, as `new Date(new Date().setFullYear(new Date().getFullYear())).getFullYear()` equals `new Date().getFullYear()`. So you can also do this

```
var years = new Date().getFullYear();
```

Upvotes: 3 [selected_answer]
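To see the two values side by side, here is a standalone check using only the standard `Date` API:

```js
// setFullYear() mutates the Date in place and returns a millisecond timestamp,
// which is why the original code saw a 13-digit number.
var d = new Date();
var returned = d.setFullYear(d.getFullYear());

console.log(returned);                          // e.g. 1521150199880 (ms since epoch)
console.log(d.getFullYear());                   // e.g. 2018, the plain year
console.log(new Date(returned).getFullYear());  // same year, rebuilt from the timestamp
```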
2018/03/15
976
3,799
<issue_start>username_0: I am new to multithreading. I have a volatile variable currentPrimeNo and it will print the next prime number, as implemented in the run method, for every new thread. But every time I am getting currentPrimeNo as 0 for every thread. How should I keep the global variable currentPrimeNo updated?

```
public class Processor implements Runnable {
    private int id;
    private static volatile int currentPrimeNo = 0;

    public Processor(int id) {
        this.id = id;
    }

    @Override
    public void run() {
        System.out.println("Starting process id: " + id);
        currentPrimeNo = Utils.generateNextPrime(currentPrimeNo);
        System.out.println("Prime Number Associated with this thread is: " + currentPrimeNo);
        System.out.println("Completed process id: " + id);
    }
}
```

And the main class is:

```
public class MainClass {
    @SuppressWarnings("resource")
    public static void main(String[] args) {
        System.out.println("****This is where the project starts*****");
        Scanner reader = new Scanner(System.in);
        System.out.print("Enter number of processes you want to create: ");
        int n = reader.nextInt();
        ExecutorService executor = Executors.newFixedThreadPool(n);
        for(int i=1; i<=n; i++) {
            executor.submit(new Processor(i));
        }
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.MINUTES);
        } catch (InterruptedException e1) {
            e1.printStackTrace();
        }
        System.out.println("****This is where the project ends*****");
    }
}
```

Below is the generateNextPrime method from the Util class:

```
public synchronized static int generateNextPrime(int currentPrime) {
    int nextPrime = 2;
    if (currentPrime <= 1) {
        return nextPrime;
    } else {
        for (int i = currentPrime + 1;; i++) {
            boolean notPrime = false;
            for (int j = 2; j < i; j++) {
                if (i % j == 0) {
                    notPrime = true;
                    break;
                }
            }
            if (notPrime == false) {
                return i;
            }
        }
    }
}
```

Below is the output I am getting:

```
****This is where the project starts*****
Enter number of processes you want to create: 4
Starting process id: 2
Starting process id: 3
Starting process id: 1
Starting process id: 4
Prime Number Associated with this thread is: 2
Prime Number Associated with this thread is: 2
Completed process id: 4
Completed process id: 1
Prime Number Associated with this thread is: 2
Completed process id: 2
Prime Number Associated with this thread is: 2
Completed process id: 3
****This is where the project ends*****
```
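The underlying issue is that the read-then-write of `currentPrimeNo` in `run()` is not atomic even though the field is `volatile`, so all the threads can read the initial value before anyone writes. As a minimal sketch of one standard fix (my own illustration, reusing the question's `Utils.generateNextPrime`), an `AtomicInteger` serializes the update:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Processor implements Runnable {
    private final int id;
    // One shared counter; updateAndGet applies the function atomically.
    private static final AtomicInteger currentPrimeNo = new AtomicInteger(0);

    public Processor(int id) {
        this.id = id;
    }

    @Override
    public void run() {
        System.out.println("Starting process id: " + id);
        int prime = currentPrimeNo.updateAndGet(Utils::generateNextPrime);
        System.out.println("Prime Number Associated with this thread is: " + prime);
        System.out.println("Completed process id: " + id);
    }
}
```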
2018/03/15
714
2,712
<issue_start>username_0: I'm creating my first game, an "Endless Runner", and my problem is in the code: I don't want to destroy the new objects, I need to move them back to the old place.

```
public class GenerateEnv1 : MonoBehaviour {

    public GameObject []EnvTile;
    float tileZ = 29.31f;

    // Use this for initialization
    void Start () {
    }

    // Update is called once per frame
    void Update () {
    }

    void OnCollisionEnter (Collision col) {
        if(col.gameObject.tag =="ground"){
            GameObject o = Instantiate (EnvTile[Random.Range(0,4)], new Vector3(0f,0f,tileZ), Quaternion.EulerAngles(0,0,0));
            tileZ += 2.96f;
        }
    }

    void OnCollisionExit (Collision col) {
        if(col.gameObject.tag =="ground"){
            Destroy (col.gameObject,3); // this is what I need to replace in the new object
        }
    }
}
```<issue_comment>username_1: Since instantiating and destroying a tile is more costly, reuse the tiles instead: every time you leave a tile, OnCollisionExit will fire and you move that tile right after the next tile, and so on. Also, instantiating two tiles in Start is enough, since we're reusing them.

```
public class GenerateEnv1 : MonoBehaviour {

    public GameObject[] EnvTile;
    float tileZ = 29.31f;

    // Use this for initialization
    void Start() {
        Instantiate(EnvTile[Random.Range(0, 4)], new Vector3(0f, 0f, tileZ), Quaternion.EulerAngles(0, 0, 0));
        Instantiate(EnvTile[Random.Range(0, 4)], new Vector3(0f, 0f, tileZ * 2), Quaternion.EulerAngles(0, 0, 0));
    }

    void OnCollisionExit(Collision col) {
        if (col.gameObject.tag == "ground") {
            col.gameObject.transform.position = new Vector3(
                col.gameObject.transform.position.x,
                col.gameObject.transform.position.y,
                col.gameObject.transform.position.z + (tileZ * 2));
        }
    }
}
```

Upvotes: 1 <issue_comment>username_2: The solution to your problem is called Object Pooling. There's a tiny bit too little detail to provide full code as an answer, but in general it works like this: instantiate all the objects you'll need when the game loads; this is fairly expensive and is best done up front. Then, everything you do is some juggling with gameObject.SetActive(true/false). A generic implementation will contain a list of game objects (the pool); you deactivate (instead of destroying) an object in OnCollisionExit, then you take a new object from the pool (pick one that is not active), activate it, and place it. It is not clear, however, how the colliders are placed; obviously a collision will not be triggered by an object that is not yet active in the scene. Is your script meant to go on the player? Upvotes: 0
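To make username_2's pooling suggestion concrete, here is a minimal sketch; the class name, field names and pool size are illustrative, not from the thread:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class TilePool : MonoBehaviour
{
    public GameObject[] envTile;  // prefabs to pick from
    public int poolSize = 6;      // illustrative size

    private readonly List<GameObject> pool = new List<GameObject>();

    void Start()
    {
        // Instantiate everything up front; reuse instead of Destroy/Instantiate.
        for (int i = 0; i < poolSize; i++)
        {
            var tile = Instantiate(envTile[Random.Range(0, envTile.Length)]);
            tile.SetActive(false);
            pool.Add(tile);
        }
    }

    public GameObject GetTile(Vector3 position)
    {
        foreach (var tile in pool)
        {
            if (!tile.activeInHierarchy)
            {
                tile.transform.position = position;
                tile.SetActive(true);
                return tile;
            }
        }
        return null; // pool exhausted; grow it or tune poolSize
    }

    public void ReleaseTile(GameObject tile)
    {
        // Deactivate instead of destroying, as the answer describes.
        tile.SetActive(false);
    }
}
```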
2018/03/15
456
1,650
<issue_start>username_0: In my project using Entity Framework, I have a bunch of functions that look almost exactly alike, so I want to create a generic method they call:

```cs
private IHttpActionResult GetData<TEntity>(DbSet<TEntity> data)
```

The problem I'm having is that the `data` parameter is saying `TEntity` has to be a reference type to work, but the type of the entity comes from auto-generated code that doesn't have any base class that I can constrain via a `where` clause on the method definition. I'd basically want to call it by getting a context and passing the table in like so:

```cs
using (var context = new DataModel.MyEntities())
{
    GetData(context.Lab_SubSpace_Contact);
}
```<issue_comment>username_1: You do not need a base class, you only have to specify a constraint that it has to be a class (not a struct). This can be done with `where TEntity : class`.

[Constraints on Type Parameters](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/generics/constraints-on-type-parameters)

> `where T : class` : The type argument must be a reference type; this applies also to any class, interface, delegate, or array type.

Modified code

```
private IHttpActionResult GetData<TEntity>(DbSet<TEntity> data) where TEntity : class
```

Upvotes: 2 <issue_comment>username_2: To expand on [@username_1's answer](https://stackoverflow.com/a/49309913/2141621), you don't have to pass the `DbSet`, you can also get it dynamically through the type parameter:

```
private IHttpActionResult GetData<TEntity>() where TEntity : class
{
    using (var context = new YourContext())
    {
        var dbSet = context.Set<TEntity>();
    }
}
```

Upvotes: 3 [selected_answer]
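Putting the two answers together at the question's call site, as a sketch, where `Lab_SubSpace_Contact` and `MyEntities` come from the question:

```cs
using (var context = new DataModel.MyEntities())
{
    // TEntity is inferred as Lab_SubSpace_Contact, which satisfies 'where TEntity : class'.
    var result = GetData(context.Lab_SubSpace_Contact);
}
```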
2018/03/15
538
1,967
<issue_start>username_0: I want to make a Chrome bookmarklet which opens a new tab with a specific action. To be more exact, I want to have a fixed URL like `"https://www.blablabla.com/search="` inside the bookmarklet, and when I press it, I want a popup window to appear with an input field. When I type something in the input field and press enter or OK/submit, it should "run" the whole link plus my query. For example, I press the bookmarklet, the input field appears and I input the word "test" (without the quotes). When I press submit, a new tab will open with the address `https://www.blablabla.com/search=test` as the URL. How do I do that? I tried with the prompt function but I can't get it to work... My question is a bit similar to [How do I get JavaScript code in a bookmarklet to execute after I open a new webpage in a new tab?](https://stackoverflow.com/questions/28998586).<issue_comment>username_1: Although it remains unclear what exact issue you encounter, try the following bookmarklet:

```
javascript:(function() {
    var targetUrl = "http://www.blablabla.com/search=";
    new Promise(
        (setQuery) => {
            var input = window.prompt("ENTER YOUR QUERY:");
            if (input) setQuery(input);
        }
    )
    .then(
        (query) => window.open(targetUrl + query)
    );
})();
```

If it doesn't work, you should provide the problem description in more detail. Upvotes: 3 <issue_comment>username_2: @username_1's answer is mostly correct, but you don't need the promise.

```
javascript:(function() {
    var targetUrl = "http://www.blablabla.com/search=";
    var input = window.prompt("ENTER YOUR QUERY:");
    if (input) window.open(targetUrl + input)
})();
```

Upvotes: 3 <issue_comment>username_3:

```
javascript:void(window.open('http://www.URL.com/'+prompt('Enter your Query:')));
```

I hope this helps. Works for me and is much simpler code than I see above. (As long as we reach the end result, that is all that matters, right?) Upvotes: 2
2018/03/15
788
2,595
<issue_start>username_0: My question is why `x.proc(z)` below prints 57 instead of 39.

```
class X {
    protected int v = 0;
    public X() { v += 10; System.out.println("constr X"); }
    public void proc(X p) { System.out.println(43); }
}

class Y extends X {
    public Y() { v += 5; System.out.println("constr Y"); }
    public void proc(X p) { System.out.println(57); }
    public int getV() { return v; }
}

class Z extends Y {
    public Z() { v += 9; System.out.println("constr Z"); }
    public void proc(Z p) { System.out.println(39); }
}

class Main {
    public static void main(String argv[]) {
        X x = new Z(); // v=24
        Y y = new Z(); // v=24
        Z z = new Z(); // v=24
        x.proc(z); // 57
    }
}
```

`X x` refers to a `Z` object, and class `Z` does have the method `proc(Z p)`, but it also has the method `proc(X p)`. Also, the parameter `z` is of type `Z`, so it would seem reasonable to print 39.<issue_comment>username_1: First, the compiler chooses which method signature to call at compile time. For this method resolution, the compile-time type of the variable is considered; polymorphism doesn't apply here. The compile-time type of `x` is `X`, even if it really is a `Z` object at runtime. Only `proc(X)` is considered, and that matches, because the `z` you pass in is an `X`.

At runtime, polymorphism in Java means that the runtime type of the object on which the method is called is taken into consideration for determining which override to call. `Y`'s `proc` overrides `X`'s `proc`. However, `Z`'s `proc` method takes a `Z`, not an `X`, so `Z`'s `proc` method *overloads* `proc`; it does *not* override it. Therefore, `Y`'s `proc` is chosen. Upvotes: 0 <issue_comment>username_2: The method

```
public void proc(Z p) {System.out.println(39);}
```

in `Z` does not override

```
public void proc(X p) {System.out.println(43);}
```

in `X`, because it restricts the domain to `Z` instead of `X`. However, the analogous method in `Y` *does* override `proc` in `X`. Since the compile-time type of `x` is `X`, the only method signature that matches `x.proc(z)` is that of `public void proc(X p)`. Only now does the dynamic dispatch take place, and the overriding version from `Y` is selected and executed, which results in output "57", as expected. Upvotes: 1 <issue_comment>username_3: It's because you are **not overriding** the `proc` method but **overloading** it - that means you are creating another method with the same name but a different argument list. The most fitting method will be used. Upvotes: 0
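A quick way to confirm that `Z.proc(Z)` overloads rather than overrides, as a standard compiler check rather than anything from the thread, is to ask the compiler with `@Override`:

```java
class Z extends Y {
    public Z() { v += 9; System.out.println("constr Z"); }

    // Uncommenting @Override makes this fail to compile:
    // "method does not override or implement a method from a supertype"
    // - proof that proc(Z) is an overload, not an override of proc(X).
    // @Override
    public void proc(Z p) { System.out.println(39); }
}
```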
2018/03/15
432
1,133
<issue_start>username_0: I have this array of objects

```
[{
    "A": "thisA",
    "B": "thisB",
    "C": "thisC"
}, {
    "A": "thatA",
    "B": "thatB",
    "C": "thatC"
}]
```

I'm trying to get this format as an end result: `[["thisA","thisC"], ["thatA","thatC"]]`. I'm trying with a for loop

```
var arr = [], arr2 = [];

for (var = i; i < obj.length; i++) {
    arr.push(obj[i].A, obj[i].C);
    arr2.push(arr);
}
```

but I end up having `["thisA","thisC","thatA","thatC"]`.<issue_comment>username_1: You can do this with the `map()` method.

```js
const data = [{"A": "thisA","B": "thisB","C": "thisC"}, {"A": "thatA","B": "thatB","C": "thatC"}]

const result = data.map(({A, C}) => [A, C]);

console.log(result)
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: You could push an array with the values. Besides that, you need to initialize `i` with zero.

```js
var objects = [{ A: "thisA", B: "thisB", C: "thisC" }, { A: "thatA", B: "thatB", C: "thatC" }],
    array = [],
    i;

for (i = 0; i < objects.length; i++) {
    array.push([objects[i].A, objects[i].C]);
}

console.log(array);
```

Upvotes: 2
2018/03/15
635
1,700
<issue_start>username_0: I am getting an array out-of-bounds error. I am trying to loop through a multi-dimensional array and add the value from the formula to each element. How do I fix the loop so I stay inside the array bounds?

```
z=int(4.3/7.9)
V =51
T =51
r = 1
c = 1
a=[[0]*c for i in range(r)]

for r in range(1,51):
    for c in range(1,51):
        a[c][r]=35.74 + 0.6215*T - (35.75*V)**0.16 + (0.4275*T*V)**0.16

print()

#print to html file down below
outfile=open("c:\\data\\pfile.html","w")
outfile.write("")
outfile.write("")
outfile.write("")
outfile.write("Kye Fullwood")
outfile.write(" table,td{border:1px solid black;border-collaspse:collapse;background-color:aqua;}\r\n")
outfile.write("")
outfile.write("")
outfile.write("This is a Windchill table ========================= ")
outfile.write(" ")

for V in range(1,51,1):
    outfile.write("|")
    for TV in range(1,51,1):
        outfile.write(" "+str(a[r][c])+" |\r\n")
    outfile.write(" ")

outfile.write(" ")
outfile.write("")
outfile.write("")
outfile.close()
print("complete")
```
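For reference, the crash itself comes from allocating a 1×1 grid (`r = c = 1` at allocation time) and then indexing up to 50. A minimal sketch of the allocation fix, sized to match the loops above:

```python
ROWS, COLS = 51, 51  # match the range(1, 51) indices used in the loops

# Allocate the full grid up front; each row is an independent list.
a = [[0.0] * COLS for _ in range(ROWS)]

V = T = 51
for r in range(1, 51):
    for c in range(1, 51):
        a[c][r] = 35.74 + 0.6215*T - (35.75*V)**0.16 + (0.4275*T*V)**0.16
```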
2018/03/15
821
2,723
<issue_start>username_0: I am pretty new to Ionic and I tried to build my app, but it failed, so I ran "cordova requirements" and got this error:

```html
$ cordova requirements

Android Studio project detected

Requirements check results for android:
Java JDK: installed
Android SDK: installed true
Android target: not installed
cmd: Command failed with exit code 1
Error output:
Exception in thread "main" java.lang.NoClassDefFoundError: javax/xml/bind/annotation/XmlSchema
        at com.android.repository.api.SchemaModule$SchemaModuleVersion.(SchemaModule.java:156)
        at com.android.repository.api.SchemaModule.(SchemaModule.java:75)
        at com.android.sdklib.repository.AndroidSdkHandler.(AndroidSdkHandler.java:81)
        at com.android.sdklib.tool.AvdManagerCli.run(AvdManagerCli.java:213)
        at com.android.sdklib.tool.AvdManagerCli.main(AvdManagerCli.java:200)
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.annotation.XmlSchema
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
        ... 5 more

Gradle: installed D:\Program Files\Android\Android Studio\gradle\gradle-4.1\bin\gradle
```

Does anybody know how to fix this? Thanks!<issue_comment>username_1: You will have to use the Android SDK Manager to download targets other than the ones bundled by default. But before that, check the Android target version in your Ionic app. You can launch the SDK Manager in one of the following ways:

> From the Android Studio File menu: File > Settings > Appearance & Behavior > System Settings > Android SDK.
>
> From the Android Studio Tools menu: Tools > Android > SDK Manager. From the SDK Manager icon in the menu bar.

Add the min and target version in config.xml:

```
```

Upvotes: 0 <issue_comment>username_2: This should fix it simply! As mentioned here <https://stackoverflow.com/a/50097394/2642351>, try installing a JDK version less than 9, like JDK 8 or so. And update your JAVA\_HOME path in the Environment Variables. Upvotes: 1 <issue_comment>username_3: For me, the steps below saved me:

1. Install the Android Studio 5.0 to 7.1 from menu: File > Settings > Appearance & Behavior > System Settings > Android SDK.
2. Install JDK 8 +
3. Set JAVA\_HOME in system variables
4. Restart machine

Upvotes: 0 <issue_comment>username_4: Uninstall any existing JDK with a version above 9. Install this JDK (Cordova works on JDK 8+): <https://download.oracle.com/otn-pub/java/jdk/8u191-b12/2787e4a523244c269598db4e85c51e0c/jdk-8u191-windows-x64.exe> Upvotes: 1
2018/03/15
1,638
5,839
<issue_start>username_0: My app's `urls.py` is:

```
from django.urls import path
from . import views

app_name = 'javascript'

urlpatterns = [
    path('create_table', views.create_table, name='create_table')
]
```

My `views.py` is:

```
def create_table(request):
    row_data = "this is row data"
    context = {'row_data': row_data}
    return render(request, 'javascript/create_table.html', context)
```

My create\_table.html is:

```
{% load static %}

Get data
```

And my `create_table.js` is:

```
function create_table() {
    document.getElementById("place_for_table").innerHTML = '{{ row_data }}';
}

document.getElementById("create_table").onclick = function() {
    create_table()
}
```

What I am trying to do is to run the create\_table.js script on the click of the `create_table` button, which should display the "this is row data" text in the `place for table` div element. However, what gets displayed is just `{{ row_data }}`. I have read other similar questions on using Django's variables inside JavaScript, but as per my understanding they all suggest to do it the way I did, so I am not sure what I am doing wrong.<issue_comment>username_1: When you write `{{ row_data }}`, you're using a Django-specific "language" called the [Django template language](https://docs.djangoproject.com/en/2.0/ref/templates/language/), which means that the mentioned syntax can only be "understood" by Django templates. What you're doing here is loading a separate JavaScript file in which the Django template syntax simply won't work, because when the browser comes to the point of evaluating that file, `{{ row_data }}` will look like just another string literal, and not what you would expect. It should work if you inline your JavaScript example directly into the Django template. Alternatively, you could somehow "bootstrap" the external JavaScript file with the data available in the Django template; here's how I would go about doing that:

**create\_table.html**

```
$(function() {
    var create_table = Object.create(create_table_module);
    create_table.init({
        row_data: '{{ row_data }}',
        ...
    });
});
```

Note: wrapping the above code in jQuery's [.ready()](https://api.jquery.com/ready/) function is optional, but if you're already using jQuery in your app, it's a simple way to make sure the DOM is safe to manipulate after the initial page load.

**create\_table.js**

```
var create_table_module = (function($) {
    var Module = {
        init: function(opts) {
            // you have access to the object passed
            // to the `init` function from the Django template
            console.log(opts.row_data)
        },
    };
    return Module;
})(jQuery);
```

Note: passing the jQuery instance to the module is optional; it's just here as an example to show how you can pass an external dependency to the module. Upvotes: 3 <issue_comment>username_2: If you've got an element in your template which you're using to detect clicks, why not just do it the other way around, where you can then pass the context variable to your JS function?

```
Click me
```

By doing that you can inspect the page to see if the data is going to be passed correctly. You'll probably have to pass the data through a filter like [`escapejs`](https://docs.djangoproject.com/en/2.0/ref/templates/builtins/#escapejs) or [`safe`](https://docs.djangoproject.com/en/2.0/ref/templates/builtins/#safe). Alternatively you could do something like

```
{% load static %}

Get data

var row_data = "{{ row_data }}";
```

The issue with this approach is the scope of variables, as you may not want to declare things globally; so it could be considered an easy approach, but not necessarily the best solution. Upvotes: 4 [selected_answer]<issue_comment>username_3: What I did was to include the javascript/jquery inside {% block scripts %} and use the Django-specific data as follows:

```
$.ajax({
    type: "GET",
    url: "/reserve/run/?ip={{ row_data }}",
    dataType: "html",
    async: true,
}).done(function(response) {
    $("#Progress").hide();
    $('#clickid').attr('href','javascript:ClearFlag()');
    var win = window.open("", "MsgWindow");
    win.document.write(response);
});
```

Upvotes: 2 <issue_comment>username_4: I've found a solution to avoid the extra typing of all the previous answers. It's a bit hacky: just transform your `myjavascriptfile.js` into `myjavascriptfile.js.html` and wrap the code in a `...` tag. Then `include` them instead of linking them in your template file.

**myTemplate.html**

```
....
{% block js_footer %}
    {% include "my_app/myjavascriptfile.js.html" %}
{% endblock js_footer %}
```

**myjavascriptfile.js.html**

```
console.log('the time now is {% now "Y m d H:i:s" %}');
...
```

Upvotes: 2 <issue_comment>username_5: Instead of writing the function in a separate js file, write it in script tags within your html page, so it can use the Django template language. Upvotes: 0 <issue_comment>username_6: You can use Django template tags and filters in a `<script>` block. For example, you can pass `Hello` to the [with](https://docs.djangoproject.com/en/4.2/ref/templates/builtins/#with) tag's `dj_val` and through the [upper](https://docs.djangoproject.com/en/4.2/ref/templates/builtins/#upper) filter in `index.html`; then `Hello` and `HELLO` are displayed on the console, as shown below:

```
{% "index.html" %}

{% with dj_val="Hello" %}
    console.log("{{ dj_val }}") # Hello
{% endwith %}

console.log("{{ "Hello"|upper }}") # HELLO
```

Be careful: you cannot pass JavaScript's `js_val`, set to `Hello`, to the `with` tag's `dj_val` or to the `upper` filter in `index.html`; then nothing is displayed on the console, as shown below:

```
{% "index.html" %}

let js_val = "Hello"

{% with dj_val=js_val %}
    console.log("{{ dj_val }}") # Nothing
{% endwith %}

console.log("{{ js_val|upper }}") # Nothing
```

Upvotes: 0
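A further variant worth knowing (standard DOM API, not specific to any answer above): render the value into a data attribute in the template and read it from the external file, so no Django syntax ever appears in a `.js` file:

```html
<div id="table-config" data-row="{{ row_data }}"></div>
```

```js
// in create_table.js
var rowData = document.getElementById('table-config').dataset.row;
```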
2018/03/15
660
2,653
<issue_start>username_0: I'm trying to log out of my application that's using AWS Cognito by calling their logout [endpoint](https://docs.aws.amazon.com/cognito/latest/developerguide/logout-endpoint.html). I'm not using the AWS SDK because, as far as I can tell, it does not yet cover OAuth app integrations and sign-in using external federated identity providers (please correct me if I'm wrong about that). I log in from an AWS-hosted login screen that I'm redirected to when I call their authorization [endpoint](https://docs.aws.amazon.com/cognito/latest/developerguide/authorization-endpoint.html). They redirect me back to my page with a "code" which I post back to them using their token [endpoint](https://docs.aws.amazon.com/cognito/latest/developerguide/token-endpoint.html) to get tokens. All of this is textbook OAuth 2.0 stuff. The problem is that when I call the logout endpoint using a JavaScript browser redirect (window.location.href = ....) it doesn't clear the cookies that are set when I logged in ("XSRF-TOKEN" and "cognito"), and I can't manually clear them because they were set from the AWS domain, which is different from the one where my site is hosted. The cookies do get cleared when I enter the logout link in the address bar. There's clearly a difference between using window.location.href in code and dropping a link in my address bar.<issue_comment>username_1: To clear out the session you need to use `clearCachedId()` and then reset the Cognito Id credentials. This is my function using the AWS SDK:

```
import AWS from 'aws-sdk/global';

const getCurrentUser = () => {
    const userPool = new CognitoUserPool({
        UserPoolId: YOUR_USER_POOL_ID,
        ClientId: YOUR_APP_CLIENT_ID
    });
    return userPool.getCurrentUser();
}

const signOutUser = () => {
    const currentUser = getCurrentUser();
    if (currentUser !== null) {
        currentUser.signOut();
    }
    if (AWS.config.credentials) {
        AWS.config.credentials.clearCachedId(); // this is the clear session
        AWS.config.credentials = new AWS.CognitoIdentityCredentials({}); // this is the new instance after the clear
    }
}
```

That should take care of that. Upvotes: 1 <issue_comment>username_2: It's a timing issue involving the use of window.location and cookies. It seems that I was causing the same cookie, XSRF-TOKEN, to be unset and then reset so fast that it was just not happening at all. Inserting a timeout between logging out and redirecting back to the login screen fixes the problem. There are some guys on this thread who seem to know something about it: <https://bytes.com/topic/javascript/answers/90960-window-location-cookies> Upvotes: 0
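username_2's timing fix, written out as a sketch; the delay value and the `loginUrl` variable are illustrative, not from the thread:

```js
// After returning from the Cognito logout redirect, wait briefly before
// navigating to the sign-in screen, so the cookie unset and the re-set
// are not collapsed into the same instant.
setTimeout(function () {
    window.location.href = loginUrl; // illustrative variable
}, 500);                             // illustrative delay
```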
2018/03/15
518
2,022
<issue_start>username_0: I have more than one property I need to grab that starts with the same prefix, but I can only get the exact value by key for `ModelBindingContext.ValueProvider`. Is there a way to grab multiple ValueProviders or iterate the `System.Web.Mvc.DictionaryValueProvider`?

```
var value = bindingContext.ValueProvider.GetValue(propertyDescriptor.Name);
```

The reason for doing this is a dynamic property called Settings, which should bind to the JSON properties below. Right now there is no property called "Enable" on Settings, so it doesn't bind normally.

```
public class Integration {
    public dynamic Settings {get;set;}
}

"Integrations[0].Settings.Enable": "true"
"Integrations[0].Settings.Name": "Will"
```
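As a sketch of one possible workaround (my assumption, not from the thread): inside a custom `IModelBinder` for classic ASP.NET MVC, the raw posted keys are reachable via `Request.Form`, so prefix matching can be done by hand. The names below are illustrative:

```cs
// Inside BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext):
var form = controllerContext.HttpContext.Request.Form;
string prefix = bindingContext.ModelName + ".Settings.";

var settings = new System.Dynamic.ExpandoObject()
    as System.Collections.Generic.IDictionary<string, object>;

foreach (string key in form.AllKeys)
{
    if (key != null && key.StartsWith(prefix))
    {
        // "Integrations[0].Settings.Enable" -> "Enable"
        settings[key.Substring(prefix.Length)] = form[key];
    }
}
```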
2018/03/15
629
1,542
<issue_start>username_0: I want to paste the values from a vector into the column names of a data frame. Suppose that the file has 4 columns and the tmp2 vector has 4 values. The for loop works perfectly fine (the new colnames are the value of tmp and the value of tmp2 together), but I want to try the same with an apply-family function.

```
tmp <- colnames(file)
tmp2 <- c(1,2,3,4)

for(i in 1:length(tmp)){
    names(tmp)[i] = paste(tmp[i], tmp2[i])
}
```

For example, something like this

```
sapply(tmp, function(x,y){
    names(x) <- paste(x,y)
}, y=tmp2)
```

Any ideas?<issue_comment>username_1: `names(tmp)=mapply(x=tmp,y=tmp2,function(x,y)paste(x,y))` Upvotes: 1 <issue_comment>username_2: You don't need any loop. The `paste` is vectorised too. As I have understood, the OP wants to suffix all columns with a sequence number. Just try:

```
names(file) <- paste(names(file), 1:ncol(file), sep = "")

#   A1 B2  C3
# 1  A  1 101
# 2  B  2 102
# 3  C  3 103
# 4  D  4 104
# 5  E  5 105
```

**Data**

```
file <- data.frame(A = LETTERS[1:5], B = 1:5, C = 101:105)

file
#   A B   C
# 1 A 1 101
# 2 B 2 102
# 3 C 3 103
# 4 D 4 104
# 5 E 5 105
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: If you want to do it the `data.table` way, which is not computationally expensive:

```
library(data.table)
file <- data.table(x=(1:3), y=letters[1:3], z=(1), w=letters[1:3])
tmp2 <- paste(colnames(file), rep(1:nrow(file)))
setnames(file, 1:ncol(file), tmp2)
```

EDIT: If you want to suffix `colnames` with sequence numbers, as mentioned by @username_2. Upvotes: 1
2018/03/15
1,399
3,918
<issue_start>username_0: I came here from [Stackover Topic](https://stackoverflow.com/questions/49308242/my-excel-macro-is-working-extremely-slow/49308558?noredirect=1#comment85620443_49308558) to make the macro faster. I got an answer, but the code is not working (I tried to fix it), and I am asking you.

```
Sub Faster_Method()
    Dim objIE As InternetExplorer
    Dim Prc1 As String
    Set objIE = New InternetExplorer
    Dim Search_Terms() As Variant
    Dim CopiedData() As Variant

    objIE.Visible = True
    Search_Terms() = ActiveSheet.Range("A1:A121").Value
    ReDim CopiedData(1 To UBound(Search_Terms) + 1)

    For a = 1 To UBound(Search_Terms) + 1
        objIE.navigate "https://opskins.com/?loc=shop_search&app=578080_2&sort=lh&search_item=" & Search_Terms(a)
        Do: DoEvents: Loop Until objIE.readyState = 4
        Prc1 = objIE.document.getElementsByClassName("item-amount")(0).innerText
        CopiedData(a) = Prc1
    Next

    ActiveSheet.Range(Cells(1, 2), Cells(UBound(CopiedData), 2)).Value = CopiedData
    objIE.Quit
End Sub
```

**And the error is:** `run time error '9' subscript out of range`

**Debug is:** `objIE.navigate "https://opskins.com/?loc=shop_search&app=578080_2&sort=lh&search_item=" & Search_Terms(a)`

**Fixed**

```
Sub Faster_Method()
    Dim objIE As InternetExplorer
    Dim Prc1 As String
    Set objIE = New InternetExplorer
    Dim Search_Terms() As Variant
    Dim CopiedData() As Variant
    Dim y As Integer

    objIE.Visible = True
    Search_Terms = Application.Transpose(ActiveSheet.Range("A1:A121").Value)
    ReDim CopiedData(LBound(Search_Terms) To UBound(Search_Terms))
    y = 1

    For a = LBound(Search_Terms) To UBound(Search_Terms)
        objIE.navigate "https://opskins.com/?loc=shop_search&app=578080_2&sort=lh&search_item=" & Search_Terms(a)
        Do: DoEvents: Loop Until objIE.readyState = 4
        Prc1 = objIE.document.getElementsByClassName("item-amount")(0).innerText
        Sheets("Sheet1").Range("B" & y).Value = Prc1
        y = y + 1
    Next

    ActiveSheet.Range(Cells(1, 2), Cells(UBound(CopiedData), 2)) = Application.Transpose(CopiedData)
    objIE.Quit
End Sub
```<issue_comment>username_1: There are a few problems.

```
Search_Terms() = ActiveSheet.Range("A1:A121").Value
```

The above code creates a 2-D array of Search\_Terms(1 to 121, 1 to 1). This can be verified with,

```
Debug.Print LBound(Search_Terms, 1) & " to " & UBound(Search_Terms, 1)
Debug.Print LBound(Search_Terms, 2) & " to " & UBound(Search_Terms, 2)
```

Next you attempt to reshape a new array with,

```
ReDim CopiedData(1 To UBound(Search_Terms) + 1)
```

This converts the 2-D array to a new blank 1-D array of CopiedData(1 to 122). Now you go into a loop.

```
For a = 1 To UBound(Search_Terms) + 1
```

The ubound here is the first rank (e.g. 1 to 121). You cannot go to Search\_Terms(122, 1) because it doesn't exist. So when *a* becomes 122, the following crashes with run time error '9' subscript out of range.

```
objIE.navigate "https://opskins.com/?loc=shop_search&app=578080_2&sort=lh&search_item=" & Search_Terms(a)
```

**Possible solution(s)**

```
Search_Terms = Application.Transpose(ActiveSheet.Range("A1:A121").Value)
```

Creates a 1-D array of Search\_Terms(1 to 121).

```
ReDim CopiedData(LBound(Search_Terms) To UBound(Search_Terms))
```

Reshape the target array to the same dimensions.

```
For a = LBound(Search_Terms) To UBound(Search_Terms)
```

Don't go outside of the Lower Boundary or the Upper Boundary.

```
ActiveSheet.Range(Cells(1, 2), Cells(UBound(CopiedData), 2)) = Application.Transpose(CopiedData)
```

You transposed the A1:A121 to get it into a 1-D array.
It makes sense you need to transpose an identical 1-D array to put it back into B1:B121. Upvotes: 3 [selected_answer]<issue_comment>username_2: There may be an issue with the result of getElementsByClassName not being compatible with CopiedData(). I suggest eliminating CopiedData() and saving prc1 directly to Excel cells to see if that eliminates the wrong data type error. Upvotes: 0
2018/03/15
774
3,057
<issue_start>username_0: I'm trying to figure out some issue I'm having. Basically, my issue is that the values being returned by my methods aren't right. I have the print line statement just to be sure it's working, but it always returns `1`, even when I call another method that should return a String. The variable `current_number/image_number` is supposed to be updated every time I call a method (if I keep calling forward starting from `1`, I should be getting `2`, `3`, `4`, etc.). This is my code

```
public class Menu {

    static final int MIN_NUMBER = 1;
    static final int MAX_NUMBER = 8;
    static int image_number = 1;
    static boolean exit;

    public static int forward(int current_number) {
        if (current_number < MAX_NUMBER) {
            current_number++;
        } else if (current_number >= MAX_NUMBER) {
            current_number = MIN_NUMBER;
        }
        return current_number;
    }

    public static void showMenu() {
        int current_number = image_number; // global int that equals 1
        while (!exit) {
            Scanner input = new Scanner(System.in);
            Random rand = new Random();
            int randomvalue = rand.nextInt(MAX_NUMBER) + MIN_NUMBER; // used in another method

            System.out.println("1. forward"); // menu with options
            System.out.println("2.");
            System.out.println("3.");
            System.out.println("4. Exit");
            System.out.println(current_number);

            int choice = input.nextInt();
            switch (choice) {
                case 1:
                    forward(current_number);
                    break;
                case 4:
                    exit = true;
                    break;
            }
        }
    }
}
```<issue_comment>username_1: Whenever you call the method, a copy of current\_number is passed. Then, when the method ends, the copy is discarded. Try using the Integer object instead. Upvotes: 0 <issue_comment>username_2: Ok, I think I understand. In your `switch` statement, when you call the forward method, where the new number is returned, you need a variable to assign the new number to, otherwise it gets lost. Changing your code to this might help.

```
switch (choice) {
    case 1:
        current_number = forward(current_number);
        break;
    case 4:
        exit = true;
        break;
}
```

Upvotes: 1 <issue_comment>username_3: `int` is a primitive data type in Java. This means that your function `forward`'s `current_number` is a different instance than your method's `current_number`. The variable you passed in has the same value, but is not the same reference. To fix it, you need to assign the result of `forward` back to `current_number`.

```
current_number = forward(current_number);
```

Useful reads:

* [Is Java "pass-by-reference" or "pass-by-value"?](https://stackoverflow.com/questions/40480/is-java-pass-by-reference-or-pass-by-value)
* <https://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html>

Upvotes: 1
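The rule all three answers rest on can be shown in isolation with a tiny self-contained demonstration:

```java
public class PassByValueDemo {
    static int bump(int n) {
        n = n + 1;          // modifies only the local copy
        return n;
    }

    public static void main(String[] args) {
        int current = 1;
        bump(current);                  // return value discarded
        System.out.println(current);    // still 1
        current = bump(current);        // assign the result back
        System.out.println(current);    // now 2
    }
}
```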
2018/03/15
627
2,091
<issue_start>username_0: Question
--------

When doing an update-join, where the `i` table has multiple rows per key, how can you control which row is returned?

Example
-------

In this example, the update-join returns the last row from `dt2`:

```
library(data.table)

dt1 <- data.table(id = 1)
dt2 <- data.table(id = 1, letter = letters)

dt1[ dt2
     , on = "id"
     , letter := i.letter
     ]

dt1
#    id letter
# 1:  1      z
```

How can I control it to return the 1st, 2nd, nth row, rather than defaulting to the last?

---

References
----------

A couple of references similar to this by user @Frank

* [data.table tutorial](http://franknarf1.github.io/r-tutorial/_book/tables.html#joins-update) - in particular the 'warning' on update-joins
* [Issue on github](https://github.com/Rdatatable/data.table/issues/2022)
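For completeness, a sketch of two standard ways to pin the row (my own illustration, not from the thread): deduplicate `i` on the join key for the first match, or take the nth row per key before joining:

```r
library(data.table)

dt1 <- data.table(id = 1)
dt2 <- data.table(id = 1, letter = letters)

# First match per key: join against i deduplicated by the join column.
dt1[unique(dt2, by = "id"), on = "id", letter := i.letter]
dt1$letter  # "a"

# Nth match per key (here n = 2): subset i per group before joining.
n <- 2
dt1[dt2[, .SD[n], by = id], on = "id", letter := i.letter]
dt1$letter  # "b"
```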
2018/03/15
2,112
7,092
<issue_start>username_0: I have a system that is generating very large text logs (in excess of 1GB each). The utility into which I am feeding them requires that each file be less than 500MB. I cannot simply use the split command, because this runs the risk of splitting a log entry in half, which would cause errors in the utility to which they are being fed.

I have done some research into split, csplit, and awk. So far I have had the most luck with the following:

`awk '/REG_EX/{if(NR%X >= (X-Y) || NR%2000 <= Y)x="split"++i;}{print > x;}' logFile.txt`

In the above example, X represents the number of lines I want each split file to contain. In practice, this ends up being about 10 million. Y represents a "plus or minus". So if I want "10 million plus or minus 50", Y allows for that.

The actual regular expression I use is not important, because that part works. The goal is that the file be split every X lines, but only on an occurrence of REG\_EX. This is where the if() clause comes in. I attempted to have some "wiggle room" of plus or minus Y lines, because there is no guarantee that REG\_EX will exist at exactly NR%X. My problem is that if I set Y too small, then I end up with files with two or three times the number of lines I am aiming for. If I set Y too large, then I end up with some files containing anywhere between 1 and X lines (it is possible for REG\_EX to occur several times in immediate succession).

Short of writing my own program that traverses the file line by line with a line counter, how can I go about elegantly solving this problem? I have a script that a co-worker created, but it takes easily over an hour to complete. My awk command completes in less than 60 seconds on a 1.5GB file with an X value of 10 million, but is not a 100% solution.

== EDIT ==

Solution found. Thank you to everyone who took the time to read my question, understand it, and provide a suggested solution. Most of them were very helpful, but the one I marked as the solution provided the greatest assistance. My problem was with my modular math being the cutoff point. I needed a way to keep track of lines and reset the counter each time I split a file. Being new to awk, I wasn't sure how to utilize the `BEGIN{ ... }` feature. Allow me to summarize the problem set and then list the command that solved the problem.

PROBLEM:

-- System produces text logs > 1.5GB

-- System into which logs are fed requires logs <= 500MB.

-- Every log entry begins with a standardized line

-- using the split command risks a new file beginning WITHOUT the standard line

REQUIREMENTS:

-- split files at Xth line, BUT

-- IFF Xth line is in the standard log entry format

NOTE:

-- log entries vary in length, with some being entirely empty

SOLUTION:

```
awk 'BEGIN {min_line=10000000; curr_line=1; new_file="split1"; suff=1;} \
     /REG_EX/ \
     {if(curr_line >= min_line){new_file="split"++suff; curr_line=1;}} \
     {++curr_line; print > new_file;}' logFile.txt
```

The command can be typed on one line; I broke it up here for readability. Ten million lines works out to between 450MB and 500MB. I realized that, given how frequently the standard log entry line occurs, I didn't need to set an upper line limit so long as I picked a lower limit with room to spare. Each time the REG\_EX is matched, it checks to see if the current number of lines is greater than my limit, and if it is, starts a new file and resets my counter.

Thanks again to everyone.
I hope that anyone else who runs into this or a similar problem finds this useful.<issue_comment>username_1: You could potentially split the log file by 10 million lines. Then if the 2nd split file does not start with the desired line, go find the last desired line in the 1st split file, delete that line and subsequent lines from there, then prepend those lines to the 2nd file. Repeat for each subsequent split file.

This would produce files with a very similar count of your regex matches.

In order to improve performance and not have to actually write out intermediary split files and edit them, you could use a tool such as [pt-fifo-split](https://www.percona.com/doc/percona-toolkit/LATEST/pt-fifo-split.html) for "virtually" splitting your original log file.

Upvotes: 0 <issue_comment>username_2: If splitting based on the regex is not important, one option would be to create new files line-by-line, keeping track of the number of characters you are adding to an output file. If the number of characters is greater than a certain threshold, you can start outputting to the next file. An example command-line script is:

```
cat logfile.txt | awk 'BEGIN{sum=0; suff=1; new_file="tmp1"} {len=length($0); if ((sum + len) > 500000000) { ++suff; new_file = "tmp"suff; sum = 0} sum += len; print $0 > new_file}'
```

In this script, `sum` keeps track of the number of characters we have parsed from the given log file. If `sum` is within 500 MB, it keeps outputting to `tmp1`. Once `sum` is about to exceed that limit, it will start outputting to `tmp2`, and so on.

This script will not create files that are greater than the size limit. It will also not break a log entry.

Please note that this script doesn't make use of any pattern matching that you used in your script.

Upvotes: 1 [selected_answer]<issue_comment>username_1: If you want to create split files based on an exact n-count of pattern occurrences, you could do this:

```
awk '/^MYREGEX/ {++i; if(i%3==1){++j}} {print > "splitfilename"j}' logfile.log
```

Where:

* `^MYREGEX` is your desired pattern.
* `3` is the count of pattern occurrences you want in each file.
* `splitfilename` is the prefix of the filenames to be created.
* `logfile.log` is your input log file.
* `i` is a counter which is incremented for each occurrence of the pattern.
* `j` is a counter which is incremented for each n-th occurrence of the pattern.

Example:

```
$ cat test.log
MY
123
ksdjfkdjk
MY
234
23
MY
345
MY
MY
456
MY
MY
xyz
xyz
MY
something

$ awk '/^MY/ {++i; if(i%3==1){++j}} {print > "file"j}' test.log

$ ls
file1 file2 file3 test.log

$ head file*
==> file1 <==
MY
123
ksdjfkdjk
MY
234
23
MY
345

==> file2 <==
MY
MY
456
MY

==> file3 <==
MY
xyz
xyz
MY
something
```

Upvotes: 1 <issue_comment>username_3: Replace the `fout` and `slimit` values to suit your needs

```
#!/bin/bash

# big log filename
f="test.txt"

fout="$(mktemp -p . f_XXXXX)"
fsize=0
slimit=2500

while read line; do
  if [ "$fsize" -le "$slimit" ]; then
    # append to log file and get line size at the same time ;-)
    lsize=$(echo "$line" | tee -a $fout | wc -c)
    # add to file size
    fsize=$(( $fsize + $lsize ))
  else
    echo "size of last file $fout: $fsize"
    # create a new log file
    fout="$(mktemp -p . f_XXXXX)"
    # reset size counter
    fsize=0
  fi
done < <(grep 'YOUR_REGEXP' "$f")
```

Output:

```
size of last file ./f_GrWgD: 2537
size of last file ./f_E0n7E: 2547
size of last file ./f_do2AM: 2586
size of last file ./f_lwwhI: 2548
size of last file ./f_4D09V: 2575
size of last file ./f_ZuNBE: 2546
```

Upvotes: 0
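Whichever approach is used, a quick hedged sanity check can confirm that every produced file begins with a standard log entry line (`REG_EX` and the `split` filename prefix below are placeholders for the actual pattern and prefix):

```
# verify each split file starts with the standard entry header
for f in split*; do
  if head -n 1 "$f" | grep -q 'REG_EX'; then
    echo "$f: OK ($(wc -l < "$f") lines)"
  else
    echo "$f: WARNING - does not start with a log entry header"
  fi
done
```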
2018/03/15
765
2,707
<issue_start>username_0: **I am using Laravel 5.6.7 and vue.js**

**Validation Message** When the validation works in vue.js, it says "The User Name field is required.". Can it be "Please enter user name"? Can I use two custom error messages for the same field?

Example

> 1. Required: Please enter username.
> 2. Alpha: Please enter alpha chars only.

I am using the vee-validate Package for Form Validation

**Below is the code in blade**

```
```

**Code in vue.js Component.**

```
{{ errors.first('User Name') }}
Create

export default {
  data() {
    return {
      createForm: {
        UserName: ''
      }
    };
  },
  methods: {
    validateBeforeSubmit() {
      this.$validator.validateAll();
    }
  }
}
```<issue_comment>username_1: That will not work; use AJAX to exchange data between Vue.js and the server

Upvotes: -1 <issue_comment>username_2: To achieve what you want you can use [**Field Specific Custom Messages**](http://vee-validate.logaretm.com/configuration.html#field-sepecific-messages). Not much to it, you just create a `customMessages` dictionary and add it to your validator using `.localize()`.

So, nothing in your template changes, but in your `script` section you would declare the `customMessages` and apply it in the `created()` lifecycle hook, as shown below and in the demo:

```
const customMessages = {                               // added
  custom: {                                            // added
    'User Name': {                                     // added
      required: 'Required: Please enter username',     // added
      alpha: 'Alpha: Please enter alpha chars only.'   // added
    }                                                  // added
  }                                                    // added
};                                                     // added

export default {
  data() {
    return {
      createForm: {
        UserName: ''
      }
    };
  },
  created() {                                          // added
    // change 'en' to your locale                      // added
    this.$validator.localize('en', customMessages);    // added
  },                                                   // added
  methods: {
    validateBeforeSubmit() {
      this.$validator.validateAll();
    }
  }
}
```

**Runnable Demo:**

```js
Vue.use(VeeValidate);

const customMessages = {
  custom: {
    'User Name': {
      required: 'Required: Please enter username',
      alpha: 'Alpha: Please enter alpha chars only.'
    }
  }
};

Vue.component('create-user', {
  template: '#create-user-template',
  data() {
    return {
      createForm: {
        UserName: ''
      }
    };
  },
  created() {
    // change 'en' to your locale
    this.$validator.localize('en', customMessages);
  },
  methods: {
    validateBeforeSubmit() {
      this.$validator.validateAll();
    }
  }
});

new Vue({
  el: '#app'
})
```

```html
{{ errors.first('User Name') }}
Create

To see the 'required' message: type "x", then delete
To see the 'alpha' message: type "_"
```

Upvotes: 3 [selected_answer]
2018/03/15
403
1,379
<issue_start>username_0: I've a class like the following one: ``` class BigButton : public QWidget { Q_OBJECT public: BigButton(QWidget* parent = nullptr); virtual ~BigButton() = default; void setSvgImagePath(const QString& imagePath); void setLabel(const QString& label) override; private: QLabel* m_label; QSvgWidget* m_svgImage; QPushButton* m_button; }; ``` I want to create a stylesheet for my application that allows to set some properties (like the background color) of the private `QPushButton` member `m_button`, but not other `QPushButton` around my GUI. I've seen how to set [stylesheet for subclasses](http://doc.qt.io/qt-5/stylesheet-syntax.html), but I cannot find a way to set the stylesheet for the specific private member of a class. Is there a way to achieve it?<issue_comment>username_1: ``` m_button->setStyleSheet(m_button->styleSheet().append("background-color: rgb(9, 91, 255);")); ``` This will set the background color only for m\_button Upvotes: 1 <issue_comment>username_2: As suggested by eyllanesc, [set an object name](http://doc.qt.io/qt-5/qobject.html#objectName-prop) to your button, and use the [ID selector](http://doc.qt.io/qt-5/stylesheet-syntax.html) in your stylesheet: ``` m_button->setObjectName("myButton"); ``` --- ``` widget->setStyleSheet("QPushButton#myButton{...}"); ``` Upvotes: 3 [selected_answer]
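Putting the accepted suggestion together, a sketch of how the constructor might assign the object name so that an application-wide stylesheet targets only this widget's button (the object name and color below are illustrative, not from the original code):

```
BigButton::BigButton(QWidget* parent)
    : QWidget(parent),
      m_label(new QLabel(this)),
      m_svgImage(new QSvgWidget(this)),
      m_button(new QPushButton(this))
{
    // a unique object name lets the ID selector match only this button
    m_button->setObjectName("bigButtonPush");
}

// once, at application setup:
// qApp->setStyleSheet("QPushButton#bigButtonPush { background-color: rgb(9, 91, 255); }");
```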
2018/03/15
2,502
7,305
<issue_start>username_0: I am new to C language. I am trying to implement the formula `(r to power of (-3/2)) * exp(t*(r-1/r))`. I am using the `pow` function, but when I keep this equation in the for loop it keeps executing the loop, saying `"pow domain error"`. I want the increment in the for loop to be `0.2`. Please help me in implementing this formula. The code is:

```
#include <stdio.h>
#include <math.h>
#include <conio.h>

void main()
{
    double R,r;
    float f;
    clrscr();
    for(r=0;r<=5;r+0.2)
    {
        R=pow(r,-1.5)*exp(2.5*(r-1)/r);
        f=r-R;
        printf("value of f is:%f",f);
    }
    getch();
}
```<issue_comment>username_1: From <http://en.cppreference.com/w/c/numeric/math/pow>:

> If base is zero and exp is negative, a domain error or a pole error may occur.

You run into that situation when you evaluate `pow(r, -1.5)` with the value of `r` set to zero. Change your loop so that `r` starts with 0.2 instead of zero.

Also, you need to use `r += 0.2` instead of `r+0.2` in the `for` statement.

```
for ( r = 0.2; r <= 5; r += 0.2)
{
    R = pow(r, -1.5)*exp(2.5*(r-1)/r);
    f = r-R;
    printf("value of f is:%f", f);
}
```

Upvotes: 1 <issue_comment>username_2: You may want to be aware that: The C Standard, `7.12.1 [ISO/IEC 9899:2011]`, defines three types of errors that relate specifically to math functions in `<math.h>`. Paragraph 2 states

> A **domain error** occurs if an input argument is **outside the domain** over which the mathematical function is defined.

Paragraph 3 states

> A pole error (also known as a singularity or infinitary) occurs if the mathematical function has an exact infinite result as the finite input argument(s) are approached in the limit.

Paragraph 4 states

> A range error occurs if the mathematical result of the function cannot be represented in an object of the specified type, due to extreme magnitude.

The available **domain** for the function **`pow(x,y)`** is:

```
(x > 0) || (x == 0 && y > 0) || (x < 0 && y is an integer)
```

Which means that you cannot use `r = 0` as an argument of `pow`.

1) `pow(r, -1.5) = r^(-3/2) = 1.0/r^(3/2) = 1.0/sqrt(r*r*r);`

2) In the C language the `main` function should return an `int` value and take `void`, not an empty bracket `()`

3) The incrementing operation can be done via `r = r + 0.2` or `r += 0.2`

4) To avoid division by zero in `(r-1)/r`, a value close to `0` has to be used instead.

5) In order not to lose `double` precision, the variable `f` should also be declared `double`.

6) The value of `f` for `r=0` is extremely difficult to calculate. The algorithm is numerically unstable for `r = 0`. We have a division of two infinities. The proper way to solve it is to find the function limit at the point `0+`. Numerically, it is enough to set `r` close to `0`.

[![enter image description here](https://i.stack.imgur.com/OQiqp.png)](https://i.stack.imgur.com/OQiqp.png)

7) You may consider a much smaller step for `r` in order to capture a better plot, due to the nature of your `f` function.
```
#include <stdio.h>
#include <math.h>
#include <conio.h>

#define ZERO_PLUS 1e-14

void calculate_and_print(double r)
{
    double f, f1;
    f  = r - pow(r, -1.5) * exp(2.5*(r-1)/r);
    f1 = r - (1.0 /sqrt(r*r*r)) * exp(2.5*(r-1)/r);
    printf("Value of f for r= %f is f= %f f1= %f \n", r, f, f1);
}

int main(void)
{
    clrscr();
    calculate_and_print(ZERO_PLUS);
    for (double r = 0.2; r < 5.0; r += 0.2 )
        calculate_and_print(r);
    calculate_and_print(5.0);
    return 0;
}
```

Test:

```
Value of f for r= 0.000000 is f= 0.000000 f1= 0.000000
Value of f for r= 0.200000 is f= 0.199492 f1= 0.199492
Value of f for r= 0.400000 is f= 0.307038 f1= 0.307038
Value of f for r= 0.600000 is f= 0.193604 f1= 0.193604
Value of f for r= 0.800000 is f= 0.051949 f1= 0.051949
Value of f for r= 1.000000 is f= 0.000000 f1= 0.000000
Value of f for r= 1.200000 is f= 0.046058 f1= 0.046058
Value of f for r= 1.400000 is f= 0.166843 f1= 0.166843
Value of f for r= 1.600000 is f= 0.338256 f1= 0.338256
Value of f for r= 1.800000 is f= 0.542116 f1= 0.542116
Value of f for r= 2.000000 is f= 0.765977 f1= 0.765977
Value of f for r= 2.200000 is f= 1.001644 f1= 1.001644
Value of f for r= 2.400000 is f= 1.243810 f1= 1.243810
Value of f for r= 2.600000 is f= 1.489073 f1= 1.489073
Value of f for r= 2.800000 is f= 1.735278 f1= 1.735278
Value of f for r= 3.000000 is f= 1.981075 f1= 1.981075
Value of f for r= 3.200000 is f= 2.225642 f1= 2.225642
Value of f for r= 3.400000 is f= 2.468498 f1= 2.468498
Value of f for r= 3.600000 is f= 2.709387 f1= 2.709387
Value of f for r= 3.800000 is f= 2.948194 f1= 2.948194
Value of f for r= 4.000000 is f= 3.184898 f1= 3.184898
Value of f for r= 4.200000 is f= 3.419535 f1= 3.419535
Value of f for r= 4.400000 is f= 3.652177 f1= 3.652177
Value of f for r= 4.600000 is f= 3.882916 f1= 3.882916
Value of f for r= 4.800000 is f= 4.111856 f1= 4.111856
Value of f for r= 5.000000 is f= 4.339103 f1= 4.339103
```

[![enter image description here](https://i.stack.imgur.com/vS0xV.png)](https://i.stack.imgur.com/vS0xV.png)

Let me know if you have more questions.

Upvotes: 3 [selected_answer]
2018/03/15
431
1,411
<issue_start>username_0: There are a lot of questions on how to delete all rows older than 30 days, but I can't find anything quite the same as mine, so I can't fix it. I need to delete some records of messages that are older than 30 days. The column with the date is named sentOn and the rows in that column look like this: 2018-01-12 12:25:00. How should I format my query to delete all records from the table containing those that are older than 30 days?

```
DELETE FROM messages WHERE sentOn < '2018-02-21 00:00:00';
```

would this work?

EDIT: the above query works, but very, very slowly. Any way to make it faster? I tried now() but it gives an error that the function is wrong<issue_comment>username_1:

```
DELETE FROM messages WHERE sentOn > '2018-02-21 00:00:00';
```

You want to delete messages that are greater than '2018-02-21 00:00:00'. You can check that the logic is correct first by SELECT * FROM messages WHERE sentOn > '2018-02-21 00:00:00'.

Upvotes: -1 <issue_comment>username_2: The following code will delete the records of messages that are older than 30 days

```
DELETE FROM messages WHERE sentOn < NOW() - INTERVAL 30 DAY;
```

The `NOW()` method in MySQL is used to pick the current date with time. `INTERVAL 30 DAY` is used for subtracting 30 days from the current date. After the above query, you can check the current table using the `SELECT` statement.

Thank you!

Upvotes: 5 [selected_answer]
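On the slowness noted in the edit: if `sentOn` is not indexed, the `DELETE` has to scan the whole table. A hedged sketch of two common remedies, assuming no index exists yet (the index name is illustrative):

```
-- let the WHERE clause use an index instead of a full table scan
CREATE INDEX idx_messages_senton ON messages (sentOn);

-- for very large tables, delete in chunks to keep each transaction small
DELETE FROM messages
WHERE sentOn < NOW() - INTERVAL 30 DAY
LIMIT 10000;  -- repeat until 0 rows are affected
```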
2018/03/15
509
1,871
<issue_start>username_0: Is it possible to set an emptyText just like I did with a title ? ``` me.myForm.down('#field').setTitle('New Title'); { xtype: 'filefield', width: 490, fieldLabel: 'Buscar Foto', labelWidth: 90, emptyText: 'Text I want to set', buttonText: 'Buscar Foto', }, ``` emptyText, I want to change it depending on some conditions.<issue_comment>username_1: Nope as of ExtJS 6.5.3 Classic. Although the [emptyText](http://docs.sencha.com/extjs/6.2.0/classic/Ext.form.field.File.html#cfg-emptyText) is bindable property. And there is in fact [.setEmptyText(](http://docs.sencha.com/extjs/6.5.3/classic/Ext.form.field.File.html#placeholder-setEmptyText)) method. **It does not work.** Note the message: > > Overridden to undefined as emptyText is not supported with > inputType:'file' and should be avoided. The default text to place into > an empty field. > > > And from my little test, it doesn't work <https://fiddle.sencha.com/#view/editor&fiddle/2egc> *Also tested with VM bind, not working too.* Why is the property marked as `bindable`? Idk probably bug in the docs. *Modern version of ext doesn't support emptyText at all* --- As a workaround you could create new fileField element with predefined emptyText and used it based on the if you need. Something like this: <https://fiddle.sencha.com/#view/editor&fiddle/2egd> Upvotes: 1 <issue_comment>username_2: Well, that works too: ``` var Myfield = Ext.first('myField'); Myfield.el.dom.getElementsByTagName('input')[0].placeholder = 'MyEmptyText'; ``` Upvotes: 0
2018/03/15
172
494
<issue_start>username_0: I have a date that's 12/1/2018 and I want to switch it to 2018/12/01, and I'm using Carbon, but I'm unsure how to do this. I've tried using the parse method, and Carbon seems to confuse me and I get random errors.<issue_comment>username_1: Solved it, I used:

```
Carbon::parse($input)->format('Y-m-d'); // converts MM/DD/YYYY to YYYY-MM-DD ('Y/m/d' gives YYYY/MM/DD)
```

Upvotes: 3 <issue_comment>username_2: You should try this:

```
Carbon::parse('12/1/2018')->format('Y/m/d');
```

Upvotes: 1
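If `parse()` misreads ambiguous input such as `12/1/2018`, a hedged alternative is to state the input format explicitly (assuming the source is always month/day/year; the `n`/`j` format characters accept one- or two-digit month/day):

```
use Carbon\Carbon;

$date = Carbon::createFromFormat('n/j/Y', '12/1/2018');
echo $date->format('Y/m/d'); // 2018/12/01
```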
2018/03/15
455
1,595
<issue_start>username_0: I'm looking for a way to get the type information of the parameters of a function passed into the constructor.

```
export class Test<T> {
    constructor(
        test: ((...params: any[]) => T) | ((...params: any[]) => Promise<T>)
    ) {
        // Convert function or promise to promise
        this.test = (...params: any[]) => Promise.resolve(test(...params))
    }

    // How I store the promisify'd function/promise
    private test: (...params) => Promise<T>

    // I want to see typing information on the parameters for this method
    public async execute(...params: any[]): Promise<T> {
        try {
            return await this.test(...params)
        } catch (error) {
            return error
        }
    }
}
```

When a function or promise is passed in, I store it as a promise. Currently the typing information is lost. In the execute method, I'd like to be able to see the typing information on the parameters to be passed in; they should match the original functions' parameters.

For example:

```
let test = new Test( (a: number, b: string) : string => `${a}${b}`)

let output : string = test.execute(1, 'b') // should compile
let output : number = test.execute(1, 'b') // should not compile
let output : string = test.execute('b', 1) // should not compile, and this is the one I'm looking for.
```

Any ideas? I was wondering if I can use `keyof` in a clever way.
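A sketch of one possible approach, assuming TypeScript 3.0+ (rest elements in tuple types); the shape below is illustrative, not the original code:

```
class Test<P extends any[], R> {
    private test: (...params: P) => Promise<R>;

    constructor(test: (...params: P) => R | Promise<R>) {
        // normalize plain and promise-returning functions to a promise
        this.test = (...params: P) => Promise.resolve(test(...params));
    }

    public execute(...params: P): Promise<R> {
        return this.test(...params);
    }
}

const t = new Test((a: number, b: string): string => `${a}${b}`);
t.execute(1, 'b');    // ok, yields Promise<string>
// t.execute('b', 1); // does not compile: parameter types are preserved
```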
2018/03/15
386
1,482
<issue_start>username_0: Mobile browsers (chrome, ff, safari) are displaying a site's styles for tablet-size breakpoints on my mobile phone.

I'm using [Bootstrap 4](https://getbootstrap.com/) along with some custom styles. From the desktop, everything in Chrome, FF, and Safari works fine when resizing the screen. However, the Chrome emulator and live (from my Pixel) are rendering styles for a 768px breakpoint. This can be seen here: [firstintreeservices.com](http://firstintreeservices.com/) from a mobile phone browser.

I've tried adjusting sass partial import order, removed the Bootstrap scss from my Webpack bundle completely and connected to the CDN. I still can't find a solution to this and have never encountered it before.

**Note:** I'm using a combination of Bootstrap's responsive utilities and my own responsive styles.

**UPDATE** This is a GoDaddy URL redirect/masking issue

The project directory/structure can be found in [this repository](https://github.com/first-in/First-In-Tree-Services).

```
First In Tree Services
```<issue_comment>username_1: As far as I can see, in the website you linked to ( <http://firstintreeservices.com/> ) there is no meta viewport like in your code here.

Upvotes: 1 <issue_comment>username_2: This is a GoDaddy URL redirect + masking issue. Found the answer in GoDaddy's Community [here](https://www.godaddy.com/community/Managing-Domains/Forward-w-Masking-Disables-Mobile-Website/td-p/7793).

Upvotes: 1 [selected_answer]
2018/03/15
1,192
3,123
<issue_start>username_0: I've tried several methods to get this working and nothing works. So, I'll give my entire process here. I have a dataframe that I'm trying to count 1 column of. The data is stored in a csv file. Here's my current method of importing it into a dataframe: ``` import pandas as pd df = pd.read_csv('csvfile.csv') ``` My dataframe looks like this: ``` index id name dob visit 0 111 Joe 1/1/2000 1/1/2018 1 111 Joe 1/1/2000 1/5/2018 2 122 Bob 1/1/1999 2/8/2018 3 133 Jill 1/2/1988 7/9/2017 4 111 Joe 1/1/2000 12/31/2018 ``` Because each client will have multiple lines in the dataframe based on how many visits they had, I want to create a column that includes the count of how many times their id number show up under the id column. I've tried the following: ``` df['counts'] = df.groupby('id').id.count() ``` but this gives me NaN values for every row. If I switch to size(), again NaN. So I decided to make a series out of the results: ``` visits = df.groupby('id').id.count() ``` That gives me: ``` index id 111 3 122 1 133 1 ``` It isn't exactly what I need, but close. I then try to update my dataframe with the series: ``` visitcounts = visits.to_frame() ``` I need to get the index to be a column, and that column to have a different name. visitcounts.rename(columns = {'id': 'visitnum'}, inplace = True) visitscounts['id'] = visitcounts.index Then, to add the field back to the dataframe: ``` pd.merge(df, visitcounts, on=['id'], how='left') ``` And nothing changes. What am I doing wrong?<issue_comment>username_1: jpp's solution is probably the cleaner way to go, but in order to clarify why your code doesn't work: Your issue is that your `id` that you want to merge on in `visitcounts` is actually your index, and not the column named `id`: ``` >>> visitcounts id id 111 3 122 1 133 1 ``` So, if you wanted to use `merge`, you could merge on the index for `visitcounts` and on the column `id`of your `df`, it should work: ``` # First rename column in visitcounts to `count`: visitcounts.columns=['count'] # Then merge: merged_df = pd.merge(df, visitcounts, left_on='id', right_index=True) >>> merged_df index id name dob visit count 0 0 111 Joe 1/1/2000 1/1/2018 3 1 1 111 Joe 1/1/2000 1/5/2018 3 4 4 111 Joe 1/1/2000 12/31/2018 3 2 2 122 Bob 1/1/1999 2/8/2018 1 3 3 133 Jill 1/2/1988 7/9/2017 1 ``` Upvotes: 2 <issue_comment>username_2: You can use [`pd.Series.value_counts`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) for this: ``` df['count'] = df['id'].map(df['id'].value_counts()) ``` Result: ``` index id name dob visit count 0 0 111 Joe 1/1/2000 1/1/2018 3 1 1 111 Joe 1/1/2000 1/5/2018 3 2 2 122 Bob 1/1/1999 2/8/2018 1 3 3 133 Jill 1/2/1988 7/9/2017 1 4 4 111 Joe 1/1/2000 12/31/2018 3 ``` Upvotes: 3 [selected_answer]
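A shorter route to the same column, shown on a small illustrative frame, is groupby-transform, which broadcasts the per-group count back onto every row:

```
import pandas as pd

df = pd.DataFrame({'id': [111, 111, 122, 133, 111],
                   'name': ['Joe', 'Joe', 'Bob', 'Jill', 'Joe']})

df['count'] = df.groupby('id')['id'].transform('count')
print(df)
#     id  name  count
# 0  111   Joe      3
# 1  111   Joe      3
# 2  122   Bob      1
# 3  133  Jill      1
# 4  111   Joe      3
```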
2018/03/15
360
1,640
<issue_start>username_0: Is there any good way to use multiple wildcard certificates with single Apache 2 `VirtualHost`? I have one IP based VirtualHost which is hosting websites from multiple domains. Each domain has wildcard certificate and I have been trying a way to make as compact as possible configuration file for this kind of situation. At the moment, I have one VirtualHost for each main domain, but the configuration file is getting quite large when I am adding more domains.<issue_comment>username_1: You need to follow the general SSL Certificate installation guide for Apache2 Virtual Host, there is no other simple option available. Make sure to enable the SNI. Upvotes: 0 <issue_comment>username_2: If you have one IP, you can define multiple VirtualHosts on it. Every VirtualHost has its own certificate and ServerName and that works well together with SNI. The „problem“ is, that there is no configuration option for this kind of problem. This is a „limitation“ of mod\_ssl configuration options. You could use additional domains in the certificate, in that case you have one vhost per DocRoot. If the size of the configuration is your problem, then it would be a good advice to use a configuration directory (with include) and put one file per domain or virtualhost in it. That’s the way, distributions are managing vhosts. In this case you can easily switch or copy to the configuration you need. In my case (900 domains) this works very well. I‘ve also added variables on the beginning of a domain vhost file that is used inside the configuration for DocRoot, Logfile, ServerName and ServerAliases. Upvotes: -1
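A sketch of the include-directory layout suggested above, for Apache 2.4 (the paths, domain, and certificate locations are placeholders):

```
# httpd.conf — pull in one file per domain
IncludeOptional /etc/apache2/vhosts.d/*.conf

# vhosts.d/example.com.conf — one wildcard certificate per vhost; SNI selects the right one
<VirtualHost *:443>
    ServerName example.com
    ServerAlias *.example.com
    DocumentRoot /var/www/example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/example.com/wildcard.crt
    SSLCertificateKeyFile /etc/ssl/example.com/wildcard.key
</VirtualHost>
```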
2018/03/15
424
1,753
<issue_start>username_0: I'm not very familiar with modules in Coq, but it was brought up in a recent question I asked. I have the following code.

```
Module Type Sig.
  Parameter n : nat.
  Definition n_plus_1 := n + 1.
End Sig.

Module ModuleInstance <: Sig.
  Definition n := 3.
End ModuleInstance.
```

Coq complains about the definition of `n_plus_1` when I try to run `End ModuleInstance`. I'm not sure if this is a correct way to use modules, but I want it to just use the definition in the signature; it's a complete definition that doesn't need any additional information. Is it possible to do it?
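A hedged sketch of one common workaround: keep derived definitions out of the signature and put them in a functor over it, so every implementation gets them for free (the names are illustrative):

```
Module Type Sig.
  Parameter n : nat.
End Sig.

(* derived definitions, written once for any module matching Sig *)
Module SigFacts (M : Sig).
  Definition n_plus_1 := M.n + 1.
End SigFacts.

Module ModuleInstance <: Sig.
  Definition n := 3.
End ModuleInstance.

Module InstanceFacts := SigFacts ModuleInstance.
(* Eval compute in InstanceFacts.n_plus_1.   gives 4 *)
```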
2018/03/15
738
2,418
<issue_start>username_0: I am getting error with ``` static propTypes = {} ``` The error: > > Module parse failed: Unexpected token (6:21) You may need an > > appropriate loader to handle this file type. > > > Here is my .babelrc ``` { "presets": ["es2015", "react"], "plugins": ["transform-es2015-arrow-functions", "transform-class-properties"] } ``` I have tried changing `es2015` to `es2016`, and adding `stage-0`/`stage-2`. I have also tried doing `MyClass.propTypes` but neither worked. Any help would be appreciated<issue_comment>username_1: **Step : 1 -** I am assuming you are installing [prop-types](https://www.npmjs.com/package/prop-types) `npm install --save prop-types` as a seperate library and then importing it in the project ``` import PropTypes from 'prop-types'; ``` And then making use of it inside your project . **Step: 2** - One of the ways is to make use of `babel-polyfill` , You can install it as a project dependency `npm install --save babel-polyfill` and inside your `webpack.config.js` you can pass it to the entry point (You can amend this accordingly): ``` entry: ["babel-polyfill", "./App.jsx"], ``` You can read more about babel Polyfill [Here](https://babeljs.io/docs/usage/polyfill/) And if you are using `eslint` you can add `"parser": "babel-eslint"` to the `.eslintrc` file ``` { "parser": "babel-eslint" } ``` You can read more about eslint config [Here](https://eslint.org/docs/user-guide/configuring) Upvotes: 1 <issue_comment>username_2: Turns out that I had a typo in my file name... wrote file.js instead of file.jsx. Upvotes: 2 <issue_comment>username_3: Some time has passed and some things have changed. Here's what's working for me: ``` // .eslintrc { "plugins": ["babel"], "parser": "babel-eslint" } // .babelrc { "presets": [ "@babel/preset-env", "@babel/preset-react" ], "plugins": [ "@babel/plugin-proposal-class-properties" ] } ``` And we need the following dependencies: With Yarn: ``` $ yarn add @babel/core @babel/preset-env @babel/preset-react @babel/plugin-proposal-class-properties $ yarn add eslint babel-eslint eslint-plugin-babel eslint-plugin-react ``` With NPM: ``` $ npm i --save @babel/core @babel/preset-env @babel/preset-react @babel/plugin-proposal-class-properties $ npm i --save eslint babel-eslint eslint-plugin-babel eslint-plugin-react ``` Upvotes: 2
2018/03/15
651
2,161
<issue_start>username_0: I have a String in Java:

```
Hello-World. My phone number is 333-333-333
```

I want to remove dash characters occurring between words, but not the digits, like in the phone number. My expected output is:

```
Hello World. My phone number is 333-333-333
```
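A hedged sketch of one way to do this with zero-width assertions, so only a dash with a letter on both sides is replaced (the regex is deliberately minimal):

```
String input = "Hello-World. My phone number is 333-333-333";
// lookbehind/lookahead match letters without consuming them,
// so digit-dash-digit sequences are left untouched
String output = input.replaceAll("(?<=\\p{Alpha})-(?=\\p{Alpha})", " ");
System.out.println(output); // Hello World. My phone number is 333-333-333
```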
2018/03/15
661
2,607
<issue_start>username_0: I'm writing a base class which contains the httpClient. It is used to make REST api calls. The httpClient variable is set correctly if defined in the constructor, but not in the private variable. Here's my sample code:

```
@Injectable()
export class MyBaseClass implements {

    private httpClient = HttpClient

    constructor(
        private httpClient2: HttpClient
    ) {
        console.log("httpClient2", httpClient2)
        console.log("httpClient2.get", httpClient2.get)
    }

    callApi() {
        console.log("httpClient", this.httpClient)
        console.log("httpClient.get", this.httpClient.get)
    }
}
```

**constructor output:**

[![enter image description here](https://i.stack.imgur.com/GDaAa.png)](https://i.stack.imgur.com/GDaAa.png)

**callApi output:**

[![enter image description here](https://i.stack.imgur.com/j2Fya.png)](https://i.stack.imgur.com/j2Fya.png)

As you can see, the two variables aren't the same and the get property of httpClient is undefined. I would use the variable in the constructor throughout my class, but what I want is to extend this class, and having the variable in the constructor isn't convenient.

Any help/suggestions would be greatly appreciated.

Thanks, Will<issue_comment>username_1: The property `httpClient` on your class is simply being set to the `HttpClient` class constructor. If you need an instance of that class, you need to let Angular provide it to you via dependency injection like your `httpClient2` property (i.e. as a parameter to your constructor).

As a note, adding the `private` accessor to the parameter on the constructor is syntactic sugar that is doing the following:

```
class MyBaseClass {
    private httpClient2: HttpClient;

    constructor(httpClient2: HttpClient) {
        this.httpClient2 = httpClient2;
    }
}
```

Upvotes: 2 <issue_comment>username_2: There is another option if you really don't want to inject the services in your base class' constructor.

1. Declare a global variable containing a ref to the injector and assign it in your module (or somewhere else, before your base class' constructor is called)

```
import {Injector} from '@angular/core';

export let InjectorInstance: Injector;

export class AppModule {
    constructor(private injector: Injector) {
        InjectorInstance = this.injector;
    }
}
```

2. Then you can use it like this in your base class

```
import {InjectorInstance} from '../app.module';

export class MyBaseClass {
    private httpClient: HttpClient;

    constructor() {
        this.httpClient = InjectorInstance.get(HttpClient);
    }
}
```

Upvotes: 5 [selected_answer]
2018/03/15
1,492
4,283
<issue_start>username_0: I recently reinstalled node.js (latest version, 9.8.0) on my computer (running windows 10) as it started to glitch out. The problem is, now npm doesn't work. For example, upon entering `npm` in cmd, I get the error:

```
Error: EPERM: operation not permitted, mkdir 'C:\Program Files (x86)\Nodist'
TypeError: Cannot read property 'get' of undefined
    at errorHandler (C:\Program Files\nodejs\node_modules\npm\lib\utils\error-handler.js:205:18)
    at C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js:83:20
    at cb (C:\Program Files\nodejs\node_modules\npm\lib\npm.js:224:22)
    at C:\Program Files\nodejs\node_modules\npm\lib\npm.js:262:24
    at C:\Program Files\nodejs\node_modules\npm\lib\config\core.js:81:7
    at Array.forEach ()
    at C:\Program Files\nodejs\node_modules\npm\lib\config\core.js:80:13
    at f (C:\Program Files\nodejs\node_modules\npm\node_modules\once\once.js:25:25)
    at afterExtras (C:\Program Files\nodejs\node_modules\npm\lib\config\core.js:178:20)
    at C:\Program Files\nodejs\node_modules\npm\node_modules\mkdirp\index.js:35:29
    at C:\Program Files\nodejs\node_modules\npm\node_modules\mkdirp\index.js:47:53
    at C:\Program Files\nodejs\node_modules\npm\node_modules\graceful-fs\polyfills.js:284:29
    at FSReqWrap.oncomplete (fs.js:170:21)
C:\Program Files\nodejs\node_modules\npm\lib\utils\error-handler.js:205
    if (npm.config.get('json')) {
               ^
TypeError: Cannot read property 'get' of undefined
    at process.errorHandler (C:\Program Files\nodejs\node_modules\npm\lib\utils\error-handler.js:205:18)
    at process.emit (events.js:180:13)
    at process._fatalException (bootstrap_node.js:431:27)
Error: EPERM: operation not permitted, mkdir 'C:\Program Files (x86)\Nodist'
TypeError: Cannot read property 'get' of undefined
    at errorHandler (C:\Program Files\nodejs\node_modules\npm\lib\utils\error-handler.js:205:18)
    at C:\Program Files\nodejs\node_modules\npm\bin\npm-cli.js:83:20
    at cb (C:\Program Files\nodejs\node_modules\npm\lib\npm.js:224:22)
    at C:\Program Files\nodejs\node_modules\npm\lib\npm.js:262:24
    at C:\Program Files\nodejs\node_modules\npm\lib\config\core.js:81:7
    at Array.forEach ()
    at C:\Program Files\nodejs\node_modules\npm\lib\config\core.js:80:13
    at f (C:\Program Files\nodejs\node_modules\npm\node_modules\once\once.js:25:25)
    at afterExtras (C:\Program Files\nodejs\node_modules\npm\lib\config\core.js:178:20)
    at Conf. (C:\Program Files\nodejs\node_modules\npm\lib\config\core.js:234:20)
    at C:\Program Files\nodejs\node_modules\npm\lib\config\set-user.js:23:20
    at C:\Program Files\nodejs\node_modules\npm\node_modules\mkdirp\index.js:35:29
    at C:\Program Files\nodejs\node_modules\npm\node_modules\mkdirp\index.js:47:53
    at C:\Program Files\nodejs\node_modules\npm\node_modules\graceful-fs\polyfills.js:284:29
    at FSReqWrap.oncomplete (fs.js:170:21)
C:\Program Files\nodejs\node_modules\npm\lib\utils\error-handler.js:205
    if (npm.config.get('json')) {
               ^
TypeError: Cannot read property 'get' of undefined
    at process.errorHandler (C:\Program Files\nodejs\node_modules\npm\lib\utils\error-handler.js:205:18)
    at process.emit (events.js:180:13)
    at process._fatalException (bootstrap_node.js:431:27)
```

The node command works fine, though. Why would this happen?

Edit: I have tried reverting to the recommended Node.js version (8.10.0), but I am getting the same issues.<issue_comment>username_1: This may be related to the NPM cache needing to be cleaned; you can do it using:

```
npm cache clean
```

Let me know if it helped....
Upvotes: 1 <issue_comment>username_2: I had a similar problem a while ago. Try navigating to the file named `.npmrc`, in `C:\Users\%UserName%`, and deleting it. Upvotes: 5 [selected_answer]<issue_comment>username_3: To fix this in Windows, create a directory junction to your users folder that didn't have a space in it. You can run a command like this in an administrative powershell: ``` cmd /c mklink /J "C:\Users\myname" "C:\Users\My Name" ``` You can then use this junction as if it were your real user directory: ``` npm config set cache C:\Users\myname\AppData\Roaming\npm-cache npm config set prefix C:\Users\myname\AppData\Roaming\npm ``` (the above should update your user .npmrc file) Upvotes: 2
2018/03/15
612
2,412
<issue_start>username_0: I read in the official Airflow documentation [the following](https://airflow.apache.org/configuration.html#scaling-out-with-celery): [![enter image description here](https://i.stack.imgur.com/BeHip.png)](https://i.stack.imgur.com/BeHip.png) What does this mean exactly? What do the authors mean by scaling out? That is, **when** is it **not** enough to use Airflow or when would anyone use Airflow in combination with something like Celery? (same for `dask`)<issue_comment>username_1: In Airflow terminology an "Executor" is the component responsible for running your task. The `LocalExecutor` does this by spawning threads on the computer Airflow runs on and lets the thread execute the task. Naturally your capacity is then limited by the available resources on the local machine. The `CeleryExecutor` distributes the load to several machines. The executor itself publishes a request to execute a task to a queue, and one of several worker nodes picks up the request and executes it. You can now scale the cluster of worker nodes to increase overall capacity. Finally, and not ready yet, there's a `KubernetesExecutor` in the works ([link](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=71013666)). This will run tasks on a Kubernetes cluster. This will not only give your tasks complete isolation since they're run in containers, you can also leverage the existing capabilities in Kubernetes to for instance auto scale your cluster so that you always have an optimal amount of resources available. Upvotes: 6 [selected_answer]<issue_comment>username_2: You may enjoy reading this comparison of dask to celery/airflow task managers <http://matthewrocklin.com/blog/work/2016/09/13/dask-and-celery> Since you are not asking a specific question, general reading like that should be informative, and maybe you can clarify what you are after. -EDIT- Some people coming to this more recently may wish to look into [prefect](https://docs.prefect.io/), which is a sort of rewritten airflow with dask in mind (comes in open-source core with paid enterprise features). Upvotes: 4 <issue_comment>username_3: Following on @username_2's answer, here is a tutorial I wrote with 3 Airflow DAG examples that use Dask. Hopefully this gives you some sense of when and how to use Dask together with Airflow. <https://coiled.io/blog/3-airflow-dag-examples-with-dask/> Upvotes: 0
2018/03/15
692
2,596
<issue_start>username_0: I want to know if I can look for multiple locations using the google maps api. For example, I want to find restaurants and museums using one url "<https://maps.googleapis.com/maps/api/place/textsearch/json?query=restaurants+in+Manhattan&key=>", so I can use the same JSON file instead of duplicating files<issue_comment>username_1: No. If you search for *"restaurants or museums in sundancesquare"* your query is interpreted as *"restaurants museums sundancequare"* as in my experience the maps API doesn't parse your query for meaning, it just grabs the search terms out of it, and ignores directional words like "in". It's very difficult to predict what results you would get - you might only get matches that match restaurant **and** museum **and** sundancesquare, you might get a mix of the two, and you might get something else you're not expecting: there's no guarantee with this call whether the results will actually *be* restaurants or museums, you might get restaurant supplies or restaurant insurance or anything that matches the term. You should be specifying the type and using a nearby search as in [@username_2's answer](https://stackoverflow.com/a/49310517/43846). However you still can't search for two types, as the [documentation](https://developers.google.com/places/web-service/search) says: > > Restricts the results to places matching the specified type. **Only one type may be specified (if more than one type is provided, all types following the first entry are ignored).** > > > I don't really understand your question, though. You say you want to do this *"so I can use the same JSON file instead of duplicating files"* - why are you using files for this? And even if you must, can't you parameterise them so you can change the query? Upvotes: 1 <issue_comment>username_2: You can do this: <https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=37.753,-122.443&radius=1000&type=library&key=>{YourKey} This will return all libraries within 1000 meters radius in SF. Set your `location` parameter: be it a Lat-Lng or address. Then set `radius` parameter: 1 = 1 meter. Assign `type` parameter: to restaurant. [More on Type](https://developers.google.com/places/web-service/supported_types) Then you will get your JSON data for your restaurants. Then make other request for other type. Can't have more than one type. [Place Search](https://developers.google.com/places/web-service/search#PlaceSearchRequests). Making more than one HTTP request still counts towards your daily quota, fyi. Upvotes: 3 [selected_answer]
2018/03/15
460
1,465
<issue_start>username_0: I have this simple array with 2 elements which is being converted into json format:

```
echo '['.json_encode(array("name" => "FRENCH POLYNESIA", "name" => "POLAND")).']';
```

The result is: `[{"name":"POLAND"}]`

In my case I need this result: `[{"name": "FRENCH POLYNESIA"},{"name": "POLAND"}]`

How can I do this?<issue_comment>username_1: This JSON:

```
[{"name": "FRENCH POLYNESIA"},{"name": "POLAND"}]
```

is an array containing two objects. So you need your input array to contain two *arrays* rather than two key-value pairs.

```
array(array("name" => "FRENCH POLYNESIA"), array("name" => "POLAND"))
```

`json_encode` will convert the inner arrays to objects. It's already been mentioned in the comments, but your original array format doesn't make sense because you can't have two of the same array key anyway.

Upvotes: 2 <issue_comment>username_2: You can't create two (or more) identical keys in the same array; the last value will overwrite the previous ones.

This value:

```
[{"name": "FRENCH POLYNESIA"},{"name": "POLAND"}]
```

Is decoded to

```
array (size=2)
  0 =>
    array (size=1)
      'name' => string 'FRENCH POLYNESIA' (length=16)
  1 =>
    array (size=1)
      'name' => string 'POLAND' (length=6)
```

So to answer your question, your array should look like this

```
$array = [
    array(
        'name' => 'FRENCH POLYNESIA'
    ),
    array(
        'name' => 'POLAND'
    ),
];
```

Upvotes: 0
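Putting the two answers together, a minimal runnable sketch:

```
$countries = [
    ["name" => "FRENCH POLYNESIA"],
    ["name" => "POLAND"],
];

// the outer numeric array becomes a JSON array, each inner array an object
echo json_encode($countries);
// [{"name":"FRENCH POLYNESIA"},{"name":"POLAND"}]
```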
2018/03/15
1,290
4,185
<issue_start>username_0: Upon submitting the form, I want to capture only the values of the fields that have been changed. Using the valueChanges function on the form captures every field of the form, and it can't be traversed as an array as I've tried to do in my code below. I'm not sure of any other way the 'before and after' values of a form can be compared and stored where only the fields that were changed are stored.

```
export class JobEditComponent implements OnInit {
    jobNumber: any;
    jobs: any[];
    jobToEdit: Job;
    job: any;
    brands: Brand[];
    jobForm: FormGroup;
    bsConfig: Partial;
    isFormReady: any;
    jobId: number;
    existingValues: string[] = [];
    changedValues: string[] = [];

    constructor(private route: ActivatedRoute, private jobService: JobService, private fb: FormBuilder,
        private router: Router, private alertify: AlertifyService) { }

    ngOnInit() {
        this.route.params.subscribe(params => {
            this.jobNumber = +params['id'];
        });

        this.jobService.getJobToEdit().subscribe((jobs: Job[]) => {
            this.jobs = jobs;
        }, error => {
            console.log('error');
        }, () => {
            this.jobToEdit = this.jobs.find(j => j.id === this.jobNumber);
            this.editJobForm();
            this.onChanges();
        });

        this.getBrands();
        this.bsConfig = { containerClass: 'theme-blue' };
    }

    onChanges(): void {
        this.jobForm.valueChanges.subscribe(val => {
            if (this.existingValues != null) {
                this.existingValues = val;
            }
        });
    }

    editJobForm() {
        this.jobForm = this.fb.group({
            jobNumber: [this.jobToEdit.jobNumber, Validators.required],
            item: [this.jobToEdit.item, Validators.required],
            status: [this.jobToEdit.status, Validators.required],
            orderedBy: [this.jobToEdit.orderedBy, Validators.required],
            orderDate: [this.jobToEdit.orderDate, Validators.required],
            quantity: [this.jobToEdit.quantity, Validators.required],
            unitPrice: [this.jobToEdit.unitPrice, Validators.required],
            lineValue: [this.jobToEdit.lineValue, Validators.required],
            dpAp: [this.jobToEdit.dpAp, Validators.required],
            eta: [this.jobToEdit.eta, Validators.required],
            detailStatus: [this.jobToEdit.detailStatus, Validators.required],
            comments: [this.jobToEdit.comments, null]
        },
        );
        this.isFormReady = true;
    }

    updateJob() {
        this.alertify.confirm('Are you sure you want save changes?', () => {
            this.jobId = this.jobToEdit.id;
            this.jobToEdit = Object.assign({}, this.jobForm.value);
            this.jobService.editJob(this.jobToEdit, this.jobId).subscribe(() => {
                this.alertify.success('Update successful');
                for (let i = 0; i < this.existingValues.length; i++) {
                    if (this.jobToEdit[i] !== this.existingValues[i]) {
                        this.changedValues[i] = this.jobToEdit[i];
                    }
                }
                console.log(this.changedValues);
            }, error => {
                this.alertify.error(error);
            }, () => {
                this.router.navigate(['/home']);
            });
        });
    }
}
```<issue_comment>username_1: Try this for your loop:

```
// declare changedValues as key -> val object (not array)
changedValues: { [key: string]: any } = {};

Object.keys(this.existingValues).forEach(i => {
    if (this.jobToEdit[i] !== this.existingValues[i]) {
        this.changedValues[i] = this.jobToEdit[i];
    }
});
```

**UPDATE** the same loop using `for (key in object)` syntax as proposed by @Alex:

```
for (let key in this.existingValues) {
    if (this.jobToEdit[key] !== this.existingValues[key]) {
        this.changedValues[key] = this.jobToEdit[key];
    }
}
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: You can check the values on submit, instead of using `valueChanges`, so if your form would look like this:

```ts
constructor(private fb: FormBuilder) {
    this.myForm = this.fb.group({
        field1: [this.values.field1],
        field2: [this.values.field2],
        field3: [this.values.field3]
    })
}
```

you can on submit check the values of the properties and see if they match:

```ts
// 'values' is 'myForm.values'
onSubmit(values) {
    let changed = {};
    for (let ctrl in values) {
        if(values[ctrl] !== this.values[ctrl]) {
            changed[ctrl] = values[ctrl];
        }
    }
    console.log(changed);
}
```

Here's a [StackBlitz](https://stackblitz.com/edit/angular-3fotxz?file=app/app.component.ts)
----------------------------------------------------------------------------------

Upvotes: 0
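A related hedged sketch: reactive form controls track a `dirty` flag, so the changed fields can also be collected without storing a 'before' snapshot (assuming the `jobForm` group from the question; note that `dirty` reflects user interaction, not programmatic `setValue` calls):

```
// collect values only from controls the user actually touched
const changedValues: { [key: string]: any } = {};
Object.keys(this.jobForm.controls).forEach(key => {
    const control = this.jobForm.get(key);
    if (control && control.dirty) {
        changedValues[key] = control.value;
    }
});
console.log(changedValues);
```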
2018/03/15
815
2,609
<issue_start>username_0: I have problem in python to read a line that contains int separated with space. for example I want to get input: ``` 8 9 3 2 7 5 ``` and save it into an Array like: ``` A = [8, 9, 3, 2, 7, 5] ``` I tried the following code but it has syntax error: ``` A = [input().split()] ```<issue_comment>username_1: `A = [input().split()]` is close, except that in Python2, `input` is designed to evaluate what is passed to it. Thus, use `raw_input` and `map` over the split results with `int`: ``` A = map(int, raw_input().split()) ``` Upvotes: 1 <issue_comment>username_2: The problem is that you're using [`input`](https://docs.python.org/2/library/functions.html#input). In Python 2, `input` asks for input, and then tries to evaluate it as Python code. So, when you type in `8 9 3 2 7 5`, it tries to evaluate that as Python code—and of course that's not valid Python syntax. What you want to do is use [`raw_input`](https://docs.python.org/2/library/functions.html#raw_input), which just asks for input and returns it to your code as a string. You can then call `split()` on it and get a list of strings. When you move to Python 3, this problem goes away—`input` just returns a string. --- Meanwhile, just putting the result of `split()` in brackets doesn't do much good. You end up with a list, whose one element is another list, whose elements are the split strings. So you probably just want: ``` A = raw_input().split() ``` --- If you need to convert each string into an int, you have to call `int` on each one. You can do this with a [list comprehension](https://docs.python.org/2/tutorial/datastructures.html#list-comprehensions): ``` A = [int(numeral) for numeral in raw_input().split()] ``` … or [`map`](https://docs.python.org/2/library/functions.html#map): ``` A = map(int, raw_input().split()) ``` … or an explicit loop: ``` A = [] for numeral in raw_input().split(): A.append(int(numeral)) ``` If you don't understand the first two, use the last one until you can figure out how they work. Concise, readable code is very important, but code that you can understand (and debug and maintain and extend) is even more important. Upvotes: 3 [selected_answer]<issue_comment>username_3: so the input is a string of numbers separated by space? You just have to put the info inside the method split, meaning ``` A = [input().split(" ")] ``` Upvotes: 0 <issue_comment>username_4: open file and do this thing: ``` print([list(map(int,line.strip().split(' '))) for line in open('file.txt','r')][0]) ``` output: ``` [8, 9, 3, 2, 7, 5] ``` Upvotes: 0
2018/03/15
644
2,064
<issue_start>username_0: I have this code: ``` ............................................... 'Detalle' => array() ]; foreach ($cart as $line => $item) { //check condition if (strtolower($item['description']) === 'something') { $cond = true; } else { $cond= false; } $ab['Detalle'][]=array( 'NmbItem' => $item['name'], 'QtyItem' => (int)$item['quantity'], 'PrcItem' => $item['price'] ); if ($cond){ $array2 = array('IndExe' => 1); array_merge($ab['Detalle'],$array2); } ``` } How can I add 'IndExe' to `$ab['Detalle']` array only when the condition is true? I tried array\_merge, array\_merge\_recursive but nothing. IndExe only can be 1, another value like 0 or null is not possible. I tried: ``` $ab['Detalle'][]=array( 'NmbItem' => $item['name'], 'QtyItem' => (int)$item['quantity'], 'PrcItem' => $item['price'] 'IndExe' => ($cond? 1 : 0 ) ); ``` but when `cond = false` then `IndExe = 0`, is not what I need. IndExe must be added only when `cond = true`.<issue_comment>username_1: Let's introduce a temporary array: ``` $tempArray=Array( 'NmbItem' => $item['name'], 'QtyItem' => (int)$item['quantity'], 'PrcItem' => $item['price'] ); if ($cond){ $tempArray['IndExe'] = 1; } $ab['Detalle'][] = $tempArray; ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: The problem is the dynamically appended `[]` element. You could use the index `$line`: ``` foreach ($cart as $line => $item) { $ab['Detalle'][$line] = array( 'NmbItem' => $item['name'], 'QtyItem' => (int)$item['quantity'], 'PrcItem' => $item['price'] ); if (strtolower($item['description']) === 'something') { $ab['Detalle'][$line]['IndExe'] = 1; } } ``` If `$line` doesn't give the indexes you want (but they shouldn't matter), then: ``` $ab['Detalle'] = array_values($ab['Detalle']); ``` Unless you're using `$cond` again later in the code, you don't need it. Upvotes: 0
2018/03/15
1,575
5,473
<issue_start>username_0: I am using Spark's multiple inputstream reader to read message from Kafka. I am getting below mentioned error. If I don't use multiple input stream reader , I am not getting any error. To achieve performance, I need to use parallel concept, testing purpose I using only one. **Error** ``` java.io.NotSerializableException: org.apache.kafka.clients.consumer.ConsumerRecord Serialization stack: - object not serializable (class: org.apache.kafka.clients.consumer.ConsumerRecord, value: ConsumerRecord(topic = test, partition = 0, offset = 120, CreateTime = -1, checksum = 2372777361, serialized key size = -1, serialized value size = 48, key = null, value = 10051,2018-03-15 17:12:24+0000,Bentonville,Gnana)) at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40) at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46) at org.apache.spark.serializer.SerializationStream.writeValue(Serializer.scala:134) at org.apache.spark.storage.DiskBlockObjectWriter.write(DiskBlockObjectWriter.scala:239) at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53) at org.apache.spark.scheduler.Task.run(Task.scala:108) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) 18/03/15 17:12:24 ERROR TaskSetManager: Task 0.0 in stage 470.0 (TID 470) had a not serializable result: org.apache.kafka.clients.consumer.ConsumerRecord ``` **Code:** ``` import org.apache.kafka.common.serialization.StringDeserializer import org.apache.spark.Success import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe import org.apache.spark.streaming.kafka010.KafkaUtils import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent import org.apache.spark.streaming.{Milliseconds, StreamingContext} object ParallelStreamJob { def main(args: Array[String]): Unit = { val spark = SparkHelper.getOrCreateSparkSession() val ssc = new StreamingContext(spark.sparkContext, Milliseconds(50)) val kafkaStream = { val kafkaParams = Map[String, Object]( "bootstrap.servers" -> "localhost:9092", "key.deserializer" -> classOf[StringDeserializer], "value.deserializer" -> classOf[StringDeserializer], "group.id" -> "use_a_separate_group_id_for_each_stream", "auto.offset.reset" -> "latest", "enable.auto.commit" -> (false: java.lang.Boolean) ) val topics = Array("test") val numPartitionsOfInputTopic = 1 val streams = (1 to numPartitionsOfInputTopic) map { _ => KafkaUtils.createDirectStream[String, String]( ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams) ) } val unifiedStream = ssc.union(streams) val sparkProcessingParallelism = 1 unifiedStream.repartition(sparkProcessingParallelism) } kafkaStream.foreachRDD(rdd=> { rdd.foreach(conRec=> { println(conRec.value()) }) }) println(" Spark parallel reader is ready !!!") ssc.start() ssc.awaitTermination() } } ``` **sbt** ``` scalaVersion := "2.11.8" val sparkVersion = "2.2.0" val connectorVersion = "2.0.7" val kafka_stream_version = "1.6.3" libraryDependencies ++= Seq( "org.apache.spark" %% "spark-core" % sparkVersion , "org.apache.spark" %% 
"spark-sql" % sparkVersion , "org.apache.spark" %% "spark-hive" % sparkVersion , "com.datastax.spark" %% "spark-cassandra-connector" % connectorVersion , "org.apache.kafka" %% "kafka" % "0.10.1.0", "org.apache.spark" %% "spark-streaming-kafka-0-10" % sparkVersion, "org.apache.spark" %% "spark-streaming" % sparkVersion , ) ``` How to resolve this issue ?<issue_comment>username_1: The issue is clear `java.io.NotSerializableException:org.apache.kafka.clients.consumer.ConsumerRecord`. The [ConsumerRecord](https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/ConsumerRecord.html) class doesn't extend `Serializable` Try to take out `value` field of `ConsumerRecord` before `foreachRdd` operation `kafkaStream.map(_.value())`. Update 1: The above fix doesn't work because exception happen at `ssc.union(streams)`.`ssc.union(streams)` requires data transfer between nodes, it must serialize data. So, you can take out `value` field by `map` before `union` operation to fix the issue. ``` KafkaUtils.createDirectStream[String, String]( ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParam) ).map(_.value()) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: First of all, if you have 1 topic then you shouldn't use creating multiple Kafkastreams as you are using the direct approach which will automatically create as many numbers of threads as there are a number of Kafka partitions for a topic.Spark will automatically take care of parallelizing your tasks if you follow DirectApproach. Try to use repartition() at RDD level instead repartitioning the Dstream itself. Upvotes: 0
2018/03/15
1,866
6,628
<issue_start>username_0: So I am trying to verify the hashed password of the user from the MySQL DB, but `password_verify` doesn't seem to be working. I feel like maybe it is something I am doing wrong.

Hashing and Storing the Password:

```
// Set POST variables
$firstname = mysqli_real_escape_string($conn, $_POST['firstname']);
$lastname = mysqli_real_escape_string($conn, $_POST['lastname']);
$email = mysqli_real_escape_string($conn, $_POST['email']);
$password = mysqli_real_escape_string($conn, $_POST['password']);
$hashedpwd = password_hash($password, PASSWORD_DEFAULT);

// SQL query & Error Handlers
$sql = "INSERT INTO `users_admin` (Firstname, Lastname, Email, Password)
        VALUES ('$firstname', '$lastname', '$email', '$hashedpwd')";
```

Retrieving the Hashed Password:

```
if ($row = mysqli_fetch_assoc($result)) {
    $user_pass = $row['Password'];
    $passwordCheck = password_verify($password, $user_pass);

    if (!$passwordCheck) {
        header("Location: ../login.php?wrong-password");
        exit();
    } elseif ($passwordCheck) {
        // log in the user
        $_SESSION['logged_in'] = true;
        $_SESSION['id'] = $row['ID'];
        $_SESSION['firstname'] = $row['Firstname'];
        $_SESSION['lastname'] = $row['Lastname'];
        $_SESSION['email'] = $row['Email'];
        header("Location: ../dashboard");
        exit();
    }
}
```

> `password_verify = bool(true)`

EDIT:
=====

```
if (isset($_POST['submit'])) {
    include('DB_Connect.php');
    $email = mysqli_real_escape_string($conn, $_POST['email']);
    $password = mysqli_real_escape_string($conn, $_POST['password']);
    // error handlers
    $sql = "SELECT * FROM users_admin WHERE Email = '$email'";
    $result = mysqli_query($conn, $sql);
    $resultCheck = mysqli_num_rows($result);
    if ($resultCheck < 1) {
        header("Location: ../login.php?input:invalid");
        exit();
    } else {
        if ($row = mysqli_fetch_assoc($result)) {
            $user_pass = $row['Password'];
            $passwordCheck = password_verify($password, $user_pass);
            if (!$passwordCheck) {
                header("Location: ../login.php?wrong-password");
                exit();
            } elseif ($passwordCheck) {
                // log in the user
                $_SESSION['logged_in'] = true;
                $_SESSION['id'] = $row['ID'];
                $_SESSION['firstname'] = $row['Firstname'];
                $_SESSION['lastname'] = $row['Lastname'];
                $_SESSION['email'] = $row['Email'];
                header("Location: ../dashboard");
                exit();
            }
        }
    }
} else {
    header("Location: ../login.php?login=error");
    exit();
}
```<issue_comment>username_1: There is nothing in the code you posted that leads to the problem described.

Here are some hints and improvements:

1. -- You are escaping/sanitizing the password before hashing it. This way you're altering the stored password. Let

```
$password = $_POST['password'];
```

both when you create the account and when you check if the password matches at login.

2. -- Ensure the `Password` field in the database (that stores the hashed password) is able to store up to 255 characters. From the [**documentation**](http://php.net/manual/en/function.password-hash.php)

> it is recommended to store the result in a database column that can expand beyond 60 characters (255 characters would be a good choice).

If the field is narrower the hash will be truncated and you'll never have a match.

3. -- As you get the user by **email** at login, ensure that `Email` is unique (as well as your primary key `ID`) at database level (in the table definition). Good idea to check this also upon user registration and login (at php level).

4. -- Keys in the Key-Value pairs returned by `mysqli_fetch_assoc` are **case sensitive**. Ensure you access the values using keys written exactly as the fields are named in the database. Ex.
in your code I read `$row['Email']`. In the table, is the field actually named `Email`, or maybe `email` (lowercase)?

5. -- **Debug your code!** Use a debugger or simply place a breakpoint like

```
var_export( $the_variable );
exit();
```

in "key" points of your code.

6. -- Use **[prepared statements](http://php.net/manual/en/mysqli.quickstart.prepared-statements.php)** instead of escaping input and injecting it directly into the SQL strings.

---

**Hashing and Storing the Password:**

```
// Set POST variables
$firstname = mysqli_real_escape_string($conn, $_POST['firstname']);
$lastname = mysqli_real_escape_string($conn, $_POST['lastname']);
$email = mysqli_real_escape_string($conn, $_POST['email']);
$password = $_POST['password'];
$hashedpwd = password_hash($password, PASSWORD_DEFAULT);

// SQL query & Error Handlers
$sql = "INSERT INTO `users_admin` (Firstname, Lastname, Email, Password)
        VALUES ('$firstname', '$lastname', '$email', '$hashedpwd')";
```

---

**Retrieving the Hashed Password:**

```
include('DB_Connect.php');

$email = mysqli_real_escape_string($conn, $_POST['email']);
$password = $_POST['password'];

$sql = "SELECT * FROM users_admin WHERE Email = '$email'";
$result = mysqli_query($conn, $sql);

if( $result === false ) {
    header("Location: ../login.php?login=error");
    exit();
}

$count = mysqli_num_rows($result);

if( $count === 0 ) {
    header("Location: ../login.php?input:invalid");
    exit();
}

$row = mysqli_fetch_assoc( $result );

$passwordHash = $row['Password'];
$passwordCheck = password_verify( $password, $passwordHash );

if( ! $passwordCheck ) {
    header("Location: ../login.php?wrong-password");
    exit();
}

// log in the user
$_SESSION['logged_in'] = true;
$_SESSION['id'] = $row['ID'];
$_SESSION['firstname'] = $row['Firstname'];
$_SESSION['lastname'] = $row['Lastname'];
$_SESSION['email'] = $row['Email'];

header("Location: ../dashboard");
exit();
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: username_1's answer is good and answers your question; I just wanted to explain to you why your passwords *are not the same*.

First, you take the POSTed password and escape it:

```
$password = mysqli_real_escape_string($conn, $_POST['password']);
```

And then you hash **the escaped password**:

```
$hashedpwd = password_hash($password, PASSWORD_DEFAULT);
```

But here:

```
$passwordCheck = password_verify($password, $user_pass);
```

you `password_verify` **the unescaped password** (I assume, as you didn't publish the full code).

To solve this issue, do not escape the password; a better solution, as username_1 suggested in his answer, is to use prepared statements.

Upvotes: 1
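For reference, a minimal sketch of the prepared-statement lookup both answers recommend, assuming the same `users_admin` table and an open mysqli connection in `$conn` (this is an illustration, not code from the thread):

```
// Hypothetical login lookup using a prepared statement instead of escaping.
$stmt = mysqli_prepare($conn, "SELECT * FROM users_admin WHERE Email = ?");
mysqli_stmt_bind_param($stmt, "s", $_POST['email']); // "s" = one string parameter
mysqli_stmt_execute($stmt);
$result = mysqli_stmt_get_result($stmt);

if ($row = mysqli_fetch_assoc($result)) {
    // Verify the raw (unescaped) password against the stored hash.
    if (password_verify($_POST['password'], $row['Password'])) {
        // log the user in...
    }
}
```

Note that the password is never escaped or embedded in SQL here; only the email travels to the database, as a bound parameter.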
2018/03/15
683
2,511
<issue_start>username_0: I got an email from Google saying that the Kubernetes project recently disclosed new security vulnerabilities. I was advised to upgrade the nodes as soon as the patch becomes available, which is with the new version releases by March 16. How soon should I do it, or how long can I wait? I need at least a week to plan the upgrade!<issue_comment>username_1: [CVE-2017-1002101](https://cve.mitre.org/cgi-bin/cvename.cgi?name=2017-1002101) affects all volume types, so to prevent the vulnerability being exploited on your cluster you'd need to deny the use of all volume types using [PodSecurityPolicy](https://kubernetes.io/docs/concepts/policy/pod-security-policy/). Refer to the `Mitigations prior to upgrading` section of the GitHub issue [here](https://github.com/kubernetes/kubernetes/issues/60813).

There isn't a safe amount of time you can wait; it's just more likely to be exploited the longer you wait before upgrading.

Upvotes: 3 [selected_answer]<issue_comment>username_2: You should upgrade as soon as possible; the longer you wait, the more you expose your cluster to the vulnerabilities, as username_1 pointed out. I added this answer to suggest that you double-check the [release notes](https://cloud.google.com/kubernetes-engine/release-notes) of the patch, since it could affect your workload:

> March 13, 2018
> Fixed
> A patch for Kubernetes vulnerabilities CVE-2017-1002101 and CVE-2017-1002102 is now available according to this week's rollout schedule. We recommend that you manually upgrade your nodes as soon as the patch becomes available in your cluster's zone.
>
> **Issues**
>
> Breaking Change: **Do not** upgrade your cluster if your application requires mounting a secret, configMap, downwardAPI, or projected volume with write access:
>
> * To fix security vulnerability CVE-2017-1002102, Kubernetes 1.9.4-gke.1, Kubernetes 1.8.9-gke.1, and Kubernetes 1.7.14-gke.1 changed secret, configMap, downwardAPI, and projected volumes to mount read-only, instead of allowing applications to write data and then reverting it automatically. We recommend that you modify your application to accommodate these changes before you upgrade your cluster.
> * If your cluster uses IP Aliases and was created with the --enable-ip-alias flag, upgrading the master to Kubernetes 1.9.4-gke.1 will prevent it from starting properly. This issue will be addressed in an upcoming release.

Disclaimer: I work for Google Cloud Platform Support

Upvotes: 1
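To make username_1's mitigation concrete, here is a rough sketch of a `PodSecurityPolicy` that whitelists no volume types at all; it assumes (per the PSP documentation of that era) that an empty `volumes` whitelist disallows every volume type. The policy name and the permissive `RunAsAny` rules are illustrative assumptions, not part of the original answer:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: deny-all-volumes   # hypothetical name
spec:
  volumes: []              # empty whitelist: no volume types may be used
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```

Remember that a PSP only takes effect once the PodSecurityPolicy admission controller is enabled, and that blocking all volumes will break workloads that legitimately need them, which is why the real fix is upgrading.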
2018/03/15
785
3,046
<issue_start>username_0: We have a solution with a pair of ClickOnce applications that are signed and published as part of the build. We have an on-premise TFS 2017 server, but until recently our projects were all being built using VS 2015. Under this scenario, we were able to build, sign, and publish the ClickOnce application and manifests automatically, and everything is fine.

We recently deployed a new build agent with VS 2017 installed, and I cannot get that agent to build the project. The MSBuild step fails trying to sign the output with an error:

```
error MSB3482: An error occurred while signing: Failed to sign bin\x86\Release\app.publish\FooBar.exe
```

There is no additional information in the error message -- no explanation as to why the signing failed. The build agent is installed as a service, with its log-on credentials set to a real domain account that's also a local admin on the build machine, and the certificate is installed into that user's certificate store.

As an aside, if I take the `msbuild` command line out of the build agent's log and run that command on the build machine while logged in under the agent account, the build stage finishes fine with correctly signed output; it only fails when running through the TFS agent service. I'm at a loss where else to go to find why the signtool step is failing; has anyone else seen this issue?<issue_comment>username_1: It seems the build agent did not detect the environment changes (maybe the MSBuild capability here), or something is wrong with the detected version of MSBuild, or other related capabilities went missing after you installed VS 2017.

Just check the agent capabilities and make sure the capability **MSBuild 15.0** is detected. Generally the value should look like this:

```
MSBuild_15.0 C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\
MSBuild_15.0_x64 C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\MSBuild\15.0\Bin\amd64\
```

So, you can try the things below to narrow down the issue:

1. If they are all there, just try to restart the agent service, then check it again.
2. If they are missing, you can try to manually add them (in Settings -> Agent Queues -> Agent Pool -> Agent -> Capabilities -> Add Capability). After that, trigger the build again. You can add other needed capabilities in the same way.
3. Deploy a new agent, then check that again.
4. Try to build with the **MSBuild** task and specify the MSBuild location directly in it. Then check that again.
5. Recreate the certificate, check in the changes, then try again. (Go to the **property** of the Project -> Select the **Signing** tab -> Click the '**create test certificate**' button)

Upvotes: 0 <issue_comment>username_2: It's not a perfect "fix," but a workaround is to stop the build agent Windows service and to run the build agent in interactive mode from PowerShell.

```
PS C:\agent_directory> .\run.cmd
Scanning for tool capabilities.
Connecting to the server.
2018-07-24 18:19:39Z: Listening for Jobs
```

Upvotes: 1
2018/03/15
766
3,153
<issue_start>username_0: I'm starting to design an Android Application in which I need to be continuously listening for new commands and perform different actions based on the command received. So let's see an example of what I'd like to avoid if possible:

```
public void onCommandReceived(String command) {
    switch (command) {
        case A: DoActionA(); break;
        ...
        case Z: DoActionZ(); break;
    }
}
```

Edit: I will receive a long string and I will have to parse the string to get the command.

Is there any nicer implementation of this problem? Thanks a lot<issue_comment>username_1: Maybe this is what you are looking for

```
case 1:
case 2:
case 3:
    doSomething();
    break;
case 4:
    doAnother();
    break;
```

This will work because cases 1 and 2 don't have break statements and will fall through to case three.

Upvotes: -1 <issue_comment>username_2: At some point you need to have a structure like your `switch (command)` example, at least in order to convert user input or program events into commands, and the humble switch statement is a good choice for this logic. There are alternatives, but the switch statement is clear and everyone knows instantly what it means. Here are some alternatives though, which might be useful depending on the specifics of your design:

### Decision table

You can pre-populate a `Map<>` or similar with the mapping of commands to actions, something like this:

```
Map<String, Action> decisionTable = new HashMap<>();
decisionTable.put(A, new DoActionA());
decisionTable.put(B, new DoActionB());
```

This would be initialised somewhere once, and then called like this:

```
public void onCommandReceived(String command) {
    decisionTable.get(command).action();
}
```

I've left out the details, like error handling when the action is not present, the `Action` interface, and the individual classes implementing each action.

### Virtual dispatch

If making a separate class for each action, like above, suits your design, then instead of passing a "dumb" command to your function, pass an instance of an interface. This is exactly like the decision table above, except the `Command` is the one with an interface and subclasses - you don't need to do the switch, because the command/action subclass already has that logic for you, for example:

```
interface Command {
    void action();
}

class CommandA implements Command {
    @Override
    public void action() {
        // do something
    }
}

...

public void onCommandReceived(Command command) {
    command.action();
}
```

If you need to make different decisions based on which command *at multiple points in your code*, you might even want something as fancy as double dispatch, which I won't go into here.

### Switch

But if this is the only place in your code where you make a decision based on what kind of command it is, then the `switch` statement is not clever or tricky, which makes it exactly the sort of code you want to be looking at 6 months later - you and everyone else will understand it perfectly, and the value of that should not be underestimated.

Upvotes: 1 [selected_answer]
2018/03/15
767
2,966
<issue_start>username_0: How can I make the border radius go towards the inside?

```
.circle {
  width: 100px;
  height: 100px;
  background-color: solid red;
  border-radius: -50px 50px -50px 50px;
}
```

I know -50px is not acceptable, but I just gave a theoretical example. Please see the below image for reference.

[![enter image description here](https://i.stack.imgur.com/ZKL3Q.jpg)](https://i.stack.imgur.com/ZKL3Q.jpg)
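Since negative radii don't exist in CSS, one common workaround, sketched here as an illustration rather than taken from the thread, is to paint the concave corner with a `radial-gradient` background (the 50px radius mirrors the question's example):

```css
.circle {
  width: 100px;
  height: 100px;
  /* carve a transparent quarter-circle out of the top-left corner */
  background: radial-gradient(circle at 0 0, transparent 50px, red 50px);
}
```

Repeating the gradient with different `circle at` positions (e.g. `100% 0`, `0 100%`) handles other corners; multiple backgrounds can be combined for several inverted corners at once.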
2018/03/15
1,198
3,884
<issue_start>username_0: I have a table of data such that specific columns of the information need to be converted from a horizontal layout and inserted below the initial row. To make things more complex, any column with a value of zero needs to be ignored, and each row may have a different column with a zero.

I have gotten as far as inserting rows for the total count of columns with a value greater than 0 by using a COUNTIF formula in column "Q" for this VBA:

```
Sub H2V()
' H2V Macro
' Integrate vertical UB-04 codes
    Worksheets("Sheet1 (2)").Activate
    Dim r, count As Range
    Dim LastRow As Long
    Dim temp As Integer
    Set r = Range("A:P")
    Set count = Range("Q:Q")
    LastRow = Range("B" & Rows.count).End(xlUp).Row
    For n = LastRow To 1 Step -1
        temp = Range("Q" & n)
        If (temp > 1) Then
            Rows(n + 1 & ":" & n + temp).Insert Shift:=xlDown
        End If
    Next n
End Sub
```

But I cannot for the life of me figure out how to pull the data from the horizontal set into the newly created rows to make it vertically integrated.

Revised Example (more complete):

[![Original Data Set](https://i.stack.imgur.com/SegQM.png)](https://i.stack.imgur.com/SegQM.png)
[![Post VBA Run](https://i.stack.imgur.com/VL17b.png)](https://i.stack.imgur.com/VL17b.png)
[![Macro Used](https://i.stack.imgur.com/GWHXE.png)](https://i.stack.imgur.com/GWHXE.png)<issue_comment>username_1: This is not the fastest solution (I'll rework this code tomorrow), but it works. data_sht is where your sample data is located and output_sht is where Excel will place the modified data.

```
Sub data()
    Dim data_sht As Worksheet
    Dim output_sht As Worksheet
    Dim cell As Range

    Set data_sht = ThisWorkbook.Sheets("Sheet1")
    Set output_sht = ThisWorkbook.Sheets("Sheet2")

    Dim rng As Range
    Set rng = data_sht.Range("A1").CurrentRegion

    For Each cell In rng.Offset(1, 0)
        Header = rng.Cells(1, 1)
        If IsNumeric(cell) And cell.Value > 0 Then
            Object = rng.Cells(1, cell.Column)
            With output_sht
                If .Columns("B:B").Cells.Count < 1 Then
                    lastrow = 2
                Else
                    lastrow = Range("B" & Rows.Count).End(xlUp).Row
                End If
                .Cells(1, 1) = Header
                .Cells(1, 2) = "Object"
                .Cells(1, 3) = "Value"
                .Cells(lastrow + 1, 1) = rng.Cells(cell.Row, 1)
                .Cells(lastrow + 2, 2) = Object
                .Cells(lastrow + 2, 3) = cell.Value
            End With
        End If
    Next cell

    With output_sht
        .Range("A1").CurrentRegion.RemoveDuplicates Columns:=Array(1, 2, 3), _
            Header:=xlNo
    End With
End Sub
```

Upvotes: 0 <issue_comment>username_2: you could try this

```
Option Explicit

Sub main()
    Dim headers As Variant, names As Variant, data As Variant
    Dim iRow As Long

    With Worksheets("Sheet1 (2)")
        With .Range("A1").CurrentRegion
            headers = Application.Transpose(Application.Transpose(.Offset(, 1).Resize(1, .Columns.Count - 1).Value))
            names = Application.Transpose(.Offset(1).Resize(.Rows.Count - 1, 1).Value)
            data = .Offset(1, 1).Resize(.Rows.Count - 1, .Columns.Count - 1).Value
            .ClearContents
            .Resize(1, 3).Value = Array("Name", "Object", "Value")
        End With
        For iRow = 1 To UBound(data)
            With .Cells(.Rows.Count, "B").End(xlUp)
                .Offset(1, -1).Value = names(iRow)
                .Offset(2, 0).Resize(UBound(headers)).Value = Application.Transpose(headers)
                .Offset(2, 1).Resize(UBound(data)).Value = Application.Transpose(Application.index(data, iRow, 0))
            End With
        Next
        With .Range("B3", Cells(.Rows.Count, "B").End(xlUp)).SpecialCells(xlCellTypeConstants)
            .Offset(, 1).Replace what:="0", replacement:="", lookat:=xlWhole
            .Offset(, 1).SpecialCells(xlCellTypeBlanks).EntireRow.Delete
        End With
    End With
End Sub
```

Upvotes: 2 [selected_answer]
2018/03/15
947
3,448
<issue_start>username_0: I want to compare two objects of Manager using three variables: deptNum, firstName and then lastName. I have set up these objects to be compared using a series of if and if-else statements. The error I am receiving is that the valReturn variable may not have been initialized, which I assume means an if statement somewhere along the way is not assigning a value correctly. I'm not sure where I made a mistake. If there is a more efficient way to set up this comparison, I would be happy to be educated further. Thank you. Below is my code:

```
public class ManagerComparator implements Comparator<Manager> {
    public int compare(Manager first, Manager second) {
        int valReturn;
        if (first.getDeptNum() < second.getDeptNum())
            valReturn = -1;
        else if (first.getDeptNum() > second.getDeptNum())
            valReturn = 1;
        else if (first.getDeptNum() == second.getDeptNum()) {
            if (first.getFirstName().compareTo(second.getFirstName()) < 0)
                valReturn = -1;
            else if (first.getFirstName().compareTo(second.getFirstName()) > 0)
                valReturn = 1;
            else if (first.getFirstName().compareTo(second.getFirstName()) == 0) {
                if (first.getLastName().compareTo(second.getLastName()) < 0)
                    valReturn = -1;
                else if (first.getLastName().compareTo(second.getLastName()) > 0)
                    valReturn = 1;
                else if (first.getLastName().compareTo(second.getLastName()) == 0)
                    valReturn = 0;
            }
        }
        return valReturn;
    }
}
```<issue_comment>username_1: I would prefer a simple test against `0` with each incremental comparison. Return the result of each compare as you go. Like,

```
public class ManagerComparator implements Comparator<Manager> {
    public int compare(Manager first, Manager second) {
        int r = Integer.compare(first.getDeptNum(), second.getDeptNum());
        if (r != 0) {
            return r;
        }
        r = first.getFirstName().compareTo(second.getFirstName());
        if (r != 0) {
            return r;
        }
        return first.getLastName().compareTo(second.getLastName());
    }
}
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: The basic problem you are running into is that the compiler cannot determine that your `if ... else if ... else if ...` is exhaustive. If we look at your last if block, it boils down to

```
if (a() < 0)
    valReturn = -1;
else if (a() > 0)
    valReturn = 1;
else if (a() == 0)
    valReturn = 0;
```

It is not feasible for the compiler to understand that `a()` will return the same value three times, meaning that exactly one if condition will be met. The same would happen if you stored `a()` inside a temporary variable.

Solution: make the last one an `else` instead of `else if`.

Upvotes: 2 <issue_comment>username_3: You're using only `else if`. If you follow your logic you can see that all cases will enter one of the statements. However, the compiler doesn't know that. Simply changing the last `else if` in these instances to `else` should resolve it. That said, the code in Elliot's post would be a cleaner alternative.

Upvotes: 1 <issue_comment>username_4: With Java 8, you can avoid all this boilerplate code and simply do:

```
Comparator<Manager> managerComparator = Comparator.comparingInt(Manager::getDeptNum)
        .thenComparing(Manager::getFirstName)
        .thenComparing(Manager::getLastName);
```

It's self-explanatory; it basically creates a comparator which does the same thing you're trying to do - compare by dept number, then by first name, then by last name.

[Documentation](https://docs.oracle.com/javase/8/docs/api/java/util/Comparator.html)

Upvotes: 2
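As a usage sketch (assuming a `List<Manager> managers` exists and the usual `java.util` and `java.util.stream.Collectors` imports), the comparator from any of these answers can be applied directly:

```java
// Sort the list in place using the comparator built above.
managers.sort(managerComparator);

// Or keep the original list intact and sort a copy via streams.
List<Manager> sorted = managers.stream()
        .sorted(managerComparator)
        .collect(Collectors.toList());
```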
2018/03/15
816
2,787
<issue_start>username_0: I want to create a dataframe that holds values I calculated from historical values. I have df1 as below:

```
df1:
     A    B
0  2.0  3.0
1  4.0  6.0
2  6.0  9.0
```

I calculated the mean of df1 as below:

```
m1 = df1.mean(axis=0)

m1:
A    4.0
B    6.0
```

'm1' is the mean of the entire data series from 0 to 2 (index). Instead of getting m1, what I want is to get a dataframe of means over a historical period. For example, I want to get the mean value of every two periods (two consecutive indexes) and create them as a dataframe. The output I want is as below:

```
# I want mean of two consecutive periods of each column as below.
df2:
     A    B
0  3.0  4.5
1  5.0  7.5
```

Can you help me on this? Thanks.
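A minimal sketch of one way to produce the desired output, assuming the goal is a mean over each overlapping window of two consecutive rows:

```python
import pandas as pd

df1 = pd.DataFrame({"A": [2.0, 4.0, 6.0], "B": [3.0, 6.0, 9.0]})

# rolling(2).mean() averages each pair of consecutive rows;
# the first row is NaN (incomplete window), so drop it and re-index.
df2 = df1.rolling(2).mean().dropna().reset_index(drop=True)
print(df2)
#      A    B
# 0  3.0  4.5
# 1  5.0  7.5
```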
2018/03/15
836
3,189
<issue_start>username_0: I'm trying to use the try/catch method to prevent the app from crashing whenever there is no value provided by the user, and even with this try/catch, it still says error. Thank you in advance to those kind souls who will reply.

```
editText = (EditText) findViewById(R.id.editText);
textView = (TextView) findViewById(R.id.textView);
}

public void btnSubmit_OnClick(View view) {
    try {
        double num = Double.parseDouble(editText.getText().toString());
    } catch (Exception e) {
        Toast.makeText(MainActivity.this, e.getMessage(), Toast.LENGTH_SHORT).show();
    }

    if (num >= 6.5 && num < 14) {
        textView.setText("You may have Diabetes or if you are already diagnosed, have Uncontrolled Diabetes. However, I still find it best that you contact your attending physician for proper assessment. Thank you.");
    } else if (num >= 14){
        textView.setText("This is the highest level the application can read. Please contact your attending physician as either there is an error or you are suffering from a condition that warrants immediate referral. Thank you.");
```
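For what it's worth, a hedged sketch of the usual fix (not from the original thread): the error persists because `num` is declared inside the `try` block but used after it, so it is out of scope and possibly uninitialized. Declaring it beforehand and returning when parsing fails avoids both problems:

```java
public void btnSubmit_OnClick(View view) {
    double num;
    try {
        num = Double.parseDouble(editText.getText().toString());
    } catch (NumberFormatException e) {
        // Empty or non-numeric input lands here; bail out before 'num' is read.
        Toast.makeText(MainActivity.this, "Please enter a value", Toast.LENGTH_SHORT).show();
        return;
    }

    if (num >= 6.5 && num < 14) {
        textView.setText("You may have Diabetes ..."); // message shortened for the sketch
    } else if (num >= 14) {
        textView.setText("This is the highest level ..."); // message shortened for the sketch
    }
}
```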
2018/03/15
407
1,244
<issue_start>username_0: I want to get all records matching a query. I get multiple params and save them in a query hash like this:

```
query = Hash.new
query[:user_id] = params["user_id"] if params["user_id"]
query[:vehicle] = params["vehicle_id"] if params["vehicle_id"]
trips = Trip.where(query)
```

I also want to add `params['created_at']` if it is present, but I have no idea how to do it<issue_comment>username_1: You could add a scope to your Trip model.

```
scope :created_after, ->(time) { where("created_at > ?", time) if time }
```

Then chain it like so:

```
Trip.created_after(params["created_at"]).where(query)
```

Not the most elegant, but it will work. If "created_at" is not in the params, it will be nil and the scope will just return all trips.

Upvotes: 1 <issue_comment>username_2: Just for fun and riffing off the answer provided by username_1 and [this Q&A](https://stackoverflow.com/questions/49017124/rails-filter-when-param-is-present-for-large-model/49017379?noredirect=1#comment85101497_49017379), perhaps something like:

```
params.slice(:user_id, :vehicle_id).compact.
  each_with_object(Trip.created_after(params[:created_at])) do |(key, value), scope|
    scope.merge(Trip.send(key, value))
end
```

Upvotes: 0
2018/03/15
464
1,491
<issue_start>username_0: I have a table with 3 columns `id, type, value` like in image below. [![example](https://i.stack.imgur.com/rnZna.png)](https://i.stack.imgur.com/rnZna.png) What I'm trying to do is to make a query to get the data in this format: ``` type previous current month-1 666 999 month-2 200 15 month-3 0 12 ``` I made this query but it gets just the last value ``` select * from statistics where id in (select max(id) from statistics group by type) order by type ``` [![enter image description here](https://i.stack.imgur.com/sTNi6.png)](https://i.stack.imgur.com/sTNi6.png) EDIT: Live example <http://sqlfiddle.com/#!9/af81da/1> Thanks!<issue_comment>username_1: ``` select type, ifnull(max(case when seq = 2 then value end),0 ) previous, max( case when seq = 1 then value end ) current from ( select *, (select count(*) from statistics s where s.type = statistics.type and s.id >= statistics.id) seq from statistics ) t where seq <= 2 group by type ``` Upvotes: 1 <issue_comment>username_2: I would write this as: ``` select s.*, (select s2.value from statistics s2 where s2.type = s.type order by id desc limit 1, 1 ) value_prev from statistics s where id in (select max(id) from statistics s group by type) order by type; ``` This should be relatively efficient with an index on `statistics(type, id)`. Upvotes: 3 [selected_answer]
2018/03/15
935
2,489
<issue_start>username_0: I am trying to find a way to be able to enable/disable a USB port on my computer via command prompt or PowerShell. I have looked EVERYWHERE and the only possible solution I have found is by using the devcon disable command. However, when I try to use it, I always get this "Disable failed" in response. Is there something wrong with my command, or maybe am I trying to disable the wrong device? Of course, let me know if this isn't enough information. Thanks!

```
C:\WINDOWS\system32>devcon find usb*
USB\VID_045E&PID_07BE\5&16B1AE89&1&7 : USB Composite Device
USB\VID_1286&PID_204B\0000000000000000 : Marvell AVASTAR Bluetooth Radio Adapter
USB\VID_045E&PID_07BF&MI_00\6&15943EE&0&0000 : Microsoft LifeCam Rear
USB\VID_0BDA&PID_0307\201006010301 : Realtek USB 3.0 Card Reader
USB\VID_045E&PID_07BF\5&16B1AE89&1&8 : USB Composite Device
USB\ROOT_HUB30\4&3A935074&0&0 : USB Root Hub (USB 3.0)
USB\VID_045E&PID_07BE&MI_00\6&2299716&0&0000 : Microsoft LifeCam Front
USBSTOR\DISK&VEN_REALSIL&PROD_RTSUERLUN0&REV_1.00\0000 : SDHC Card
USB\VID_045E&PID_07DC\034478250254 : USB Input Device
USB\VID_046D&PID_C52B\5&16B1AE89&1&1 : USB Composite Device
USB\VID_046D&PID_C52B&MI_02\6&733402B&0&0002 : USB Input Device
USB\VID_046D&PID_C52B&MI_01\6&733402B&0&0001 : USB Input Device
USB\VID_046D&PID_C52B&MI_00\6&733402B&0&0000 : Logitech USB Input Device
13 matching device(s) found.

C:\WINDOWS\system32>devcon disable "@USB\ROOT_HUB30\4&3A935074&0&0"
USB\ROOT_HUB30\4&3A935074&0&0 : Disable failed
No matching devices found.
```<issue_comment>username_1: To help anybody finding this topic: I encountered this failure on WIN7 64-bit. I solved it by using the 64-bit devcon (69 kB) instead of the 32-bit devcon (55 kB).

See also [delphintipz.blogspot.com/2012/07/disable-failed-no-devices-disabled.html](http://delphintipz.blogspot.com/2012/07/disable-failed-no-devices-disabled.html)

Upvotes: 1 <issue_comment>username_2: I had the same problem with Windows 10, and I solved it by changing the devcon executable's privileges. For devcon.exe to work, it must have administrator privileges. You can find how to do that [here](https://www.windowscentral.com/how-run-app-administrator-windows-10).

Upvotes: 0
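As a side note, on Windows 8.1/10 the built-in `PnpDevice` cmdlets can do the same job from an elevated PowerShell session without devcon; a rough sketch (the instance ID below is simply one of the devices from the question's own listing):

```powershell
# List USB devices to find the instance ID of interest.
Get-PnpDevice -Class USB | Format-Table Status, FriendlyName, InstanceId

# Disable / re-enable a device by its instance ID (must run elevated).
Disable-PnpDevice -InstanceId 'USB\VID_045E&PID_07DC\034478250254' -Confirm:$false
Enable-PnpDevice  -InstanceId 'USB\VID_045E&PID_07DC\034478250254' -Confirm:$false
```

Like devcon, these cmdlets fail on devices that Windows refuses to disable (root hubs are a common example), so picking the right device matters as much as the tool.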
2018/03/15
612
1,738
<issue_start>username_0: I am having a hard time trying to create a scatter plot that will show points of different classes with different colors and the corresponding label of the class in the legend. I have 52 samples, 2 features and 3 classes (classes 1, 2 and 3). The data are in the `X` array and the labels in the `labels` array. I want the legend to contain the class names (1, 2 and 3) for the corresponding colors.

---

**My code**:

```
import numpy as np
import matplotlib.pyplot as plt

x = np.random.rand(52,2)
labels = np.ones(x.shape[0])
labels[0:8] = 1
labels[8:31] = 2
labels[31:] = 3

plt.scatter(x[:,0], x[:,1], c = labels, label = labels)
plt.legend()
plt.show()
```

---

**Result**:

[![enter image description here](https://i.stack.imgur.com/ySrY9.png)](https://i.stack.imgur.com/ySrY9.png)<issue_comment>username_1: Taken from: <https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html>

> **3. Explicitly defining the elements in the legend**
>
> For full control of which artists have a legend entry, it is possible to pass an iterable of legend artists followed by an iterable of legend labels respectively:
>
> ```
> legend((line1, line2, line3), ('label1', 'label2', 'label3'))
> ```

Upvotes: 0 <issue_comment>username_2: One way to do this would be to plot your classes separately:

```
x = np.random.rand(52,2)
labels = np.ones(x.shape[0])
labels[0:8] = 1
labels[8:31] = 2
labels[31:] = 3

for i in np.unique(labels):
    plt.scatter(x[np.where(labels==i),0], x[np.where(labels==i),1], label=i)
plt.legend()
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/XPRYa.png)](https://i.stack.imgur.com/XPRYa.png)

Upvotes: 3 [selected_answer]
2018/03/15
569
1,799
<issue_start>username_0: I have two **sibling** components, one of which is a modal with a form (to add/edit users) and the other is a table of users. When a user is added or updated, I need the **sibling** table to be updated. I have tried to trigger the change in my `onSubmit()` in the modal/form component, but it does not seem to update the **sibling** component:

```
import { ChangeDetectorRef } from '@angular/core';
```

and then:

```
constructor(
    private change: ChangeDetectorRef
) {}
```

and finally in the `onSubmit()`:

```
onSubmit() {
    this.users.postUser(this.userForm.value).subscribe(
        response => {
            console.log(response);
            this.modalRef.hide();
            this.userForm.reset();
            this.change.detectChanges();
        },
        (err) => {
            this.apiHandler.errorHandler(err);
        }
    );
}
```
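`ChangeDetectorRef` only re-runs change detection for the component it is injected into, so it cannot reach a sibling. A common pattern, sketched here with hypothetical names as an assumption about the intended design, is a shared service holding an RxJS `Subject` that the form emits on and the table subscribes to:

```typescript
// users-refresh.service.ts (hypothetical; register it in a shared module's providers)
import { Injectable } from '@angular/core';
import { Subject } from 'rxjs';

@Injectable()
export class UsersRefreshService {
  private refreshSource = new Subject<void>();
  refresh$ = this.refreshSource.asObservable();

  requestRefresh(): void {
    this.refreshSource.next();
  }
}
```

In the modal/form component, call `this.usersRefresh.requestRefresh()` after a successful save (inside the `subscribe` success callback); in the table component, subscribe with `this.usersRefresh.refresh$.subscribe(() => this.loadUsers())`, where `loadUsers()` stands in for whatever method reloads the table's data.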
2018/03/15
276
1,150
<issue_start>username_0: I have been using SSIS for VS 2017 for about a month without troubles. Suddenly, when I try to create a new SSIS project the following error dialog box appears:

> Could not load file or assembly 'Microsoft.DataTransformationServices.Wizards' or one of its dependencies. The system cannot find the file specified.

I have tried to uninstall the SSIS package and to reinstall VS 2017, but the error still appears. Could you help me with it?<issue_comment>username_1: Actually the solution for me was simple. Somehow the Integration Services extension had been disabled. Re-enable it.

Open your Visual Studio SSDT, go to **Tools >> Extensions and Updates** and enable "Microsoft Integration Services Projects".

Upvotes: 3 <issue_comment>username_2: I was also getting the same error message when I tried to open `VS 2017 Integration Services` projects. I fixed it by enabling the feature for Integration Services in `TOOLS >> Extensions and Updates >> Integration Services >> Enable`.

Upvotes: 5 <issue_comment>username_3: Doing this worked for me: TOOLS >> Extensions and Updates >> Integration Services >> Enable.

Upvotes: 3
2018/03/15
610
1,704
<issue_start>username_0: I have a list of lists, and when I try to modify one entry I get an error. As an example, here is dummy code that produces the error:

```
list1 <- list(a = 2, b = 3)
list2 <- list(c = "a", d = "b")
mylist <- list(list1, list2)
(mylist[[1]])[[1]] <- 555
```

I can avoid this by creating a temp list from the list of lists. However, it is annoying if I have to deal with this a lot. Any ideas for one line of code that solves the issue?<issue_comment>username_1: Why the brackets? Just do `mylist[[1]][[1]] <- 555`

This corresponds to

```
list1 <- list(a = 555, b = 3)
list2 <- list(c = "a", d = "b")
mylist <- list(list1, list2)
```

Upvotes: 2 <issue_comment>username_2: `(mylist[[1]])` and `mylist[[1]]` have the same value when evaluated, but when you do:

```
mylist[[1]] <- 1
```

the parser calls the function `[[<-` and executes:

```
`[[<-`(mylist,1,value=1)
```

Whereas when you do:

```
(mylist[[1]]) <- 1
```

the parser calls the function `(<-`, which is not defined (you'd have the same error with `(x) <- 1`).

`(mylist[[1]])[[1]] <- 1` also calls `(<-`, though it calls `[[<-` first. I don't know exactly how the parser works, but if you redefine:

```
`[[<-` <- function(a,b,value) {print("hello");base::`[[<-`(a,b,value)}
```

you will see it prints before giving an error.

```
(mylist[[1]])[[1]] <- 1
[1] "hello"
Error in (mylist[[1]])[[1]] <- 1 : could not find function "(<-"
```

Make sure to clean this up:

```
rm(`[[<-`)
```

This would make your code work, though you should really just drop the parentheses:

```
`(<-` <- function(x,value) `<-`(x,value)
(mylist[[1]])[[1]] <- 555 #works
```

Upvotes: 2 [selected_answer]
2018/03/15
538
1,698
<issue_start>username_0: I'm trying to change the `v-model` of a component from the parent component, but I'm not getting it to work. In the parent component I have a `showProgress` variable; when I change it to `true`, I want the child's `v-model` to switch to `true`.

**ProgressModal.vue**

```
export default {
  name: 'progress-modal',
  props: ['title', 'text'],
  data: () => ({
    show: true
  }),
  methods: {
  }
}
```

I already tried to use

```
```

instead of `v-model` in `v-dialog`, but it does not work :(<issue_comment>username_1: To enable usage of `v-model` by the parent, you have to define a `value` prop in the child and use it.

```html
...
export default {
  name: 'progress-modal',
  props: ['title', 'text', 'value'], // added 'value'
...
```

This way, when you use:

```html
```

...the `value` inside `progress-modal` will have the value of the parent's `showProgress`.

Keeping it named `show`
=======================

To use another internal name instead of `value` you can [**declare the `model` option**](https://v2.vuejs.org/v2/guide/components.html#Customizing-Component-v-model) in the component.

```html
...
export default {
  name: 'progress-modal',
  props: ['title', 'text', 'show'], // added 'show'
  model: {        // added model option
    prop: 'show'  //
  },              //
  data: () => ({ }), // in this case, remove show from data
...
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Pass the `value` prop as `value` to the `v-dialog` component, and re-emit input from the `v-dialog` component:

```
//CustomDialog.vue
...
props:['value']
```

and add v-model to your parent (custom dialog)

```
//Parent.vue
```

[Example](https://codepen.io/anon/pen/geMBJb)

Upvotes: 2
2018/03/15
1,202
4,358
<issue_start>username_0: I try to replace ">>" with "hello". It would not work. Why? Any hint appreciated. Thanks.

```
a string with >> in it.
Replace ">>" with "hello"
function myFunction() {
  var str = document.getElementById("demo").innerHTML;
  var res = str.replace(">>", "hello");
  document.getElementById("demo").innerHTML = res;
}
```<issue_comment>username_1: You're getting HTML entities; use `.textContent` instead.

`.innerHTML` returns the string parsed by the browser, so, for example, this letter `á` will be converted to `&aacute;`. Because of that, you're getting `&gt;&gt;` when using `innerHTML`.

```html
a string with >> in it.
Replace ">>" with "hello"
function myFunction() {
  console.log(document.getElementById("demo").innerHTML); // just to illustrate!
  var str = document.getElementById("demo").textContent;
  var res = str.replace(">>", "hello");
  document.getElementById("demo").textContent = res;
}
```

Aside note
==========

* The next time in your development, use the function `addEventListener` to bind the events to your elements.
* Try to avoid the repeated lookup of elements, so store `document.getElementById("demo")` in a variable to avoid repeated searches of the current DOM.

```js
document.querySelector('button').addEventListener('click', myFunction);

function myFunction() {
  var demoElement = document.getElementById("demo");
  console.log(demoElement.innerHTML); // just to illustrate!
  var str = demoElement.textContent;
  var res = str.replace(">>", "hello");
  demoElement.textContent = res;
}
```

```html
a string with >> in it.
Replace ">>" with "hello"
```

Resources
=========

* [`Element.innerHTML`](https://developer.mozilla.org/en-US/docs/Web/API/Element/innerHTML)

> The Element property `innerHTML` is used to get or set a string representing serialized HTML describing the element's descendants.

* [`Node.textContent`](https://developer.mozilla.org/en-US/docs/Web/API/Node/textContent)

> The `Node.textContent` property represents the text content of a node and its descendants.

* [`EventTarget.addEventListener()`](https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener)

Upvotes: 2 <issue_comment>username_2: The problem is that you are getting and setting **[`.innerHTML`](https://developer.mozilla.org/en-US/docs/Web/API/Element/innerHTML)**, which parses the string as HTML. And, of course, `>` has a special meaning in HTML.

Instead, use **[`.textContent`](https://developer.mozilla.org/en-US/docs/Web/API/Node/textContent)**, which treats the string as raw text with no parsing done to it.

```js
function myFunction() {
  var str = document.getElementById("demo").textContent;
  var res = str.replace(">>", "hello");
  document.getElementById("demo").textContent = res;
}
```

```html
a string with >> in it.
Replace ">>" with "hello" ``` When the HTML parser looks at a string as HTML and finds valid HTML in the string, it parses it correctly as HTML, but if it finds stray `<` and `>` that don't make valid HTML, it escapes them into **[HTML entities](https://developer.mozilla.org/en-US/docs/Glossary/Entity)** of `<` and `>`, which you can see here: ```js // With .innerHTML: // Valid HTML in the string returns the string and the HTML console.log(document.querySelector("div").innerHTML); // HTML syntax, but not valid HTML, returns HTML entities console.log(document.querySelector("p").innerHTML); // With .textContent: // Valid HTML is not returned - only the text console.log(document.querySelector("div").textContent); // HTML Syntax, but not valid HTML, returns unaltered string console.log(document.querySelector("p").textContent); ``` ```css h1 { font-size:1.1em; } ``` ```html Both of the following will have .innerHTML used on them: ======================================================== A string with valid HTML in it. A string with >>> HTML syntax <<<, but not valid HTML in it. ``` Upvotes: 2 <issue_comment>username_3: The characters are encoded if you read them via .innerHtml. [![enter image description here](https://i.stack.imgur.com/4DvgT.png)](https://i.stack.imgur.com/4DvgT.png) Upvotes: 2 [selected_answer]
2018/03/15
423
1,452
<issue_start>username_0: I have configured some vhosts like this (the `<VirtualHost>`/`<Directory>` wrappers shown here follow the standard layout):

```
<VirtualHost *:443>
  ServerName test.mywebsite.com
  ServerAlias test.mywebsite.com
  DocumentRoot /hosting/test
  <Directory /hosting/test>
    AllowOverride All
    Order Allow,Deny
    Allow from All
    Require all granted
  </Directory>
  SSLEngine on
  SSLCertificateFile /usr/local/apache/ssl/mywebsite.com.pem
  SSLCertificateKeyFile /usr/local/apache/ssl/mywebsite.com.key
</VirtualHost>
```

Now I need to display a default webpage if the domain root folder does not have an index page like index.php or index.html. Currently, if I open the website it returns 403 Forbidden when index.html/php is not present.

My idea is: when a site folder has no index.html/php, show defaultwebpage.html from /etc/var/http/hdocs/default.html. How can I do this?

Apache's config.conf is configured with a default root, but if I put the index.html there it is not displayed when the site's own index is missing.

```
DocumentRoot "/data/www/default"
```<issue_comment>username_1: Couldn't you use the ErrorDocument directive to redirect all 403s to a certain default page? If that default page is outside your webroot, use Alias to bring it in.

So

Alias /specialindex /etc/var/http/hdocs/

and

ErrorDocument 403 /specialindex/default.html

Looks strange, but should work. But it catches all 403s, of course...

Upvotes: 1 <issue_comment>username_2: The solution:

```
<LocationMatch "^/+$">
    Options -Indexes
    ErrorDocument 403 /.noindex.html
</LocationMatch>

<Directory /usr/share/httpd/noindex>
    AllowOverride None
    Require all granted
</Directory>

Alias /.noindex.html /usr/share/httpd/noindex/index.html
```

Upvotes: 1 [selected_answer]
2018/03/15
1,057
4,115
<issue_start>username_0: I am a complete newbie in Flutter, but I already like it. The question is: why is the column not expanding to the full height?

[code is on gist.github](https://gist.github.com/kitttn/bac291c4856cabb1114aefe929c0ac90)

![screenshot](https://i.stack.imgur.com/gnDIQ.jpg)<issue_comment>username_1: A [Column](https://docs.flutter.io/flutter/widgets/Column-class.html) widget is flexible - it will try to take up as much space as it needs, but no more. If you want it to take up all available space, then wrap it in an [Expanded](https://docs.flutter.io/flutter/widgets/Expanded-class.html) widget.

```
createChildren() {
  return [
    Image.network(
      testImg,
      height: imageSize,
      width: imageSize,
    ),
    Padding(padding: EdgeInsets.only(left: 16.0)),
    Expanded(
      child: Column(
        crossAxisAlignment: CrossAxisAlignment.start,
        mainAxisAlignment: MainAxisAlignment.spaceBetween,
        children: [
          boldText("Some title"),
          Text("Address"),
          Text("Description")
        ],
      ),
    ),
  ];
}
```

Upvotes: 5 <issue_comment>username_2: Wrapping the column with a row with `MainAxisSize.max`:

```
Row(
  mainAxisSize: MainAxisSize.max,
  children: [
    Column(
      children: [
        _someWidget(),
        _someWidget(),
        _someWidget(),
      ],
    ),
  ],
)
```

Upvotes: 3 <issue_comment>username_3: Using `CrossAxisAlignment.stretch` works for me:

```
Column(
  crossAxisAlignment: CrossAxisAlignment.stretch,
  children: [
    Text("Some title"),
    Text("Address"),
    Text("Description")
  ],
),
```

Upvotes: 5 <issue_comment>username_4: Don't use flex=1 inside Column; here you can see an example:

```
children: [
  MyWidget(),
  Expanded(
    child: MyWidget(),
  ),
  MyWidget(),
],
```

Upvotes: 2 <issue_comment>username_5: Based on your layout I assume that your `Column` is nested inside a `Row`. Wrap your `Row` with `IntrinsicHeight` and your `Column` will automatically expand its height to be the same as the `Row`'s tallest child:

```
IntrinsicHeight(
  child: Row(
    children: [
      Container(width: 40, height: 40, color: Colors.blue),
      Container(width: 40, height: 60, color: Colors.blue), // tallest child
      Column( ... ), // height will be 60
    ],
  ),
)
```

Upvotes: 2 <issue_comment>username_6: You can wrap the `Column` inside an infinite-width `SizedBox`.

```
SizedBox(
  width: double.infinity,
  child: Column(
    crossAxisAlignment: CrossAxisAlignment.center,
    mainAxisAlignment: MainAxisAlignment.center,
    children: [
      ...
    ]
  )
)
```

The above code snippet will place the children at the center of the `SizedBox`. It'll work especially well if you want to use a `Column` as the body of your `Scaffold`. In that case children will show up at the center of the screen.

Upvotes: 2 <issue_comment>username_7: The code linked in the question is no longer available, so I can't give you a full working example for your case. However, one thing that will likely work is using `mainAxisAlignment` as follows:

```dart
Column(
  mainAxisAlignment: MainAxisAlignment.spaceBetween,
  mainAxisSize: MainAxisSize.max,
  children: [
    Text("Some title"),
    Text("Address"),
    Text("Description")
  ],
),
```

(`mainAxisSize: MainAxisSize.max` is actually the default value, but this only works if you do not set it to `MainAxisSize.min`, so I included it here.)

Setting `mainAxisAlignment` to `MainAxisAlignment.spaceAround` or `MainAxisAlignment.spaceEvenly` will also expand the column, but spacing between/around children will be handled differently. Try it out and see which you like best!
This will cause the column to take all the space it can - if it still doesn't expand then that is due to its parent widget, and we can no longer access the code so I can't speak to that. Upvotes: 0
2018/03/15
920
2,836
<issue_start>username_0: I'd like to save a calculated histogram to a file so that I can reopen it without having to recompute it, but I'm not sure how I would go about saving and then reading it again.

```
image_path = "/Users/..../test.jpg"
image = cv2.imread(image_path, 0)
if image is not None:
    hist = cv2.calcHist([image], [0], None, [256], [0, 256])
    cv2.imwrite("/Users/.../hist.jpg", hist)  # What should this be?
    hist = cv2.imread("/Users/.../hist.jpg", 0)  # And this?
```<issue_comment>username_1: Note that `cv2.calcHist` returns an array of counts, not an image, so you cannot save it with `cv2.imwrite`; rather, use a standard data file, such as CSV (comma separated values) for instance.

If it's OK for you to store only one histogram per file, the easiest solution is to use a simple text file:

```
import cv2
import numpy as np

image = cv2.imread(image_path)
hist = cv2.calcHist([image], [0], None, [256], [0,256])
with open('histo.txt', 'w') as file:
    file.write(str(hist.ravel().tolist()))  # save 'hist' as a list string in a text file
```

then later:

```
with open('histo.txt', 'r') as file:
    hist = np.array(eval(file.read()))  # read list string and convert to array
```

On the other hand, if your goal is not to **save** but to **plot** the histogram, matplotlib may be the easiest tool for that. This code snippet plots all three histograms for the R, G and B channels of your image:

```
import cv2
import numpy as np
from matplotlib import pyplot as plt

image = cv2.imread(image_path)
colors = ('b','g','r')
for n, color in enumerate(colors):
    hist = cv2.calcHist([image], [n], None, [256], [0,256])
    plt.plot(hist, color=color)
plt.xlim([0, 256])
plt.show()
```

Upvotes: 2 <issue_comment>username_2: Even if you wish to save the figure, `matplotlib` is a good way to do it. `matplotlib.pyplot.savefig('my_cv2hist.png')` would work for you if what you want is to save the histogram plot as an image. `imread` will work fine if you then try to read it back the way you do in your example code.

Upvotes: 0
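A simpler alternative sketch, assuming binary files are acceptable: NumPy's own serialization round-trips the exact array (dtype and shape included) without any `eval`:

```python
import numpy as np

np.save('hist.npy', hist)          # writes hist.npy next to your script
hist_loaded = np.load('hist.npy')  # identical array, ready for cv2.compareHist etc.
```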
2018/03/15
620
1,947
<issue_start>username_0: How can I grep the first 3 letters of the below output and echo the result?

Example:

```
more /etc/group | grep -i 1900
```

I get the below result:

```
h10shm:x:1900:h10adm
```

I just want the first 3 letters (in the above output, h10) of the output, and to echo it.

Regards, Satvik
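One possible approach (a sketch, not from the original thread): pipe the matching line through `cut` to keep the first three characters:

```bash
# first three characters of the matching line
grep -i 1900 /etc/group | cut -c1-3

# or capture it in a variable and echo it
prefix=$(grep -i 1900 /etc/group | cut -c1-3)
echo "$prefix"
```

`cut -c1-3` is character-based; `awk -F: '/1900/ {print substr($1, 1, 3)}' /etc/group` would do the same while also dropping the `more`, which is unnecessary in a pipeline.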
2018/03/15
4,048
13,142
<issue_start>username_0: Is it possible to use any datasets available via the `kaggle` API in Google Colab? I see the Kaggle API is used in [this Colab notebook](https://colab.research.google.com/drive/1eufc8aNCdjHbrBhuy7M7X6BGyzAyRbrF), but it's a bit unclear to me what datasets it provides access to.<issue_comment>username_1: You should be able to access any dataset on Kaggle via the API. In this example, only the datasets for competitions are being listed. You can see that datasets you can access with this command: ``` kaggle datasets list ``` You can also search for datasets by adding the -s tag and then the search term you're interested in. So this would give you a list of datasets about dogs: ``` kaggle datasets list -s dogs ``` You can find more information on the API and how to use it in the [documentation here](https://github.com/Kaggle/kaggle-api). Hope that helps! :) Upvotes: 4 <issue_comment>username_2: Have a look at [this](https://medium.com/@prakash_31206/kaggle-on-google-colab-easiest-way-to-transfer-datasets-and-remote-bash-e54c64054faa). It uses official kaggle api behind scene, but automates the process so you dont have to re-download manually every time your VM is taken away. Also, another issue i faced with using Kaggle API directly on Colab was the hassle of transferring Kaggle API token via Google Drive. Above method automates that as well. Disclaimer: I am one of the creators of Clouderizer. Upvotes: 2 <issue_comment>username_3: Step-by-step -- 1. Create an API key in Kaggle. To do this, go to kaggle.com/ and open your user settings page. [![settings nav](https://i.stack.imgur.com/jxGQv.png)](https://i.stack.imgur.com/jxGQv.png) 2. Next, scroll down to the API access section and click generate to download an API key. [![api token](https://i.stack.imgur.com/Hzlhp.png)](https://i.stack.imgur.com/Hzlhp.png) This will download a file called `kaggle.json` to your computer. You'll use this file in Colab to access Kaggle datasets and competitions. 3. Navigate to https://colab.research.google.com/. 4. Upload your `kaggle.json` file using the following snippet in a code cell: `from google.colab import files files.upload()` 5. Install the kaggle API using `!pip install -q kaggle` 6. Move the `kaggle.json` file into `~/.kaggle`, which is where the API client expects your token to be located: `!mkdir -p ~/.kaggle !cp kaggle.json ~/.kaggle/` 7. Now you can access datasets using the client, e.g., `!kaggle datasets list`. Here's a complete example notebook of the Colab portion of this process: **<https://colab.research.google.com/drive/1DofKEdQYaXmDWBzuResXWWvxhLgDeVyl>** This example shows uploading the `kaggle.json` file, the Kaggle API client, and using the Kaggle client to download a dataset. Upvotes: 8 [selected_answer]<issue_comment>username_4: `After the steps (1-6) above from username_3's answer`, to use dataset from a particular competition in colab, you can use the command: ``` !kaggle competitions download -c elo-merchant-category-recommendation ``` Here, `elo-merchant-category-recommendation` is the name of the competition. Upvotes: 1 <issue_comment>username_5: First of all, run this command to find out where this colab file exists, how it executes. [![enter image description here](https://i.stack.imgur.com/g1Hzw.png)](https://i.stack.imgur.com/g1Hzw.png) `!ls -d $PWD/*` It will show `/content/data /content/gdrive /content/models` In other words, your current directory is root/content/. Your working directory(pwd) is /content/. 
so when you do `!ls`, it will show `data gdrive models`. FYI, `!` allows you to run Linux commands inside Colab. Google Drive keeps cleaning up the /content folder. Therefore, every session you use Colab, the downloaded datasets and the kaggle.json file will be gone. That's why it's important to automate the process, so you can focus on writing code, not setting up the environment every time. Run this in a Colab code block as an example, with your own API key (open your kaggle.json file and you will find the values there).

```
# Info on how to get your api key (kaggle.json) here: https://github.com/Kaggle/kaggle-api#api-credentials
!pip install kaggle
api_token = {"username":"seunghunsunmoonlee","key":""}
import json
import zipfile
import os
with open('/content/.kaggle/kaggle.json', 'w') as file:
    json.dump(api_token, file)
!chmod 600 /content/.kaggle/kaggle.json
!kaggle config path -p /content
!kaggle competitions download -c dog-breed-identification
os.chdir('/content/competitions/dog-breed-identification')
for file in os.listdir():
    zip_ref = zipfile.ZipFile(file, 'r')
    zip_ref.extractall()
    zip_ref.close()
```

Then run `!ls` again. You will see all the data you need. Hope it helps! Upvotes: 1 <issue_comment>username_6: I combined the top response into this [**Github gist as a Colab Implementation**](https://gist.github.com/sugatoray/9244867c89fd3868ec8e5a5ba7341901#file-kaggle_import_dataset_public-ipynb). You can directly copy the code and use it. [How to Import a Dataset from Kaggle in Colab](https://gist.github.com/sugatoray/9244867c89fd3868ec8e5a5ba7341901#file-kaggle_import_dataset_public-ipynb)

Method:
=======

First, a few things you have to do:

1. Sign up for Kaggle
2. Sign up for a competition you want to access data from (for example the [LANL-Earthquake-Prediction competition](https://www.kaggle.com/c/LANL-Earthquake-Prediction)).
3. Download your credentials to access the Kaggle API as `kaggle.json`

```py
# Install kaggle packages
!pip install -q kaggle
!pip install -q kaggle-cli
```

```py
# Colab's file access feature
from google.colab import files

# Upload `kaggle.json` file
uploaded = files.upload()
```

```py
# Retrieve uploaded file
# print results
for fn in uploaded.keys():
    print('User uploaded file "{name}" with length {length} bytes'.format(
        name=fn, length=len(uploaded[fn])))

# Then copy kaggle.json into the folder where the API expects to find it.
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!ls ~/.kaggle
```

Now check if it worked!

```py
#list competitions
!kaggle competitions list -s LANL-Earthquake-Prediction
```

Upvotes: 2 <issue_comment>username_7: To download the competitive data from Kaggle on Google Colab: I'm working on Google Colab and I've been through the same problem, but I did two things.
First, you have to register your mobile number along with your country code. Second, you have to click on Late Submission on the Kaggle dataset page. Then download the kaggle.json file from Kaggle and upload kaggle.json to Google Colab. After that, run the code given below on Google Colab.

```
!pip install -q kaggle
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle competitions download -c web-traffic-time-series-forecasting
```

Upvotes: 1 <issue_comment>username_8: Detailed approach:

1. Go to My Account in your profile [![enter image description here](https://i.stack.imgur.com/6Lm8E.png)](https://i.stack.imgur.com/6Lm8E.png)
2. Scroll down until you find the option **Create New API Token**; this will download a file called kaggle.json [![enter image description here](https://i.stack.imgur.com/SSMFQ.png)](https://i.stack.imgur.com/SSMFQ.png)
3. Go to Colab and upload the file kaggle.json [![enter image description here](https://i.stack.imgur.com/B9iya.png)](https://i.stack.imgur.com/B9iya.png)
4. pip install kaggle [![enter image description here](https://i.stack.imgur.com/2mHvK.png)](https://i.stack.imgur.com/2mHvK.png)
5. Create a new folder named kaggle, copy kaggle.json into the kaggle folder, and set read-write permissions only for you (the user). [![enter image description here](https://i.stack.imgur.com/SilkW.png)](https://i.stack.imgur.com/SilkW.png)
6. Go to the Kaggle website. For example, if you want to download any data, click on the three dots on the right-hand side of the screen, then click **copy API command** [![enter image description here](https://i.stack.imgur.com/Abgwo.png)](https://i.stack.imgur.com/Abgwo.png)
7. Go to Colab and paste the API command [![enter image description here](https://i.stack.imgur.com/LEXoR.png)](https://i.stack.imgur.com/LEXoR.png)
8. When you do an `!ls`, you will see that our download is a zip file. [![enter image description here](https://i.stack.imgur.com/Damm4.png)](https://i.stack.imgur.com/Damm4.png)
9. To unzip the file, use the following command [![enter image description here](https://i.stack.imgur.com/Umhm2.png)](https://i.stack.imgur.com/Umhm2.png)
10. Now, when you do `!ls`, you'll find our csv file extracted from the zip file. [![enter image description here](https://i.stack.imgur.com/6Y3Jz.png)](https://i.stack.imgur.com/6Y3Jz.png)
11. To read the file, perform a simple `pd.read_csv` (import pandas first) [![enter image description here](https://i.stack.imgur.com/luON2.png)](https://i.stack.imgur.com/luON2.png)
12. As you see, we have successfully read our file into Colab. [![enter image description here](https://i.stack.imgur.com/ohlrD.png)](https://i.stack.imgur.com/ohlrD.png)

This downloads the Kaggle dataset into Google Colab, where you can perform analysis and build amazing machine learning models or train neural networks. Happy Analysis!!! Upvotes: 3 <issue_comment>username_9: The most important part comes before downloading the files: on the Kaggle webpage, in the Competition section, you must click **Late Submission or Join Competition** and **ACCEPT THE RULES AND CONDITIONS ON THE KAGGLE COMPETITION WEBPAGE**. If not, after copying the API file and launching the dataset download, a 403 error shows up as the result. Upvotes: 0 <issue_comment>username_10: A hacky way:

1. Go to the dataset page after login
2. Open Chrome Developer Tools, then go to the Network pane
3. Click the Download button on Kaggle
4. When clicked, you will see many requests in the Network pane; find the request starting `archive.zip`
5. Right-click on that request, then Copy -> Copy as cURL (bash). Now you have copied the command
6. On Colab, paste the command and append an `!` to the beginning of the command, then run it

This is definitely a less reliable way than the API, but it still remains an option. Upvotes: 0 <issue_comment>username_11: I find the accepted answer to be very comprehensive, but would like to add that:

```
!kaggle competitions download -c dogs-vs-cats
```

or most other downloads still won't work. You will probably get the following error:

> 403 - Forbidden

which is not very verbose. It wants to say: "Please visit kaggle.com and accept the rules (e.g. for that competition). You cannot accept them through the API!"
It is explicitly stated in the docs (see [Public API documentation | Kaggle](https://www.kaggle.com/docs/api)):

> Just like participating in a Competition normally through the user interface, you must read and accept the rules in order to download data or make submissions. You cannot accept Competition rules via the API. You must do this by visiting the Kaggle website and accepting the rules there.

Yes, this could have been a comment, but I lack the reputation to comment. Upvotes: 0 <issue_comment>username_12:

```
import os
os.makedirs("/content/.kaggle/")
import json
token = {"username":"your_username_here","key":"your_kaggle_key_here"}
with open('/content/.kaggle/kaggle.json', 'a+') as file:
    json.dump(token, file)
import shutil
os.makedirs("/.kaggle/")
src="/content/.kaggle/kaggle.json"
des="/.kaggle/kaggle.json"
shutil.copy(src,des)
os.makedirs("/root/.kaggle/")
!cp /content/.kaggle/kaggle.json ~/.kaggle/kaggle.json
!kaggle config set -n path -v /content
#https://towardsdatascience.com/setting-up-kaggle-in-google-colab-ebb281b61463
!kaggle datasets download -d xhlulu/siim-covid19-resized-to-512px-png
```

Works for me on Colab as of 29-05-21! Upvotes: 0 <issue_comment>username_13: A quick guide to using Kaggle datasets inside Google Colab via the Kaggle API

(1) Download the Kaggle API token. * Go to “Account”, go down the page, and find the “API” section. * Click the “Create New API Token” button. * The “kaggle.json” file will be downloaded. ![](https://miro.medium.com/max/1400/1*-D3PXoaRd-SYKM7QN933TA.png)

(2) Mount Google Drive to the Colab notebook. * This means giving the Colab notebook access to the files in your Google Drive.

```
from google.colab import drive
drive.mount("/content/gdrive", force_remount=True)
```

(3) Upload the “kaggle.json” file into the folder in Google Drive where you want to download the Kaggle dataset. ![](https://miro.medium.com/max/630/1*-ah0kR4rcCbCaCiTihV9Aw.png)

(4) Install the Kaggle API.

```
!pip install kaggle
```

(5) Change the current working directory to where you want to download the Kaggle dataset.

```
%cd /content/gdrive/MyDrive/DataSets/house_price_data/
```

(6) Run the following code to configure the path to “kaggle.json”.

```
import os
os.environ['KAGGLE_CONFIG_DIR'] = "/content/gdrive/MyDrive/DataSets/house_price_data/"
```

(7) Download the dataset. ![](https://miro.medium.com/max/1400/1*gBoEuFY-uzihiBX_4kyhcw.png)

```
!kaggle competitions download -c house-prices-advanced-regression-techniques
```

Upvotes: 1
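Pulling the recurring steps above together, here is one minimal end-to-end sketch of the Colab-side setup. It assumes `kaggle.json` has already been uploaded to the current working directory; the search term at the end is just an example:

```python
# Consolidated sketch of the kaggle.json setup described in the answers above.
import json
import os
import stat

kaggle_dir = os.path.expanduser('~/.kaggle')
os.makedirs(kaggle_dir, exist_ok=True)

# Copy the uploaded credentials to where the Kaggle client expects them.
with open('kaggle.json') as src:
    creds = json.load(src)
dest = os.path.join(kaggle_dir, 'kaggle.json')
with open(dest, 'w') as out:
    json.dump(creds, out)

# Restrict permissions to the owner (0600), as the client requires.
os.chmod(dest, stat.S_IRUSR | stat.S_IWUSR)

# The client itself is then driven from shell cells, e.g.:
#   !pip install -q kaggle
#   !kaggle datasets list -s dogs
```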
2018/03/15
4,238
13,894
<issue_start>username_0: This is my jQuery code.

```
function hideawsbutton() {
    $("#aws-display-table tbody tr").each(function(index){
        parentdata = $.trim($(this).children('td:nth-child(7)').html());
        console.log('parentdata ==> ' + parentdata);
        if( parentdata.length > 5 ) {
            $(this).children('td:nth-child(8)').html('');
        }
    });
}
```

The code checks for a value in a specific column `td:nth-child(7)` and, if its length is greater than 5, sets the adjoining column's `td:nth-child(8)` html to null/empty. I assumed that the table will always contain plain values, but I have been told that the cell will have the data embedded within a tag, as shown below. This is my console log output; as you can see, the data is embedded in between the span tag.

```
parentdata ==> 15-02-2018-15:31:44PM_Presentation1.pptx
parentdata ==>
```

Can someone tell me how I can access the data within the span tag?
2018/03/15
712
2,480
<issue_start>username_0: I am trying to label a slider as in [jQuery UI Slider Labels Under Slider](https://stackoverflow.com/questions/10224856/jquery-ui-slider-labels-under-slider). I am not dealing with a ui-slider however, but rather with the slider widget from oTree. The excellent answer to the question [Mandatory slider in oTree/django](https://stackoverflow.com/questions/39113641/mandatory-slider-in-otree-django) explains how to use a jQuery selector to select an oTree slider:

```
$('[data-slider] input[type="range"]')
```

I have a slider that shows the currently selected value (0-100). What I would like to do is add a few labels below the slider (e.g. "cold", "neutral", "warm" if the slider value is a temperature). I tried the following code to select the oTree slider and append labels, but no labels appear.

```
{% formfield player.mySliderInput type="Slider" %}

{% block scripts %}
$(document).ready(function() {
    var mylabel = $('<label>25</label>').css('left', '25%');
    $('[data-slider] input[type="range"]').append(mylabel);
});
{% endblock %}
```

The HTML of the page looks as follows:

```
How hot is it? (0-100):
```

I am unsure of which object (div? input?) to select and append the labels to.<issue_comment>username_1: Since then, oTree has slightly changed the form class naming. Use the code below:

```
{% block scripts %}
$(function () {
    var SliderTouched = false;
    var selector = $('[data-slider] input[type="range"]');
    selector.change(function () {
        SliderTouched = true;
    });
    $("#form").submit(function (event) {
        if (!SliderTouched) {
            event.preventDefault();
        }
    });
});
{% endblock %}
```

What it does is the following: 1. It sets a variable to `false` when the page is loaded. 2. While this variable (`SliderTouched`) remains false, the form will not submit. 3. When you touch the slider, the variable is set to true so the form can be submitted. Upvotes: 2 <issue_comment>username_2: Update: As per the OP's comments, the selector that ended up being appended to for the oTree slider was `.controls`

```
var $label = $('<label>sometext</label>').css('left', '10%');
$('.controls').append($label);
```

--- Original: Using the has attribute selector (<https://api.jquery.com/has-attribute-selector/>) you can select by an attribute. Given an element carrying that attribute, you can select it with

```
[data-slider]
```

Adding in `input[type="range"]` will further qualify the result to only include inputs of type range. Upvotes: 2 [selected_answer]
2018/03/15
346
1,250
<issue_start>username_0: I've saved a .pgpass (properly configured) file in ``` `/Users/username/` `/anaconda` `/anaconda/bin/` ``` but the postgresql connection still requests my username and password. I'm using Anaconda and Spyder IDE. Where should I save the .pgpass file?<issue_comment>username_1: Check permissions. The `.pgpass` file has to have no group nor other permissions: ``` chmod u-x,go-rwx ~/.pgpass ``` A command line psql client would actually warn you about it: `WARNING: password file "/home/tometzky/.pgpass" has group or world access; permissions should be u=rw (0600) or less` Upvotes: 2 <issue_comment>username_2: Like [the documentation](https://www.postgresql.org/docs/current/static/libpq-pgpass.html) says: * The program must be using the C client library `libpq` to connect to PostgreSQL (the JDBC driver, for example, doesn't use `libpq`). * The `.pgpass` file must be in the home directory of the user that owns the process which uses `libpq` to connect to PostgreSQL. * You can override the default location by either putting the variable `PGPASSFILE` in the process' environment or (from v10 on) with the connection parameter `passfile`. * The `.pgpass` file must have permissions 0600 or less. Upvotes: 3
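On the Python side specifically (e.g. from Spyder), the password file location can also be overridden per process. A minimal sketch, assuming a psycopg2 connection; the path, database and user names are placeholders:

```python
# Point libpq at the password file before connecting; it must be chmod 0600.
import os
import psycopg2

os.environ['PGPASSFILE'] = '/Users/username/.pgpass'  # placeholder path

# No password argument: libpq looks it up in the password file.
conn = psycopg2.connect(host='localhost', dbname='mydb', user='myuser')
```

This works because psycopg2 connects through libpq, which honors `PGPASSFILE`; drivers that do not use libpq ignore both the variable and `.pgpass`.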
2018/03/15
2,684
10,597
<issue_start>username_0: I'm trying to port some old VB6 code to C# and .NET. There are a number of places where the old code uses a `RecordSet` to execute a SQL query and then loop through the results. No problem so far, but inside the loop the code makes changes to the current row, updating columns and even deleting the current row altogether. In .NET, I can easily use a `SqlDataReader` to loop through SQL query results, but updates are not supported. So I've been playing with using a `SqlDataAdapter` to populate a `DataSet`, and then looping through the rows in a `DataSet` table. But the `DataSet` doesn't seem very smart compared to VB6's old `RecordSet`. For one thing, I need to provide update queries for each type of edit I have. Another concern is that a `DataSet` seems to hold everything in memory at once, which might be a problem if there are many results. What is the best way to duplicate this behavior in .NET? The code below shows what I have so far. Is this the best approach, or is there another option?

```
using (SqlConnection connection = new SqlConnection(connectionString))
{
    DataSet dataset = new DataSet();
    using (SqlDataAdapter adapter = new SqlDataAdapter(new SqlCommand(query, connection)))
    {
        adapter.Fill(dataset);
        DataTable table = dataset.Tables[0];
        foreach (DataRow row in table.Rows)
        {
            if ((int)row["Id"] == 4)
            {
                if ((int)row["Value1"] > 0)
                    row["Value2"] = 12345;
                else
                    row["Value3"] = 12345;
            }
            else if ((int)row["Id"] == 5)
            {
                row.Delete();
            }
        }
        // TODO:
        adapter.UpdateCommand = new SqlCommand("?", connection);
        adapter.DeleteCommand = new SqlCommand("?", connection);
        adapter.Update(table);
    }
}
```

*Note: I'm new to the company and can't very well tell them they have to change their connection strings or must switch to Entity Framework, which would be my choice. I'm really looking for a code-only solution.*<issue_comment>username_1: **Your constraints:**

* Not using Entity Framework
* DataSet seems to hold everything in memory at once, which might be a problem if there are many results.
* a code-only solution (no external libraries)

**Plus**

* The maximum number of rows that a DataTable can store is 16,777,216 rows [MSDN](https://msdn.microsoft.com/en-us/library/system.data.datatable.aspx#Anchor_9)
* To get high performance

```
//the main class to update/delete sql batches without using DataSet/DataTable.
public class SqlBatchUpdate
{
    string ConnectionString { get; set; }

    public SqlBatchUpdate(string connstring)
    {
        ConnectionString = connstring;
    }

    public int RunSql(string sql)
    {
        using (SqlConnection con = new SqlConnection(ConnectionString))
        using (SqlCommand cmd = new SqlCommand(sql, con))
        {
            cmd.CommandType = CommandType.Text;
            con.Open();
            int rowsAffected = cmd.ExecuteNonQuery();
            return rowsAffected;
        }
    }
}
//------------------------
// using the class to run predefined batches
public class SqlBatchUpdateDemo
{
    private string connstring = "myconnstring";

    //run batches in sequence
    public void RunBatchesInSequence()
    {
        var sqlBatchUpdate = new SqlBatchUpdate(connstring);
        //batch1
        var sql1 = @"update mytable set value2 =1234 where id =4 and Value1>0;";
        var nrows = sqlBatchUpdate.RunSql(sql1);
        Console.WriteLine("batch1: {0}", nrows);
        //batch2
        var sql2 = @"update mytable set value3 =1234 where id =4 and Value1 =0";
        nrows = sqlBatchUpdate.RunSql(sql2);
        Console.WriteLine("batch2: {0}", nrows);
        //batch3
        var sql3 = @"delete from mytable where id =5;";
        nrows = sqlBatchUpdate.RunSql(sql3);
        Console.WriteLine("batch3: {0}", nrows);
    }

    // Alternative: you can run all batches as one
    public void RunAllBatches()
    {
        var sqlBatchUpdate = new SqlBatchUpdate(connstring);
        StringBuilder sb = new StringBuilder();
        var sql1 = @"update mytable set value2 =1234 where id =4 and Value1>0;";
        sb.AppendLine(sql1);
        //batch2
        var sql2 = @"update mytable set value3 =1234 where id =4 and Value1 =0";
        sb.AppendLine(sql2);
        //batch3
        var sql3 = @"delete from mytable where id =5;";
        sb.AppendLine(sql3);
        //run all batches
        var nrows = sqlBatchUpdate.RunSql(sb.ToString());
        Console.WriteLine("all batches: {0}", nrows);
    }
}
```

I simulated that solution and it's working fine with high performance, because all updates/deletes run as a batch. Upvotes: 0 <issue_comment>username_2: I came up with an (untested) solution for a data table. It does require you to do some work, but it should generate update and delete commands for each row you change or delete automatically, by hooking up to the `RowChanged` and `RowDeleted` events of the `DataTable`. Each row will get its own command, equivalent to `ADODB.RecordSet` update / delete methods. However, unlike the `ADODB.RecordSet` methods, this class will not change the underlying database, but only create the SqlCommands to do it. Of course, you can change it to simply execute them once they are created, but as I said, I didn't test it, so I'll leave that up to you if you want to do it. However, please note I'm not sure how the `RowChanged` event will behave for multiple changes to the same row. Worst case, it will be fired for each change in the row. The class constructor takes three arguments:

1. The instance of the `DataTable` class you are working with.
2. A `Dictionary` that provides the mapping between column names and SqlDbType values
3. An optional string to represent the table name. If omitted, the `TableName` property of the `DataTable` will be used.

Once you have the mapping dictionary, all you have to do is instantiate the `CommandGenerator` class and iterate the rows in the data table just like in the question. From that point forward everything is automated. Once you have completed your iteration, all you have to do is get the sql commands from the `Commands` property and run them.
```
public class CommandGenerator
{
    private Dictionary<string, SqlDbType> _columnToDbType;
    private string _tableName;
    private List<SqlCommand> _commands;

    public CommandGenerator(DataTable table, Dictionary<string, SqlDbType> columnToDbType, string tableName = null)
    {
        _commands = new List<SqlCommand>();
        _columnToDbType = columnToDbType;
        _tableName = string.IsNullOrEmpty(tableName) ? table.TableName : tableName;
        table.RowDeleted += table_RowDeleted;
        table.RowChanged += table_RowChanged;
    }

    public IEnumerable<SqlCommand> Commands
    {
        get { return _commands; }
    }

    private void table_RowChanged(object sender, DataRowChangeEventArgs e)
    {
        _commands.Add(GenerateUpdate(e.Row));
    }

    private void table_RowDeleted(object sender, DataRowChangeEventArgs e)
    {
        _commands.Add(GenerateDelete(e.Row));
    }

    private SqlCommand GenerateUpdate(DataRow row)
    {
        var table = row.Table;
        var cmd = new SqlCommand();
        var sb = new StringBuilder();
        sb.Append("UPDATE ").Append(_tableName).Append(" SET ");
        var valueColumns = table.Columns.OfType<DataColumn>().Where(c => !table.PrimaryKey.Contains(c));
        AppendColumns(cmd, sb, valueColumns, row, ", ");
        sb.Append(" WHERE ");
        AppendColumns(cmd, sb, table.PrimaryKey, row, " AND ");
        cmd.CommandText = sb.ToString();
        return cmd;
    }

    private SqlCommand GenerateDelete(DataRow row)
    {
        var table = row.Table;
        var cmd = new SqlCommand();
        var sb = new StringBuilder();
        sb.Append("DELETE FROM ").Append(_tableName).Append(" WHERE ");
        AppendColumns(cmd, sb, table.PrimaryKey, row, " AND ");
        cmd.CommandText = sb.ToString();
        return cmd;
    }

    private void AppendColumns(SqlCommand cmd, StringBuilder sb, IEnumerable<DataColumn> columns, DataRow row, string separator)
    {
        bool first = true;
        foreach (var column in columns)
        {
            if (!first)
                sb.Append(separator);
            first = false;
            sb.Append(column.ColumnName).Append(" = @").Append(column.ColumnName);
            // deleted rows only expose their original values
            var version = row.RowState == DataRowState.Deleted ? DataRowVersion.Original : DataRowVersion.Default;
            cmd.Parameters.Add("@" + column.ColumnName, _columnToDbType[column.ColumnName]).Value = row[column, version];
        }
    }
}
```

As I wrote, this is completely untested, but I think it should be enough to at least show the general idea. Upvotes: 1 <issue_comment>username_3: ADO.NET `DataTable` and `DataAdapter` provide the closest equivalent of the ADO `Recordset` while applying the separation of concerns principle. `DataTable` contains the data and provides the change tracking information (similar to EF internal entity tracking) while `DataAdapter` provides a standard way to populate it from the database (`Fill` method) and apply changes back to the database (`Update` method). With that being said, what you are doing is the intended way to port the ADO `Recordset` to ADO.NET. The only thing you've missed is that you are not always required to specify `Insert`, `Update` and `Delete` commands. As soon as your query is querying a single table (which I think was a requirement to get an updateable `Recordset` anyway), you can use another ADO.NET player called [`DbCommandBuilder`](https://learn.microsoft.com/en-us/dotnet/api/system.data.common.dbcommandbuilder?view=netframework-4.7.1):

> Automatically generates single-table commands used to reconcile changes made to a `DataSet` with the associated database.

Every database provider provides an implementation of this abstract class.
The [MSDN example for SqlCommandBuilder](https://learn.microsoft.com/en-us/dotnet/api/system.data.sqlclient.sqlcommandbuilder?f1url=https%3A%2F%2Fmsdn.microsoft.com%2Fquery%2Fdev15.query%3FappId%3DDev15IDEF1%26l%3DEN-US%26k%3Dk(System.Data.SqlClient.SqlCommandBuilder)%3Bk(vs.objectbrowser)%3Bk(TargetFrameworkMoniker-.NETFramework%2CVersion%3Dv4.6)%26rd%3Dtrue%26f%3D255%26MSPPError%3D-2147217396&view=netframework-4.7.1#examples) is almost identical to your sample, so all you need before calling `Update` is (a bit counterintuitive): ``` var builder = new SqlCommandBuilder(adapter); ``` and that's it. Behind the scenes, > > The `DbCommandBuilder` registers itself as a listener for RowUpdating events that are generated by the DbDataAdapter specified in this property. > > > and dynamically generates the commands if they are not specifically set in the data adapter by you. Upvotes: 4 [selected_answer]
2018/03/15
439
1,359
<issue_start>username_0: I am trying to find a regular expression which matches the start of a string but does not allow a specific character after that. By this I should be achieving same-level routes. Example: Let's say I have the following strings and I need to get routes starting from LAX with no stops.

1. LAX-LAS-JFK
2. LAX-PHX-JFK
3. LAX-JFK
4. LAX-PHX

The regex should match only routes 3 and 4. I have tried `^LAX-([^-])*` and it didn't work for me when I cross-checked on <https://www.regextester.com/15>.<issue_comment>username_1: You can try this:

```
^LAX(-[A-Z]+){1}$
```

This matches

```
LAX-JFK
LAX-PHX
```

but not

```
LAX-LAS-JFK
LAX-PHX-JFK
```

Demo: [regex101](https://regex101.com/r/9TcBBS/1)

---

Explanation:

* `^` start
* `$` end
* `{1}` exact number of repetitions of a pattern, in this case `1`

Fun fact: you can replace the `1` by `(number of stops + 1)`, and it will select only the routes with the defined number of stops ([another example](https://regex101.com/r/9TcBBS/2)). Upvotes: 3 [selected_answer]<issue_comment>username_2: So it sounds like you want to match strings that only have 1 dash. Perhaps something like `^(LAX)(-{1})[a-zA-Z]+$` would work? It will check to make sure the string LAX is at the beginning, followed by **one** dash and ending with alphabetical characters. Upvotes: 1
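For reference, the accepted pattern is easy to exercise from Python; a quick sketch using the sample routes from the question:

```python
import re

# Matches routes that start at LAX and consist of exactly one further leg,
# i.e. nonstop routes.
pattern = re.compile(r'^LAX(-[A-Z]+){1}$')

routes = ['LAX-LAS-JFK', 'LAX-PHX-JFK', 'LAX-JFK', 'LAX-PHX']
for route in routes:
    print(route, bool(pattern.match(route)))
# Only LAX-JFK and LAX-PHX print True.
```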
2018/03/15
834
2,153
<issue_start>username_0: I'm new to Python and need to get a value from another value in the same block. This is an example of the JSON:

```
{
   u'Name': u'test',
   u'ProjectPath': u'/data/compose/test_pz4u6zug8swd2isfkcrzh9kjl',
   u'ResourceControl': {
      u'ResourceId': u'test',
      u'UserAccesses': [ ],
      u'AdministratorsOnly': True,
      u'TeamAccesses': [ ],
      u'SubResourceIds': [ ],
      u'Type': 6,
      u'Id': 8
   },
   u'EntryPoint': u'docker-compose.yml',
   u'Env': [ ],
   u'SwarmId': u'pz4u6zug8swd2isfkcrzh9kjl',
   u'Id': u'test_pz4u6zug8swd2isfkcrzh9kjl'
},
{
   u'Name': u'ycf',
   u'ProjectPath': u'/data/compose/ycf_pz4u6zug8swd2isfkcrzh9kjl',
   u'ResourceControl': {
      u'ResourceId': u'ycf',
      u'UserAccesses': [ ],
      u'AdministratorsOnly': True,
      u'TeamAccesses': [ ],
      u'SubResourceIds': [ ],
      u'Type': 6,
      u'Id': 5
   },
   u'EntryPoint': u'docker-compose.yml',
   u'Env': None,
   u'SwarmId': u'pz4u6zug8swd2isfkcrzh9kjl',
   u'Id': u'ycf_pz4u6zug8swd2isfkcrzh9kjl'
}
```

I need to get the value of `Id` from the field `Name`; e.g. I need to know the value of `Id` (`test_pz4u6zug8swd2isfkcrzh9kjl`) for `Name` 'test'. Is that possible? I get this JSON from a request:

```
r_get = requests.get('http://www.example.com:9000/api/endpoints/1/stacks')
id_response = json.loads(r_get.content.decode('utf-8'))
print(id_response)
```

<issue_comment>username_1: If the result is a list of dictionaries, you can use a comprehension to rearrange the data:

```
id_for_name = {item[u'Name']: item[u'Id'] for item in id_response}
print(id_for_name)
# {u'test': u'test_pz4u6zug8swd2isfkcrzh9kjl', u'ycf': u'ycf_pz4u6zug8swd2isfkcrzh9kjl'}
print(id_for_name['test'])
# test_pz4u6zug8swd2isfkcrzh9kjl
```

Upvotes: 1 <issue_comment>username_2: You can cycle through all your records and look for the first case where `Name` matches your desired value.

```
for element in id_response:
    if element[u'Name'] == u'test':
        my_id = element[u'Id']
        break
else:
    # will only get here if we never break from the loop
    raise ValueError('No element named "test"')

print(my_id)
```

Upvotes: 3 [selected_answer]
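A compact alternative that combines both answers is a generator expression with `next`, which stops at the first match and lets you choose a default instead of raising:

```python
# First matching Id, or None if no element has Name == 'test'.
my_id = next(
    (element[u'Id'] for element in id_response if element[u'Name'] == u'test'),
    None,
)
print(my_id)  # test_pz4u6zug8swd2isfkcrzh9kjl
```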
2018/03/15
339
1,116
<issue_start>username_0: It's technically very simple code for displaying two images next to each other:

```html
![](https://via.placeholder.com/518x278.jpg)
![](https://via.placeholder.com/1411x278.jpg)
```

However, the images do not fill the `col` container width 100% (as evidenced by the still-visible background color). Why aren't they stretching to fill the container? I have tried setting `margin` to `0px` and explicitly stating `width: 100%;` as you see, but it doesn't work. I have no other formatting except the Bootstrap default min.css.<issue_comment>username_1: `col-md-3` and `col-md-9` both have 15px padding. Adding `no-gutters` to the row div will remove the padding. Upvotes: 2 <issue_comment>username_2:

> Why aren't they stretching to fill the container?

That's because a Bootstrap row normally has gutters in it (which means each column in a row has 15px left and right padding). Add the class `no-gutters` to the row to remove the gutters, and add the `px-0` class to the container to remove the horizontal padding on the container if needed. Upvotes: 2 [selected_answer]
2018/03/15
1,188
3,996
<issue_start>username_0: I'm not very knowledgeable when it comes to this, but I am learning. I have tried everything I can think of to add a variable to this, so that I don't have to edit this file. I wish to put the API key in a separate config file, just to keep things neat and make changes easier, rather than having to remember all the files that have to be edited for these API keys. Here is the code I cannot figure out.

```
private $_nodes = array("https://omdbapi.com/?apikey=123456&i=%s");
```

I tried this, but no joy. Can't figure it out. I tried many variations. Maybe this isn't even possible in an array; like I said, I have a lot to learn.

```
private $_nodes = array("https://omdbapi.com/?apikey='"$site_config['OMDBAPIKEY']"'&i=%s");
```

I have managed to move all other API keys to the config, but this one just won't work.<issue_comment>username_1: I think I see two problems. One certainly is a syntax error. You need to use the `.` operator to join the strings together.

```
private $_nodes = array("https://omdbapi.com/?apikey='" . $site_config['OMDBAPIKEY'] . "'&i=%s");
```

But the `private` implies this is initializing an object property. Using your array value there will cause

> Fatal error: Constant expression contains invalid operations

If you want to assign it using that value, you can do it in the constructor instead.

```
private $_nodes;

function __construct() {
    $this->_nodes = array("https://omdbapi.com/?apikey='" . $site_config['OMDBAPIKEY'] . "'&i=%s");
}
```

(`$site_config` is undefined in this example, but I don't really know where it comes from. You'll have to define it in the scope of the constructor somehow. Maybe pass it as a parameter.) Upvotes: 2 <issue_comment>username_2: As [username_1's answer](https://stackoverflow.com/a/49310641/417562) says, the first problem you have is that you have to concatenate (join) the strings together using the concatenation `.` operator. As that answer says, you also can't define an object property with the value of a variable in a class definition. You *can*, however, use a constant value. So you would need to first of all change `$site_config` to be a `const` rather than a variable:

```
const Site_Config = array(
    'OMDBAPIKEY' => 123456,
    /* other definitions */
);

class SomeUsefulClass {
    private $_nodes = array("https://omdbapi.com/?apikey='" . Site_Config['OMDBAPIKEY'] . "'&i=%s");
}
```

Whether this is a good idea or not is decidedly open to question... Upvotes: 0 <issue_comment>username_3: The problem with your code is the concatenation of strings. In PHP you can do it in several ways. As of PHP 5.6.0 it's possible to use scalar expressions involving numeric and string literals and/or constants in the context of a class constant or property default. It means that it's possible to do the following:

```
const KEY = 'OMDBAPIKEY';
private $_nodes = array("https://omdbapi.com/?apikey=" . self::KEY . "&i=%s");
```

But it's not the cleanest way. The better option is to do it in the constructor; e.g. you can see different types of concatenation in the following code:

```
<?php

class Test
{
    private $_nodes = [];

    public function __construct(array $site_config)
    {
        // simple concatenation with .
        $this->_nodes[] = "https://omdbapi.com/?apikey=" . $site_config['OMDBAPIKEY'] . "&i=%s";

        // using sprintf, my preferred way; notice the escaped %%s after i=
        $this->_nodes[] = sprintf("https://omdbapi.com/?apikey=%s&i=%%s", $site_config['OMDBAPIKEY']);

        // using complex (curly) syntax
        $this->_nodes[] = "https://omdbapi.com/?apikey={$site_config['OMDBAPIKEY']}&i=%s";
    }

    public function printNodes()
    {
        var_dump($this->_nodes);
    }
}

$t = new Test(['OMDBAPIKEY' => 'abc']);
$t->printNodes();
```

It will print

```
array(3) {
  [0]=>
  string(36) "https://omdbapi.com/?apikey=abc&i=%s"
  [1]=>
  string(36) "https://omdbapi.com/?apikey=abc&i=%s"
  [2]=>
  string(36) "https://omdbapi.com/?apikey=abc&i=%s"
}
```

Upvotes: 0
2018/03/15
647
2,413
<issue_start>username_0: The required attribute **on the input field** doesn't work when I implement Google reCAPTCHA. When the input is empty, the form is supposed not to be submitted, but the form gets submitted anyway when the input is empty. I followed [Google's guide](https://developers.google.com/recaptcha/docs/invisible#auto_render) to implement it in its simplest form.

```
<?php if (isset($_POST['code']) && ($_POST['code'] == "whatever")) //do stuff? ?>
```

> What I want to do is make the reCAPTCHA execute only when the input is not empty, and otherwise prevent the form submit and the reCAPTCHA execution.
>
> ```html
> Let me in
> ```
>
> function onSubmit(token) {
>     document.getElementById("form1").submit();
> }

<issue_comment>username_1: Looking over the documentation you linked to, you are probably better off getting rid of the `required` attribute and adding a check to your script to not submit the form if the input field is null. So JavaScript would check whether the field is empty and then give a validation alert, or whatever works best for your situation. Upvotes: 1 <issue_comment>username_2: The reason it submits is that your JavaScript function is calling submit directly, which bypasses the HTML5-only validation. The submit button, being of type 'submit', will do the submission for you automatically, so you don't need it. Upvotes: 0 <issue_comment>username_3: I know the question is 5 years old, but it's something that still happens and I'd like to share the most practical solution possible. Google's instructions are as follows:

```
Submit function onSubmit(token) { document.getElementById("YOUR_FORM_ID").submit(); }
```

In this way, you will lose the validation of the inputs, because the JavaScript function will bypass it. But you can invoke the HTML validation directly from the JavaScript function as follows:

```
function onSubmit(token) {
    var form = document.getElementById("YOUR_FORM_ID");
    if (form.checkValidity()) {
        form.submit();
    } else {
        grecaptcha.reset();
        form.reportValidity();
    }
}
```

This way, HTML validation will continue to occur and display indications of mandatory fields. The form will only be sent if all mandatory fields are filled in correctly (form validated). Credits: [@brandonaaron](https://github.com/ambethia/recaptcha/issues/302#issuecomment-621794131) Upvotes: 1
2018/03/15
654
2,402
<issue_start>username_0: I am using this to make a function call on a smart contract via MetaMask in React:

```
export default class EthereumForm1 extends Component {
  constructor (props) {
    super (props);
    const MyContract = window.web3.eth.contract(ContractABI);
    this.state = {
      ContractInstance: MyContract.at('ContractAddress')
    }
    this.doPause = this.doPause.bind (this);
  }

  doPause() {
    const { pause } = this.state.ContractInstance;
    pause (
      {
        gas: 30000,
        gasPrice: 32000000000,
        value: window.web3.toWei (0, 'ether')
      },
      (err) => {
        if (err) console.error ('Error1::::', err);
        console.log ('Contract should be paused');
      })
  }
```

What I want is to run jQuery code with a loading gif: `$(".Loading").show();` while the transaction is being processed, and remove it afterwards. Also, it would be good to add the transaction status in a div afterwards, like in MetaMask (either passed or rejected). ![Metamask transaction status image](https://i.stack.imgur.com/Zumaa.png)<issue_comment>username_1: You don't need jQuery to do this. React state can manage the status of the progress automatically. All you have to do is call the `setState` function.

```
class EthereumForm1 extends React.Component {
  state = { loading: false };

  constructor(props) {
    super(props);
    this.doPause = this.doPause.bind(this);
  }

  doPause() {
    const {pause} = this.state.ContractInstance;
    this.setState({loading: true}); // show the loader while the call is pending
    pause(
      {
        gas: 30000,
        gasPrice: 32000000000,
        value: window.web3.toWei(0, 'ether')
      },
      (err) => {
        if (err) console.error('Error1::::', err);
        this.setState({loading: false}); // the callback fired, so the call is done
      })
  }

  render() {
    return (
      <div>{this.state.loading ? 'Loading...' : 'Done'}</div>
    )
  }
}
```

Upvotes: 0 <issue_comment>username_2: What helped me was checking every few seconds whether my transaction had finished and, if so, hiding the loader.

```
if (latestTx != null) {
  window.web3.eth.getTransactionReceipt(latestTx, function (error, result) {
    if (error) {
      $(".Loading").hide();
      console.error ('Error1::::', error);
    }
    console.log(result);
    if(result != null){
      latestTx = null;
      $(".Loading").hide();
    }
  });
}
```

Upvotes: 2 [selected_answer]
2018/03/15
594
2,153
<issue_start>username_0: Sorry if the title isn't great, but what I want to do is this:

```
componentDidMount() {
  axios.get('http://thecatapi.com/api/images/get?format=xml&results_per_page=9')
    .then((response) => {
      parseString(response.data, (err, result) => {
        result.response.data[0].images[0].image.map((cat) =>
          this.setState({
            cats: this.state.cats.concat(cat)
          }))
      })
    });
}
```

This function pulls from the Cat API, then sets the state of my `cats` state array to every cat object. Each cat object has 3 keys from the Cat API; how can I add an extra key called `likes` for every array object? I'm thinking something like:

```
this.setState({
  cats: this.state.cats.concat(cat),
  likes: 0
})
```

but obviously that doesn't work. Anyone got any ideas?
2018/03/15
488
1,783
<issue_start>username_0: I'm using gcloud on Windows to develop GAE stuff. The network here has a MITM root certificate by design so all SSL traffic can be snooped; I can install the root cert easily into a browser or Windows certificate store, but can't successfully get this work for Python, or more specifically, gcloud (which has its own Python bundled). The answers at [How to add a custom CA Root certificate to the CA Store used by Python in Windows?](https://stackoverflow.com/questions/39356413/how-to-add-a-custom-ca-root-certificate-to-the-ca-store-used-by-python-in-window) don't work - I've tried setting SSL\_CERT\_DIR and SSL\_CERT\_FILE environment variables to no avail, and the pip.ini solution isn't applicable as I'm not using pip.<issue_comment>username_1: Assuming all your credential setup is in order, for MITM you likely also need to set proxy settings, for instance ``` gcloud config set proxy/address 127.0.0.1 gcloud config set proxy/port 8080 gcloud config set proxy/type http ``` replacing address/port for your MITM and then tell the SDK to trust your local certificate authority: ``` gcloud config set core/custom_ca_certs_file cert.pem ``` Test by running some command, for example ``` gcloud projects list ``` You can use `--log-http` additional gcloud flag and/or tools like [burp](https://portswigger.net/burp) to further debug what certs/proxies are being used. Upvotes: 3 <issue_comment>username_2: The previous answer works for gcloud, but does not work with gsutil. gsutil currently ignores whatever value for ca certificates you have in the gcloud config, so you must add it to your boto config file. (on a gcp instance it's /etc/boto.cfg). Add these lines: ``` [Boto] ca_certificates_file = /path/to/cert.pem ``` Upvotes: 1
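For plain Python scripts outside the bundled gcloud tooling, the same MITM root certificate can usually be supplied directly to the HTTP library. A minimal sketch with the `requests` library; the PEM path is a placeholder for wherever the corporate root certificate has been exported:

```python
import os
import requests

CA_BUNDLE = '/path/to/corporate-root.pem'  # placeholder: exported MITM root cert

# Option 1: pass the bundle explicitly for a single call.
resp = requests.get('https://www.googleapis.com/', verify=CA_BUNDLE)
print(resp.status_code)

# Option 2: set it process-wide; requests honors this environment variable.
os.environ['REQUESTS_CA_BUNDLE'] = CA_BUNDLE
```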
2018/03/15
364
1,404
<issue_start>username_0: I'm trying to load a serialized object within the class using a method like this one:

```
private void loadCatalog(MyClass myClassNew) {
    this = myClassNew;
}
```

So I have this method in my ***MyClass***, and I receive as a parameter an object having the type of ***MyClass***. How can I do something like the above? The code above gives me an error, and I'm not sure why. The object myClassNew is the same as the one before serializing, so I'm sure that I receive a valid object of type ***MyClass***.<issue_comment>username_1: You will get an error:

```
The left-hand side of an assignment must be a variable
```

"this" is not a variable that you can assign to. You can create fields in the object and do "this.field1 = myClassNew.field1", but you cannot assign to "this". Upvotes: 0 <issue_comment>username_2: There is no way to do that. You must write code that copies each instance field of MyClass from the argument to `this`. For instance:

```
this.firstName = myClassNew.firstName;
this.lastName = myClassNew.lastName;
```

You *could* use reflection to do this, but you probably shouldn't. Unless MyClass is very simple, you may find that some fields require special handling. For example, copying a List reference would be very bad, unless it's an unmodifiable List, as the two objects would then share a reference to the same List object. Upvotes: 2 [selected_answer]
2018/03/15
298
1,152
<issue_start>username_0: When I call `hexo clean` in PowerShell, it shows

```
Deleted Database
Delete public folder
```

When I call `hexo generate` and `hexo server --open` at localhost:4000, the site is broken and only displays the text "Cannot GET /". What am I missing? Any help is appreciated.
2018/03/15
285
919
<issue_start>username_0: I'm creating a basic line graph with the following data:

```
Jan 2015 $1
Feb 2015 $2
Mar 2015 $3
```

My monthly data is all-inclusive of prior months, so it basically represents everything that has happened in the past as well as the current month's results. In Tableau, when I set the dates to Quarterly, it shows up as Q1 2015 ($1 + $2 + $3), whereas I just want $3 (March 2015). How do I do this? Thanks!<issue_comment>username_1: Right now, on the rows (I assume) shelf, you have SUM(whatever your field is). Click the drop-down on that pill and change the calculation instead to Maximum. It would then be MAX(whatever your field is). Upvotes: 1 <issue_comment>username_2: Right now, the rows shelf doesn't say SUM but rather AGG. I'm not sure how to proceed... am I able to filter out only Quarterly dates? Perhaps I can just show March 2015 and relabel it as Q1 2015? Upvotes: 0
2018/03/15
326
1,215
<issue_start>username_0: I use TBS and OpenTBS for parsing office templates in a PHP script. Files of type `rtf` or `txt` are processed with TBS and those of types `docx` or `pptx` with OpenTBS. The merged files are used in an office context for letters, submissions and the like, so I want tags that were not merged in the first place, for instance due to missing data, to be removed before the files are produced. A solution that covers both TBS and OpenTBS would be ideal. At this time I use fields like `[onload.nonexistant;ifempty='']` for TBS, but the lengthy syntax is kind of awkward and error-prone, and it doesn't work for OpenTBS. If an empty default value doesn't work, maybe there is some kind of cleanup operation for the remaining fields?
2018/03/15
1,302
4,770
<issue_start>username_0: I'm porting an SDK written in C++ from Windows to Linux. There are other binaries, but at its simplest, our SDK is this:

* core.dll - implicitly loaded DLL ("libcore.so" shared library on Linux)
* tests.exe - an app used to test the DLL (uses google test)

All of my binaries must live in ***one*** folder somewhere that apps can find. I've achieved that on Windows. I wanted to achieve the same thing in Linux. I'm failing miserably.

To illustrate, here's the basic project tree. We use CMake. After I build I've got

```
mysdk
|---CMakeLists.txt (has add_subdirectory() statements for "tests" and "core")
|---/tests (source code + CMakeLists.txt)
|---/core (source code + CMakeLists.txt)
|---/build (all build output, CMake output, etc)
    |---tests (build output)
    |---core (build output)
```

The goal is to "flatten" the "build" tree and put all the binary outputs of tests, core, etc. into one folder.

1. I tried adding CMake's "install" command to each of my `CMakeLists.txt` files (e.g. `install(TARGETS core DESTINATION bin)`). I then executed `sudo make install` after my normal build. This put all my binaries in `/usr/local/bin` with no errors. But when I ran `tests` from there, it failed to find `libcore.so`, even though it was sitting right there in the same folder:

> ```
> tests: error while loading shared libraries: libcore.so: Cannot open shared object file: No such file or directory
> ```

2. I read up on the `LD_LIBRARY_PATH` environment variable and so I tried adding that folder (`/usr/local/bin`) to it. I can see I've properly altered `LD_LIBRARY_PATH`, but it still doesn't work: `tests` still can't find `libcore.so`. I even tried changing the PATH environment variable as well. Same result.
3. In frustration, I tried brute-force copying the output binaries to a temporary subfolder (of `/mysdk/build`) and running `tests` from there. To my surprise it ran. Then I realized why: instead of loading the local copy of `libcore.so` it had loaded the one from the build output folder (as if the full path were "baked in" to the app at build time). Subsequently deleting that build-output copy of `libcore.so` made `tests` fail altogether as before, instead of loading the local copy. So maybe the path really *was* baked in.

I'm at a loss. I've read the CMake tutorial and reference. It makes this sound so easy. Aside from the obvious (what am I doing wrong?), I would appreciate it if anyone could answer any of the following questions:

1. What is the correct way to control where my app looks for my shared libraries?
2. Is there a relationship between my project build structure and how my binaries must then appear when installed?
3. Am I even close to the right way of doing this?
4. Is it possible I've somehow inadvertently "baked" (into my app) full paths to my shared libraries? Is that a thing? I use CMake variables throughout my CMakeLists files.<issue_comment>username_1: You can run `ldd file` to print the shared object dependencies for `file`. It will tell you where its dependencies are being read from.

You can export the environment variable `LD_LIBRARY_PATH` with the paths you want the linker to look in. If a dependency is not found, try adding the path where that dependency is located to `LD_LIBRARY_PATH` and then run `ldd` again (make sure you export the variable). Also, make sure the dependencies have the right permissions.

Upvotes: 2 <issue_comment>username_2: Updating **LD_LIBRARY_PATH** is an option. Another option is using RPATH.
Please check the example. <https://github.com/username_2/cmake-examples/blob/master/005-executable-with-shared-library/CMakeLists.txt> ``` cmake_minimum_required(VERSION 2.8) # Project project(005-executable-with-shared-library) # Directories set(example_BIN_DIR bin) set(example_INC_DIR include) set(example_LIB_DIR lib) set(example_SRC_DIR src) # Library files set(library_SOURCES ${example_SRC_DIR}/library.cpp) set(library_HEADERS ${example_INC_DIR}/library.h) set(executable_SOURCES ${example_SRC_DIR}/main.cpp) # Setting RPATH # See https://cmake.org/Wiki/CMake_RPATH_handling set(CMAKE_INSTALL_RPATH ${CMAKE_INSTALL_PREFIX}/${example_LIB_DIR}) # Add library to project add_library(library SHARED ${library_SOURCES}) # Include directories target_include_directories(library PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/${example_INC_DIR}) # Add executable to project add_executable(executable ${executable_SOURCES}) # Linking target_link_libraries(executable PRIVATE library) # Install install(TARGETS executable DESTINATION ${example_BIN_DIR}) install(TARGETS library DESTINATION ${example_LIB_DIR}) install(FILES ${library_HEADERS} DESTINATION ${example_INC_DIR}) ``` Upvotes: 1
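A related pattern that fits the "all binaries in one folder" goal, offered as an assumption rather than something taken from the question's CMakeLists: set the install RPATH to `$ORIGIN`, which the dynamic loader expands at load time to the directory containing the executable itself, so the executable and `libcore.so` can sit side by side:

```
# Sketch: let the installed executable find shared libraries in its own
# directory. $ORIGIN is expanded at load time by the dynamic loader.
set(CMAKE_INSTALL_RPATH "$ORIGIN")
# Target names taken from the question's project tree.
install(TARGETS tests core DESTINATION bin)
```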
2018/03/15
567
1,909
<issue_start>username_0: The expression `[][[]]` evaluates to `undefined` in JavaScript. My understanding of this was that the compiler sees the second set of `[...]` and interprets that to be an array subscript operator (because you can't have two arrays next to each other). So the compiler knows that the inner expression, `[]`, must be an index, and so after evaluating it, it coerces it to a number. `Number([])` evaluates to `0`, and so we have `[][0]`, which is `undefined`.

However, `[1][[]]` does not evaluate to `1` as I would expect, but rather to `undefined`, suggesting that in this case (or maybe also in the previous case), `[]` *isn't* being coerced to a number. It seems that I must use the unary `+` to force the type coercion:

```
[1][+[]] // returns 1
```

So *if* the inner `[]` in the expression `[][[]]` is not being coerced to a number, then why does that expression evaluate to `undefined`?<issue_comment>username_1: You are trying to read a property of a property (object.property.property). An object has properties, but the value stored in a property may not itself have the property you look up next, and that lookup gives `undefined`. Think of it like this:

```
let myArray = [1, 2, 3];
console.log(myArray['length']); // -> 3
console.log(myArray['length']['length']); // -> undefined
```

Also, the extra brackets in `[][[]]` don't change the lookup itself. Try it with some code:

```
let myArray = [[[1, 2, 3], 2, 3], 2, 3];
console.log(myArray[0][[0]]); // -> [1, 2, 3]
console.log(myArray[0][0]); // -> [1, 2, 3]
```

Upvotes: -1 <issue_comment>username_2: The faulty assumption was that the expression that evaluates to the index is coerced to a number. It is in fact coerced to a string, as all object keys are (except for Symbols, which stay Symbols). Thus, `[1][[]]` turns into `[1][""]`, and since the `""` property doesn't exist on the array, we get `undefined` instead of `1`.

Upvotes: 2 [selected_answer]
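To make that string coercion visible, here is a minimal sketch you can paste into any JavaScript console (the `key` name is just for illustration):

```
const key = [];           // the inner empty array
console.log(String(key)); // ""  (object keys are coerced via toString)
console.log([1][""]);     // undefined (the array has no "" property)
console.log([1][+[]]);    // 1   (unary + coerces [] to the number 0)
console.log([1][[0]]);    // 1   ([0] coerces to the string "0")
```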
2018/03/15
1,321
4,421
<issue_start>username_0: Q: Is there a way to choose in the constructor of operatorClass which member object from the container it operates on? (iter->b instead of iter->a)

More detail: I think I've simplified this as much as I can, but we'll see what the crowd thinks. I have an abstract template class for an algorithm I use a dozen times. Within implementations of the algorithm, specific details of the math differ but the overall process is the same. There are a few implemented pairs where the math is identical except for the names of the member objects in the data structure passed to the algorithm. In my example here, I'd like to be able to define in the constructor of operatorClass which member of myContainer the function operates on. Is there another level of abstraction I can add to define this behavior? The goal is to not have to duplicate the math done in `myoperation`, with only a find-and-replace to change `_a` to `_b`. The asterisks in main() and the subsequent commented-out `_b` code show the implementation I'm aiming for.

```
#include <iostream>
#include <list>

struct myContainer {
    double a, b, c;

    // Constructor
    myContainer(double ia, double ib, double ic) {
        a = ia;
        b = ib;
        c = ic;
    };
};

class operatorClass {
public:
    operatorClass() {
        // How to, on implementation of the class, tell the function
        // operate_on_list_of_containers() which parameter to operate on?
    };

    double myoperation(const std::list<myContainer> *mylist) {
        std::list<myContainer>::const_iterator iter = mylist->begin();
        std::list<myContainer>::const_iterator end = mylist->end();
        double sum_of_squares = 0;
        while (iter != end) {
            sum_of_squares += (iter->a * iter->a);
            iter++;
        }
        return sum_of_squares;
    };
};

int main() {
    std::cout << "Hello world" << std::endl;

    // Create a linked list of myContainer objects
    myContainer mc1(2.1, 3.4, 7.2);
    myContainer mc2(0.7, 2.9, 3.1);
    myContainer mc3(5.2, 6.3, 0.83);
    std::list<myContainer> mylist;
    mylist.push_back(mc1);
    mylist.push_back(mc2);
    mylist.push_back(mc3);

    // Create object for the algorithm
    operatorClass myalgorithm_a;
    // operatorClass myalgorithm_b; // ******
    double ssq_a = 0;
    //double ssq_b = 0;
    ssq_a = myalgorithm_a.myoperation(&mylist);
    //ssq_b = myalgorithm_b.myoperation(&mylist);
    std::cout << "The SSQ for a is " << ssq_a << std::endl;
    //std::cout << "The SSQ for b is " << ssq_b << std::endl;

    return 0;
}
```

IRL, operatorClass is a child class/implementation of an abstract template class for the Metropolis-Hastings algorithm (MH). The linked list is necessary because the dimensions of the linked list are also estimated with a birth-death process (the containers are created or killed ~20-30 times per iteration of the combined algorithm). In my problem, I have several pairs of identical MH algorithms differing only by a set of parameters (e.g. {prior_mass, mean_mass, mass} and {prior_width, mean_width, width}).
My example here covers the most complicated subset of these parameters (mean, width) that are contained within the linked list.<issue_comment>username_1: Just pass in a functor, which can be a lambda, as is done with the generic algorithms in the std library:

```
auto sum_of_squares = std::accumulate(mylist.begin(), mylist.end(), 0.0,
    [](auto sum, auto const &x) { return sum + x.a * x.a; });
```

Upvotes: 0 <issue_comment>username_2: This is a perfect excuse to use a pointer to data member: change the `operatorClass` this way:

```
class operatorClass {
private:
    double myContainer::*ptr_;

public:
    operatorClass(double myContainer::*ptr) {
        ptr_ = ptr;
    };

    double myoperation(const std::list<myContainer> *mylist) {
        std::list<myContainer>::const_iterator iter = mylist->begin();
        std::list<myContainer>::const_iterator end = mylist->end();
        double sum_of_squares = 0;
        while (iter != end) {
            sum_of_squares += (*iter).*ptr_ * (*iter).*ptr_;
            iter++;
        }
        return sum_of_squares;
    };
};
```

And then, in `main`:

```
operatorClass myalgorithm_a(&myContainer::a);
operatorClass myalgorithm_b(&myContainer::b);
operatorClass myalgorithm_c(&myContainer::c);
```

And done!

PS: Your `myoperation()` function is written weirdly. In modern C++ it would be something like:

```
double myoperation(const std::list<myContainer> &mylist) {
    double sum_of_squares = 0;
    for (auto &x : mylist) {
        sum_of_squares += x.*ptr_ * x.*ptr_;
    }
    return sum_of_squares;
};
```

Upvotes: 3 [selected_answer]
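For reference, here is a minimal self-contained sketch that combines the two answers (a free function taking a pointer to data member, used with `std::accumulate`, rather than the exact class from the question). The values come from `main()` in the question, and the expected sums in the comments were computed by hand:

```
#include <iostream>
#include <list>
#include <numeric>

// Plain aggregate here (no constructor), unlike the struct in the question.
struct myContainer {
    double a, b, c;
};

// Sum of squares of one member, selected by a pointer to data member.
double sum_squares(const std::list<myContainer> &l, double myContainer::*ptr) {
    return std::accumulate(l.begin(), l.end(), 0.0,
        [ptr](double sum, const myContainer &x) { return sum + x.*ptr * x.*ptr; });
}

int main() {
    std::list<myContainer> mylist{{2.1, 3.4, 7.2}, {0.7, 2.9, 3.1}, {5.2, 6.3, 0.83}};
    std::cout << sum_squares(mylist, &myContainer::a) << std::endl; // 31.94
    std::cout << sum_squares(mylist, &myContainer::b) << std::endl; // 59.66
    return 0;
}
```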