Columns: date (string, length 10), nb_tokens (int64, range 60 to 629k), text_size (int64, range 234 to 1.02M), content (string, length 234 to 1.02M)
2018/03/15
544
1,964
<issue_start>username_0: **UPDATE**: It seems like a UX issue; I created an issue on GitHub, which you can track here: <https://github.com/reactstrap/reactstrap/issues/910#issuecomment-374369572> --- I am using Reactstrap. I want to show the tooltip only after the mouse has hovered for 2 seconds. When I click the button immediately, I don't want it to be shown. But right now, when I click the button immediately, it shows the tooltip. I tried both the controlled `Tooltip` and `UncontrolledTooltip`, but neither works. For `Tooltip`, I tried to add `onClick={() => {}}`, but that does not help either. How can I stop the tooltip from showing when the button is clicked immediately? Thanks. [Live demo](https://codesandbox.io/s/wz70vo15ww) ``` constructor(props) { super(props); this.state = { isTooltipOpen: false }; } onToggleTooltip = () => { const { isTooltipOpen } = this.state; this.setState({ isTooltipOpen: !isTooltipOpen }); }; render() { const { isTooltipOpen } = this.state; return ( Controlled Tooltip Uncontrolled Tooltip {}} > Discard draft Hello world! ); } ```<issue_comment>username_1: Make use of the setTimeout() function. To prevent losing context (`this`), use an arrow function as a simple way to preserve the proper execution context. ``` setTimeout(() => { this.setState({ isTooltipOpen: !isTooltipOpen }); }, 2000); ``` See the updated, working example: <https://codesandbox.io/s/j4ojkyv3l3> You can also take a look at <https://www.npmjs.com/package/react-hover-observer>. This is a React component that notifies its children of hover interactions. It also supports delayed hover and hover-off, which can help reduce unintentional triggering. Upvotes: 2 [selected_answer]<issue_comment>username_2: As mentioned in the GitHub issue you linked, version ^8.2.0 has a `trigger` property, so you can now pass `trigger="hover focus"` and omit `"click"`. That worked fine for me. Upvotes: 0
2018/03/15
576
2,119
<issue_start>username_0: I have created a bash script which first activates a python virtual environment, and then runs a python file. When I execute the bash script manually, the python file runs as intended. Bash script code: sample.sh ``` #!/usr/bin/env bash source ./project/bin/activate python3 /home/abc/project/server/sample.py ``` However, when I try to run this bash script using cron, the python file does not execute. cron: ``` 16 12 * * * /home/abc/sample.sh > /home/abc/bulkcat.log 2>&1 ``` When this cron triggers at the specified time, the python file inside my bash script does not run and the log file is empty. What seems to be wrong with my code?<issue_comment>username_1: It might well be the relative path you're using in the source command. Cron will run your script from a different directory, so ``` source ./project/bin/activate ``` will likely not be a valid path. Try ``` source /home/abc/project/bin/activate ``` ... guessed path based on the full path in your `python3 ...` line. Upvotes: 1 <issue_comment>username_2: Cron writes logs and you can find the error which occured when it tried to execute the task - an answer to [this question](https://unix.stackexchange.com/questions/207/where-are-cron-errors-logged) mentions usual locations where to look for these logs. Most common issues are: * cron is using `sh` and not `bash` ignoring shebang in your script - you can try configuring your cron job like `6 12 * * * /bin/bash /home/abc/sample.sh > /home/abc/bulkcat.log 2>&1` * the script not having permission to be executable set - this can be fixed by running `chmod 700 /home/abc/sample.sh` or `chmod 755 /home/abc/sample.sh` - the latter should be used only if you want to allow other users to read and execute your script * as mentioned already in another answer, always use absolute paths in cron job as cron might execute your script from other directory than you expect - I'm also using wrapper bash script in such scenarios - I give absolute path to the script in cron job and the first thing the bash script does is `cd /desired/work/directory` Upvotes: 0
2018/03/15
634
2,016
<issue_start>username_0: I am trying to extract information from a JSON file from Google Finance. The `requests.get()` is working but then I get stuck. I have searched quite a bit and nothing suggested seems to work. This is what I have: ``` import requests from json import loads params={'q': 'NASDAQ:AAPL','output': 'json'} response = requests.get('https://finance.google.com/finance', params=params, allow_redirects=False, timeout=10.0) print(response.status_code) print(response.content) ``` The output is “200” which is the ok output I believe. `print(response.content)` gives me the full JSON string so that seems to be working ok. However, trying to pass it into “data” so that I can work with further, extract various bits. This is what I have tried: * `data = response.json()` gives me `JSONDecodeError: Expecting value: line 2 column 1 (char 1)` * `data = json.load(response)` gives me `AttributeError: 'Response' object has no attribute 'read'` * `data = json.loads(response)` gives me `TypeError: the JSON object must be str, bytes or bytearray, not 'Response'` I tried `data = json.loads(response.decode("utf-8"))` and that gives me `AttributeError: 'Response' object has no attribute 'decode'`. I have also tried some text scrubbing suggestions, nothing has worked yet.<issue_comment>username_1: Try using `response.content` or `response.text` to convert to JSON. **Ex:** ``` json.loads(response.content) ``` or ``` json.loads(response.text) ``` Upvotes: 0 <issue_comment>username_2: I printed the text. and I found that first some data is not json string... \n // chars... ``` jsonstr = response.text[4:] #remove first part (not json data) data = loads(jsonstr) print(data) print("t=",data[0]['t']) ``` output ``` [{'t': 'AAPL', 'kr_annual_date': '2017', 'hi': '180.52', 'keyratios': [{'title': 'Net profit margin', 'annual': '21.09%', 'recent_quarter': '25.67%', 'ttm': '22.21%'}, {'title': 'Operating margin', 'annual': '26.76%', 'rece .... .... com/'}]}] t= AAPL ``` Upvotes: 1
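As an aside to the thread above: the fix that works here is really "skip the non-JSON prefix before parsing". Below is a small Python sketch of that pattern; the URL and prefix behaviour come from the question and the old Google Finance endpoint may no longer respond this way, so treat it purely as an illustration.

```
import json
import requests

def fetch_prefixed_json(url, params=None):
    """Fetch a response whose body starts with a junk prefix (e.g. "\\n// ")
    and parse the JSON that follows it."""
    response = requests.get(url, params=params, timeout=10.0)
    response.raise_for_status()
    text = response.text
    # Find where the JSON payload actually starts instead of hard-coding
    # a slice such as text[4:], which is brittle.
    candidates = [i for i in (text.find("["), text.find("{")) if i != -1]
    if not candidates:
        raise ValueError("no JSON payload found in response body")
    return json.loads(text[min(candidates):])
```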
2018/03/15
761
2,890
<issue_start>username_0: I am learning React Native on Windows 10, using the Git Bash command prompt and an Android Studio emulator. I installed React Native globally with `npm install -g react-native-cli`, I always create a project with `react-native init ProjectName`, and **my project launching method** is to go into the project directory with Git Bash and execute `react-native run-android`. I notice that most of the projects on GitHub are older versions which contain `index.android.js` and `index.ios.js` (my project is a newer version, so it only has `index.js`). I have been trying to open some older projects, but they just do not launch with the **method** mentioned above. I also tried the `react-native start` method from [How to open existing project in React Native?](https://stackoverflow.com/questions/44764999/how-to-open-existing-project-in-react-native), but I am still not able to launch them. The **error message** when I try to launch the project is `Command unrecognized. Make sure that you have run npm install and that you are inside a react-native project.` This problem only occurs when I try to launch a project from GitHub; when I open my own project there is no such problem. P.S. If I run `npm install` in the project directory, it produces some errors. [![enter image description here](https://i.stack.imgur.com/8LZq7.png)](https://i.stack.imgur.com/8LZq7.png) I have searched the net, and most of the launching methods are the same as above and do not work for me, so I want to ask: 1. Is my method of launching React Native wrong from the very beginning? 2. Is there anything I need to install so that I can open old React Native projects? 3. How can I open this project in my current situation? <https://github.com/parkerdan/SampleNavigation> Thank you. Extra info (React Native CLI and Node version): ``` $ react-native -v react-native-cli: 2.0.1 react-native: n/a - not inside a React Native project directory > process.version 'v8.9.4' (Node Version) ```<issue_comment>username_1: After you clone your project from git and cd into the project folder, make sure you run `npm install` before you run `react-native run-android`. Upvotes: 0 <issue_comment>username_2: After a whole day of searching, I have found the solution. Whenever you open a project from GitHub or any other source, you have to ensure that all the required dependencies are installed, as most projects use dependencies that you may never have installed. **Solution:** 1. Download the project: `git clone https://github.com/parkerdan/SampleNavigation.git` **2. Go to the project directory and install the dependencies (IMPORTANT)** Note: For an unknown reason, `npm install` does not work ``` npm i ``` or `yarn i` 3. Run the project (or use any launch method that works for you): `react-native run-android` Upvotes: 2 [selected_answer]
2018/03/15
684
2,127
<issue_start>username_0: My project is about tradesmen and customers: a customer posts a job and chooses a location, and a tradesman can find jobs in his area. The condition is that a tradesman can add multiple areas. So the question is: there is a tradesman who has added three locations, like ``` $locs=Array ( [122] => 2 [123] => 3 [124] => 6 ) ``` I want to fetch records from the database where these location IDs (2, 3, 6) are available. What should the query be? ```sql SELECT j.*, l.location_name, c.customer_name, c.customer_status, s.service_title FROM ".TABLE_JOB." AS j LEFT JOIN ".TABLE_LOCATION." AS l ON (l.location_id=j.location_id) LEFT JOIN ".TABLE_CUSTOMER." AS c ON (c.customer_id=j.customer_id) LEFT JOIN ".TABLE_SERVICE." AS s ON (s.service_id=j.service_id) WHERE j.status='PENDING' AND c.customer_status='ACTIVATED' AND j.location_id='".$locs."' ```<issue_comment>username_1: You can use this code. ``` $location = ''; foreach($locs as $key=>$value){ $location .= $value.","; } $location = rtrim($location,","); select j.*, l.location_name, c.customer_name, c.customer_status, s.service_title from ".TABLE_JOB." as j left join ".TABLE_LOCATION." as l on (l.location_id=j.location_id) left join ".TABLE_CUSTOMER." as c on (c.customer_id=j.customer_id) left join ".TABLE_SERVICE." as s on(s.service_id=j.service_id) where j.status='PENDING' AND c.customer_status='ACTIVATED' and j.location_id IN ($location); ``` This way you will get all the records with location_id in 2,3,6. Hope it helps. Upvotes: 0 <issue_comment>username_2: Use the `implode()` function to convert the array into a string, and then use `IN` in the MySQL query. ``` $locsIds = implode(',', $locs); select j.*, l.location_name, c.customer_name, c.customer_status, s.service_title from ".TABLE_JOB." as j left join ".TABLE_LOCATION." as l on (l.location_id=j.location_id) left join ".TABLE_CUSTOMER." as c on (c.customer_id=j.customer_id) left join ".TABLE_SERVICE." as s on(s.service_id=j.service_id) where j.status='PENDING' AND c.customer_status='ACTIVATED' and j.location_id IN ({$locsIds}) ``` Upvotes: 2 [selected_answer]
2018/03/15
951
2,598
<issue_start>username_0: When designing a responsive layout, what size is the standard for using a `max-width` container in 2018? Currently I am using **1140px** to fit the standard screen size of **1366px**.<issue_comment>username_1: I'd say use the bootstrap container sizes, as they are used pretty often and are quite the average below web development. These are the sizes: > > xs (for phones - screens less than 768px wide) > > > sm (for tablets - screens equal to or greater than 768px wide) > > > md (for small laptops - screens equal to or greater than 992px wide) > > > lg (for laptops and desktops - screens equal to or greater than 1200px wide) > > > So yes, I'd use 1200px as your desktop container size. Upvotes: 4 [selected_answer]<issue_comment>username_2: Here are the material breakpoints: <https://material.io/guidelines/layout/responsive-ui.html> They use a variety of columns and breakpoints: < 480 / 4 cols / xsmall 481 - 600 / 8 cols / xsmall 601 - 840 / 8 cols / small 841 - 960 / 12 cols / small 961 - 1280 / 12 cols / medium 1281 - 1440 / 12 cols / large 1441 - 1600 / 12 cols / large 1601 - 1920 / 12 cols / large (HD desktops) > 1920 / 12 cols / large (Retina desktops) [Adaptive breakpoints in Material Design [img]](https://i.stack.imgur.com/tWuXU.png) Upvotes: 2 <issue_comment>username_3: UPDATE 2022 ----------- here are my latest breakpoint/container recommendations based on today's global usage statistics: ``` $breakpoints: ( xs: 0, sm: 600px, // mobile landscape, tablet (portrait) md: 1024px, // chromebooks, ipad pro (portrait), tablet (landscape) lg: 1600px, // most laptops and desktops xl: 2048px // 2k and up ); $container-max-width: ( xs: 480px, sm: 960px, md: 1280px, lg: 1920px, xl: 3840px ); ``` PREVIOUS ANSWER --------------- Based on usage statistics most common desktop resolutions are 1366 and 1920. See usage stats: <https://gs.statcounter.com/screen-resolution-stats> As such, I find that Bootstrap default breakpoints are too small and redefine containers as below. ``` $grid-breakpoints: ( xs: 0, sm: 768px, md: 992px, lg: 1366px, xl: 1920px ); $container-max-widths: ( sm: 720px, md: 960px, lg: 1280px, xl: 1840px ); ``` EDIT: After further reflection these container max-widths are more proportional to the screen widths. This deviates more drastically from Bootstrap defaults, however. ``` $container-max-widths: ( sm: 720px, md: 930px, lg: 1280px, xl: 1800px ); ``` Upvotes: 3
2018/03/15
759
2,532
<issue_start>username_0: I want to change the time format from string, I am having the values in hh:mm:ss in 24 hr format and I want to make it hh:mm:a in 12 hr format. I have tried - ``` NSDateFormatter *format = [[NSDateFormatter alloc] init]; [format setDateFormat:@"hh:mm:ss"]; NSDate *date123 = [format dateFromString:temp123]; [format setDateFormat:@"hh:mm a"]; NSString* newDateString = [format stringFromDate:date123]; ``` please help me.<issue_comment>username_1: Try below functions as per your need, ``` +(NSString*)getStringFromDate:(NSDate*)pDate withFormat:(NSString*)pDateFormat{ NSDateFormatter *dtFormatter = [[NSDateFormatter alloc] init]; [dtFormatter setDateFormat:pDateFormat]; return [dtFormatter stringFromDate:pDate]; } +(NSDate*)getDateFromString:(NSString*)pStrDate withFormat:(NSString*)pDateFormat{ NSDateFormatter *dtFormatter = [[NSDateFormatter alloc] init]; [dtFormatter setDateFormat:pDateFormat]; return [dtFormatter dateFromString:pStrDate]; } +(NSString*)dateStringFromString:(NSString *)dateToConver format:(NSString *)fromFormat toFormat:(NSString *)toFormat{ NSDate *date = [Helper getDateFromString:dateToConver withFormat:fromFormat]; return [Helper getStringFromDate:date withFormat:toFormat]; } ``` Upvotes: 0 <issue_comment>username_2: if your `temp123` in 24 hour format, then use `HH` instead of `hh` ``` NSDateFormatter *format = [[NSDateFormatter alloc] init]; [format setDateFormat:@"HH:mm:ss"]; NSDate *date123 = [format dateFromString:temp123]; [format setDateFormat:@"hh:mm a"]; [format setAMSymbol:@"am"]; [format setPMSymbol:@"pm"]; NSString* newDateString = [format stringFromDate:date123]; ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: your time string is in 24 hrs format. Objective C ``` NSString *strTime = @"23:10:00"; NSDateFormatter *dateFormater = [[NSDateFormatter alloc] init]; [dateFormater setDateFormat:@"HH:mm:ss"]; NSDate *originalTimeFromString = [dateFormater dateFromString:strTime]; [dateFormater setDateFormat:@"hh:mm a"]; NSString *newDateString = [dateFormater stringFromDate:originalTimeFromString]; NSLog(@"%@",newDateString); ``` Swift 4 ``` let strTime = "23:10:00"; let dateFormater = DateFormatter() dateFormater.dateFormat = "HH:mm:ss" let originalTimeFromString = dateFormater.date(from: strTime) dateFormater.dateFormat = "hh:mm a" let newTimeString = dateFormater.string(from: originalTimeFromString!) print("Converted Time===>>>",newTimeString) ``` Upvotes: 0
2018/03/15
849
2,766
<issue_start>username_0: I am getting a base64 string from server in json response.It may contain image,zip file,pdf,mp3 or video.If i am going to decode the base 64 string then in the imageview the image is showing,but i want to download the file and open it in mobile gallery in case if it is image,if it is pdf it will open in pdf reader like that....how to achieve this? ``` "attach":[{"id":6,"name":"Ticket.pdf","thread_id":17,"size":"356660","type":"application\/pdf","poster":"ATTACHMENT","created_at":"2018-03-13 11:40:32","updated_at":"2018-03-13 11:40:32","file":"base64 encoded string","driver":"local","path":"\/home\/jamboree\/public_html\/sayarnew\/storage\/app\/private\/2018\/3\/13"}] ```<issue_comment>username_1: Try below functions as per your need, ``` +(NSString*)getStringFromDate:(NSDate*)pDate withFormat:(NSString*)pDateFormat{ NSDateFormatter *dtFormatter = [[NSDateFormatter alloc] init]; [dtFormatter setDateFormat:pDateFormat]; return [dtFormatter stringFromDate:pDate]; } +(NSDate*)getDateFromString:(NSString*)pStrDate withFormat:(NSString*)pDateFormat{ NSDateFormatter *dtFormatter = [[NSDateFormatter alloc] init]; [dtFormatter setDateFormat:pDateFormat]; return [dtFormatter dateFromString:pStrDate]; } +(NSString*)dateStringFromString:(NSString *)dateToConver format:(NSString *)fromFormat toFormat:(NSString *)toFormat{ NSDate *date = [Helper getDateFromString:dateToConver withFormat:fromFormat]; return [Helper getStringFromDate:date withFormat:toFormat]; } ``` Upvotes: 0 <issue_comment>username_2: if your `temp123` in 24 hour format, then use `HH` instead of `hh` ``` NSDateFormatter *format = [[NSDateFormatter alloc] init]; [format setDateFormat:@"HH:mm:ss"]; NSDate *date123 = [format dateFromString:temp123]; [format setDateFormat:@"hh:mm a"]; [format setAMSymbol:@"am"]; [format setPMSymbol:@"pm"]; NSString* newDateString = [format stringFromDate:date123]; ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: your time string is in 24 hrs format. Objective C ``` NSString *strTime = @"23:10:00"; NSDateFormatter *dateFormater = [[NSDateFormatter alloc] init]; [dateFormater setDateFormat:@"HH:mm:ss"]; NSDate *originalTimeFromString = [dateFormater dateFromString:strTime]; [dateFormater setDateFormat:@"hh:mm a"]; NSString *newDateString = [dateFormater stringFromDate:originalTimeFromString]; NSLog(@"%@",newDateString); ``` Swift 4 ``` let strTime = "23:10:00"; let dateFormater = DateFormatter() dateFormater.dateFormat = "HH:mm:ss" let originalTimeFromString = dateFormater.date(from: strTime) dateFormater.dateFormat = "hh:mm a" let newTimeString = dateFormater.string(from: originalTimeFromString!) print("Converted Time===>>>",newTimeString) ``` Upvotes: 0
2018/03/15
414
1,323
<issue_start>username_0: I have a SQL query which was initially this- ``` DELETE FROM table_1 WHERE column_1 IN ( SELECT column_1 FROM table_2 WHERE column_3 < CURRENT - INTERVAL(N) MONTH TO MONTH) ``` Now my advisory has told to use Financial Close Year instead of CURRENT like: ``` DELETE FROM table_1 WHERE column_1 IN ( SELECT column_1 FROM table_2 WHERE column_3 < FINANCIAL YEAR CLOSE - INTERVAL(N) MONTH TO MONTH) ``` FINANCIAL YEAR CLOSE being March End. I have no idea as to how I should incorporate the changes.<issue_comment>username_1: Since FINANCIAL YEAR CLOSE can be different from one company to another it is not a built in function and has to be calculated manually. You need to use CURRENT and extract the year part from that date and create a new date from [current year]-03-31 and use that in your query. Some googling makes me think this is an Informix database from IBM. Upvotes: 1 [selected_answer]<issue_comment>username_2: You could always use TO\_DATE() with a specific mask, something like: ``` $dbaccess stores7 - Database selected. > SELECT (TO_DATE("2018-03-01","%Y-%m-%d") - INTERVAL (1) MONTH TO MONTH) FROM TABLE(SET{1}); (constant) 2018-02-01 00:00:00.00000 1 row(s) retrieved. > Database closed. $ ``` Whatever you put as a date string is really up to you ;) Upvotes: 1
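Not Informix SQL, but the date arithmetic the accepted answer describes (build 31 March of the current year, then step back N months) can be sketched in Python for illustration; the April-to-March financial year and the helper names are assumptions.

```
import calendar
from datetime import date

def financial_year_close(today=None):
    """31 March of the current financial year (assuming an April-to-March year,
    matching the 'March end' close described in the question)."""
    today = today or date.today()
    year = today.year if today.month <= 3 else today.year + 1
    return date(year, 3, 31)

def months_before(d, n):
    """Step back n whole months, clamping the day to the target month's length."""
    index = d.year * 12 + (d.month - 1) - n
    year, month = index // 12, index % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

print(months_before(financial_year_close(date(2018, 3, 15)), 6))  # 2017-09-30
```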
2018/03/15
1,589
5,697
<issue_start>username_0: I am creating an app with SoundCloud api and trying to append the string value which is entered by user but it showing me null in interface how can I append that in url? MainActivity.java ``` b.setOnClickListener(new View.OnClickListener(){ @Override public void onClick(View p1) { // TODO: Implement this method search = et.getText().toString(); Log.d("search",search); Intent i = new Intent(MainActivity.this, SoundCloud.class); i.putExtra("mysearch", search); startActivity(i); Log.d("CustomUrl",SCService.CustomUrl); Toast.makeText(MainActivity.this,""+SCService.CustomUrl,Toast.LENGTH_SHORT).show(); Retrofit retrofit = new Retrofit.Builder() .baseUrl(Config.API_URL) .addConverterFactory(GsonConverterFactory.create()).build(); SCService Scservice = retrofit.create(SCService.class); Call call = Scservice.getTrack(search); call.enqueue(new Callback(){ @Override public void onResponse(Call call, Response response) { // TODO: Implement this method if (response.isSuccessful()) { Track track = response.body(); streamUrl = track.getStreamUrl(); trackname = track.getTitle(); Log.e("track", track.getTitle()); Toast.makeText(MainActivity.this, trackname, Toast.LENGTH_SHORT).show(); Toast.makeText(MainActivity.this, "url" + streamUrl, Toast.LENGTH_SHORT).show(); //addToList(trackname); } } @Override public void onFailure(Call p1, Throwable p2) { // TODO: Implement this method Toast.makeText(MainActivity.this,"error"+p2.getMessage(),Toast.LENGTH_SHORT).show(); //showMessage("Network Error: " + p2.getMessage()); Log.d("error", p2.getMessage()); } }); } }); ``` ScService.java ``` public interface SCService { public String CustomUrl = Config.API_URL+"/resolve.json?url="+SoundCloud.SCURL+"&client_id="+Config.CLIENT_ID; @GET("/resolve.json?url={SCURL}"+"&client_id="+Config.CLIENT_ID) Call getTrack(@Path("SCURL")String SCURL); } ``` Searched string is in my SoundCloud.java class ``` SCURL = receivedmsg.getString("mysearch"); Log.d("intentdata",SCURL); tv.setText("Search Results for " + SCURL); ``` I want append that SCURL String in GET parameter but it showing me null value for SCURL, how could i achieve this? Update: I have changed my code as above but got this error ``` enter code here 03-15 13:22:14.866 6149 6149 D search com.sk.scdoenloader /https://m.soundcloud.com/dharmaworldwide/houseofcards-mixmaster-05b 03-15 13:22:14.967 6149 6149 D CustomUrl com.sk.scdoenloader https://api.soundcloud.com/resolve.json?url=null&client_id=iZIs9mchVcX5lhVRyQGGAYlNPVldzAoX 03-15 13:22:15.299 6149 6149 E AndroidRuntime com.sk.scdoenloader FATAL EXCEPTION: main 03-15 13:22:15.299 6149 6149 E AndroidRuntime com.sk.scdoenloader Process: com.sk.scdoenloader, PID: 6149 03-15 13:22:15.299 6149 6149 E AndroidRuntime com.sk.scdoenloader java.lang.IllegalArgumentException: No Retrofit annotation found. 
(parameter #1) 03-15 13:22:15.299 6149 6149 E AndroidRuntime com.sk.scdoenloader for method SCService.getTrack 03-15 13:22:15.299 6149 6149 E AndroidRuntime com.sk.scdoenloader at retrofit2.ServiceMethod$Builder.methodError(ServiceMethod.java:720) 03-15 13:22:15.299 6149 6149 E AndroidRuntime com.sk.scdoenloader at retrofit2.ServiceMethod$Builder.methodError(ServiceMethod.java:711) 03-15 13:22:15.299 6149 6149 E AndroidRuntime com.sk.scdoenloader at retrofit2.ServiceMethod$Builder.parameterError(ServiceMethod.java:729) 03-15 13:22:15.299 6149 6149 E AndroidRuntime com.sk.scdoenloader ```<issue_comment>username_1: You have to use **`@Path`** here ``` @GET("/resolve.json?url={url}&client_id={clientId}") Call getTrack(@Path("url") String url,@Path("clientId") String clientId); ``` Upvotes: 4 <issue_comment>username_2: You can use **DYNAMIC** URL with retrofit !! (With *`@Get`* ) ``` public interface SCService{ @GET public Call getTrack(@Url String url); } ``` Then you can call with your *Custom string url* ``` service.getTrack("/resolve.json?url="+SoundCloud.SCURL+"&client_id="+ Config.CLIENT_ID + SCURL) ``` Upvotes: 3 <issue_comment>username_3: What you want here are GET parameters, not path, so use `@Query("scurl") String scurl` More details are [here](https://stackoverflow.com/a/24100442/541624) By the way, it's kinda strange (at least in Java) to use all caps for your variable name, `scurl` is more appropriate than `SCURL` here Upvotes: 1 <issue_comment>username_4: You have to use `@Query` here ``` @GET("Account/RegisterMobileNumber") Call sendSMSToMobileNumber(@Query("mobileNumber") String mobileNumber); ``` Upvotes: 1
2018/03/15
352
1,122
<issue_start>username_0: Is there a way to convert a two-dimensional array into a single-dimensional array without using a foreach loop in PHP? Below is the actual array ``` Array ( [0] => Array ( [male] => male [female] => female ) [1] => Array ( [male] => male1 [female] => female1 ) ) ``` And the **output** should look like ``` Array ( [0] => male [1] => female [2] => male1 [3] => female1 ) ```<issue_comment>username_1: You can use `array_reduce` together with `array_merge` ``` $array = array( ... ); //Your array here $result = array_reduce($array, function($c,$v){ return array_merge(array_values($c),array_values($v)); }, array()); ``` This will result in: ``` Array ( [0] => male [1] => female [2] => male1 [3] => female1 ) ``` Upvotes: 2 <issue_comment>username_2: This loops through your multidimensional array and stores the results in the new array variable $newArray. ``` $newArray = array(); foreach($multi as $array) { foreach($array as $k=>$v) { $newArray[$k] = $v; } } ``` Upvotes: 0
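The thread itself is about PHP; purely as a cross-language illustration of the same flattening idea, here is a Python sketch (the variable names and reliance on Python 3.7+ dict ordering are assumptions).

```
from itertools import chain

rows = [
    {"male": "male", "female": "female"},
    {"male": "male1", "female": "female1"},
]

# Concatenate the values of each inner mapping in row order, which mirrors
# what array_reduce + array_merge does in the accepted answer.
flat = list(chain.from_iterable(row.values() for row in rows))
print(flat)  # ['male', 'female', 'male1', 'female1']
```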
2018/03/15
324
1,112
<issue_start>username_0: I am a total beginner in R, so this might be an obvious question. I have a dataframe with 129 variables, some numeric, some string. Basically, I am trying to loop through each variable to calculate the mean/mode(depending on the variable), standard dev (if applicable), and frequency, and have it some sort of nice package I could export. I have tried using a for loop, but I can only get it to go through rows, not columns. Anyone have any suggestions? Thanks!!<issue_comment>username_1: You can use `reduce` and use `array_merge` ``` $array = array( ... ); //Your array here $result = array_reduce($array, function($c,$v){ return array_merge(array_values($c),array_values($v)); }, array()); ``` This will result to: ``` Array ( [0] => male [1] => female [2] => male1 [3] => female1 ) ``` Upvotes: 2 <issue_comment>username_2: This loops through your multidimensional array and stores the results in the new array variable $newArray. ``` $newArray = array(); foreach($multi as $array) { foreach($array as $k=>$v) { $newArray[$k] = $v; } } ``` Upvotes: 0
2018/03/15
476
1,374
<issue_start>username_0: from the table below, I need to get the visits which having EndTime between 20:00 to 00:00. (highlighted in yellow) [![enter image description here](https://i.stack.imgur.com/jbMFp.png)](https://i.stack.imgur.com/jbMFp.png) this is the query I am using now, but if I am not getting any result. if I change the time 00:00 to 23:59, I am getting the other two results, but not the one with endtime = 00:00. ``` Select VisitID, StartTime, duration, dateadd(minute,Duration,StartTime) as EndTime from visit where StartTime between '2018-02-01' and '2018-02-02 23:59' and cast(dateadd(minute,Duration,StartTime) as Time) >= '20:00' and cast(dateadd(minute,Duration,StartTime) as Time) <= '00:00' ``` Please assist. thanks.<issue_comment>username_1: You can use `reduce` and use `array_merge` ``` $array = array( ... ); //Your array here $result = array_reduce($array, function($c,$v){ return array_merge(array_values($c),array_values($v)); }, array()); ``` This will result to: ``` Array ( [0] => male [1] => female [2] => male1 [3] => female1 ) ``` Upvotes: 2 <issue_comment>username_2: This loops through your multidimensional array and stores the results in the new array variable $newArray. ``` $newArray = array(); foreach($multi as $array) { foreach($array as $k=>$v) { $newArray[$k] = $v; } } ``` Upvotes: 0
2018/03/15
1,534
4,168
<issue_start>username_0: Given list\_a and list\_b. I want to run list\_b through a function that gives all possible sublist of list\_b (this part of the code works). I then want to take every sublist of list\_b, and see if that sublist is also a sublist of list\_a. If it is I should get a list of all the indexes, or splices where that sublist appears in list\_a. I'm able to get the code to work for sublist of length one, but cannot get it to work for longer lists. Here's my current code to solve this problem: ``` import numpy as np a = [0,1,2,3,0,2,3] b = [0,2,3] sublists = [] def all_sublists(my_list): """ make a list containg every sublist of a my_list""" for i in range(len(my_list)): n = i+1 while n <= len(my_list): sublist = my_list[i:n] sublists.append(sublist) n += 1 def sublists_splice(sublist, my_list): """if sublist is in my_list print sublist and the corresponding indexes""" values = np.array(my_list) print(str(sublist) + " found at " + str(np.where(values == sublist)[0])) all_sublists(b) for sublist in sublists: sublists_splice(sublist, a) ``` This is the output of the code: ``` [0] found at [0 4] [0, 2] found at [] [0, 2, 3] found at [] [2] found at [2 5] [2, 3] found at [] [3] found at [3 6] /home/nicholas/Desktop/sublists.py:27: DeprecationWarning: elementwise == comparison failed; this will raise an error in the future. ``` Here's what I'd like to get: ``` [0] found at [0 4] [0, 2] found at [4:6] [0, 2, 3] found at [4:7] [2] found at [2 5] [2, 3] found at [2:4 5:7] [3] found at [3 6] ``` I'm assuming there's a pythonic way to approach this. While I've tried a few bits of code they've all been very long and haven't worked... One last note. I do need them to be sublists not subsets as order matters. I appreciate any help. Thank you.<issue_comment>username_1: This is one solution using `itertools.combinations`. Note I have made it as *lazy* as possible, but that does not mean it is the most efficient solution available. 
``` from itertools import combinations import numpy as np a = [0,1,2,3,0,2,3] b = [0,2,3] def get_combs(x): return (list(c) for i in range(1, len(x)) for c in combinations(x, i)) def get_index_arr(x, y): n = len(x) lst = (y[i:i+n] for i in range(len(y)-len(x)+1)) return (i for i, j in enumerate(lst) if x == j) combs = get_combs(b) d = {tuple(c): list(get_index_arr(c, a)) for c in combs} # {(0,): [0, 4], # (0, 2): [4], # (0, 3): [], # (2,): [2, 5], # (2, 3): [2, 5], # (3,): [3, 6]} ``` Upvotes: 1 <issue_comment>username_2: Using tools from [Find boolean mask by pattern](https://stackoverflow.com/questions/48988038/find-boolean-mask-by-pattern/49002944) ``` def rolling_window(a, window): #https://stackoverflow.com/q/7100242/2901002 shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) c = np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) return c def vview(a): #based on @jaime's answer: https://stackoverflow.com/a/16973510/4427777 return np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * a.shape[1]))) def sublist_loc(a, b): a, b = np.array(a), np.array(b) n = min(len(b), len(a)) sublists = [rolling_window(b, i) for i in range(1, n + 1)] target_lists = [rolling_window(a, i) for i in range(1, n + 1)] locs = [[np.flatnonzero(vview(target_lists[i]) == s) for s in vview(subl)] \ for i, subl in enumerate(sublists)] for i in range (n): for j, k in enumerate(sublists[i]): print(str(k) + " found starting at index " + str(locs[i][j])) return sublists, target_lists, locs _ = sublist_loc(a, b) [0] found starting at index [0 4] [2] found starting at index [2 5] [3] found starting at index [3 6] [0 2] found starting at index [4] [2 3] found starting at index [2 5] [0 2 3] found starting at index [4] ``` As an added benefit, all the `rolling_window` and `vview` calls are just to views of the original arrays, so there is no big memory hit to storing the combinations. Upvotes: 3 [selected_answer]
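For readers who want to sanity-check the NumPy answer above, a plain-Python version of the same contiguous-sublist search (no striding, so it is assumed to be slower on large inputs) reproduces the accepted output:

```
def contiguous_sublists(seq):
    """Every contiguous slice of seq, shortest first."""
    n = len(seq)
    return [seq[i:i + size] for size in range(1, n + 1) for i in range(n - size + 1)]

def find_sublist(sub, seq):
    """Start indices where sub occurs contiguously inside seq."""
    m = len(sub)
    return [i for i in range(len(seq) - m + 1) if seq[i:i + m] == sub]

a = [0, 1, 2, 3, 0, 2, 3]
b = [0, 2, 3]
for sub in contiguous_sublists(b):
    print(sub, "found starting at index", find_sublist(sub, a))
```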
2018/03/15
621
1,946
<issue_start>username_0: I'm new to **Bootstrap** I have a HTML like: ``` First Column Second Column ``` Now there are only two columns of 3 size each and will not cover the full length of the row so there will be blank space at the right side of the row on the view. These columns are dynamic like these may be One, two, three or may be four in a row at a time. I want this, **if there are less then four columns with size 3 in the row, those columns should be in center of the row instead of left side and blank space in right side.** Please suggest, how can I manage this kind of case?<issue_comment>username_1: ```css .row { display: flex; justify-content: center; } ``` ```html First Column Second Column ``` you can use offset class or just add these css ``` .row { display: flex; justify-content: center; } ``` Upvotes: 2 <issue_comment>username_2: According to your requirement and if you are just new to bootstrap, I will recommend you to implement **[bootstrap4](https://getbootstrap.com/)**, as this framework is based on **Flexbox** and its very easy to align the columns here **Reference:** * [Bootstrap Grid](https://getbootstrap.com/docs/4.0/layout/grid/) * [Bootstrap Flex Utilities](https://getbootstrap.com/docs/4.0/utilities/flex/) ```css .col-3 { padding-top: .75rem; padding-bottom: .75rem; background-color: rgba(86, 61, 124, .15); border: 1px solid rgba(86, 61, 124, .2); } ``` ```html 1 2 ``` --- But if you are working in a code that already using bootstrap3 and can't switch to bootstrap4, use a custom class of yours in `row` and use `flexbox` ```css .flex-row { display: flex; justify-content: center; flex-wrap: wrap; } .flex-row .col-xs-3 { padding-top: .75rem; padding-bottom: .75rem; background-color: rgba(86, 61, 124, .15); border: 1px solid rgba(86, 61, 124, .2); } ``` ```html 1 2 ``` Upvotes: 3 [selected_answer]
2018/03/15
613
1,844
<issue_start>username_0: I am new to programming and am using JavaScript right now. I want code to get the end date of an execution schedule. I have the start date, the weekdays on which execution occurs, and the number of executions. How can I get the end date? For example, the start date is '15-03-2018', the days to be executed are Sunday i.e. '0' and Friday '5', and the number of executions from start date to end date is 5. The end date of execution here should be '30-03-2018', which is what needs to be computed. Any ideas?<issue_comment>username_1: ```css .row { display: flex; justify-content: center; } ``` ```html First Column Second Column ``` you can use offset class or just add these css ``` .row { display: flex; justify-content: center; } ``` Upvotes: 2 <issue_comment>username_2: According to your requirement and if you are just new to bootstrap, I will recommend you to implement **[bootstrap4](https://getbootstrap.com/)**, as this framework is based on **Flexbox** and its very easy to align the columns here **Reference:** * [Bootstrap Grid](https://getbootstrap.com/docs/4.0/layout/grid/) * [Bootstrap Flex Utilities](https://getbootstrap.com/docs/4.0/utilities/flex/) ```css .col-3 { padding-top: .75rem; padding-bottom: .75rem; background-color: rgba(86, 61, 124, .15); border: 1px solid rgba(86, 61, 124, .2); } ``` ```html 1 2 ``` --- But if you are working in a code that already using bootstrap3 and can't switch to bootstrap4, use a custom class of yours in `row` and use `flexbox` ```css .flex-row { display: flex; justify-content: center; flex-wrap: wrap; } .flex-row .col-xs-3 { padding-top: .75rem; padding-bottom: .75rem; background-color: rgba(86, 61, 124, .15); border: 1px solid rgba(86, 61, 124, .2); } ``` ```html 1 2 ``` Upvotes: 3 [selected_answer]
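The question asks for JavaScript; purely as an illustration of the underlying algorithm, here is a Python sketch. It uses JavaScript-style weekday numbering (Sunday = 0) and assumes the start date counts only if it falls on an allowed weekday.

```
from datetime import date, timedelta

def execution_end_date(start, allowed_days, executions):
    """Return the date of the n-th execution, counting only days whose
    JavaScript-style day-of-week (Sunday=0 .. Saturday=6) is allowed."""
    current, remaining = start, executions
    while True:
        if (current.weekday() + 1) % 7 in allowed_days:
            remaining -= 1
            if remaining == 0:
                return current
        current += timedelta(days=1)

print(execution_end_date(date(2018, 3, 15), {0, 5}, 5))  # 2018-03-30
```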
2018/03/15
908
2,295
<issue_start>username_0: would like some help with the following problem. I currently have a panda dataframe with 3 columns - test1, test2, test3 What I hope to achieve is result in the result\_column, where the logic will be: 1) If value in test1 **AND** test2 > 0, then return value of test3 2) Else If value test1 **AND** test2 < 0, then return **NEGATIVE** value of test3 3) Otherwise return 0 ``` test1 test2 test3 result_column 0 0.5 0.1 1.25 1.25 1 0.2 -0.2 0.22 0 2 -0.3 -0.2 1.12 -1.12 3 0.4 -0.3 0.34 0 4 0.5 0 0.45 0 ``` This is my first time posting a question on python and pandas. Apologies in advance if the formatting here is not optimum. Appreciate any help I can get!<issue_comment>username_1: I think need [`numpy.select`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html) with conditions chained by `&` (`AND`) or select all tested columns by subset `[[]]`, compare ant test by [`DataFrame.all`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html): ``` m1 = (df.test1 > 0) & (df.test2 > 0) #alternative #m1 = (df[['test1', 'test2']] > 0).all(axis=1) m2 = (df.test1 < 0) & (df.test2 < 0) #alternative #m2 = (df[['test1', 'test2']] < 0).all(axis=1) df['result_column'] = np.select([m1,m2], [df.test3, -df.test3], default=0) print (df) test1 test2 test3 result_column 0 0.5 0.1 1.25 1.25 1 0.2 -0.2 0.22 0.00 2 -0.3 -0.2 1.12 -1.12 3 0.4 -0.3 0.34 0.00 4 0.5 0.0 0.45 0.00 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Clever use of `np.sign` and logic If both `> 0` or both `< 0` then product is `1`, otherwise product is `-1` or `0`. If both `> 0` then sign of sum is `1`. If both `< 0` then sign of sum is `-1`, else `0`. The product of these things is exactly what we want. ``` v = np.sign(df[['test1', 'test2']].values) df.assign(result_column=v.prod(1) * np.sign(v.sum(1)) * df.test3 + 0) test1 test2 test3 result_column 0 0.5 0.1 1.25 1.25 1 0.2 -0.2 0.22 0.00 2 -0.3 -0.2 1.12 -1.12 3 0.4 -0.3 0.34 0.00 4 0.5 0.0 0.45 0.00 ``` Upvotes: 2
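A self-contained version of the accepted `np.select` approach, with the example frame from the question built inline so the snippet runs on its own:

```
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "test1": [0.5, 0.2, -0.3, 0.4, 0.5],
    "test2": [0.1, -0.2, -0.2, -0.3, 0.0],
    "test3": [1.25, 0.22, 1.12, 0.34, 0.45],
})

# Both positive -> test3, both negative -> -test3, anything else -> 0.
both_positive = (df.test1 > 0) & (df.test2 > 0)
both_negative = (df.test1 < 0) & (df.test2 < 0)
df["result_column"] = np.select(
    [both_positive, both_negative], [df.test3, -df.test3], default=0
)
print(df)
```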
2018/03/15
1,164
3,113
<issue_start>username_0: I have a ASP function as below for rounding up the amount: ``` function GetRoundedVal(amount) NoOfRight = right(formatnumber(amount,2),1) if NoOfRight = 0 then roundedAmount = amount elseif NoOfRight = 1 then roundedAmount = amount - 0.01 elseif NoOfRight = 2 then roundedAmount = amount - 0.02 elseif NoOfRight = 3 then roundedAmount = amount + 0.02 elseif NoOfRight = 4 then roundedAmount = amount + 0.01 elseif NoOfRight = 5 then roundedAmount = amount elseif NoOfRight = 6 then roundedAmount = amount - 0.01 elseif NoOfRight = 7 then roundedAmount = amount - 0.02 elseif NoOfRight = 8 then roundedAmount = amount + 0.02 elseif NoOfRight = 9 then roundedAmount = amount + 0.01 else end if GetRoundedVal = roundedAmount end function ``` The result should be like this: ``` +----------------+--------+ | Original Value | Result | +----------------+--------+ | 19.91 | 19.90 | Original Value - 0.01 | 19.92 | 19.90 | Original Value - 0.02 | 19.93 | 19.95 | Original Value + 0.02 | 19.94 | 19.95 | Original Value + 0.01 | 19.95 | 19.95 | | 19.96 | 19.95 | Original Value - 0.01 | 19.97 | 19.95 | Original Value - 0.02 | 19.98 | 20.00 | Original Value + 0.02 | 19.99 | 20.00 | Original Value + 0.01 +----------------+--------+ ``` The question is can we do this directly on T-SQL? SQL Server V14. If it even possible. Thanks.<issue_comment>username_1: try This ``` DECLARE @Inp TABLE ( Org DECIMAL(10,4), Res DECIMAL(10,2) ) INSERT INTO @Inp ( Org ) VALUES(19.91), (19.92), (19.93), (19.94), (19.95), (19.96), (19.97), (19.98), (19.99) SELECT *, Result = CAST( Org + CASE RIGHT(CAST(Org AS DECIMAL(10,2)),1) WHEN 1 THEN -0.01 WHEN 2 THEN -0.02 WHEN 3 THEN 0.02 WHEN 4 THEN 0.01 WHEN 5 THEN 0 WHEN 6 THEN -0.01 WHEN 7 THEN -0.02 WHEN 8 THEN 0.02 WHEN 9 THEN 0.01 ELSE 0 END AS DECIMAL(10,2)) FROM @Inp ---------- ``` Upvotes: 1 <issue_comment>username_2: You can try this. ``` DECLARE @MyTable TABLE(OriginalValue DECIMAL(18,2)) INSERT INTO @MyTable VALUES (19.91), (19.92), (19.93), (19.94), (19.95), (19.96), (19.97), (19.98), (19.99) SELECT OriginalValue, ROUND( ( OriginalValue / 0.05 ), 0, 0 ) * 0.05 Result FROM @MyTable ``` Result: ``` OriginalValue Result --------------------------------------- --------------------------------------- 19.91 19.90000000 19.92 19.90000000 19.93 19.95000000 19.94 19.95000000 19.95 19.95000000 19.96 19.95000000 19.97 19.95000000 19.98 20.00000000 19.99 20.00000000 ``` Upvotes: 3 [selected_answer]
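Not T-SQL, but a quick Python replay of the accepted answer's nearest-0.05 logic confirms it matches the expected table (display values are rounded to two decimals to hide floating-point noise):

```
values = [19.91, 19.92, 19.93, 19.94, 19.95, 19.96, 19.97, 19.98, 19.99]
for v in values:
    # Round to the nearest multiple of 0.05, mirroring ROUND(v / 0.05, 0) * 0.05.
    rounded = round(round(v / 0.05) * 0.05, 2)
    print(v, "->", rounded)
```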
2018/03/15
590
2,420
<issue_start>username_0: I am new to web development. Here, I am using the jQuery `datepicker`. I have an input box in which the user should be able to select only the year and month, not the day. So I used: ``` $('#' + duration[k]).datepicker({ format: 'MM yyyy', viewMode: "months", minViewMode: "months", }); ``` My inputs are created dynamically: ``` var duration = ["StartDuration", "EndDuration"]; var commentText = document.createElement('input'); commentText.setAttribute("type", "text"); commentText.name = "month"; commentText.id = duration[k]; commentText.className = 'form-control'; commentText.rows = '3'; commentText.placeholder = "Enter" + " " + duration[k]; commentText.setAttribute("readOnly", "true"); commentText.setAttribute("ng-model", duration[k]); ``` I used a for loop over the array to create these input boxes. What happens now is that the datepicker appears, but it shows `date-month-year`. I want to have only year and month. Can anyone please help me with this?<issue_comment>username_1: You need to use a date format like this: `dateFormat: 'MM yy'` Check the example: ```js $(function() { $('.date-picker').datepicker( { changeMonth: true, changeYear: true, showButtonPanel: true, dateFormat: 'MM yy', onClose: function(dateText, inst) { var month = $("#ui-datepicker-div .ui-datepicker-month :selected").val(); var year = $("#ui-datepicker-div .ui-datepicker-year :selected").val(); $(this).datepicker('setDate', new Date(year, month, 1)); } }); }); ``` ```css .ui-datepicker-calendar { display: none; } ``` ```html Your Date : ``` Here is the [fiddle](http://jsfiddle.net/DBpJe/13732/) Upvotes: 1 <issue_comment>username_2: You can do it like this: ``` $('.date-picker').datepicker( { changeMonth: true, changeYear: true, showButtonPanel: true, dateFormat: 'MM yy', }); ``` The jQuery datepicker provides the `changeMonth` and `changeYear` properties for this; set the format to 'MM yy'. Upvotes: 0
2018/03/15
429
1,586
<issue_start>username_0: I am using **Django** to develop a web application on a Linux server. When it comes to deployment, a proposed way is to use **Nginx** to communicate with the client, so **Django** only needs to communicate with **Nginx** over the **uwsgi** protocol. The relationship is client<-Http->Nginx<-uwsgi->Django server Later I found that I can also expose the server on the public network with `python manage.py runserver 0:0:0:0` It seems that Nginx can help serve the static files and media. My question is: what exactly is the benefit of using Nginx as the middleman?<issue_comment>username_1: From the [docs on runserver](https://docs.djangoproject.com/en/2.0/ref/django-admin/#runserver): > > DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.) > > > Upvotes: 2 <issue_comment>username_2: `runserver` is for debugging; in production we use gunicorn/uwsgi to boot the Django app, so the question becomes `Do we need Nginx if we have configured gunicorn/uwsgi for Django`. The answer is `YES`, because compared with gunicorn/uwsgi, Nginx has the following advantages: * security, it can be configured to deny or allow certain ip addresses * load balancing * handle static files * cache * ... refer to more features: <https://en.wikipedia.org/wiki/Nginx#HTTP_proxy_and_Web_server_features> Upvotes: 1
2018/03/15
372
1,349
<issue_start>username_0: I meet a problem when I tried to write the json data to static file in my spring boot project. It occurs that the fileNotFound though I ensure that I have the file. The absolute path is just ok. [Here is my project structure](https://i.stack.imgur.com/aV1dT.png) I write the path: /data/user.json How should I change the path?<issue_comment>username_1: From the [docs on runserver](https://docs.djangoproject.com/en/2.0/ref/django-admin/#runserver): > > DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.) > > > Upvotes: 2 <issue_comment>username_2: `runserver` is for debugging, in production we use gunicorn/uwsgi to boot the django app, so the question can be `Do we need Nginx if we have configured gunicorn/uwsgi for Django`, the answer is `YES`, becuase compared with gunicorn/uwsgi, Nginx has following advantages : * security, it can be configured to deny or allow certain ip addresses * load balancing * handle static files * cache * ... refer to more features: <https://en.wikipedia.org/wiki/Nginx#HTTP_proxy_and_Web_server_features> Upvotes: 1
2018/03/15
422
1,649
<issue_start>username_0: We migrated a project from TFS 2015 to VSTS recently as a Scrum project as it was in TFS. But we want the project to be using the CMMI process template. Now, how can we migrate the existing project from Scrum to CMMI process template?<issue_comment>username_1: According to MS documentation ([here](https://learn.microsoft.com/en-us/vsts/work/customize/process/manage-process#change-the-process-used-by-team-projects)) ... > > You can change the process a team project uses from a system process > or inherited process to an inherited process. You can only change team > projects to use another process that inherits from the same system > process. That is, you can change an Agile-based team project to any > process you created from the Agile system process as well as to the > Agile process. Whereas, you can't change a Scrum-based team project to > an Agile-derived inherited process. > > > But you could still create a new team project based on CMMI and then move your source code and workitems to that new team project. To move workitems you can export them to Excel, create a new Excel connection to VSTS, that is connected to the new team project, and then copy the workitems and pushing them from the new Excel file into the new project. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can custom a process when using Hosted XML, the changes made to the process template are then applied to all team projects using that process. More information: [Customize a process when using Hosted XML](https://learn.microsoft.com/en-us/vsts/work/customize/import-process/customize-process) Upvotes: 2
2018/03/15
352
1,398
<issue_start>username_0: Can anybody just tell me is it possible to achieve? I am trying to create a website using angularjs- front end and falcon- back end. So, how could i integrate them so that my form data get posted to back end? Thanks in advance!<issue_comment>username_1: Use Node js instead of the falcon-back end and also brush up using angularjs documentation and nodejs documentation Upvotes: -1 <issue_comment>username_2: Yes, It is possible to achieve. I have Falcon and Angular 5 running in production and deployed on AWS Beanstalk and they've been playing very nicely together. It's hard to answer your question without knowing what you have already tried. If you have not started yet, I suggest you start with the simplest setup for both Falcon and Angular (I'm going to assume Angular 5). This is a very rough plan to get started: 1) Install Falcon library; copy [the example API from Falcon](https://falconframework.org/) and run the server. 2) Download the example APP from [Angular Tutorial](https://angular.io/tutorial/toh-pt6) 3) Edit [HeroService](https://stackblitz.com/angular/nlvrooxxdvy?file=src%2Fapp%2Fhero.service.ts) to get data from your API server. If everything goes well, you should be able to get here within an hour or so. Once you see Angular calling your Falcon backend and fetch data you know you've got a basis to start from. Good luck! Upvotes: 2
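To make the Falcon side of this concrete, a minimal resource an Angular client could POST form data to might look like the sketch below. The route name is made up, the snippet assumes Falcon 3.x (older releases use `falcon.API()` instead of `falcon.App()`), it would be served by a WSGI server such as gunicorn, and a real setup would also need CORS handling so the Angular dev server can reach it.

```
import falcon

class ContactResource:
    def on_post(self, req, resp):
        payload = req.media                 # parsed JSON body sent by Angular's HttpClient
        # ... validate / persist the form data here ...
        resp.media = {"received": payload}  # echo it back so the client sees what arrived
        resp.status = falcon.HTTP_201

app = falcon.App()                          # falcon.API() on releases older than 3.0
app.add_route("/api/contact", ContactResource())
```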
2018/03/15
1,243
3,990
<issue_start>username_0: This is a simple program ``` #include <stdio.h> void get(int,int); void main() { int a,b; get(a,b); printf("In main"); printf("%d",a); } void get(int m,int n) { printf("enter the value"); scanf("%d%d",&m,&n); } ``` and the output I got is ``` enter the value 4 5 ``` in main: ``` 0 ``` Why is the value of `m` in `get()` not assigned to `a` in `main()`? What's my mistake?<issue_comment>username_1: Your scanf reads into function-local variables which store the values you give as parameters to the function. Their values are not visible in the variables you pass as arguments to the function. You probably want to use pointers to variables as parameters; then the read values can end up in the variables pointed to by those pointer parameters. Upvotes: 0 <issue_comment>username_2: This is basically happening due to the scope of variables, since integers are passed by value and not by reference. You need to return the values of `a` and `b` for them to be present in `main()`. See more here <https://www.tutorialspoint.com/cprogramming/c_function_call_by_value.htm> ``` #include <stdio.h> int get(int,int); void main() { int a,b; a = get(a,b); printf("In main"); printf("%d",a); } int get(int m,int n){ printf("enter the value"); scanf("%d%d",&m,&n); return m; } ``` > > Enter the value > > > 10 20 > > > In main > > > 10 > > > Also, read about indenting the code so that it's more readable. Upvotes: 0 <issue_comment>username_3: You're passing your main variables by value. Read about how you can pass by reference [here](https://stackoverflow.com/questions/13654138/what-exactly-is-the-difference-between-pass-by-reference-in-c-and-in-c). `scanf` requires addresses of variables in order to modify them, so you need to pass their addresses like this: ``` get(&a, &b); ``` And you can modify your `get()` method like this: ``` void get(int* pM,int* pN) { printf("enter the value"); scanf("%d%d", pM, pN); } ``` Upvotes: 2 <issue_comment>username_4: > > Why is the value of m in get() not assigned to a in main()? What's my mistake? > > > First, you need to understand the concept of parameter passing in C. *[If you are not aware of formal and actual parameters, check [**this**](https://stackoverflow.com/questions/156767/whats-the-difference-between-an-argument-and-a-parameter)]* Technically, everything in C is pass-by-value. Here, ``` get(a,b); ``` you are passing the values of the `a` and `b` variables to the function `get()`. The values of the *actual parameters* `a` and `b` will be copied to the *formal parameters* `m` and `n` *[in this case, the value of the `a` and `b` variables is garbage since you have not initialized them]*. Any modification to the value of the formal parameters (`m` and `n`) in the called function will not be reflected in the actual parameters (`a` and `b`) because the formal parameters have separate storage. Hence, the value of `m` in `get()` is not assigned to `a` in `main()`. The part of the answer below is based on the assumption that you are aware of the concept of pointers in C. If not, I would suggest picking a good C book or tutorial and going through the concept of pointers. C provides a facility to pass a pointer to a function, which is also pass-by-value only: it copies the value of the pointer, i.e. the address, to the function's formal parameters, and you can access the value stored at that address by dereferencing the pointer. Hence, any changes made to the value at the address passed will be reflected in the calling function's actual parameters.
So, you can do: ``` #include <stdio.h> void get(int *, int *); int main() { int a, b; get(&a, &b); printf("In main\n"); printf("a : %d, b = %d\n", a, b); } void get(int *m, int *n) { printf("Enter the value:\n"); scanf("%d%d", m, n); // m holds the address of a and n holds the address of b variable. printf("Value entered:\n"); printf("%d %d\n", *m, *n); // dereferencing the pointers m and n } ``` Upvotes: 0
2018/03/15
1,100
3,276
<issue_start>username_0: ``` library(rgdal) library(maptools) library(gstat) library(sp) data <- read.table("meuse.txt", sep="", header=TRUE) # read txt file # transform the data frame into a spatial data frame coordinates(data) <- ~ x + y ## Set the coordinate system proj4string(data) <- CRS("+init=epsg:4326") ## the epsg numbers can be found here: http://spatialreference.org/ref/ # import the border shp file border <- readOGR("meuse_area.shp", "meuse_area") proj4string(border) <- CRS("+init=epsg:4326") # import a raster from a ArcInfo ASCII format zinc <- read.asciigrid("zinc.asc") proj4string(zinc) <- CRS("+init=epsg:4326") # Let's first create a prediction grid for the interpolation, starting from # the shape file vals <- border@bbox deltaLong <- as.integer((vals[1, 2] - vals[1, 1]) + 1.5) deltaLat <- as.integer((vals[2, 2] - vals[2, 1]) + 1.5) gridRes <- 0.5 # change this value to change the grid size (in metres) gridSizeX <- deltaLong / gridRes gridSizeY <- deltaLat / gridRes grd <- GridTopology(vals[, 1], c(gridRes, gridRes), c(gridSizeX, gridSizeY)) pts <- SpatialPoints(coordinates(grd)) pts1 <- SpatialPointsDataFrame(as.data.frame(pts), data=as.data.frame(rep(1, nrow(as.data.frame(pts))))) Overlay <- overlay(pts1, border) pts1$border <- Overlay nona <- na.exclude(as.data.frame(pts1)) coordinates(nona) <- ~ x + y gridded(nona) <- TRUE proj4string(nona) <- CRS("+init=epsg:4326") # remember to set the coordinate # system also for the prediction grid writeAsciiGrid(nona, "prediction_grid.asc") # For the Co-kriging we need to obtain the value of the covariate for each # observation over <- overlay(zinc, data) data$zinc <- over$zinc.asc str(as.data.frame(data)) # also the prediction grid need to be overlayed with the covariate over <- overlay(zinc, nona) nona$zinc <- over$zinc.asc # for the cokriging, the first thing to do is create an object with the # function gstat() that contains both the variable and the covariate str(data) complete.cases("data") str(zinc) complete.cases("zinc") g <- gstat(id="lead", formula=lead ~ 1, data=data) g <- gstat(g, id="zinc", formula=zinc ~ 1, data=data) # Fitting the variogram # first, plot the residual variogram vario <- variogram(g) ``` > > Error in na.fail.default(list(zinc = c(NA, NA, NA, NA, NA, NA, NA, NA, > : missing values in object > > > I know that there is no missing in zinc when I edit the object in Notepad. What did I miss? There is no NA in zinc.asc.[These are my data.](https://www.dropbox.com/s/8xqtiu1595bzjhk/NEW%20EXPERIMENT.rar?dl=0) I want to perform cokriging and I am stuck with variogram.<issue_comment>username_1: One cause of the confusion here may be that sp used to include it's own overlay function. <https://www.rdocumentation.org/packages/sp/versions/0.9-7/topics/overlay> However the most recent version of the sp package replaced the "overlay" function with "over". I do no believe raster::overlay behaves the same as the overlay function from sp 0.9-7 Upvotes: 0 <issue_comment>username_2: also you can change the two lines : ``` overlay=overlay(pts1,border) pts1$border=Overlay ``` by ``` pts1$border=over(pts1,border) ``` Upvotes: 2 [selected_answer]
2018/03/15
617
2,093
<issue_start>username_0: How to sort a list using comparator in Descending order (based on salary which is a long value) --- ``` class Empl{ private String name; private long salary; public Empl(String n, long s){ this.name = n; this.salary = s; } public String getName() { return name; } public void setName(String name) { this.name = name; } public long getSalary() { return salary; } public void setSalary(long salary) { this.salary = salary; } } ```<issue_comment>username_1: Try this: **Descending** ``` Collections.sort(modelList, new Comparator() { @Override public int compare(Empl o1, Empl o2) { return o2.getSalary().compareTo(o1.getSalary()); } }); ``` **Ascending** ``` Collections.sort(modelList, new Comparator() { @Override public int compare(Empl o1, Empl o2) { return o1.getSalary().compareTo(o2.getSalary()); } }); ``` Upvotes: 1 <issue_comment>username_2: Use this way ``` Collections.sort(employeeList, new Comparator() { @Override public int compare(Empl lhs, Empl rhs) { return ((Long) rhs.getSalary()).compareTo((Long) lhs.getSalary()); } }); ``` Upvotes: 0 <issue_comment>username_3: If you are using Java 8 you can use [Stream#sorted](https://developer.android.com/reference/java/util/stream/Stream.html#sorted(java.util.Comparator%3C?%20super%20T%3E)) with [Comparator#comparingLong](https://developer.android.com/reference/java/util/Comparator.html#comparingLong(java.util.function.ToLongFunction%3C?%20super%20T%3E)) like this : ``` list = list.stream() .sorted(Comparator.comparingLong(Empl::getSalary).reversed()) .collect(toList()); ``` Note the `.reversed()` when you use it the list is sorted descendant, else if you don't use it, it is sorted ascendant. Upvotes: 3 [selected_answer]<issue_comment>username_4: If you’re using Java-8 then you can call the default sort method directly on the list providing the necessary comparator. ``` list.sort(Comparator.comparingLong(Empl::getSalary).reversed()); ``` Upvotes: 0
2018/03/15
682
2,177
<issue_start>username_0: I have two different queries from two tables. The first query I have is: ``` select sum(total_amount) as total_amount, supplier_name from tbL_supplierAccountLedger where DATE >= '2017-01-01' and DATE <= '2017-12-31' group by supplier_name ``` The output of this is ``` Total Amount | Supplier name 4000 A 5000 B 8000 C 9000 D ``` Here is my another query with different tablename ``` SELECT SUM(RET_AMOUNT)as returnamount, SUPPLIER_NAME FROM tbl_PurchaseReturns where CAST(date as DATE) >= '2017-01-01' and CAST(date as DATE) <= '2017-12-31' group by SUPPLIER_NAME ``` The output of this is ``` Return Amount | Supplier name 1000 A 2000 B 500 C ``` I want a query that automatically subtracts table B from table A. Below is the expected output. ``` total amount | Supplier Name 3000 A 3000 B 7500 C 9000 D ```<issue_comment>username_1: use derived query and union both result, with the RET\_AMOUNT of tbl\_PurchaseReturns as negative value. And finally group by supplier\_name ``` SELECT SUM(total_amount), supplier_name FROM ( SELECT sum(total_amount) as total_amount, supplier_name from tbL_supplierAccountLedger where DATE >= '2017-01-01' and DATE <= '2017-12-31' group by supplier_name UNION ALL SELECT SUM(-RET_AMOUNT) as returnamount, supplier_name FROM tbl_PurchaseReturns where CAST(date as DATE) >= '2017-01-01' and CAST(date as DATE) <= '2017-12-31' group by supplier_name ) AS D GROUP BY supplier_name ``` Upvotes: 2 <issue_comment>username_2: Do the `JOIN`s ``` SELECT s.supplier_name, r.total_amount - coalesce(returnamount, 0) as amount from ( SELECT supplier_name , SUM(total_amount) as total_amount FROM tbL_supplierAccountLedger WHERE ... GROUP BY supplier_name )s LEFT JOIN ( SELECT SUPPLIER_NAME , SUM(RET_AMOUNT)as returnamount FROM tbl_PurchaseReturns WHERE ... GROUP BY SUPPLIER_NAME ) r on r.SUPPLIER_NAME= s.supplier_name ``` Upvotes: 2
2018/03/15
780
2,796
<issue_start>username_0: Why is the image not shown in the UIButton when using this code? I have an image URL and want to use that image in my app. ``` //Show Image from URL let url = URL(string:"https://static.pexels.com/p…/247932/pexels-photo-247932.jpeg") let session = URLSession.shared session.dataTask(with: url!, completionHandler: { (data, response, error) in if (error == nil) && (data != nil) { DispatchQueue.main.async { let img = UIImage(data: data!) print(img!) self.otlBtnTakeImage.imageView?.image = UIImage(data: data!) } } }).resume() ```<issue_comment>username_1: Set the image like this ``` button.setImage(img, for: .normal) // To set image with title. button.setBackgroundImage(img, for: .normal) // set btn background image. ``` Upvotes: 0 <issue_comment>username_2: An easy way to make the same code run asynchronously, not blocking the UI, is by using GCD: ``` let url = URL(string: image.url) DispatchQueue.global().async { let data = try? Data(contentsOf: url!) //make sure your image in this url does exist, otherwise unwrap in a if let check / try-catch DispatchQueue.main.async { self.otlBtnTakeImage.setImage(UIImage(data: data!), for: .normal) } } ``` Upvotes: 0 <issue_comment>username_3: ``` //Show Image from URL let url = URL(string:"https://static.pexels.com/p…/247932/pexels-photo-247932.jpeg") let session = URLSession.shared session.dataTask(with: url!, completionHandler: { (data, response, error) in if (error == nil) && (data != nil) { DispatchQueue.main.async { let img = UIImage(data: data!) print(img!) self.otlBtnTakeImage.setImage(img, for: .normal) } } }).resume() ``` The above changes should solve your issue and display the image in the button Upvotes: 0 <issue_comment>username_4: You can set the image of a UIButton like this. ``` let image = UIImage(named: "imagename.png") yourButton.setBackgroundImage(image, for: .normal) yourButton.setImage(image, for: .normal) ``` Upvotes: 0 <issue_comment>username_5: You can use this code to load and show the image in the UIButton. ``` let url = URL(string:"https://static.pexels.com/photos/247932/pexels-photo-247932.jpeg") let session = URLSession.shared session.dataTask(with: url!, completionHandler: { (data, response, error) in if (error == nil) && (data != nil) { DispatchQueue.main.async { let img = UIImage(data: data!) print(img!) self.otlBtnTakeImage.setImage(img, for: .normal) } } }).resume() ``` Upvotes: 2 [selected_answer]
2018/03/15
2,335
7,297
<issue_start>username_0: I am doing key value pair mapping for the first time and not been able to approach. I have a key value pair like : ``` trips= { date1: [ { "id": 1, "Place": "Delhi", "Number": "001", "Vehicle": {"id":"veh1", "number": "AN01001"} }, { "id": 2, "Place": "Bangalore", "Number": "002", "Vehicle": {"id":"veh2", "number": "AN01002"} }, { "id": 3, "Place": "Pune", "Number": "003", "Vehicle": {"id":"veh3", "number": "AN01003"} } ], date2: [ { "id": 1, "Place": "Lucknow", "Number": "001", "Vehicle": {"id":"veh1", "number": "AN01002"} }, { "id": 3, "Place": "Pune", "Number": "003", "Vehicle": {"id":"veh3", "number": "AN01003"} } ], date3: [ { "id": 1, "Place": "Delhi", "Number": "001", "Vehicle": {"id":"veh1", "number": "AN01001"} }, { "id": 2, "Place": "Bangalore", "Number": "002", "Vehicle": {"id":"veh2", "number": "AN01002"} } ] } for (date in trips) { var places = trips[date] for (var i = 0; i < places.length; ++i) { var place = places[i] console.log('place', place) console.log('Vehicle', place.Vehicle) } } ``` Inside dates the data are stored in form of array which have key value pair. I need to print all the dates which have vehicle id as "veh2" inside it. I am trying to loop through the data. but not finding the right way after a certain point where the array starts. I have been able to loop through one nested key value pair ``` for (key in trips){ var value= trips[key] for (k in value) { //further nested logic } } ```<issue_comment>username_1: you can use `foreach` like this : ``` trips.foreach((item) =>{ //here item is date1, date2 // then you can do item.id = 2; // every stuff you want !!! ``` }); Upvotes: -1 <issue_comment>username_2: I think your major problem is the wrong structured data as @username_3 already mentioned, when having this corrected it is pretty easy to run through everything: ```js trips= { date1: [ { "id": 1, "Place": "Delhi", "Number": "001", "Vehicle": {"id":"veh1", "number": "AN01001"} }, { "id": 2, "Place": "Bangalore", "Number": "002", "Vehicle": {"id":"veh2", "number": "AN01002"} }, { "id": 3, "Place": "Pune", "Number": "003", "Vehicle": {"id":"veh3", "number": "AN01003"} } ], date2: [ { "id": 1, "Place": "Lucknow", "Number": "001", "Vehicle": {"id":"veh1", "number": "AN01002"} }, { "id": 3, "Place": "Pune", "Number": "003", "Vehicle": {"id":"veh3", "number": "AN01003"} } ], date3: [ { "id": 1, "Place": "Delhi", "Number": "001", "Vehicle": {"id":"veh1", "number": "AN01001"} }, { "id": 2, "Place": "Bangalore", "Number": "002", "Vehicle": {"id":"veh2", "number": "AN01002"} } ] } for (date in trips) { var places = trips[date] for (var i = 0; i < places.length; ++i) { var place = places[i] console.log('place', place) console.log('Vehicle', place.Vehicle) } } ``` Upvotes: 0 <issue_comment>username_3: With poper formatted objects and arrays, you could filter single places where the id matches. 
```js var trips = { date1: [{ id: 1, Place: "Delhi", Number: "001", Vehicle: { id: "veh1", number: "AN01001" } }, { id: 2, Place: "Bangalore", Number: "002", Vehicle: { id: "veh2", number: "AN01002" } }, { id: 3, Place: "Pune", Number: "003", Vehicle: { id: "veh3", number: "AN01003" } }], date2: [{ id: 1, Place: "Lucknow", Number: "001", Vehicle: { id: "veh1", number: "AN01002" } }, { id: 3, Place: "Pune", Number: "003", Vehicle: { id: "veh3", number: "AN01003" } }], date3: [{ id: 1, Place: "Delhi", Number: "001", Vehicle: { id: "veh1", number: "AN01001" } }, { id: 2, Place: "Bangalore", Number: "002", Vehicle: { id: "veh2", number: "AN01002" } }] }, id = "veh2", result = Object.keys(trips).reduce(function (r, k) { return r.concat(trips[k].filter(function (place) { return place.Vehicle.id === id; })); }, []); console.log(result); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 0 <issue_comment>username_4: You can do it with the following code. ```js trips = { date1: [{ "id": 1, "Place": "Delhi", "Number": "001", "Vehicle": { "id": "veh1", "number": "AN01001" } }, { "id": 2, "Place": "Bangalore", "Number": "002", "Vehicle": { "id": "veh2", "number": "AN01002" } }, { "id": 3, "Place": "Pune", "Number": "003", "Vehicle": { "id": "veh3", "number": "AN01003" } } ], date2: [{ "id": 1, "Place": "Lucknow", "Number": "001", "Vehicle": { "id": "veh1", "number": "AN01002" } }, { "id": 3, "Place": "Pune", "Number": "003", "Vehicle": { "id": "veh3", "number": "AN01003" } } ], date3: [{ "id": 1, "Place": "Delhi", "Number": "001", "Vehicle": { "id": "veh1", "number": "AN01001" } }, { "id": 2, "Place": "Bangalore", "Number": "002", "Vehicle": { "id": "veh2", "number": "AN01002" } } ] }; var results = []; for (var date in trips) { for (var index = 0; index < trips[date].length; index++) { var data = trips[date][index]; var vehicle = data.Vehicle; if (vehicle.number == 'AN01002') { results.push(data); } } } console.log(results); ``` Upvotes: 0 <issue_comment>username_5: Here's one way to get all the dates that contain your Vehicle with id === veh2: ``` const t = Object.entries(trips); const res = t.filter((dates) => { return dates[1].some((d, e) => { return d.Vehicle.id == 'veh2'; }); }); console.log(res); ``` This would return arrays for date1 and date3 (which contain veh2) EDIT (version2): This returns the dates much cleaner: ``` let res = []; for(let i in trips) { const found = trips[i].filter((dates) => dates.Vehicle.id == 'veh2'); found.length && res.push(trips[i]); } console.log(res); ``` Upvotes: 1 [selected_answer]
2018/03/15
1,880
6,057
<issue_start>username_0: I've added a feature in my app which is in Swift 4, which allows a view to orientate to landscape or portrait as it displays a chart. I've created two separate views for each orientation and I've created the logic to handle the process. It works okay except for one minor niggle which I can solve if I can determine the calling ViewController. I've tried using ``` self.window?.rootViewController?.presentedViewController ``` Which has not proved to be accurate. I've checked the parameters of both application and window without any success. Is this possible to do or do I need to rely on a global variable instead?
2018/03/15
1,870
5,979
<issue_start>username_0: In my case, all the files have been created via API can be retrieved, however the created file in a specific folder in google drive itself cannot be retrieved. Example: I have a folder named "Test". Now the user manually uploads or creates a file called "test.txt" in "Test" folder. The problem is I cannot retrieved the file created because it is created manually not using the API. I have already use the **<https://www.googleapis.com/auth/drive>** which I have read in stackoverflow. But still cannot. Any help would be appreciated.
2018/03/15
740
2,467
<issue_start>username_0: so the problem I have is quite simple. I have a span `Show More` and with this element I am expanding a section to show more text. The problem I have is that I want to use this function in other sections of the page. This is the jquery code which I am using to toggleClass active. ``` $(".spDetails").click(function() { $(".divFees").toggleClass("feesActive"); }); ``` My question is: I don't want to write this line of code for every element using this toggle, is there a way to create this for multiple elements with different class names or IDs? Thanks. Right, so here is my HTML: (shortened version!) ``` Text Text Show More Text Text Show More ``` I m trying to hide and display only when there child is clicked.<issue_comment>username_1: You can just append your desired elements in the selector using `,`. ``` $(".divFees , .otherOnes , #otherOneWithId").toggleClass("feesActive"); ``` Upvotes: 1 <issue_comment>username_2: If the toggleable section comes after the toggling `span` : ```js $(".spDetails").on("click", function(e) { $(this).next().toggleClass("active"); }); ``` ```css [id*='div'] { height: 60px; background: #f2f2f2; border: #f6f6f6; } .active { background: #555555; } ``` ```html Toggle 1 Toggle 2 ``` Or with extra attributes to specify the target: ```js $(".spDetails").on("click", function(e) { var target = $($(this).data("toggle")); target.toggleClass("active"); }); ``` ```css [id*='div'] { height: 60px; background: #f2f2f2; border: #f6f6f6; } .active { background: #555555; } ``` ```html Toggle 1 Toggle 2 ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: I imagine two ways: Easy way: Make function with args for your event. Example: ``` function setToogleClass(button, toggleElement, toggleClass) { $('.' + button).click(function() { $('.' + toggleElement).toggleClass(toggleClass); }); } setToggleClass("spDetails", "divFees", "feesActive") ``` Hard way: Set event function by all classes who has className and who found child or parent or current elements. ``` function initToggleEvents() { var toggleButtons = $('toggle-button'); var activeClassName = 'toggle-active'; toggleButtons.forEach(function(toggleButton) { toggleButton.click(function(event) { // if event.target has parentNode that toggleClass(activeClassName) // else another rules } } } ``` Upvotes: 0
2018/03/15
738
2,313
<issue_start>username_0: I am trying to perform vlookup from one sheet to another in same workbook. But getting the run-time error "*object variable or with block variable not set*". If anyone can have a look and help me out here. Kind of stuck here from few days Also I am unable to print values in immediate block for the particular range for debugging ``` Dim c1 As Range Dim Values1 As Range Dim Values2 As Range Values1 = Sheet2.Range("A2:A14") 'Getting the error here Values2 = Sheet1.Range("A2:C14") AC_Row = Sheet2.Range("H1").Row AC_Col = Sheet2.Range("H1").Column For Each c1 In Values1 Debug.Print c1.Value 'Is this correct way to print?? Sheet2.Cells(AC_Row, AC_Col) = Application.WorksheetFunction.VLookup(c1, Values2, 2, True) AC_Row = AC_Row + 1 Next c1 ```<issue_comment>username_1: Please use `Set` keyword to set `Range` object like below: ``` Set Values1 = Sheet2.Range("A2:A14") Set Values2 = Sheet1.Range("A2:C14") ``` Upvotes: 2 <issue_comment>username_2: Thank You folks, I was able to achieve the desired result. Please find below my code. ``` Dim rng As Range Dim rng1 As Range Dim c1 As Range Dim datavalue_Target As Variant Dim datavalue_OuterB As Variant Dim x As Integer Dim y As Integer x = 2 y = 2 Lastrow_A = Sheets("Sheet2").Cells(Sheets("Sheet2").Rows.Count, "A").End(xlUp).Row Set rng = Application.Range("Sheet2!A2:A" & Lastrow_A) Lastrow_C = Sheets("Sheet1").Cells(Sheets("Sheet1").Rows.Count, "C").End(xlUp).Row Set rng1 = Application.Range("Sheet1!A2:C" & Lastrow_C) 'Debug.Print Sheet2.Cells(1, 2).Value For Each c1 In rng.Cells 'Debug.Print c1.Value On Error Resume Next datavalue_Target = Application.WorksheetFunction.VLookup(c1, rng1.Cells, 2, False) If Err = 0 Then Sheets("Sheet2").Range("H" & x).Value = datavalue_Target End If On Error GoTo 0 x = x + 1 Next c1 For Each c1 In rng.Cells 'Debug.Print c1.Value On Error Resume Next datavalue_OuterB = Application.WorksheetFunction.VLookup(c1, rng1.Cells, 3, False) If Err = 0 Then Sheets("Sheet2").Range("I" & y).Value = datavalue_OuterB End If On Error GoTo 0 y= y + 1 Next c1 ``` Upvotes: 0
2018/03/15
259
952
<issue_start>username_0: While using the `tf.train.MonitoredTrainingSession`, is it possible to save all the checkpoints? It has a parameter (`save_checkpoint_secs=600`) to specify how often we want to save a checkpoint, but there is no option to specify how many checkpoints to keep. While using the simple `tf.train.Saver()`, there is an option to specify `max_to_keep`.<issue_comment>username_1: You can pass a `tf.train.Saver` using a `tf.train.Scaffold` to a `tf.train.MonitoredTrainingSession`: ``` import tensorflow as tf scaffold = tf.train.Scaffold(saver=tf.train.Saver(max_to_keep=10)) with tf.train.MonitoredTrainingSession(scaffold=scaffold) as sess: ... ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Sorry for coming a bit late to this. If you are using your own `tf.train.Saver`, do not also specify saver information through the monitored session itself; otherwise all the information from your saver will be overridden. Upvotes: 0
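Another way to control how many checkpoints are kept, sketched here only as an illustration (it assumes the TensorFlow 1.x `tf.train` API and a placeholder checkpoint directory), is to disable the session's built-in saving and attach a `CheckpointSaverHook` that carries your own `Saver`:

```python
import tensorflow as tf

# A global step is required by CheckpointSaverHook.
global_step = tf.train.get_or_create_global_step()
train_op = tf.assign_add(global_step, 1)  # stand-in for a real train op

# Saver that keeps the 10 most recent checkpoints.
saver = tf.train.Saver(max_to_keep=10)

# Hook that saves every 600 seconds using that saver.
saver_hook = tf.train.CheckpointSaverHook(
    checkpoint_dir="/tmp/model",   # assumed path, adjust to your setup
    save_secs=600,
    saver=saver)

with tf.train.MonitoredTrainingSession(
        save_checkpoint_secs=None,  # turn off the default saver hook
        hooks=[saver_hook]) as sess:
    for _ in range(100):            # placeholder training loop
        sess.run(train_op)
```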
2018/03/15
1,631
6,836
<issue_start>username_0: I want to filter some ArrayList of datas with search, In my Activity's onCreate: ``` arrayList = getListItemData(); filteredArrayList = new ArrayList<>(); filteredArrayList.addAll(arrayList); adapter = new NameAdapter(filteredArrayList); itemList.setAdapter(adapter); searchBox.addTextChangedListener(new TextWatcher() { @Override public void beforeTextChanged(CharSequence s, int start, int count, int after) { } @Override public void onTextChanged(CharSequence s, int start, int before, int count) { adapter.getFilter().filter(s.toString()); } @Override public void afterTextChanged(Editable s) { } }); ``` my adapter with filterable: ``` public class NameAdapter extends RecyclerView.Adapter implements Filterable { private ArrayList arrayList; private CustomFilter filter; public NameAdapter(ArrayList items) { arrayList = items; filter = new CustomFilter(NameAdapter.this); } @Override public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.item\_name, parent, false); return new ViewHolder(view); } @Override public void onBindViewHolder(final ViewHolder holder, int position) { holder.data = arrayList.get(position); } @Override public int getItemCount() { return branchArrayList.size(); } @Override public Filter getFilter() { return filter; } public class ViewHolder extends RecyclerView.ViewHolder { public final View view; public final TextView branch; public Name data; public ViewHolder(View view) { super(view); this.view = view; branch = view.findViewById(R.id.textView\_name); } } public class CustomFilter extends Filter { private NameAdapter adapter; private CustomFilter(NameAdapter adapter) { super(); this.adapter = adapter; } @Override protected FilterResults performFiltering(CharSequence constraint) { filteredArrayList.clear(); final FilterResults results = new FilterResults(); if (constraint.length() == 0) { filteredArrayList.addAll(arrayList); } else { final String filterPattern = constraint.toString().toLowerCase().trim(); for (final Name name : arrayList) { if (name.getName().toLowerCase().startsWith(filterPattern)) { filteredBranchArrayList.add(name); } } } results.values = filteredArrayList; results.count = filteredArrayList.size(); return results; } @Override protected void publishResults(CharSequence constraint, FilterResults results) { this.adapter.notifyDataSetChanged(); } } } ``` filter doesn't work for some reason it clears the recyclerview when I type something<issue_comment>username_1: 1st make a copy of the `branchArrayList` in the constructor.like this :- ``` private ArrayList branchCopy = new ArrayList<>; public BranchAdapter(ArrayList items) { branchArrayList = items; branchCopy.addAll(items); filter = new CustomFilter(BranchAdapter.this); } ``` your `performingFilter` ``` @Override protected FilterResults performFiltering(CharSequence constraint) { ArrayList branchFilter = new ArrayList<>; final FilterResults results = new FilterResults(); if (constraint.length() == 0) { branchFilter.addAll(branchArrayList); } else { final String filterPattern = constraint.toString().toLowerCase().trim(); for (final Branch branch : branchCopy) { if (branch.getBranchName().toLowerCase().startsWith(filterPattern)) { branchFilter.add(branch); } } } results.values = branchFilter ; results.count = branchFilter.size(); return results; } ``` Your `publishResults` ``` @Override protected void publishResults(CharSequence constraint, FilterResults results) { branchArrayList = (ArrayList) 
results.values; // you have done nothing with the filter results notifyDataSetChanged(); } ``` **Before notifying change the mainList** !! ``` branchArrayList = (ArrayList) results.values; ``` *add this line* to `publishResults` > > You have done **NOTHING** with the *filter results* > > > Upvotes: 3 [selected_answer]<issue_comment>username_2: **Try to change your Adapter as** ``` public class BranchAdapter extends RecyclerView.Adapter implements Filterable { private ArrayList mArrayList; private ArrayList mFilteredList; private CustomFilter filter; public BranchAdapter(ArrayList items) { mArrayList = items; mFilteredList = items; filter = new CustomFilter(BranchAdapter.this); } @Override public ViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View view = LayoutInflater.from(parent.getContext()).inflate(R.layout.item\_branch, parent, false); return new ViewHolder(view); } @Override public void onBindViewHolder(final ViewHolder holder, int position) { holder.data = branchArrayList.get(position); holder.branch.setText(String.valueOf(mArrayList.get(position).getBranchCode() + " / " + mArrayList.get(position).getBranchName())); } @Override public int getItemCount() { return mFilteredList.size(); } @Override public Filter getFilter() { return filter; } public class ViewHolder extends RecyclerView.ViewHolder { public final View view; public final TextView branch; public Branch data; public ViewHolder(View view) { super(view); this.view = view; branch = view.findViewById(R.id.textView\_branch); } } public class CustomFilter extends Filter { private BranchAdapter adapter; private CustomFilter(BranchAdapter adapter) { super(); this.adapter = adapter; } @Override protected FilterResults performFiltering(CharSequence constraint) { String charString = charSequence.toString(); if (charString.isEmpty()) { mFilteredList = mArrayList; } else { ArrayList filteredList = new ArrayList<>(); for (Branch branch : mArrayList) { if (branch.getBranchName().toLowerCase().contains(charString)) { filteredList.add(branch); } } mFilteredList = filteredList; } FilterResults filterResults = new FilterResults(); filterResults.values = mFilteredList; return filterResults; } @Override protected void publishResults(CharSequence constraint, FilterResults results) { mFilteredList = (ArrayList) filterResults.values; notifyDataSetChanged(); // this.adapter.notifyDataSetChanged(); } } } ``` Upvotes: 0 <issue_comment>username_3: update your `branchArrayList` in the `publishResults method` before doing `notifyDataSetChanged` Upvotes: 0 <issue_comment>username_4: update your `PublishResults()` with below code ``` @Override protected void publishResults(CharSequence charSequence, FilterResults filterResults) { filteredBranchArrayList = (ArrayList) filterResults.values; notifyDataSetChanged(); } ``` Upvotes: 0
2018/03/15
705
2,859
<issue_start>username_0: I am new in selenium. I need a browser without a graphical interface because the project will start with Jenkins. I decided to use ChromeDriver in Headdless mode. When I use ChrimeDriver in normal mode, I can click on all elements: ``` WebDriver driver = new ChromeDriver(); List allElem = driver.findElements(By.ByXPath("//div[@id='accordian']/div/ul/li")); for(int i=0; i ``` But when I use Headdless mode then I have: ElementNotVisibleException: element not visible. What could be wrong? Thank you for every clue. ``` ChromeOptions chromeOptions = new ChromeOptions(); chromeOptions.addArguments("--headless"); //chromeOptions.addArguments("--start-maximized"); WebDriver driver = new ChromeDriver(chromeOptions); List allElem = driver.findElements(By.ByXPath("//div[@id='accordian']/div/ul/li")); for(int i=0; i ```<issue_comment>username_1: While working with *Selenium Client v3.11.0*, *Chrome Driver v2.36* and *Chrome Browser v65.x* in *Headless Mode*, you need to pass the following arguments through an instance of *ChromeOptions* Class while initializing the *WebDriver* and the *Web Browser* as follows : ``` System.setProperty("webdriver.chrome.driver", "C:\\path\\to\\chromedriver.exe"); ChromeOptions chromeOptions = new ChromeOptions(); chromeOptions.addArguments("--headless"); chromeOptions.addArguments("start-maximized"); chromeOptions.addArguments("--disable-gpu"); chromeOptions.addArguments("--disable-extensions"); WebDriver driver = new ChromeDriver(chromeOptions); driver.get("https://www.google.co.in"); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You need to pass `"--headless"`, chrome option like below. ``` ChromeOptions chromeOptions = new ChromeOptions(); chromeOptions.addArguments("--headless"); WebDriver driver = new ChromeDriver(chromeOptions); ``` For entire list of chrome options, refer the following URL. It explains every command line switches in detail. <https://peter.sh/experiments/chromium-command-line-switches/> While working with headless mode, I encountered `org.openqa.selenium.UnhandledAlertException` due to not handling popping out of Alert Boxes. So it is better if you could handle the alert boxes. ``` String alertText = alert.getText(); System.out.println("ERROR: (ALERT BOX DETECTED) - ALERT MSG : " + alertText); alert.accept(); File outputFile = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE); String imageDetails = "D://Images" File screenShot = new File(imageDetails).getAbsoluteFile(); FileUtils.copyFile(outputFile, screenShot); System.out.println("Screenshot saved: {}" + imageDetails); driver.close(); ``` Upvotes: 0
2018/03/15
716
2,851
<issue_start>username_0: Is it possible to reverse the properties of `[Pscustomobject]` ? I have to setup resources in queue order. After testing is over , i have to teardown the resources in reverse order. below is the sample code. ``` $volume= @{Name='Vol1';size = "100gb"} $VolumeCollection = @{Name = 'VolColl'; Volume = $volume} $ResourceQueue = [pscustomobject]@{ Volume = $Volume VolumeCollection = $VolumeCollection } function SEtup-Resources { param ( [psobject]$resource ) $resource.PSObject.Properties | foreach-object { switch ($_.name) { "volume" { "Volume is created" } "VolumeCollection" { "volcoll is created" } } } } function TearDown-Resources { param ( [psobject]$resource ) # I have to reverse the object properties $resource.PSObject.Properties | foreach-object { switch ($_.name) { "volume" { "Volume is deleted" } "VolumeCollection" { "volcoll is deleted" } } } } Write-host "-------------------------" Write-host "Setup resources" Write-host "-------------------------" SEtup-Resources -resource $ResourceQueue Write-host "-------------------------" Write-host "teardown resources" Write-host "-------------------------" TearDown-Resources -resource $ResourceQueue ``` The result should be ``` ------------------------- Setup resources ------------------------- Volume is created volcoll is created ------------------------- teardown resources ------------------------- volcoll is deleted volume is deleted ``` But i could not find the way to reverse the properties of an object. How to reverse the pscustomobject properties in powershell?<issue_comment>username_1: If you only need to alter order of few properties, you could just list them manually to `Select-Object`: ``` $ResourceQueue | Select-Object VolumeCollection, Volume ``` For more generic solution one could use `Get-Member`to get an array of properties, use `[Array]::reverse` to reverse order and then `Select-Object` to get the properties in desired order. I came out with this: ``` $props = @() $MyObject | Get-Member | ForEach-Object { $props += $_.name } [Array]::Reverse($props) $MyObject | Select-Object $props ``` Upvotes: 2 <issue_comment>username_2: You can do it this way: ``` $object = '' | select PropertyA, PropertyB, PropertyC $object.PropertyA = 1234 $object.PropertyB = 'abcd' $object.PropertyC = 'xyz' $properties = ($object | Get-Member -MemberType NoteProperty).Name [Array]::Reverse($properties) $object | select $properties ``` The result is ``` PropertyC PropertyB PropertyA --------- --------- --------- xyz abcd 1234 ``` Upvotes: 1
2018/03/15
429
1,224
<issue_start>username_0: I have 2 tables 'table1' and 'table2'. `table1` has 10,000 records and `table2` has 5,000 records. Both tables have "RECORD\_ID" column. All RECORD\_ID's that are in `table2` can be found in `table1`. I want to UPDATE the "PRICE" column of `table1` based on the "RECORD\_ID" column of `table2`. ``` update table1 set PRICE = table2.PRICE where RECORD_ID = table2.RECORD_ID ``` I got this error message: > > SQL0206N "table2.PRICE" is not valid in the context where it is used > SQLSTATE=42703 > > > I am using DB2.<issue_comment>username_1: You have to use a join like this: ``` UPDATE Table1 SET Table1.Column = T2.Column FROM Table1 T1 INNER JOIN Table2 T2 ON T1.PK = T2.FK; ``` Upvotes: 1 <issue_comment>username_2: ``` UPDATE table1 SET table1.price = (SELECT table2.price FROM table2 WHERE table2.record_id = table1.record_id) ``` Upvotes: 2 <issue_comment>username_3: Try this: ``` UPDATE table1 f1 SET f1.price =( SELECT f2.price FROM table2 f2 WHERE f2.record_id = f1.record_id ) WHERE exists ( SELECT f2.price FROM table2 f2 WHERE f2.record_id = f1.record_id ) ``` Upvotes: 2 [selected_answer]
2018/03/15
797
2,882
<issue_start>username_0: **Is there a simple way to change the schedule of a kubernetes cronjob** like `kubectl change cronjob my-cronjob "10 10 * * *"`? Or any other way without needing to do `kubectl apply -f deployment.yml`? The latter can be extremely cumbersome in a complex CI/CD setting because manually editing the deployment yaml is often not desired, especially not if the file is created from a template in the build process. **Alternatively, is there a way to start a cronjob manually?** For instance, a job is scheduled to start in 22 hours, but I want to trigger it manually once now without changing the cron schedule for good (for testing or an initial run)?<issue_comment>username_1: You can update only the selected field of resourse by patching it ``` patch -h Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch. JSON and YAML formats are accepted. Please refer to the models in https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html to find if a field is mutable. ``` As provided in comment for ref : ``` kubectl patch cronjob my-cronjob -p '{"spec":{"schedule": "42 11 * * *"}}' ``` Also, in current kubectl versions, to launch a onetime execution of a declared cronjob, you can manualy create a job that adheres to the cronjob spec with ``` kubectl create job --from=cronjob/mycron ``` Upvotes: 5 <issue_comment>username_2: I have a friend who developed a kubectl plugin that answers exactly that ! It takes an existing cronjob and just create a job out of it. See <https://github.com/vic3lord/cronjobjob> Look into the README for installation instructions. Upvotes: 0 <issue_comment>username_3: The more recent versions of k8s (from 1.10 on) support the following command: `$ kubectl create job my-one-time-job --from=cronjobs/my-cronjob` Source is this solved [k8s github issue](https://github.com/kubernetes/kubernetes/pull/60039). Upvotes: 5 [selected_answer]<issue_comment>username_4: And if you want to do patch a k8s cronjob schedule with the [Python `kubernetes` library](https://github.com/kubernetes-client/python), you can do this like that: ``` from kubernetes import client, config config.load_kube_config() v1 = client.BatchV1beta1Api() body = {"spec": {"schedule": "@daily"}} ret = v1.patch_namespaced_cron_job( namespace="default", name="my-cronjob", body=body ) print(ret) ``` Upvotes: 0 <issue_comment>username_5: From @username_3 answer above `kubectl patch my-cronjob -p '{"spec":{"schedule": "42 11 * * *"}}'`, I was getting the error: **unable to parse "'{spec:{schedule:": yaml: found unexpected end of stream** If someone else is facing a similar issue, replace the last part of the command with - ``` "{\"spec\":{\"schedule\": \"42 11 * * *\"}}" ``` Upvotes: 1
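The manual one-off trigger can also be scripted with the Python client used in the last answer. This is only a sketch of the idea (it assumes the official `kubernetes` Python client, the same `v1beta1` CronJob API as above, and placeholder names/namespaces):

```python
from kubernetes import client, config

config.load_kube_config()

namespace = "default"      # placeholder
cron_name = "my-cronjob"   # placeholder

# Read the CronJob and reuse its job template for a one-off Job.
cron = client.BatchV1beta1Api().read_namespaced_cron_job(
    name=cron_name, namespace=namespace)

job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name=cron_name + "-manual-run"),
    spec=cron.spec.job_template.spec)

client.BatchV1Api().create_namespaced_job(namespace=namespace, body=job)
```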
2018/03/15
866
2,963
<issue_start>username_0: Unable to use for each loop in trigger I am getting > > Error Code: 1064 > > > ``` DELIMITER $$ CREATE TRIGGER `TRG_AU_DEVICES_HOWLONG` AFTER UPDATE ON `devices` FOR EACH ROW BEGIN DECLARE lastid INTEGER; DECLARE a, b, c VARCHAR(255); SET @lastid := (SELECT deviceId FROM devices ORDER BY packetDate DESC LIMIT 1); DECLARE cur1 CURSOR FOR SELECT alertType,deviceId FROM alerts WHERE alerts.deviceId = @lastid ; OPEN cur1; read_loop: LOOP FETCH cur1 INTO a, b; insert into test(alertType,deviceId) values(a,b); END LOOP; CLOSE cur1; END; $$ DELIMITER ; ``` Unable to use for each loop in trigger, am getting `Error Code: 1064`.How to use for each loop in trigger
2018/03/15
509
1,701
<issue_start>username_0: I need to design a text box [textbox](https://i.stack.imgur.com/drie0.jpg) As in the above picture. It should have two text boxes, and if I edit one it should reflect in the other (and vice versa). Kindly help me on this. Thanks in advance<issue_comment>username_1: Is this what you want? ```js var textarea1 = document.getElementById("textarea1"); var textarea2 = document.getElementById("textarea2"); var button = document.getElementById("button"); function Show(button) { if (button.innerHTML === "Show") { button.innerHTML = "Hide" ; textarea2.style.display = "inline"; } else { button.innerHTML = "Show" ; textarea2.style.display = "none"; } } function change1() { textarea2.value = textarea1.value; } function change2() { textarea1.value = textarea2.value; } ``` ```html Show ``` Upvotes: 0 <issue_comment>username_2: @Keshav's solution will update every time you finish editing the text area. If you want to update it directly when you press a key, you can use jQuery with this code: ``` Test test test var textarea1 = $('.textarea-1'); var textarea2 = $('.textarea-2'); textarea1.keyup(function() { textarea2.val(textarea1.val()); }); textarea2.keyup(function() { textarea1.val(textarea2.val()); }); ``` Upvotes: 0 <issue_comment>username_3: Does this meet your requirements? ```js function showPopup() { document.getElementById('2').style.display = "block"; } function syncValueWith2() { document.getElementById('2').value = document.getElementById('1').value; } function syncValueWith1() { document.getElementById('1').value = document.getElementById('2').value; } ``` ```html ``` Upvotes: 2 [selected_answer]
2018/03/15
1,722
6,893
<issue_start>username_0: As my expectation, user should not see any page until they signed in. **current behavior when executing app:** show main page(about one second) -> show login page **expected behavior:** show login page -> signed in -> show main page Questions: 1. How to modify the current behavior to expected behavior ? 2. According to running flow(attached below), the login page is triggered before "MainActivity's fragment: onCreate". Why the main page showed up before login page ? 3. After the main thread calling the startActivityForResult(), should it stop and waiting for user's login ? why it keep running ? Thank you very much for your help. --- There are one activity and two fragments in the APP. FirebaseAuth UI(login page) is triggered on onResume() method. ``` Here is the app's running flow: MainActivity: onCreate: MainActivity: onStart: MainActivity: onResume: MainActivity: startLoginProcess: Show Login page MainActivity's fragment: onCreate: MainActivity's fragment: onActivityCreated: MainActivity's fragment: onStart: MainActivity's fragment: onResume: MainActivity's fragment: onPause: MainActivity: onPause: MainActivity's fragment: onStop: MainActivity: onStop: ``` --- AndroidManifest.xml ``` xml version="1.0" encoding="utf-8"? ``` --- ``` public class MainActivity extends AppCompatActivity { private static final int RC_SIGN_IN = 1; private static final String TAG = "MainActivity"; private FirebaseAuth mFirebaseAuth; private FirebaseAuth.AuthStateListener mAuthStatListener; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); initializeScreen(); setupLogin(); Log.d(TAG, "onCreate: "); } @Override protected void onStart() { super.onStart(); Log.d(TAG, "onStart: "); } @Override protected void onStop() { super.onStop(); Log.d(TAG, "onStop: "); } @Override protected void onResume() { super.onResume(); mFirebaseAuth.addAuthStateListener(mAuthStatListener); Log.d(TAG, "onResume: "); } @Override protected void onPause() { super.onPause(); mFirebaseAuth.removeAuthStateListener(mAuthStatListener); Log.d(TAG, "onPause: "); } private void initializeScreen() { ViewPager viewPager = (ViewPager) findViewById(R.id.viewpager); PanelFragmentAdapter panelFragmentAdapter = new PanelFragmentAdapter(getSupportFragmentManager(), MainActivity.this); viewPager.setOffscreenPageLimit(2); viewPager.setAdapter(panelFragmentAdapter); TabLayout tabLayout = (TabLayout) findViewById(R.id.sliding_tabs); tabLayout.setupWithViewPager(viewPager); } @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); // RC_SIGN_IN is the request code you passed into startActivityForResult(...) when starting the sign in flow. 
if (requestCode == RC_SIGN_IN) { if (resultCode == RESULT_OK) { Toast.makeText(this, "signed in success", Toast.LENGTH_SHORT).show(); } else if (resultCode == RESULT_CANCELED) { Toast.makeText(this, "user canceled", Toast.LENGTH_SHORT).show(); finish(); } else { Toast.makeText(this, "signed in failed", Toast.LENGTH_SHORT).show(); } } } private void setupLogin() { mFirebaseAuth = FirebaseAuth.getInstance(); mAuthStatListener = new FirebaseAuth.AuthStateListener() { @Override public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) { FirebaseUser user = firebaseAuth.getCurrentUser(); if (user != null) { // user is signed in Toast.makeText(MainActivity.this, "signed in", Toast.LENGTH_SHORT).show(); } else { // user is signed out startLoginProcess(); } } }; } private void startLoginProcess() { Log.d(TAG, "startLoginProcess: Show Login page"); startActivityForResult( AuthUI.getInstance() .createSignInIntentBuilder() .setIsSmartLockEnabled(false) .setAvailableProviders(Arrays.asList( new AuthUI.IdpConfig.EmailBuilder().build(), new AuthUI.IdpConfig.GoogleBuilder().build())) .build(), RC_SIGN_IN); } /** * Created by yorick on 2018/2/2. */ public class PanelFragmentAdapter extends FragmentPagerAdapter { private Context context; private String[] mTitles = new String[]{"menu", "order"}; public PanelFragmentAdapter(FragmentManager fm, Context context) { super(fm); this.context = context; } @Override public Fragment getItem(int position) { Fragment fragment = null; switch (position) { case 0: fragment = MenuFragment.newInstance(); break; case 1: fragment = PanelOrderFragment.newInstance(); break; default: fragment = MenuFragment.newInstance(); } return fragment; } @Override public int getCount() { return 2; } @Override public CharSequence getPageTitle(int position) { return mTitles[position]; } } ``` }
2018/03/15
994
3,472
<issue_start>username_0: I've been working on a script in Powershell to get paths from a CSV file and move those files at the corresponding path to a new destination elsewhere. often with a different filename. *I am using Version 5.0* For example: ``` Source Destination : C:\1\2\3\File.pdf, D:\3\7\8\9\FILE1.pdf ``` Now I used the following script and it was initially able to move some of the files: ``` Import-CSV "R:\MoveFiles.csv" -Delimiter "," -ErrorAction Stop | ForEach-Object{Move-Item -path $_.Source -Destination $_.Destination} ``` Although around half way through executing it started to return this error: > > Move-Item : Could not find a part of the path. At line:1 char:238 > + ... Each-Object{Move-Item -Literalpath $*.Source -Destination $*.Destina ... > > + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > + CategoryInfo : WriteError: (Q:\RECORDS\PRIV...-4-20\_N1969.pdf:FileInfo) [Move-Item], > DirectoryNotFoundException > > + FullyQualifiedErrorId : MoveFileInfoItemIOError,Microsoft.PowerShell.Commands.MoveItemCommand > > > As far as I can tell there are no special characters that would prevent the path being found. If I replace Move-Item for Copy-Item it returns the same error. I have also checked the paths to see if they are true or not. I am at my wits end with this. Not sure what else to try. I am after all a complete novice. Thank you **NB:** I worked out a solution to this issue. It would appear that the *Move-Item* cmdlet does not like creating directories. Instead I made the directories first with *New-Item -directories*, getting the content from a text document where every line represented a path (no headers). After creating empty directories first the original script worked as intended. For anyone interested here is the directories script: ``` #CREATE DIRECTORIES FROM CSV cd $name = Get-Content ".\Create_New_Directories\Move_Directories_Test.txt" Foreach ($_ in $name) { New-Item -Force -verbose -path $_ -Type Directory } Out-File ".\Create_New_Directories\Newoutput.txt" ``` Thank you everyone for your help.<issue_comment>username_1: To debug such cases, consider `Move-Item`'s `-WhatIf` parameter. Like so, ``` ... | ForEach-Object{Move-Item -whatif -path $_.Source -Destination $_.Destination} ``` This will print the intended operation, so you can double-check paths for any sheenigans. > > What if: Performing the operation "Move File" on target "Item: > C:\Temp\SomeFile.xml Destination: C:\Temp\Somewhere\SomeFile.xml". > > > Upvotes: 1 <issue_comment>username_2: Not sure. But your error message indicates it's a write error of DirectoryNotFound. So perhaps you should be making sure you have the perms on the target side and are not exceeding any character limits in the length of the path. Some other things to consider/try: Your CSV file should be in the format (the first line must be the headers): ``` Source,Destination C:\1\2\3\SomeFile.pdf,D:\1\2\3\SomeFile.pdf C:\1\2\3\SomeFile2.pdf,D:\1\2\3\SomeFile2.pdf ``` Also you are not santizing your input so if you made the CSV file in Excel you might have leading or trailing spaces. In that case either clean the file editing in Notepad or try `$_.Source.trim()` and `$_.Destination.trim()` And like the other guy said the -whatif switch is useful and so is -verbose. You might also try Move-Item -Force and/or opening powershell as an Administrator. Good Luck! ;-) Upvotes: 0
2018/03/15
464
1,575
<issue_start>username_0: I have a list of words `['Ip', 'Name', 'Error']`. Reading a log file, the script should test if that line contains one of the words from the list. Didn't succeed with 'if list in line' ... any idea ?
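A minimal sketch of the check the question describes (plain Python; the log file name is an assumption):

```python
words = ['Ip', 'Name', 'Error']

with open('app.log') as f:          # assumed log file name
    for line in f:
        if any(word in line for word in words):
            print(line.rstrip())    # the line contains at least one of the words
```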
2018/03/15
1,511
6,492
<issue_start>username_0: I have two machines, A and B. A sends an HTTP request to B and asks for some document. B responds back and sends the requested document and gives a 200 OK message, but machine A is complaining that the document is not received because of a network failure. Does HTTP code 200 also work as acknowledgment that the document is received?<issue_comment>username_1: > > Does the HTTP 200 code also work as an acknowledgment that document has been received? > > > No. Not at all. It is not even a guarantee that the document was completely transmitted. The response code is in the first line of the response stream. The server could fail, or be disconnected from the client anywhere between sending the first line and the last byte of the response. The server may not even know this has happened. In fact, there is no way that the server can know if the client received a complete (or partial) HTTP response. There is no provision for an acknowledgment in the HTTP protocol. Now you could implement an application protocol over the top of HTTP in which the client is required to send a second HTTP request to the server to say "yes, I got the document". But this would involve some "application logic" implemented in the user's browser; e.g. in Javascript. Upvotes: 8 [selected_answer]<issue_comment>username_2: Absolutely not. HTTP 200 is generated by the server, and only means that it understood the request and thinks it is able to fulfill it (e.g. the file is actually there). All sorts of errors may occur during the transmission of the full response document (network connection breaking, packet loss, etc) which will not show up in the HTTP response, but need to be detected separately. Upvotes: 4 <issue_comment>username_3: A pretty good guide to the HTTP protocol is found here: <http://blog.catchpoint.com/2010/09/17/anatomyhttp/> You should make a distinction between the HTTP protocol and the underlying stream transport protocol, which should be reliable for HTTP purposes. The stream transport protocol will ACKnowledge all data transmission, including the response, so that both ends of exchange will affirm that the data is transmitted correctly. If the transport stream fails, then you will get a 'network failure' or similar error. When this happens, the HTTP protocol cannot continue; the data is no longer reliable or even complete. What a 200 OK message means, at the HTTP level, is that the server has the document you're after and is about to transmit it to you. Normally you will get a content-length header as well, so you will be able to ascertain if/when the body is complete as an additional check on top of the stream protocol. From the HTTP protocol perspective, a response receives no acknowledgement, so once a response has been sent there is no verification. However, as the stream transport is reliable, the act of sending the response will either be successful or result in an error. This does verify whether the document has been received by the network target (as noted by TripeHound, in the case of non-direct connection, e.g. a proxy, this is not a guarantee of delivery to the final target). Upvotes: 4 <issue_comment>username_4: HTTP is designed with an awareness of the possibility of various sorts of "middleboxes" - proxies operating with or without the knowledge of the client. 
If there is a proxy involved, then even knowing that the server had transmitted all the data and received a normal close connection would not tell you anything about whether the document has been received *by the machine that generated the HTTP request*. Upvotes: 2 <issue_comment>username_5: A sends a request to B. There may be all kinds of obstacles in the way that prevent the request from reaching B. In the case of https, the request may be reaching B but be rejected and it counts as if it hadn't reached B. In all these cases, B will not send any status at all. Once the request reaches B, and there are no bugs crashing B, and no hardware failure etc., B will examine the request and determine what to do and what status to report. If A requested a file that is there and A is allowed access, B will start sending a "status 200" together with the file data. Again all kinds of things can go wrong. A may receive nothing, or the "status 200" with no data or incomplete data etc. (By "receive" I mean that data arrives on the Ethernet cable, or through WiFi). Usually the user of A will use some library that handles the ugly bits. With some decent library, the user can expect that they either get some error, or a status complete with the corresponding data. If a status 200 arrives at A with only half the data, the user will (depending on the design of the library) receive an error, not a status, and definitely not a status 200. Or you may have a library that reports the status 200 and tells you "here's the first 2,000 bytes", "here's the next 2,000 bytes" and so on, and at some point when things go wrong, you might be told "sorry, there was an error, the data is incomplete". But in general, the case that the user gets a status 200, and no data, will not happen. Upvotes: -1 <issue_comment>username_6: It's very simple to see that the `200 OK` response code can't be a guarantee of anything about the response document. It's sent *before* the document is transmitted, so only a violation of causality could allow it to be dependent on successful reception of the document. It only serves as an indicator that the request was received properly and the server believes that it's able to fulfill the request. If the request requires extra processing (e.g. running a script), rather than just returning a static document, the response code should generally be sent after this has been completed, so it's normally an indicator that this was successful (but there are situations where this is not feasible, such as requests with persistent connections and push notifications -- the script could fail later). On a more general level, it's never possible to provide an absolute guarantee that all messages have been received in any protocol, due to the [Two Generals Problem](https://en.wikipedia.org/wiki/Two_Generals%27_Problem). No acknowledgement system can get around this, because at some point there has to be a last acknowledgement; there's no way to know if this is received successfully, because that would require another acknowledgement, contradicting the premise that it was the last one. Upvotes: 3
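To make the "status line first, body later" point from the answers above concrete, here is a rough Python sketch using only the standard library; the host and path are placeholders, and real code would normally rely on the HTTP library's own error handling rather than this manual check:

```python
import http.client

# "example.com" and "/document" are placeholders, purely for illustration.
conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/document")

resp = conn.getresponse()          # at this point only the status line and headers have arrived
print(resp.status)                 # e.g. 200 -- says nothing yet about the body

body = resp.read()                 # the body is transferred here; this can fail or be cut short
expected = resp.getheader("Content-Length")
if expected is not None and len(body) != int(expected):
    print("truncated response")    # a 200 was received, but the document is incomplete
conn.close()
```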
2018/03/15
1,527
6,452
<issue_start>username_0: when i run this method it shows below error,

```
public bool SaveDocument(out string newDocumentNo, ReciptUpdate reciptUpdate)
{
  newDocumentNo = "MB120055";
  return true;
}

```

The error is

> 
> ArgumentException: Type must not be ByRef
> 
> Parameter name: type
> 
>
2018/03/15
630
1,738
<issue_start>username_0: I have a data frame which contains all the conditions.

```
cond.df = data.frame(
  mpg = c(21,18.7,22.8),
  gear = c(4,3,2),
  carb = c(4,3,2)
)

```

So for my first output, I want a filtered data frame which is equivalent to

```
mtcars %>% filter(mpg == 21, gear == 4, carb == 4)

```

My desired output would be a list with `n` data frames.

```
list(mtcars %>% filter(mpg == 21, gear == 4, carb == 4),
  mtcars %>% filter(mpg == 18.7, gear == 3, carb == 3),
  mtcars %>% filter(mpg == 22.8, gear == 2, carb == 2))

```

Also, if possible I want a solution for an unknown number of columns from `cond.df`. I am aware that if I only have one variable I can use `%in%`, e.g.

```
mtcars %>% filter(gear %in% c(3,4))

```

However, I have more than one variable. Thanks<issue_comment>username_1: You can use apply to go over your cond.df row-wise, and then use an anonymous function to filter:

```
apply(cond.df,1, function(x) mtcars %>% # the 1 is for row wise
        filter(mpg == x[1], gear == x[2], carb == x[3]))

```

Upvotes: 0 <issue_comment>username_2: I would propose to use an `inner_join` of `mtcars` on your `cond.df`. This way, it can match on arbitrarily many variables in `cond.df`. I changed your conditions data frame a bit such that the second and third row actually match something.

```
library(dplyr)

cond.df = data.frame(
  mpg = c(21,18.7,22.8),
  gear = c(4,3,4),
  carb = c(4,2,1)
)

```

This creates a dataframe with the filtered/joined dataframes in each row.

```
result <- cond.df %>%
  rowwise() %>%
  do(
    dfs = inner_join(as.data.frame(.), mtcars)
  )

```

In case you need it as a list of dataframes, just convert it.

```
as.list(result)$dfs

```

Upvotes: 2
2018/03/15
843
3,243
<issue_start>username_0: I want to use Google libphonenumber in my Angular project using TypeScript. I have searched a lot on the internet and found a lot of stuff but could not find anything that could serve my purpose. Most of the content available shows the code in JavaScript. If I use the same code in TypeScript, it shows a lot of errors like `cannot find name require` or `module not found`. Please tell me how/where/what to write the code. Also, please tell me which package to install as there are many - libphonenumber, google-libphonenumber, angular-libphonenumber<issue_comment>username_1: You may go with either libphonenumber or google-libphonenumber, as both of these libraries have a good number of installs; google-libphonenumber also seems to be more powerful Upvotes: 2 <issue_comment>username_2: When dealing with CommonJS libraries in TypeScript, just like this **google-libphonenumber**, I'd like to suggest 2 ways to go about it **(tested by me and they work well)**. Initially, install it from NPM just like this: `npm install --save google-libphonenumber`. Then, here are both ways of using it:

### 1st Method

Just import it directly

```
import libphonenumber from 'google-libphonenumber';

class Something {
  constructor() {//Just an example, you can choose any method
    const phoneUtil = libphonenumber.PhoneNumberUtil.getInstance();
    console.log( phoneUtil.getSupportedRegions() );//It should work and give you some output
  }
}

```

### 2nd Method

You can still make use of the power of TypeScript typing, or just use the existing typings, with: `npm install --save-dev @types/google-libphonenumber`.

Since you said that you are using Angular, you can declare the typings you just installed in `src/tsconfig.app.json` (I am using Angular version 7). Here is an example I have made:

```
{
  ...
  "compilerOptions": {
    ...
    "types": [
      "google-libphonenumber"
    ]
  },
  ...
}

```

Then you can just import it as usual, in the TypeScript "typings" way, as follows:

```
import { PhoneNumberUtil } from 'google-libphonenumber';

class Something {
  constructor() {//Just an example, you can choose any method
    const phoneUtil: PhoneNumberUtil = PhoneNumberUtil.getInstance();
    console.log( phoneUtil.getSupportedRegions() );//It should work and give you some output
  }
}

```

Upvotes: 4 <issue_comment>username_3: Use ng2-tel-input. It is the only lib that is stable with Angular 7. ngx-tel-input has more features, but it has issues with Angular 7. Download from [here](https://www.npmjs.com/package/ng2-tel-input/v/2.0.3) Example of use:

```

```

Upvotes: 0 <issue_comment>username_4: You can also add the `@types/google-libphonenumber` package as a dev dependency without having to install the `google-libphonenumber` package too. This is achieved by using a root `index.d.ts` file with triple-slash directives and module declarations

```
///

declare module "google-libphonenumber";

```

I tend to do this with google libraries in general -- bonus: no imports required to consume these types in your codebase

* Example from a current project

```
///
///
///

declare module 'google.maps';
declare module 'gtag.js';
declare module 'google.analytics';

```

Upvotes: 0
2018/03/15
827
3,197
<issue_start>username_0: I have written code for email verification. I want to change my login controller behavior so that it will only allow verified users only. I have status field in database that will store user is verified or not by storing 0/1. Now on login request I have to check email, password, as well as status code is equal to 1. If verified, the user will be redirected to dashboard otherwise redirect to login with error message. I have done all email verification things. Please let me know what inputs you want.
2018/03/15
504
1,987
<issue_start>username_0: In my Django app, I have an Attribute model which has a many-to-many relationship to a MeasurementMethod model. I put an inline for MeasurementMethod in the admin interface for Attribute, but I don't think it is useful to have a separate interface for managing MeasurementMethods at all; there's no reason why a user would say, "Gee, I wonder what Attributes can be measured by water displacement." However, this left no way to create new MeasurementMethods from the inline editor until I found [<NAME>'s post](https://binary-data.github.io/2015/07/21/django-admin-manytomany-inline-enable-add-edit-buttons/), which says that I need to `admin.site.register(MeasurementMethod)` first. I did that, and sure enough the edit and create buttons appeared. But now on the admin page, where there's a list of apps and the models that can be managed, there's an entry for MeasurementMethod that I don't want. Is there a way to get rid of it? Or is there a better way to accomplish this?<issue_comment>username_1: You could create a custom admin site [docs](https://docs.djangoproject.com/en/2.0/ref/contrib/admin/#customizing-the-adminsite-class) and then override the `index` method/view. Make sure you register your models with this new admin site and hook it up in the urls.py file. Upvotes: 0 <issue_comment>username_2: The solution is to register the MeasurementMethod class with a custom admin class that overrides `has_module_permission`: ``` @admin.register(MeasurementMethod) class MeasurementMethodAdmin(admin.ModelAdmin): def has_module_permission(self, request): return False ``` Then the class can still be edited inline. > > `ModelAdmin.has_module_permission(request)` > > Should return True if displaying the module on the admin index page and accessing the module’s index page is permitted, False otherwise. ... Overriding it does not restrict access to the add, change or delete views ... > > > Upvotes: 4 [selected_answer]
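For completeness, here is a sketch of how the inline and the permission-hiding admin from the accepted answer might fit together. The inline class and the `measurement_methods` field name are assumptions for illustration only, since the question does not show the model definitions:

```python
from django.contrib import admin

from .models import Attribute, MeasurementMethod  # assumed module layout


class MeasurementMethodInline(admin.TabularInline):
    # Inline editor for the many-to-many rows on the Attribute page.
    model = Attribute.measurement_methods.through  # assumed M2M field name
    extra = 1


@admin.register(Attribute)
class AttributeAdmin(admin.ModelAdmin):
    inlines = [MeasurementMethodInline]


@admin.register(MeasurementMethod)
class MeasurementMethodAdmin(admin.ModelAdmin):
    def has_module_permission(self, request):
        # Keeps MeasurementMethod off the admin index page while the
        # add/edit buttons next to the inline still work.
        return False
```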
2018/03/15
938
3,354
<issue_start>username_0:

```
int lastSpace = fullName.lastIndexOf(" ");

```

Here is all the code:

```
import java.util.Scanner;

public class java_13 {

    public static void main(String[] args) {
        Scanner input = new Scanner(System.in);

        System.out.println("Enter your full name");
        String fullName = input.nextLine();

        int firstSpace = fullName.indexOf(" ");
        String firstName = fullName.substring(0, firstSpace);

        int lastSpace = fullName.lastIndexOf(" ");
        String lastName = fullName.substring(lastSpace + 1);

        System.out.println("\n" + lastName + ", " + firstName);

```

Also, why do we use ( +1 ) here?

```
String lastName = fullName.substring(lastSpace + 1);

```
<issue_comment>username_1: Because the space divides firstName and lastName. The input is supposed to look like "firstName lastName". If you take the position of the " " space from it, then the "lastName" part begins on the next character, hence the +1 to the position for getting the substring.

Upvotes: 4 [selected_answer]<issue_comment>username_2: In Java, the index starts with 0. For example :

```
String name = "<NAME>";
name[0]='M';
name[1]='o';
...
name[6]=' '; //Space

```

Space separates First name and Last name. Hence,

`0 to index[" "]-1` is First name

`index[" "]+1 to length()` is Second name

Hope it's clear now.

Upvotes: 2 <issue_comment>username_3: Here the full name is composed of a first name and the last name; this code searches for the first blank space in order to separate the first name from the last name. But I recommend allowing the user to enter both the last name and the first name instead of the full name, because it is hard to know how many words compose the first name, and the same thing goes for the last name.

Upvotes: 2 <issue_comment>username_4: As @username_1 mentioned, you get the index of the space, but the last name begins at the position with index +1; that is the reason. Just check these outputs:

```
{...}
public static final String NAME1 = "<NAME>";
public static final String NAME2 = "<NAME>";
public static final char SPLITCHAR = ' ';
{...}

public static void splitNamesBySpaceIndex() {
    // name1
    int firstSpace = NAME1.indexOf(" ");
    int lastSpace = NAME1.lastIndexOf(" ");

    String firstName = NAME1.substring(0, firstSpace);
    String lastName = NAME1.substring(lastSpace + 1);
    System.out.println("lastname: \'" + lastName + "\', firstname: \'" + firstName + "\'");

    lastName = NAME1.substring(lastSpace);
    System.out.println("lastname: \'" + lastName + "\', firstname: \'" + firstName + "\'");
}

```

Output looks like:

```
lastname: 'Papadopulos', firstname: 'Julian'
lastname: ' Papadopulos', firstname: 'Julian'

```

As you can see, in the second case you would parse the last name as `' Papadopulos'`, which is not correct; it is caused by using the index of the space itself as the start index. This is why you need the position at index +1.

Upvotes: 2 <issue_comment>username_5:

```
This is like this...

FIRSTNAME LASTNAME
012345678901234567

FIRSTNAME Starts from 0
Index of " " is 9
LASTNAME Starts from 10

That is why you need to add (+1) here
indexOf(" ") {9} but you have to start with LASTNAME which is at 10

If you do not add (+1) then Output will be " LASTNAME"

```

Upvotes: 2
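As a language-neutral illustration of the same point (why the start index needs +1), here is the equivalent in Python, using the name taken from the printed output above:

```python
full_name = "Julian Papadopulos"      # name taken from the output shown above
last_space = full_name.rfind(" ")     # 6 -- the index of the space character itself

print(full_name[last_space:])         # ' Papadopulos'  (still carries the leading space)
print(full_name[last_space + 1:])     # 'Papadopulos'   (starts one character after the space)
```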
2018/03/15
702
2,569
<issue_start>username_0: I'm using angular5 and angular firebase2. I have a simple question. I'm trying to make keyword comparing module now. Everything is fine, and actually, but observable is firing twice. I have no clue why it's firing exactly twice. Here's my code. ```js /*add-store.component.ts*/ serverCheckKeyword(value) { this.test = this.storeService.checkCategory(); this.test.subscribe( data => { if (data) { // Here, firing twice // some categories in Server this.objToArr(data); } else { // no categories in server console.log('No data in server'); this.nothingData(value); } }); } objToArr(data) { //... // I think I'm using this wrong. This part was the problem what I've figured out. this.storeService.addCategory(this.sortKeyword[keywordIdx], size+1); } ``` ```js /*store.service.ts*/ addCategory(data, id) { const path = `category/${data.name}`; const item = { id: id, categoryName: data.name }; this.db.object(path).update(item) .catch(err => console.log(err)); } // Load all of Categories checkCategory() { return this.db.list('category').valueChanges(); } ```<issue_comment>username_1: This is probably because you went on another routed component, and you forgot to unsubscribe from the subscription. Add a reference to your subscription ``` this['http_call'] = this.test.subscribe(...); ``` And delete it on destroy ``` ngOnDestroy() { this['http_call'].unsubscribe(); } ``` Otherwise, this can come from the fact that Firebase listens to databse events. This means that if you made a change to your base, your observer will be notified. You can prevent that by "cancelling" the database listener with this ``` return this.db.list('category').valueChanges() .pipe(take(1)); ``` Upvotes: 2 <issue_comment>username_2: Here's my answer to solve this issue. I just `this.test.unsubscribe()` after finishing logic. ``` serverCheckKeyword(value) { this.test = this.storeService.checkCategory() .subscribe( data => { // some categories in Server if (data) { // Here was firing twice this.objToArr(data); } else { // no categories in server this.nothingData(value); } }, err => { console.log(err); }); ``` } ... ``` this.storeService.addCategory(this.sortKeyword[keywordIdx], size + 1); this.test.unsubscribe(); ``` It works well! Thank you all :) Upvotes: 0
2018/03/15
880
3,284
<issue_start>username_0: I want to use Singleton to show ads, but it doesn't work well. When I don't use Singleton and use only ViewController, it works well.(can through "vampLoadStart" and "vampDidReceive") How can I solve it? **Pattern1: when I use Singleton (can't load and show ad)** VAMPAdReward.swift ``` import Foundation import UIKit import VAMP class VAMPAdReward: NSObject,VAMPDelegate{ static let sharedInstance = VAMPAdReward() var adReward:VAMP! override init() { super.init() } func loadAdReward(parentViewController: UIViewController) { adReward = VAMP() adReward.setPlacementId("26812") //test ID adReward.delegate = self adReward.setRootViewController(self) } func showAdReward(){ if adReward.isReady() { print("show ad") adReward.show() }else{ print("couldn't show ad") } } func vampLoadStart(_ placementId: String!, adnwName: String!) { print("start loading") } func vampDidReceive(_ placementId: String!, adnwName: String!) { print("finished loading") } } ``` ViewController ``` import UIKit class ViewController: UIViewController { var adReward: VAMPAdReward! override func viewDidLoad() { super.viewDidLoad() self.view.backgroundColor = UIColor.white VAMPAdReward.sharedInstance.loadAdReward(parentViewController: self) } override func touchesBegan(_ touches: Set, with event: UIEvent?) { super.touchesBegan(touches, with: event) //when touch screen, show Ad VAMPAdReward.sharedInstance.showAdReward() } } ``` **Pattern2: when I don't use Singleton (can load and show ad)** ``` import UIKit import VAMP class ViewController: UIViewController, VAMPDelegate { var ad: VAMP! override func viewDidLoad() { super.viewDidLoad() self.view.backgroundColor = UIColor.white //load ad ad = VAMP() ad.setPlacementId("59755") //test ID ad.delegate = self ad.setRootViewController(self) ad.load() } override func touchesBegan(_ touches: Set, with event: UIEvent?) { super.touchesBegan(touches, with: event) ad.show() } func vampLoadStart(\_ placementId: String!, adnwName: String!) { print("start loading") //through } func vampDidReceive(\_ placementId: String!, adnwName: String!) { print("finished loading") //through } override func didReceiveMemoryWarning() { super.didReceiveMemoryWarning() // Dispose of any resources that can be recreated. } } ```<issue_comment>username_1: Correct method in singleton ``` func loadAdReward(parentViewController: UIViewController) { adReward = VAMP() adReward.setPlacementId("26812") //test ID adReward.delegate = self adReward.setRootViewController(parentViewController) adReward.load() } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Follow these step for accomplish your singleton class ``` // MARK: - Singleton final class Singleton { // Can't init is singleton private init() { } // MARK: Shared Instance static let shared = Singleton() // MARK: Local Variable var emptyStringArray : [String] = [] } ``` Upvotes: 2
2018/03/15
269
981
<issue_start>username_0: When I executed this code,

```
while($row = mysql_fetch_array($res))

```

there was an error of the following plan:

> 
> Warning: mysql\_fetch\_array() expects parameter 1 to be resource,
> boolean given in String of treatment:while($row =
> mysql\_fetch\_array($res))
> 
>
2018/03/15
310
997
<issue_start>username_0: I want to set timezone to GMT+8

```
$data = Carbon::now();
$data->setTimezone(8);

```

but the result given is

```
Carbon @1521099609 {#2145
date: 2018-03-15 14:40:09.759487 Asia/Krasnoyarsk (+07:00)
}

```

I have no idea why it happen, so I must use timezone name to get exact date I want?
2018/03/15
588
2,331
<issue_start>username_0: I can't find information about the potential tags that might get included in a PDOL (per kernel type, i.e. Visa, Mastercard, etc.). I've already looked in all the Book A-D, Book 1-4 pdfs to no avail (exception is C-1). I am particularly interested in lists for C-2 and C-3. My problem is that currently I construct the PDOL related data based on a `switch` statement like this:

```
switch (item.getTag()) {
    case AMOUNT_AUTHORISED_NUMERIC:
        out.write(ByteUtils.leftPad(ByteUtils.intToUnpackedBcd(txData.getAmountAuthorized()), 6));
        break;
    case AMOUNT_OTHER_NUMERIC:
        out.write(ByteUtils.leftPad(ByteUtils.intToUnpackedBcd(txData.getAmountOther()), 6));
        break;
    case TERMINAL_COUNTRY_CODE:
        out.write(ByteUtils.intToUnpackedBcd(terminalConfiguration.getCountryCode().getNumeric()));
        break;
    case TRANSACTION_CURRENCY_CODE:
        out.write(ByteUtils.leftPad(ByteUtils.intToUnpackedBcd(txData.getCurrency().getISOCodeNumeric()), 2));
        break;
        ...

```

For this approach I must know what tags can be requested in PDOL in order to add a `case`. The alternative approach is to fill a `Map` with all the tags and their respective values that I have information for and just look them up when constructing the PDOL related data. I think that is kind of messy and redundant, so I am trying to avoid it.<issue_comment>username_1: This approach is not leading in the right direction. You should think of more generic DOL building code that handles the requested tag lengths, including trimming and padding depending on the object type, as specified in EMV Book 3, chapter 5.4.

Upvotes: 1 <issue_comment>username_2: Filling a map with the tags isn't messy at all. It's more flexible and convenient to have the tag as the key and the value as the value. We do that for our EMV SDK. Often you don't know what tags, how many tags, or what kind of tags you will get back. Having them in a hash map is good for many reasons: if you need to pass them to an API to run a transaction, you can easily throw in the hash map, which contains all your tags and values already. If you need to display them, displaying the keys and values from the hash map is pretty easy as well.

Upvotes: 0
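As a rough illustration of the "generic DOL building" idea from the answers (a tag-to-value map where each value is trimmed or padded to the length the terminal requests), here is a Python sketch. The tags shown are only common examples with made-up values, and the padding rule is deliberately simplified; the EMV specification distinguishes formats, so this is not a drop-in implementation:

```python
# Hypothetical terminal-side values, keyed by tag (hex string); values are invented.
known_values = {
    "9F02": bytes.fromhex("000000001100"),   # Amount, Authorised (example value)
    "5F2A": bytes.fromhex("0978"),           # Transaction Currency Code (example value)
}

def build_dol_data(requested):
    """requested: list of (tag, length) pairs parsed from the PDOL."""
    out = bytearray()
    for tag, length in requested:
        value = known_values.get(tag, b"")
        # Simplified: trim from the left, pad on the left with zeros.
        # EMV Book 3, 5.4 actually pads/trims differently per data-object format.
        out += value[-length:].rjust(length, b"\x00")
    return bytes(out)

print(build_dol_data([("9F02", 6), ("5F2A", 2), ("9F37", 4)]).hex())
```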
2018/03/15
1,177
4,486
<issue_start>username_0: I'm building a chat app and using mongo for storage. I have built a document structure. ``` { _id: sender_id: receiver_id: subject: created_at: updated_at: messages: [ { _id: message: author_id: attatchments: [x,y,z], read: created_at: }, { _id: message: author_id: attatchments: [x,y,z], read: created_at: } ] } ``` I'm confused whether this is a good approach when it comes to performance and document size. Is there any better way to do it or this is fine?? Thanks in advance<issue_comment>username_1: In Mongo data is stored in the form you want to query it. The chat problem can be easily addressed with Relational Stores, however if you are keen to use Mongo, IMO flat structure is the best one. You may create a unique chatId for each pair of a sender and receiver. Store each chat messages as a separate documents. ``` { _id: chatId: 1234, sender_id: receiver_id: subject: updated_at: message: { message: messageId: 1, author_id: attatchments: [x,y,z], read: created_at: } }, { _id: chatId: 1234, sender_id: receiver_id: subject: updated_at: message: { message: messageId: 2 author_id: attatchments: [x,y,z], read: created_at: } } ``` Chats will happen message by message (and not in batch). The flat structure gives me quick read/write but also help me providing a search. I can even provide pagination, something like show last 20 messages in a window where use can click to load more. (Something like below) ``` db.collection.find( {chatId: 1234, message.messageId: {$gte:1} ).sort({updated_at : -1}) .limit(20) ``` No doubt, the number of documents will grow very fast but Mongo reads are always awesome when you have proper indexes on your fields. --- --- At the end, read my first line again. "In Mongo data is stored in the form you want to query it". Having large number of documents is not a problem if you have correct indexes and that is a basic quality of any data stores. Considering mongo's array operators, I won't favour having an array of messages. Consider you have a single document per chat with array of messages and there are 10K (or imagine any large number) messages with attachments. Do you want to load all of them in-memory when you query the chat document ? or you are just interested into latest 1 or 2 or 20 messages? Now, thing about splitting a single collection in two relational collections: IMO go for any Relational Data stores. --- **How to take Decision:** Best way to design it is to list down your requirement. If you are exposing your chat store as a service, make a list of the endpoints that service is going to expose. How many different type of queries you may need to execute in near future. What will be the search keys. How many chat messages you want to return in a single API call. and etc All these answers will help you design your data store. Upvotes: 3 <issue_comment>username_2: If you want you can devide your schema like this. ``` // coversation Schema { _id: sender_id: receiver_id: subject: created_at: updated_at: messagesId: [ ] //here you will store the _id of conversation occur between both. } // Message Schema { _id : message: author_id: attatchments: [x,y,z], read: created_at: } ``` Upvotes: 1 <issue_comment>username_3: for the chat, its better to normalize as the relational schemas. otherwise it's painful to manage nested things if you need to update or do something complex. 
This is a better way to implement the chat, and it also covers the use case where a user wants to delete a message/conversation for themselves only, by tracking a deleted\_by property: if deleted\_by equals the participants of the conversation, permanently remove the conversation/message!

conversation schema

```
{
  id:String,
  participants:[String], //user ids
  created_at:Date,
  deleted_by:[String]
  ...
}

```

message schema

```
{
  id:String,
  conversation_id:String,
  sender:String,
  content:String,
  read_by:[String] //user ids
  deleted_by:[String]
  ...
}

```

Upvotes: 1
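For anyone driving this from Python, the sort/limit pagination query shown in the first answer above could look roughly like the following with pymongo; the connection string and database/collection names are placeholders matching that answer's flat schema:

```python
from pymongo import MongoClient, DESCENDING

client = MongoClient("mongodb://localhost:27017")   # placeholder connection string
messages = client["chat_db"]["messages"]            # assumed database/collection names

# Last 20 messages of one conversation, newest first (mirrors the query above).
page = list(
    messages.find({"chatId": 1234})
            .sort("updated_at", DESCENDING)
            .limit(20)
)
```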
2018/03/15
1,489
4,971
<issue_start>username_0: I am trying to run a container. I already have the image uploaded to private Docker registry. I want to write a compose file to download and deploy the image. But I want to pass the TAG name as a variable from the docker-compose run command.My compose file looks like below. How can I pass the value for KB\_DB\_TAG\_VERSION as part of docker-compose up command? ``` version: '3' services: db: #build: k-db user: "1000:50" volumes: - /data/mysql:/var/lib/mysql container_name: k-db environment: - MYSQL_ALLOW_EMPTY_PASSWORD=yes image: XX:$KB_DB_TAG_VERSION image: k-db ports: - "3307:3306" ```<issue_comment>username_1: You can create a [`.env`](https://docs.docker.com/compose/env-file/) file on the directory where you execute the `docker-compose up` command (and your `docker-compose.yml` file is located) with the following content: ``` KB_DB_TAG_VERSION=kb-1.3.20-v1.0.0 ``` Your `docker-compose.yml` file should look like the following (added `{` and `}`): ``` version: '3' services: db: user: "1000:50" volumes: - /data/mysql:/var/lib/mysql container_name: k-db environment: - MYSQL_ALLOW_EMPTY_PASSWORD=yes image: XX:${KB_DB_TAG_VERSION} image: k-db ports: - "3307:3306" ``` After making the above changes , check whether the changes are reflected or not using the command `docker-compose config`. The variable will be replaced by the variable value. Please refer to the page [here](https://docs.docker.com/compose/compose-file/#variable-substitution) to understand more about variable replacement. Upvotes: 6 <issue_comment>username_2: You have two options (option `2.` overrides `1.`): 1. Create the [.env file](https://docs.docker.com/compose/env-file/) as already suggested in another [answer](https://stackoverflow.com/a/49294173/3937850). 2. Prepend `KEY=VALUE` pair(s) to your `docker-compose` command, e.g: ``` KB_DB_TAG_VERSION=kb-1.3.20-v1.0.0 docker-compose up ``` Exporting it earlier in a script should also work, e.g.: ``` export KB_DB_TAG_VERSION=kb-1.3.20-v1.0.0 docker-compose up ``` Keep in mind that these options just pass an environment varible to the `docker-compose.yml` file, not to a container. For an environment variable to be actually passed to a container you always need something like this in your `docker-compose.yml`: ``` environment: - KB_DB_TAG_VERSION=$KB_DB_TAG_VERSION ``` Upvotes: 8 [selected_answer]<issue_comment>username_3: In your docker-compose.yml file add ``` env_file: - .env_file ``` to your `db` service where .env\_file is your .env file (change its name accordingly). ``` version: '3' services: db: #build: k-db user: "1000:50" volumes: - /data/mysql:/var/lib/mysql container_name: k-db env_file: - .env_file environment: - MYSQL_ALLOW_EMPTY_PASSWORD=yes image: XX:$KB_DB_TAG_VERSION image: k-db ports: - "3307:3306" ``` Upvotes: -1 <issue_comment>username_4: Just to supplement what has been outlined by others, in particular by @JakubKukul For security purposes you probably wouldn't want to keep vulnerable information such as username/password in your docker-compose files if they're under version control. You can map environment variables that you have on your host to environment variables inside container as well. 
In this case it could be something like the following:

```
version: '3'
services:
  db:
    #build: k-db
    user: "1000:50"
    volumes:
      - /data/mysql:/var/lib/mysql
    container_name: k-db
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_PASSWORD=${<PASSWORD>}
    image: XX:$KB_DB_TAG_VERSION
    image: k-db
    ports:
      - "3307:3306"

```

where `MYSQL_PASSWORD` would be both:

1. An environment variable on your host (maybe just in the current shell session)
2. An environment variable inside the containers from the `db` service

Upvotes: 4 <issue_comment>username_5: It's possible to pass environment variables to containers on the command line *without* specifying the values in files. Add an `environment` key with the variable's name *only* (no value or assignment operator) on the container's service definition in the docker-compose file:

```
db:
  ...
  environment:
    - KB_DB_TAG_VERSION

```

Used in that way, with no assignment, docker-compose will look up the environment variable in the current environment/shell:

```
KB_DB_TAG_VERSION=mytagversion docker-compose up

```

Ref: <https://docs.docker.com/compose/environment-variables/#pass-environment-variables-to-containers>

Upvotes: 3 <issue_comment>username_6: For Windows (PowerShell), instead of export, use:

```
$env:KB_DB_TAG_VERSION = "1.3.20-v1.0.0"
docker-compose up

```

Upvotes: 0 <issue_comment>username_7:

```
docker-compose --env-file .\.env up

```

<https://docs.docker.com/compose/environment-variables/>

Upvotes: 3
2018/03/15
852
2,980
<issue_start>username_0: I want to load a script that loads another script based on a condition, and this script adds a variable to the global window.

```
console.log(window.someVariable)

```

load-something.js

```
function loadScript( path ) {
  const head = document.getElementsByTagName('head')[0];
  const script = document.createElement('script');
  script.src = path;
  head.append(script);
}

if(condition) {
  loadScript('pathToJsFile.js');
}

```

pathToJsFile.js

```
window.someVariable = ...

```

My problem is that `someVariable` is undefined. Is it possible to force the script to block?<issue_comment>username_1: How about this?

```html
Hello, world!
var condition = true;
var data;

function loadDoc() {
  var xhttp = new XMLHttpRequest();
  xhttp.onreadystatechange = function() {
    if (this.readyState == 4 && this.status == 200) {
      data = this.responseText;
      alert(data);
    }
  };
  xhttp.open("GET", "test.json", true);
  xhttp.send();
}

if (condition) {
  loadDoc();
}
```

Upvotes: 0 <issue_comment>username_2: Here is the code to add code to the head dynamically

```
// linking style sheet
var linkElement=document.createElement('link');
linkElement.href='resources/css/animate.min.css';
linkElement.rel='stylesheet';

// linking script
var scriptElement = document.createElement('script');
scriptElement.src = "resources/js/check_session.js";

// add to head tag -> header
document.getElementsByTagName('head')[0].appendChild(linkElement);
document.getElementsByTagName('head')[0].appendChild(scriptElement);

```

Upvotes: -1 <issue_comment>username_3: You can use the [onload](https://developer.mozilla.org/en-US/docs/Web/API/GlobalEventHandlers/onload) property of the DOM element; a cleaner example can be found [here](https://javascript.info/onload-onerror). For sure you can't expect that "window.someVariable" will be defined in the next block; you can read about loading/executing order in this [answer](https://stackoverflow.com/questions/8996852/load-and-execute-order-of-scripts). As a solution you can add something like "isAllLoaded" to your 'load-something.js':

```
let pathsToLoad = [],
    leftToLoad = 0;

function loadScript( path ) {
  let head = document.getElementsByTagName('head')[0];
  let script = document.createElement('script');
  script.src = path;
  script.onload = () => leftToLoad--;
  head.append(script);
}

if(condition) {
  pathsToLoad.push('pathToJsFile.js')
}

leftToLoad = pathsToLoad.length;
for (let file of pathsToLoad) loadScript(file);

function isAllLoaded() {
  return leftToLoad === 0; // every queued script has fired its onload
}

```

And then in your block with "someVariable":

```
let _int = setInterval(function() {
  if (isAllLoaded()) {
    console.log(someVariable);
    clearInterval(_int);
  }
}, 100)

```

But in your particular case, you can leave it all as is and just use your someVariable after it has been defined :), but in any case you will need to wait until your dependency has been loaded.

Upvotes: 0
2018/03/15
887
2,928
<issue_start>username_0: It is possible to do a regular expression which represent all the strings with lower numeric value than this? 1.4.7. I want to apply it to get all the users with a lower version of my app than a specified one. For example, if I have 5 users, each one with this version:

```
1.4.22
1.4.12
1.4.7
1.4.6
1.4.1
1.3.20

```

Then, the regular expression must return 1.4.6, 1.4.1 and 1.3.20, but not 1.4.22 and not 1.4.12 because 1.4.22 and 1.4.12 are higher version numbers than 1.4.7 It is possible to do it with a regular expression?
2018/03/15
735
2,544
<issue_start>username_0: I mean, for example if I set a start time as 1:00PM and an end time as 7:00 PM, I need break time to get displayed as default value which is 3:00PM.
2018/03/15
888
3,337
<issue_start>username_0: can anyone let me know why the part of if statement is not working? but else work perfectly? it should be ( if there is nothing inside administrator table which is (username=varchar,, password= varchar) then let the administrator register himself.

```
if (click == buttonAdmin) {
    Connection con =myConnection.getConnection();
    PreparedStatement ps;
    ResultSet rs;

    try {
        ps=con.prepareStatement("SELECT * FROM `adminstrator` ");
        rs=ps.executeQuery();

        while (rs.next()) {
            String username = rs.getString(1);
            String password = rs.getString(2);

            if ( password.equals("") && username.equals("")) {
                new AdminNewRegister();
            }
            else {
                new AdminLogin();
                System.out.println("else");
            }
        }
    }
    catch (Exception e) {
        System.out.println(e.getMessage());
    }
}

```
2018/03/15
1,066
3,581
<issue_start>username_0: I have below initial C++ code:

```
class Lambda {
public:
    int compute(int &value){
        auto get = [&value]() -> int {
            return 11 * value;
        };
        return get();
    }
};

int main(){
    Lambda lambda;
    int value = 77;
    return lambda.compute(value);
}

```

which compiled (using -O1) with clang generates below ASM:

```
main: # @main
  push rax
  mov dword ptr [rsp + 4], 77
  mov rdi, rsp
  lea rsi, [rsp + 4]
  call Lambda::compute(int&)
  pop rcx
  ret
Lambda::compute(int&): # @Lambda::compute(int&)
  push rax
  mov qword ptr [rsp], rsi
  mov rdi, rsp
  call Lambda::compute(int&)::{lambda()#1}::operator()() const
  pop rcx
  ret
Lambda::compute(int&)::{lambda()#1}::operator()() const: # @Lambda::compute(int&)::{lambda()#1}::operator()() const
  mov rax, qword ptr [rdi]
  mov eax, dword ptr [rax]
  lea ecx, [rax + 4*rax]
  lea eax, [rax + 2*rcx]
  ret

```

Questions:

1. What is the `{lambda()#1}` which appears in the ASM? To my knowledge it might be an closure which encapsulates the function object (i.e. lambda body). Please confirm if so.
2. Is a new closure generated every time `compute()` is triggered? Or is the same instance?
2018/03/15
878
3,092
<issue_start>username_0: please does anyhow know how to set a recurring background job to run every 28 days.that is the job should run based on 28Days and not the 28Days of the month. this is my current implemntation.

```
string CropExpression = "0 20 */28 * * ";
RecurringJob.AddOrUpdate(() => _chargesJob.ChargeCustomerAccountMonthly(account.Id),CropExpression);

```

This current implementation runs the job on the 28th of every month which is not ideal for my app. What i want to achieve is that 28days should be calculated based on the current datetime. For example if Datetime.now =2018/3/15, the recurring job should start counting from this date and elapse into the next month till it reaches 28 days.
2018/03/15
679
2,630
<issue_start>username_0: I had import all the angular material module in my app.module ``` @NgModule({imports: [ CdkTableModule, MatAutocompleteModule, MatButtonModule, MatCardModule, MatCheckboxModule, MatIconModule, MatInputModule, MatProgressBarModule, MatProgressSpinnerModule, MatRadioModule, MatRippleModule, MatSelectModule, MatSidenavModule, MatSliderModule, MatSlideToggleModule, MatSnackBarModule],}) export class AppModule {} ``` Now my program will redirect to either login page or layout page. The code is shown at below: ``` import {NgModule} from '@angular/core'; import {Routes, RouterModule} from '@angular/router'; import {AuthGuard} from './shared/guard/auth.guard'; const routes: Routes = [ {path: '', loadChildren: './layout/layout.module#LayoutModule', canActivate: [AuthGuard]}, {path: 'login', loadChildren: './login/login.module#LoginModule'}]; @NgModule({ imports: [RouterModule.forRoot(routes, {useHash: true})], exports: [RouterModule] }) export class AppRoutingModule {} ``` How can I use all the angular material from here instead of import all the material again in Login.module and Layout.module I had done it with compoent: `LoginComponent` and I work but how can I do it in loadChildren?<issue_comment>username_1: Create an array of the imported modules ```js const matModules = [CdkTableModule,MatAutocompleteModule,MatButtonModule]; //etc. ``` Use it in your decorator in `imports` and `exports` and create a new module ```js @NgModule({ imports: [...matModules], exports: [...matModules] }) export class MaterialConfigurationModule {} ``` Now you can simply import this module into your other modules. ```js @NgModule({ imports: [MaterialConfigurationModule], }) export class AppModule {} ``` Upvotes: 3 <issue_comment>username_2: Angular `NgModule`-s declarations are only applicable to the current components. Means if you have imported `MaterialModule` in the `AppModule`, only components that are added in the declarations of the `AppModule` can use elements from the `MaterialModule`. You can create a custom `MaterialModule`, import all declarations into it and export them from that module. Then in the project you can use this custom module in each module you want to use. What about the bundle size, tree shaking will just throw away those declarations that are not used by that module. ``` NgModule({ imports: [...here goes material modules], exports: [...here goes material modules] }) export class CustomMaterialModule ... NgModule({ imports: [CustomMaterialModule] }) export class YourModule { } ``` Upvotes: 5 [selected_answer]
2018/03/15
762
2,615
<issue_start>username_0: I have a bunch of files in a directory that I want zipped: ``` for i in $dir/*; do if [[ ! "$i" =~ "zip" ]]; then zip $i else echo "$i has already been zipped" fi done ``` If I populate the directory with a new files and run the script again, it will re-zip all files that don't contain the string "zip". How do I skip those files that are unzipped but already have a zip file in the same directory? For example, the script should not zip `testing.log` again because it already has a zip version in the same directory: ``` testing.log testing.log.zip ``` **UPDATE - 03/15/2018** Thanks for all your feedback! The script now looks like: ``` for i in $dir/*.log; do if [[ ! -f "${i}.zip" ]]; then zip $i else echo "$i has already been zipped" fi done ```<issue_comment>username_1: You better manage a history file. For example if you are zipping mylog.log file put that file in a history file like below ``` echo "$i" >> history.txt zip $i ``` and next time while you start the same script it should check whether that file is already available in that `history.txt` file or not. If available in history file do not zip it. So your final script will be look like below:- ``` if grep "$i" history.txt ; then echo "$i already zipped else echo "$i" >> history.txt zip $i fi ``` At the very beginning create a history file manually else `if` condition will give you an error message. Hope this will help Upvotes: 1 <issue_comment>username_2: ``` for i in $dir/*; do if [ -f "${i}.zip" -o "${i}" = "${i%.zip}" ]; then echo "${i} has already been zipped" continue fi zip $i done ``` Where `[ -f "${i}.zip" ]` checks the existence of the corresponding zip file and `[ "${i}" = "${i%.zip}" ]` checks whether the file itself is a zip file. Upvotes: 2 <issue_comment>username_3: I would go with something like ``` for i in $dir/*; do if [ -z ${i##*.zip} ]; then # $i ends with ".zip" echo "already a zip file: $i" elif [ -f ${i}.zip ] ; then # there already exists a zipped file named $i.zip echo "already has zipped sibling: $i" else # actually zip $i into $i.zip zip $i.zip $i fi done ``` Upvotes: 1 <issue_comment>username_4: You can even use a character class to help with initial file selection by excluding files with `zip` in their extension. For example ``` for i in "$dir"/*.[^z]*; do ## exclude all files w/ext beginning with 'z' zip "$i" done ``` Just another approach to take that comes at the issue from another angle. Upvotes: 2
2018/03/15
381
1,541
<issue_start>username_0: I am writing unit test using Mockito for a method which returns Account object. I am creating a new account as following: ``` Private Account testAccount = new Account("name", "type"); ``` Code is not crashing but I am always getting this exception when I debug: > > Method threw 'java.lang.RuntimeException' exception. Cannot evaluate android.accounts.Account.toString() > > > And `testAccount.name` and `testAccount.type` is always `null`. Can someone tell me if I am doing something wrong or if there is a proper way to mock it and get same account name and type as defined at time of initialisation?<issue_comment>username_1: A colleague at work figured this out that we have to do reflection for the Account object as its fields are defined **final**, so you have to do it as following: ``` Account account = new Account("MyAccount", "SomeType"); Field nameField = account.getClass().getDeclaredField("name"); nameField.setAccessible(true); nameField.set(account, "your value"); // Set whatever field you want to configure Field typeField = account.getClass().getDeclaredField("type"); typeField.setAccessible(true); typeField.set(account, "your value"); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Im guessing you are running unittests. The Account class is only available as a Stub at unittest runtime. If you need the Account class you must run your tests on an Android Emulator (Instrumented Tests). Other option would be to mock all methods you need from the Stub. Upvotes: 0
2018/03/15
338
1,342
<issue_start>username_0: I am looking for some solution for my following question. I get the following JSON (sample one). ``` [{"id":1, "name":firstname}, {"id":2, "name":secondname}] ``` But I need `{"id":1, "name":firstname}, {"id":2, "name":secondname}` I tried with parse, stringify, replace, slice. But there is no response like what I need. I need to remove that array symbol []. Can anyone suggest a idea or solution? Thanks.<issue_comment>username_1: A colleague at work figured this out that we have to do reflection for the Account object as its fields are defined **final**, so you have to do it as following: ``` Account account = new Account("MyAccount", "SomeType"); Field nameField = account.getClass().getDeclaredField("name"); nameField.setAccessible(true); nameField.set(account, "your value"); // Set whatever field you want to configure Field typeField = account.getClass().getDeclaredField("type"); typeField.setAccessible(true); typeField.set(account, "your value"); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Im guessing you are running unittests. The Account class is only available as a Stub at unittest runtime. If you need the Account class you must run your tests on an Android Emulator (Instrumented Tests). Other option would be to mock all methods you need from the Stub. Upvotes: 0
2018/03/15
964
3,802
<issue_start>username_0: As first sorry for my bad english. I have a Carousel View that works really well. ``` ` ` ``` I want to remove the border if a item getting and loosing focus (see picture below). How can I manage this? I know there is something with a Storyboard, but i dont know how to use it. Please help me. ![enter Getting Focus Border / Loosing Focus Border](https://i.stack.imgur.com/vfbQO.png)<issue_comment>username_1: You can bind the `Model` (here should be your `Bilder` class)'s property to the border's background, after that, when you select an item, the Model's corresponding property will change and the Border will also change with the property's changing. The `Model`/`Bilder` class should implement the [INotifyPropertyChanged](https://learn.microsoft.com/en-us/uwp/api/windows.ui.xaml.data.inotifypropertychanged) interface. The following is a simple example. Firstly, add a `borderBrush` property in your `Bilder` class and implement the [INotifyPropertyChanged](https://learn.microsoft.com/en-us/uwp/api/windows.ui.xaml.data.inotifypropertychanged) interface. ``` public class Bilder : INotifyPropertyChanged { public string Bild { get; set; } private SolidColorBrush borderBrush; public SolidColorBrush BorderColor { get { return borderBrush; } set { borderBrush = value; OnPropertyChanged(nameof(BorderColor)); } } public event PropertyChangedEventHandler PropertyChanged; protected void OnPropertyChanged(string name) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(name)); } } } ``` Secondly, modify the `Carousel.ItemTemplate` to add a border, ``` ``` Then you can operate the `Carousel_SelectionChanged` to change the corresponding item's property to make the items with border. When you initialize the `Bilder` object, you maybe need to set the default `BorderColor` property to `Colors.Transparent`. Here is the Page.xaml.cs code with `Carousel_SelectionChanged` event handler, ``` public MainPage() { this.InitializeComponent(); this.Loaded += MainPage_Loaded; OperateItems = new List(); } List OperateItems; private void MainPage\_Loaded(object sender, RoutedEventArgs e) { //these two lines code could be deleted if you don't set the default selected item Carousel.SelectedIndex = 0; ((Bilder)Carousel.SelectedItem).BorderColor = new SolidColorBrush(Colors.Gray); } private void Carousel\_SelectionChanged(object sender, SelectionChangedEventArgs e) { OperateItems.Clear(); foreach (Bilder item in e.AddedItems) { OperateItems.Add(item); } foreach (Bilder item in e.RemovedItems) { OperateItems.Add(item); } foreach (Bilder item in Carousel.Items) { if (OperateItems.Contains(item)) { item.BorderColor = new SolidColorBrush(Colors.Gray); } else { item.BorderColor = new SolidColorBrush(Colors.Transparent); } } } ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Atleast i slove my Problem. My Pictures was to big. The border came from Size down the Picture. I found a Code with that u does not loose any Quality by sizing ``` public static BitmapImage ResizedImage(BitmapImage sourceImage, int maxWidth, int maxHeight) { var origHeight = sourceImage.PixelHeight; var origWidth = sourceImage.PixelWidth; var ratioX = maxWidth / (float)origWidth; var ratioY = maxHeight / (float)origHeight; var ratio = Math.Min(ratioX, ratioY); var newHeight = (int)(origHeight * ratio); var newWidth = (int)(origWidth * ratio); sourceImage.DecodePixelWidth = newWidth; sourceImage.DecodePixelHeight = newHeight; return sourceImage; } ``` Upvotes: 0
2018/03/15
694
2,890
<issue_start>username_0: I created my new Google Cloud project in Mumbai region because a majority of my users are in India for this project. This is a port of an existing project where I've been happily using Cloud Firestore before, for a small part of functionality. While not critical to main features of my project it is still pretty important. However, when I try to provision a Cloud Firestore project in my Firebase console, it shows me this error: `Cannot enable Firestore for this project Currently Firestore cannot be enabled in this project's region` I'd much rather have the project hosted in asia-south1 because all of the content in this project is Indian language and people focused. Does anybody know of any way to make this work? I've got an Android app in the project that uses quite a few Firebase features, including Cloud functions, RTDB, Auth, and so on through the `google-services.json` mechanism. I'd use the RTDB for this feature as well, but the schema and requirements are much better suited to the new firestore. Current alternatives I'm considering: 1. Create a separate firebase project just for cloud firestore and attempt to [configure multiple projects](https://firebase.google.com/docs/configure/) in my Android app 2. Recreate all the projects in us-central and give up on using the mumbai region (even though it is super fast for my users) 3. Give up on using Cloud Firestore and make a separate API somewhere else for the feature Does anyone have any thoughts on this?<issue_comment>username_1: Firestore is currently in beta. I don't know what regions it's currently enabled for. I chose Australia for my initial project and got the same result as you. Then I created a new project in region us-east1 it appears to be enabled (see [Firebase Firestore - selecting region to store data?](https://stackoverflow.com/questions/48472534/firebase-firestore-selecting-region-to-store-data?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa)). (Sorry that's probably not the answer you're looking for). Presumably they'll roll it out worldwide when it comes out of beta. My thoughts.. create your project in the US. Migrate to Mumbai when it comes out of Beta.. it's hard to know the context of your project though. Best of luck mate. Upvotes: 2 <issue_comment>username_2: I opened up a ticket with Firebase support, to which they responded with an "It's only available in the regions that it's available in, right now" kind of answer. And no, they don't have a committable timeline to when it will become available. So we went with alternative #1. I made a "companion" firestore project in the US region, and figured out how to use 2 firebase projects for this in the Android app. When Firestore comes out of beta (soon, hopefully?) we'll add Firestore to the South-1 region and copy the data over. Upvotes: 3 [selected_answer]
2018/03/15
1,452
6,357
<issue_start>username_0: I have a problem. When I select `cell` of `UICollectionView`, I must go to next `ViewController`, but it not works. `NavigationController`, pushed `ViewController`, my `object` are initialized, but I stay at current controller ![ScreenShot1](https://i.stack.imgur.com/kotA1.png) What goes wrong? Why NavigationController after pushing don't know about RoomViewController? it's absent in ViewControllers list: ![ScreenShoot2](https://i.stack.imgur.com/6JGwQ.png) StoryBoard screenshot: ![ScreenShot3](https://i.stack.imgur.com/QRssv.png) initial ViewController, which push MainViewController code: ``` func showMainViewController(house: NLHouse) { let mainController = self.storyboard?.instantiateViewController(withIdentifier: "MainViewController") as? MainViewController if let mainController = mainController { mainController.house = house self.navigationController?.pushViewController(mainController, animated: true) print() } } ``` MainViewController, which push RoomViewController code: ``` class MainViewController: UIViewController { @IBOutlet weak var devicesCollectionView: UICollectionView! var house: NLHouse? override func viewDidLoad() { super.viewDidLoad() } override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) print() devicesCollectionView.reloadData() } } extension MainViewController: UICollectionViewDelegate, UICollectionViewDataSource { func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { return house?.rooms?.count ?? 0 } func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell { let roomCell = devicesCollectionView.dequeueReusableCell(withReuseIdentifier: "roomCell", for: indexPath) as? RoomsCollectionViewCell if let roomCell = roomCell, let rooms = house?.rooms { roomCell.setRoomInfoInCell(room: rooms[indexPath.item]) } return roomCell ?? UICollectionViewCell() } func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) { if let roomController = self.storyboard?.instantiateViewController(withIdentifier: "RoomViewController") as? RoomViewController, let rooms = house?.rooms { roomController.room = rooms[indexPath.item] self.navigationController?.pushViewController(roomController, animated: true) print() } } } ``` push code: ``` func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) { if let roomController = self.storyboard?.instantiateViewController(withIdentifier: "RoomViewController") as? RoomViewController, let rooms = house?.rooms { roomController.room = rooms[indexPath.item] self.navigationController?.pushViewController(roomController, animated: true) print() } ``` pushed class code: ``` import UIKit class RoomViewController: UIViewController { @IBOutlet weak var roomNameLabel: UILabel! @IBOutlet weak var devicesCollectionView: UICollectionView! var room: NLRoom? override func viewDidLoad() { super.viewDidLoad() } override func viewWillAppear(_ animated: Bool) { if let room = room { if let roomName = room.roomName { roomNameLabel.text = roomName } devicesCollectionView.reloadData() } } } extension RoomViewController: UICollectionViewDelegate, UICollectionViewDataSource { func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { return room?.devices?.count ?? 
0 } func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell { let deviceCell = devicesCollectionView.dequeueReusableCell(withReuseIdentifier: "deviceCell", for: indexPath) as? DevicesCollectionViewCell if let deviceCell = deviceCell, let devices = room?.devices { deviceCell.setDeviceInfoInCell(device: devices[indexPath.item]) } return deviceCell ?? UICollectionViewCell() } } ```<issue_comment>username_1: Please add ``` super.viewWillAppear(animated) ``` as first line below function of RoomViewController ``` override func viewWillAppear(_ animated: Bool) ``` What is happening is you are instantiating the controller but when you push the viewWillAppear doesn't invoke the defined methods required for view to appear because super.viewWillAppear is not called. or It might happen that you the controller where you have called the below function is not itself in the navigation stack. Make sure the VC is part of a navigation controller's view controllers, then only this will be effective. ``` self.navigationController?.pushViewController(roomController, animated: true) ``` Upvotes: -1 <issue_comment>username_2: What i see on your screenshot: you have viewController that is root for your navigation controller. Two additional view controllers: `main` and `room`. Tell me, how you present your `MainViewController`? I assume that you push your `MainViewController` from `ViewController` with creation new `UINavigationController` and setting `MainViewController` as root viewController for it. Then you try to push `RootViewController` from first `UINavigationController`. If you want your project work well - remove ViewController from storyboard, connect MainViewController instead it. This way, when you will push `RoomViewController` they will be in the same navigation stack with your `MainViewController`. You need your stucture look like this: [![enter image description here](https://i.stack.imgur.com/1knEI.png)](https://i.stack.imgur.com/1knEI.png) Add your full project in zip file to investigate for sure. **UPDATE:** After reviewing suorce code of ViewController i found that `showMainViewController` method is being called from background (after receiving data from API). Dispatching push on main queue solved problem. For future: anything you do with UI do in main thread. Upvotes: 2 [selected_answer]
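To make username_2's update concrete, a minimal sketch of the fix: wrap the push in `DispatchQueue.main.async`, since `showMainViewController(house:)` is reached from a background API callback. Everything else mirrors the method from the question.

```swift
func showMainViewController(house: NLHouse) {
    DispatchQueue.main.async {
        guard let mainController = self.storyboard?.instantiateViewController(withIdentifier: "MainViewController") as? MainViewController else { return }
        mainController.house = house
        // UI work such as pushing a view controller must happen on the main thread
        self.navigationController?.pushViewController(mainController, animated: true)
    }
}
```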
2018/03/15
679
2,010
<issue_start>username_0: I like to upgrade the ruby version from `2.4.2` to `2.5.0` in my rails application. All specs/tests fail where I use turbolinks. Is there a known issue with turbolinks and ruby `2.5.0`? Here is the output on the terminal. ``` Failure/Error: expect(request).to redirect_to company_salesmen_path(salesman.company) NoMethodError: undefined method `get?' for 302:Integer # /Users/dennish/.rvm/gems/ruby-2.5.0/gems/turbolinks-5.1.0/lib/turbolinks/assertions.rb:37:in `turbolinks_request?' # /Users/dennish/.rvm/gems/ruby-2.5.0/gems/turbolinks-5.1.0/lib/turbolinks/assertions.rb:6:in `assert_redirected_to' # ./spec/requests/salesmen_spec.rb:206:in `block (3 levels) in ' ``` This is the spec: ``` describe 'DELETE /salesman/:id' do subject(:request) do delete salesman_path(salesman), headers: auth_headers end let!(:salesman) { create :salesman } it 'destroys salesman' do expect { request }.to change { Salesman.count }.by(-1) end it 'redirects to index' do expect(request).to redirect_to company_salesmen_path(salesman.company) end end ```<issue_comment>username_1: I had this same issue it seems to be a compatibility issue with Turbolinks 5.1 and Rails 5.0.x. Downgrading to Turbolinks 5.0.1 solved it for me. Upvotes: 2 <issue_comment>username_2: By renaming the `request` to `http_request` is solving this. Upvotes: 2 <issue_comment>username_3: The **root cause** of this error is: ```rb subject(:request) ``` By assigning `:request` we are [overwriting rails internals](https://github.com/turbolinks/turbolinks-rails/issues/38#issuecomment-391086418) - hence it breaks and the tests fail wierdly. ### The Solution Just go with the default *(no name)* ```rb subject { delete salesman_path(salesman) } ``` Or you can [rename the subject](https://stackoverflow.com/a/53321936/2235594): ```rb subject(:http_request) { delete salesman_path(salesman) } ``` Both solutions will make the tests succeed. Upvotes: 3 [selected_answer]
2018/03/15
219
778
<issue_start>username_0: I am using ReSharper and it really saves my time, except for a solution that has many projects inside. My team uses TFS and we develop every demand in a new branch. I want to disable ReSharper for all branches of this solution. Is this possible, or are there any other solutions to speed up ReSharper?<issue_comment>username_1: Tools > Options > ReSharper and click 'Suspend Now'. --- As for speeding up your solution, JetBrains has written a nice article on it: <https://www.jetbrains.com/help/resharper/Speeding_Up_ReSharper.html> Upvotes: 2 <issue_comment>username_2: Unfortunately, there is no such feature right now: <https://dotnettools-support.jetbrains.com/hc/en-us/community/posts/206618065-How-to-disable-ReSharper-for-a-specific-solution-> Upvotes: 1
2018/03/15
363
1,218
<issue_start>username_0: I want to change my state to `on` if `status == 1` and state to `off` if `status == 0`. I'm able to do that on the table in this way ``` On Off | ``` How can I achieve the same thing in select box options ``` {{configuration.rule.onSuccess}} On Off | ``` I need on and off for `{{configuration.rule.onSuccess}}` instead of showing 0 and 1. If I use ng-if, it's showing both on and off. How can I do this?<issue_comment>username_1: Add ng-model to the select tag, since you want to bind the option value to the onSuccess variable. Here is the working code; I changed some scope variables for displaying the sample demo ```html (function() { var app = angular.module("testApp", ['ui.bootstrap', 'angular.filter']); app.controller('testCtrl', ['$scope', '$http', function($scope, $http) { }]); }()); On Off {{configuration.rule.onSuccess}} ON OFF ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: One way to do this: ```js var myApp = angular.module('myApp', []); myApp.controller('indexController', function($scope){ $scope.configuration = { "role": { "onSuccess": 0 } }; }) ``` ```html On Off ``` Upvotes: -1
2018/03/15
2,943
9,921
<issue_start>username_0: I've just started with ZeroMQ and I'm trying to get a Hello World to work with PyZMQ and asyncio in Python 3.6. I'm trying to de-couple the functionality of a module with the pub/sub code, hence the following class setup: **Edit 1**: Minimized example **Edit 2**: Included solution, see answer down for how. ``` import asyncio import zmq.asyncio from zmq.asyncio import Context # manages message flow between publishers and subscribers class HelloWorldMessage: def __init__(self, url='127.0.0.1', port='5555'): self.url = "tcp://{}:{}".format(url, port) self.ctx = Context.instance() # activate publishers / subscribers asyncio.get_event_loop().run_until_complete(asyncio.wait([ self.pub_hello_world(), self.sub_hello_world(), ])) # generates message "Hello World" and publish to topic 'world' async def pub_hello_world(self): pub = self.ctx.socket(zmq.PUB) pub.connect(self.url) # message contents msg = "Hello World" print(msg) # keep sending messages while True: # --MOVED-- slow down message publication await asyncio.sleep(1) # publish message to topic 'world' # async always needs `send_multipart()` await pub.send_multipart([b'world', msg.encode('ascii')]) # WRONG: bytes(msg) # processes message "Hello World" from topic 'world' async def sub_hello_world(self): sub = self.ctx.socket(zmq.SUB) sub.bind(self.url) sub.setsockopt(zmq.SUBSCRIBE, b'world') # keep listening to all published message on topic 'world' while True: msg = await sub.recv_multipart() # ERROR: WAITS FOREVER print('received: ', msg) if __name__ == '__main__': HelloWorldMessage() ``` Problem ======= With the above code only 1 `Hello World` is printed and then waits forever. If I press ctrl+c, I get the following error: ``` python helloworld_pubsub.py Hello World ^CTraceback (most recent call last): File "helloworld_pubsub_stackoverflow.py", line 64, in HelloWorldMessage() File "helloworld\_pubsub\_stackoverflow.py", line 27, in \_\_init\_\_ self.sub\_hello\_world(), File "/\*path\*/zeromq/lib/python3.6/asyncio/base\_events.py", line 454, in run\_until\_complete self.run\_forever() File "/\*path\*/zeromq/lib/python3.6/asyncio/base\_events.py", line 421, in run\_forever self.\_run\_once() File "/\*path\*/zeromq/lib/python3.6/asyncio/base\_events.py", line 1395, in \_run\_once event\_list = self.\_selector.select(timeout) File "/\*path\*/zeromq/lib/python3.6/selectors.py", line 445, in select fd\_event\_list = self.\_epoll.poll(timeout, max\_ev) KeyboardInterrupt ``` Versions: `libzmq: 4.2.3`, `pyzmq: 17.0.0`, `Ubuntu 16.04` Any insights are appreciated.<issue_comment>username_1: De-coupling for an OOP separation of concerns is fine, yetlet's also spend some care on debugging the code: ----------------------------------------------------------------------------------------------------------- **1)** ZeroMQ **`PUB/SUB`** Scalable Formal Communication Archetype is known for years to [**require some time** before **`PUB/SUB`**-s get indeed **ready**](https://stackoverflow.com/a/26688152/3666197) so as to broadcast / accept messages. 
Thus one ought prefer to setup the infrastructure best inside the **`.__init__()`** and not right before **`SUB`**-s are supposed to already receive some payload(s) In my view, this would be a safer design approach: ``` class HelloWorldMessage: """ __doc__ [DEF-ME] [DOC-ME] USAGE: with HelloWorldMessage() as aContextManagerFUSEd_class_INSTANCE: # may use aContextManagerFUSEd_class_INSTANCE # and shall safely # gracefully terminate locally spawned ZeroMQ resources PARAMETERS: RETURNS: THROWS: EXAMPLE: REF.s: [TEST-ME] [PERF-ME] [PUB-ME] """ def __init__( self, url = '127.0.0.1', port = '5555' ): self._url = "tcp://{}:{}".format( url, port ) #---------------------------------------------------- CONTEXT: self._ctx = Context.instance(); print( "INF: zmq.asyncio.Context() set" if ( zmq.ZMQError() == 0 ) else "ERR[1]: {0:}".format( zmq.ZMQError() ) ) #---------------------------------------------------- SUB: self._sub = self._ctx.socket(zmq.SUB ); print( "INF: zmq.SUB set" if ( zmq.ZMQError() == 0 ) else "ERR[2]: {0:}".format( zmq.ZMQError() ) ) self._sub.bind( self._url ); print( "INF: zmq.SUB.bind() done" if ( zmq.ZMQError() == 0 ) else "ERR[3]: {0:}".format( zmq.ZMQError() ) ) self._sub.setsockopt( zmq.LINGER, 1 ); print( "INF: zmq.SUB LINGER set" if ( zmq.ZMQError() == 0 ) else "ERR[4]: {0:}".format( zmq.ZMQError() ) ) self._sub.setsockopt( zmq.SUBSCRIBE, b'world');print( "INF: zmq.SUB subscribed" if ( zmq.ZMQError() == 0 ) else "ERR[5]: {0:}".format( zmq.ZMQError() ) ) #---------------------------------------------------- PUB: self._pub = self._ctx.socket(zmq.PUB ); print( "INF: zmq.PUB set" if ( zmq.ZMQError() == 0 ) else "ERR[6]: {0:}".format( zmq.ZMQError() ) ) self._pub.setsockopt( zmq.LINGER, 1 ); print( "INF: zmq.PUB LINGER set" if ( zmq.ZMQError() == 0 ) else "ERR[7]: {0:}".format( zmq.ZMQError() ) ) self._pub.connect( self._url ); print( "INF: zmq.PUB.connect() done" if ( zmq.ZMQError() == 0 ) else "ERR[8]: {0:}".format( zmq.ZMQError() ) ) #---------------------------------------------------- ... def __enter__( self ): #---------------------------------------------------- with as : CONTEXT MANAGER \_\_enter\_\_()-auto-METHOD return self def \_\_exit\_\_( self, exc\_type, exc\_value, traceback ): #---------------------------------------------------- with as : CONTEXT MANAGER \_\_exit\_\_()-auto-METHOD self.try\_to\_close( self.\_pub ); self.try\_to\_close( self.\_sub ); pass; self.\_ctx.term() return ################################################################ # # A PUB-SENDER ------------------------------------ async def pub\_hello\_world( self ): self.\_pObj = PubHelloWorld(); print( "INF: pObj set on PUB-side" if ( self.\_pObj.msg\_pub() # instance-fuse(d) == "Hello World" ) else "ERR[9]: {0:}".format( "Hello World" ) ) try: while True: # keep sending messages self.\_sMsg = self.\_pObj.msg\_pub(); print( "INF: pObj.msg\_pub() called" if ( self.\_sMsg != None ) else "ERR[A]: {0:}".format( "msg == ?" 
) ) pass; print( self.\_sMsg ) # publish message to topic 'world' # async always needs `send\_multipart()` await self.\_pub.send\_multipart( [ b'world', bytes( self.\_sMsg ) ] ); print( "INF: await .send\_multipart()" if ( zmq.ZMQError() == 0 ) else "ERR[B]: {0:}".format( zmq.ZMQError() ) ) # slow down message publication await asyncio.sleep( 1 ); print( "NOP: await .sleep( 1 )" if ( zmq.ZMQError() == 0 ) else "ERR[C]: {0:}".format( zmq.ZMQError() ) ) except: pass; print( "EXC: thrown on PUB side" if ( zmq.ZMQError() == 0 ) else "ERR[D]: {0:}".format( zmq.ZMQError() ) ) finally: self.\_pub.close(); print( "FIN: PUB.close()-d" if ( zmq.ZMQError() == 0 ) else "ERR[E]: {0:}".format( zmq.ZMQError() ) ) ################################################################ # # A SUB-RECEIVER --------------------------------- async def sub\_hello\_world( self ): self.\_sObj = SubHelloWorld(); print( "INF: sObj set on SUB-side" if ( None # instance-fuse(d) == self.\_sObj.msg\_receive("?") ) else "ERR[F]: {0:}".format( "?" ) ) try: while True: # keep listening to all published message on topic 'world' pass; print( "INF: await .recv\_multipart() about to be called now:" ) self.\_rMsg = await self.\_sub.recv\_multipart() pass; print( "INF: await .recv\_multipart()" if ( zmq.ZMQError() == 0 ) else "ERR[G]: {0:}".format( zmq.ZMQError() ) ) pass; print( 'ACK: received: ', self.\_rMsg ) self.\_sObj.msg\_receive( self.\_rMsg ); print( 'ACK: .msg\_receive()-printed.' ) except: pass; print( "EXC: thrown on SUB side" if ( zmq.ZMQError() == 0 ) else "ERR[H]: {0:}".format( zmq.ZMQError() ) ) finally: self.\_sub.close(); print( "FIN: SUB.close()-d" if ( zmq.ZMQError() == 0 ) else "ERR[I]: {0:}".format( zmq.ZMQError() ) ) # ---------close()--------------------------------------- def try\_to\_close( self, aSocketINSTANCE ): try: aSocketINSTANCE.close(); except: pass; return ``` **2)** Best used using a **`with HelloworldMessage() as ... :`** context-manager Upvotes: -1 <issue_comment>username_2: There were 2 errors with my code: 1. As mentioned by @username_1, the **`PUB/SUB`** communication archetype needs some time for initialization (see his/her answer). I had to move `await asyncio.sleep(1)` above the code of publishing (`await pub.send_multipart([b'world', msg.encode('ascii')])`) 2. I encoded the message wrong. `bytes(msg)` --> `msg.encode('ascii')` This answer is most closely related to my question, but please look at @username_1 for certain design choices when implementing PyZMQ. Advice ====== It seems that PyZMQ in an **`asyncio.get_event_loop()`** doesn't give an error traceback, therefore, wrap your code in a `try` & `except` block, e.g.: ``` import traceback import logging try: while True: msg_received = await sub.recv_multipart() # do other stuff except Exception as e: print("Error with sub world") logging.error(traceback.format_exc()) ``` Upvotes: 2 [selected_answer]
2018/03/15
347
1,462
<issue_start>username_0: I have a running docker container with some service running inside it. Using that service, I want to pull a file from the host into the container. * docker cp won't work because that command is run from the host. I want to trigger the copy from the container * mounting host filesystem paths into the container is not possible without stopping the container. I cannot stop the container. I can, however, install other things inside this Ubuntu container * I am not sure scp is an option since I don't have the login/password/keys to the host from the running container Is it even possible to pull/copy a file into a container from a service running inside the container? What are my possibilities here? ftp? telnet? What are my options? Thanks<issue_comment>username_1: I don't think you have many options. An idea is that if: * the host has a web server (or FTP server) up and running * and the file is located in the appropriate directory (so that it can be served) maybe you can use `wget` or `curl` to get the file. Keep in mind that you might need credentials though... --- IMHO, if what you are asking for is doable, it is a security hole. Upvotes: 1 <issue_comment>username_2: Pass the host path as a parameter to your docker container, customize the docker image to read the file from the path(read above in parameter) and use the file as required. You could validate the same in docker entry point script. Upvotes: 0
2018/03/15
343
1,271
<issue_start>username_0: In PHP, say if I have code like this: ``` $aValue = functionThatReturnsAValue(); // the function might return a string or null; $anotherValue = $aValue ? process($aValue) : null; ``` Only for brevity (I don't know whether this is a good practice or not, also regarding the performance, etc.), I used to change the code like so: ``` $anotherValue = ($aValue = functionThatReturnsAValue()) ? process($aValue) : null; ``` My questions are: 1. Is this style even a good practice? 2. How can I use this with JavaScript? I wrote it with the same style but got an error. Thank you.<issue_comment>username_1: I don't think you have many options. An idea is that if: * the host has a web server (or FTP server) up and running * and the file is located in the appropriate directory (so that it can be served) maybe you can use `wget` or `curl` to get the file. Keep in mind that you might need credentials though... --- IMHO, if what you are asking for is doable, it is a security hole. Upvotes: 1 <issue_comment>username_2: Pass the host path as a parameter to your docker container, customize the docker image to read the file from the path (read above in parameter) and use the file as required. You could validate the same in docker entry point script. Upvotes: 0
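On the JavaScript half of the question, which the comments above don't touch: assignment is an expression in JavaScript as well, so the same one-liner works, provided the variable is declared beforehand (a `let`/`const` declaration itself cannot sit inside the condition, which is the usual cause of the syntax error). A small sketch with made-up stand-ins for the PHP functions:

```js
// Hypothetical stand-ins for the functions in the question:
function functionThatReturnsAValue() { return Math.random() > 0.5 ? "hello" : null; }
function process(v) { return v.toUpperCase(); }

let aValue; // declare first; `let aValue = ...` cannot appear inside the ternary condition
const anotherValue = (aValue = functionThatReturnsAValue()) ? process(aValue) : null;

console.log(aValue, anotherValue);
```

As in the PHP version, the parentheses around the assignment keep it readable and make the intent explicit; whether that terseness is good practice is as debatable in JavaScript as it is in PHP.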
2018/03/15
695
1,754
<issue_start>username_0: I have several divs with fixed class names: ``` ``` There can be multiples of these and these divs are not always in order. They are dynamically added. ``` ``` I want to group all the `type-a` and `type-b` divs and show them in front of `type-c` divs (which can be done by floating `type-a` & `type-b` to the right and `type-c` to the left). Also between `type-a` and `type-b` I want to show the `type-a` divs first. And `type-b` second. Without re-arranging the above HTML or using JavaScript, I'm trying to get this output: ``` [type-a] [type-a] [type-b] [type-b] [type-b] [type-c] [type-c] [type-c] ``` Is this possible with just CSS? This is the [fiddle](https://jsfiddle.net/txv2j6bs/6/) with the divs floated left and right. The ordering that needs to happen now is between `type-a` and `type-b`.<issue_comment>username_1: You can use `flexbox` and the `order` property: ```css .container>div { width: 20px; height: 20px; background-color: yellow; } .container { display: inline-flex; } .type-a { order: 1; } .type-b { order: 2; } .type-c { order: 3; } ``` ```html A B C C B B C A ``` Order also works with `Grid`: ```css .container>div { background-color: yellow; } .type-a { order: 1; } .type-b { order: 2; } .type-c { order: 3; } .container { display: grid; grid-template-columns: repeat(auto-fit, 20px); } ``` ```html A B C C B B C A ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: use `flex` display: ```css .container { display:flex; flex-flow:column; } .type-a { order:1; } .type-b { order:2; } .type-c { order:3; } ``` ```html a b c c b b c a ``` Upvotes: 2
2018/03/15
507
1,887
<issue_start>username_0: Hi guys, I have written the following code in order to be able to toggle a class on and off an element on click. The element: `#### test` The functionality: ``` function newFunctionTest() { var termsToggles = document.querySelectorAll('.swatch-label-size'); for (var i = 0; i < termsToggles.length; i++) { termsToggles[i].addEventListener('click', toggleTerms); } } function toggleTerms() { var termsSection = document.querySelector('.swatch-label-size'); termsSection.classList.toggle('js-swatch-open'); } ``` I have three instances of the element with the ".swatch-label-size" class in my DOM, but the function only works when I click the first one. Nothing happens on click of the second or third element. Have I not bound my function to all instances of the class properly?<issue_comment>username_1: You are querying the element again inside the listener function `toggleTerms`, so remove that and it works. Just click the text in the snippet below to get the effect of the class being toggled. For simplicity, I have toggled a class that changes the font color: ```js function newFunctionTest() { var termsToggles = document.querySelectorAll('.swatch-label-size'); for (var i = 0; i < termsToggles.length; i++) { termsToggles[i].addEventListener('click', toggleTerms); } } function toggleTerms() { this.classList.toggle('js-swatch-open'); } //initialize listener newFunctionTest(); ``` ```css .js-swatch-open{ color: red; } ``` ```html #### test1 #### test2 #### test3 ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: The termsSection in toggleTerms would always be the first matched element, so you may consider changing it to this ``` function toggleTerms(event) { var termsSection = event.target; termsSection.classList.toggle('js-swatch-open'); } ``` Upvotes: 0
2018/03/15
473
1,682
<issue_start>username_0: 1. I get 419 unknown status when making a post request via ajax to API. 2. I know this problem is because of the CORS request. I added it on client side but do I need to also set request headers , allow-origin etc from API also ? if yes how do I include response headers in API ? > > NOTE > ---- > > > I have already included all the token stuff according to the docs of > laravel-5.5 and internal requests works perfectly. > > I am using AJAX. > > ><issue_comment>username_1: Laravel returns a 419 error often when failing CSRF verifications. This sounds like your problem, because you're making a POST request. Make sure you add the CSRF token to the request. If you're using jQuery you could do something like this: ``` $.ajaxSetup({ headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') } }); ``` Make sure you add this to your tag: ``` ``` **Edit** I did some research and I'm pretty sure it's the CSRF token which is missing. The only place where a 419 status code is return in Laravel is here: [`Exceptions/Handler.php`](https://github.com/laravel/framework/blob/5.6/src/Illuminate/Foundation/Exceptions/Handler.php#L204). If dive a little bit deeper you find that the only place where a `TokenMismatchException` is thrown is at [VerifyCsrfToken](https://github.com/laravel/framework/blob/5.6/src/Illuminate/Foundation/Http/Middleware/VerifyCsrfToken.php#L70). So either the CSRF token is missing or it is wrong. Upvotes: 0 <issue_comment>username_2: I finally solved it. I defined my api routes in web.php instead of routes/api.php Everything now works perfect without any errors Upvotes: 1
2018/03/15
504
1,469
<issue_start>username_0: I have successfully added Sinch to my android project, calls work well on my S3 Mini, however, I have an OPPO A83 Phone (Android 7.1.1) that is unable to establish a call and accept incoming calls. I checked the logs and android studio shows me this: ``` 03-15 15:50:22.748 26432-4088/ph.com.app.app I/System.out: [OkHttp] sendRequest>> 03-15 15:50:22.748 26432-4088/ph.com.app.app I/System.out: [OkHttp] sendRequest<< 03-15 15:50:23.265 26432-4088/ph.com.app.app I/System.out: Close in OkHttp 03-15 15:50:23.265 26432-4088/ph.com.app.app W/PubNubListener: Attempt 5 -> IO exception while subscribing for data. IllegalStateException rethrown as IOException due to Android API change 03-15 15:50:23.265 26432-4088/ph.com.app.app E/PubNubListener: Permanently failing subscribe after 6 attempts. [ 03-15 15:50:23.277 26432: 4279 D/ ] [Posix_connect Debug]Process ph.com.app.app :80 ``` Is this a compatibility issue? Thanks!<issue_comment>username_1: I've managed to resolve my issue by moving my codes to a new project leaving build files in the old folder. Looks like an ordinary Clean won't do good. Upvotes: 2 [selected_answer]<issue_comment>username_2: Above solution does work, but simply from Android Studio menu **INVALIDATE CACHE AND RESTART** also fixed the problem for me. Upvotes: 0
2018/03/15
1,320
4,923
<issue_start>username_0: I am looking for some kind of a mapping function `f()` that does something similar to this: ``` f(str) = '' f(complex) = 0j f(list) = [] ``` Meaning that it returns an object of type that evaluates to `False` when cast to `bool`. Does such a function exist?<issue_comment>username_1: No, there is no such mapping. Not every type of object has a falsy value, and others have more than one. Since the truth value of a class can be customized with the [`__bool__` method](https://docs.python.org/3/reference/datamodel.html#object.__bool__), a class could theoretically have an infinite number of (different) falsy instances. That said, most builtin types return their falsy value when their constructor is called without arguments: ``` >>> str() '' >>> complex() 0j >>> list() [] ``` Upvotes: 7 [selected_answer]<issue_comment>username_2: Nope, and in general, there may be no such value. The Python [data model](https://docs.python.org/3/reference/datamodel.html#object.__bool__) is pretty loose about how the truth-value of a type may be implemented: > > `object.__bool__(self)` > > > Called to implement truth value testing and the built-in operation > bool(); should return False or True. When this method is not defined, > `__len__()` is called, if it is defined, and the object is considered true if its result is nonzero. If a class defines neither `__len__()` > nor `__bool__()`, all its instances are considered true. > > > So consider: ``` import random class Wacky: def __bool__(self): return bool(random.randint(0,1)) ``` What should `f(Wacky)` return? Upvotes: 5 <issue_comment>username_3: Not all types have such a value to begin with. Others may have many such values. The most correct way of doing this would be to create a type-to-value dict, because then you could check if a given type was in the dict at all, and you could chose which value is the correct one if there are multiple options. The drawback is of course that you would have to somehow register every type you were interested in. Alternatively, you could write a function using some heuristics. If you were very careful about what you passed into the function, it would probably be of some limited use. For example, all the cases you show except `complex` are containers that generalize with `cls()`. `complex` actually works like that too, but I mention it separately because `int` and `float` do not. So if your attempt with the empty constructor fails by returning a truthy object or raising a `TypeError`, you can try `cls(0)`. And so on and so forth... **Update** [@username_2's answer](https://stackoverflow.com/a/49294330/2988730) actually suggests a clever workaround that will work for most classes. You can extend the class and forcibly create an instance that will be falsy but otherwise identical to the original class. You have to do this by extending because dunder methods like `__bool__` are only ever looked up on the class, never on an instance. There are also many types where such methods can not be replaced on the instance to begin with. As @username_1's now-deleted comment points out, you can selectively call `object.__new__` or `t.__new__`, depending on whether you are dealing with a very special case (like `int`) or not: ``` def f(t): class tx(t): def __bool__(self): return False try: return object.__new__(tx) except TypeError: return tx.__new__(tx) ``` This will only work for 99.9% of classes you ever encounter. 
It is possible to create a contrived case that raises a `TypeError` when passed to `object.__new__` as `int` does, and does not allow for a no-arg version of `t.__new__`, but I doubt you will ever find such a thing in nature. See the [gist](https://gist.github.com/username_1/d6081ef6aa8d5170d22127fc595c5400) @username_1 made to demonstrate this. Upvotes: 3 <issue_comment>username_4: This is actually called an [identity element](https://en.wikipedia.org/wiki/Identity_element), and in programming is most often seen as part of the definition of a [monoid](https://en.wikipedia.org/wiki/Monoid). In python, you can get it for a type using the `mzero` function in the [PyMonad](https://pypi.python.org/pypi/PyMonad) package. Haskell calls it [mempty](https://hackage.haskell.org/package/base-4.10.1.0/docs/Data-Monoid.html#v:mempty). Upvotes: 3 <issue_comment>username_5: No such function exists because it's not possible in general. A class may have no falsy value or it may require reversing an arbitrarily complex implementation of `__bool__`. What you *could* do by breaking everything else is to construct a new object of that class and forcibly assign its `__bool__` function to one that returns `False`. Though I suspect that you are looking for an object that would otherwise be a valid member of the class. In any case, this is a Very Bad Idea in classic style. Upvotes: 2
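To make username_3's first suggestion concrete, here is a minimal sketch of the explicit type-to-value registry. The mapping below only covers the obvious built-ins; anything not registered is treated as having no agreed-upon falsy value.

```python
# Registry of a falsy value per type; extend it for your own classes as needed.
FALSY_BY_TYPE = {
    str: '',
    bytes: b'',
    int: 0,
    float: 0.0,
    complex: 0j,
    bool: False,
    list: [],      # note: mutable entries are shared, so treat them as read-only
    tuple: (),
    dict: {},
    set: set(),
}

def falsy_value(tp):
    """Return the registered falsy value for tp, or raise if none is known."""
    try:
        return FALSY_BY_TYPE[tp]
    except KeyError:
        raise LookupError('no falsy value registered for {!r}'.format(tp)) from None

assert not falsy_value(str)        # '' is falsy
assert falsy_value(complex) == 0j
assert falsy_value(list) == []
```

Compared to the constructor trick (`t()`), this fails loudly for types it does not know about instead of silently returning a truthy or randomly-truthy instance.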
2018/03/15
577
2,109
<issue_start>username_0: I have a form in that there is a input field and checkbox ``` ``` Now i have a function ``` function handleChange(chk){ var arr=chk.value.split('_'); if(chk.checked == true){ document.getElementsByName("t_c[arr[0]]").disabled = false; }else{ document.getElementsByName("t_c[arr[0]]").disabled = true; } } ``` I want to disable corresponding input field on uncheck and enable when checked. but i am confuse how to use proper javascript syntax for `getElementsByName("t_c[arr[0]]")`<issue_comment>username_1: You can get the value of `arr[0]` after split to determine which `input` box needs to be enable or disable based on the value of the `checkbox`: ```js function handleChange(chk){ var arr = chk.value.split('_'); if(chk.checked){ document.getElementsByName("t_c[]")[arr[0]].disabled = false; }else{ document.getElementsByName("t_c[]")[arr[0]].disabled = true; } } ``` ```html ``` Upvotes: 1 <issue_comment>username_2: The only problem with your code was that you need to get the element which is at the `arr[0]`th index of the result, which was given by `getElementsByName`. Correcting that, your code works fine: ```js function handleChange(chk) { var arr = chk.value.split('_'); if (chk.checked == true) { document.getElementsByName("t_c[]")[arr[0]].disabled = false; } else { document.getElementsByName("t_c[]")[arr[0]].disabled = true; } } ``` ```html ``` Upvotes: 1 <issue_comment>username_3: Try this as it works fine and solves your problem, and if not yet then let me know. ```js function handleChange(chk) { var inputbox = document.getElementsByClassName("inputbox"); var check = document.getElementsByClassName("check"); for(i = 0; i < inputbox.length; i++) { if(check[i].checked == false) { inputbox[i].disabled = true; } else { inputbox[i].disabled = false; } } } ``` ```html ``` Upvotes: 0
2018/03/15
1,008
3,653
<issue_start>username_0: i use Laravel passport for auth in route api.php ``` Route::get('/todos', function(){ return 'hello'; })->middleware('auth:api'); ``` but when open localhost:8000/api/todos I see the following error ``` InvalidArgumentException Route [login] not defined. * @return string * * @throws \InvalidArgumentException */ public function route($name, $parameters = [], $absolute = true) { if (! is_null($route = $this->routes->getByName($name))) { return $this->toRoute($route, $parameters, $absolute); } throw new InvalidArgumentException("Route [{$name}] not defined."); } /** * Get the URL for a given route instance. * * @param \Illuminate\Routing\Route $route * @param mixed $parameters * @param bool $absolute * @return string * * @throws \Illuminate\Routing\Exceptions\UrlGenerationException */ protected function toRoute($route, $parameters, $absolute) { return $this->routeUrl()->to( $route, $this->formatParameters($p ``` I want if the user was not authenticated Do not redirect to any page and only see the page<issue_comment>username_1: Did you enter above-mentioned URL directly in browser search bar? If you did its wrong way because you also need to enter API token with your request\_\_!! To check either request includes token or not make your own middleware. Command to create Middleware ``` php artisan make:middleware CheckApiToken ``` <https://laravel.com/docs/5.6/middleware> change middleware handle method to ``` public function handle($request, Closure $next) { if(!empty(trim($request->input('api_token')))){ $is_exists = User::where('id' , Auth::guard('api')->id())->exists(); if($is_exists){ return $next($request); } } return response()->json('Invalid Token', 401); } ``` Like This Your Url should be like this <http://localhost:8000/api/todos?api_token=API_TOKEN_HERE> Upvotes: 5 [selected_answer]<issue_comment>username_2: Use Postman and set the Header `Accept: application/json` otherwise Laravel Passport would never know it's an API client and thus redirect to a /login page for the web. see below image to see where to set the accept parameter: [![enter image description here](https://i.stack.imgur.com/7KmrO.png)](https://i.stack.imgur.com/7KmrO.png) Upvotes: 7 <issue_comment>username_3: You also have to add another header Key: X-Requested-With and value: XMLHttpRequest Upvotes: 2 <issue_comment>username_4: You also have to add another header Key: Accept and value: application/json Upvotes: 3 <issue_comment>username_5: **Check Your Header Request to put** > > Authorization = Bearer {your token} > > > Upvotes: 3 <issue_comment>username_6: In the following of [@username_2](https://stackoverflow.com/users/10178490/eki) answer, This error is because you didn't set "Accept" field in your headers. **To avoid this error**, add a middleware with priority to Authenticate to check that: 1. add an extra middleware with below handler ``` public function handle($request, Closure $next) { if(!in_array($request->headers->get('accept'), ['application/json', 'Application/Json'])) return response()->json(['message' => 'Unauthenticated.'], 401); return $next($request); } ``` 2. set priority in app/Http/Kernel.php ``` protected $middlewarePriority = [ ... \App\Http\Middleware\MyMiddleware::class, // new middleware \App\Http\Middleware\Authenticate::class, ... ]; ``` 3. add new middleware to your route ``` Route::get('/todos', function(){ return 'hello'; })->middleware('MyMiddleware', 'auth:api'); ``` Upvotes: 2
2018/03/15
1,042
3,681
<issue_start>username_0: There are two ways to use tern\_for\_vim plugin in HTML files as the webpage say. [use tern\_for\_vim plugin in HTML files](https://stackoverflow.com/questions/22792956/using-tern-for-vim-plugin-in-html-files) Both method can work ,both of them can't make js completion menu pop up automatically. Method1: ``` 1.vim test.html 2.:setlocal omnifunc=tern#Complete 3.To input `` after `document.` ``` Now js completion pop up. Two issues remain for this method. 1.To write `setlocal omnifunc=tern#Complete` in .vimrc can't work. Why? 2.How to make js completion menu pop up automatically after `document.` ,instead of input ? Method2: ``` sudo cp .vim/bundle/tern_for_vim/after/ftplugin/javascript_tern.vim .vim/bundle/tern_for_vim/after/ftplugin/html_tern.vim ``` You should input after `document.` in order to call js completion menu for your html file edited. The js completion menu for js file edited can't pop up after `document.` automatically. 1.How to make js completion menu pop up automatically after `document.` ,instead of input ? (same as in Method1 the second item.)<issue_comment>username_1: Did you enter above-mentioned URL directly in browser search bar? If you did its wrong way because you also need to enter API token with your request\_\_!! To check either request includes token or not make your own middleware. Command to create Middleware ``` php artisan make:middleware CheckApiToken ``` <https://laravel.com/docs/5.6/middleware> change middleware handle method to ``` public function handle($request, Closure $next) { if(!empty(trim($request->input('api_token')))){ $is_exists = User::where('id' , Auth::guard('api')->id())->exists(); if($is_exists){ return $next($request); } } return response()->json('Invalid Token', 401); } ``` Like This Your Url should be like this <http://localhost:8000/api/todos?api_token=API_TOKEN_HERE> Upvotes: 5 [selected_answer]<issue_comment>username_2: Use Postman and set the Header `Accept: application/json` otherwise Laravel Passport would never know it's an API client and thus redirect to a /login page for the web. see below image to see where to set the accept parameter: [![enter image description here](https://i.stack.imgur.com/7KmrO.png)](https://i.stack.imgur.com/7KmrO.png) Upvotes: 7 <issue_comment>username_3: You also have to add another header Key: X-Requested-With and value: XMLHttpRequest Upvotes: 2 <issue_comment>username_4: You also have to add another header Key: Accept and value: application/json Upvotes: 3 <issue_comment>username_5: **Check Your Header Request to put** > > Authorization = Bearer {your token} > > > Upvotes: 3 <issue_comment>username_6: In the following of [@username_2](https://stackoverflow.com/users/10178490/eki) answer, This error is because you didn't set "Accept" field in your headers. **To avoid this error**, add a middleware with priority to Authenticate to check that: 1. add an extra middleware with below handler ``` public function handle($request, Closure $next) { if(!in_array($request->headers->get('accept'), ['application/json', 'Application/Json'])) return response()->json(['message' => 'Unauthenticated.'], 401); return $next($request); } ``` 2. set priority in app/Http/Kernel.php ``` protected $middlewarePriority = [ ... \App\Http\Middleware\MyMiddleware::class, // new middleware \App\Http\Middleware\Authenticate::class, ... ]; ``` 3. add new middleware to your route ``` Route::get('/todos', function(){ return 'hello'; })->middleware('MyMiddleware', 'auth:api'); ``` Upvotes: 2
2018/03/15
258
1,072
<issue_start>username_0: I am making a login page, and I would like it if "Successful login" appears in the console if the username and password match the database. So I simply added a `System.out.println();` in the if statement, but it gets the error "unreachable statement". Why is that? Here is the loop: ``` if (user.equalsIgnoreCase(userFromDB) && hashedPass.equals(passFromDB)) { return "Correct username and password!"; System.out.println("Login successfull using username \"" + user + "\""); } ```<issue_comment>username_1: This is not a loop, it is a conditional statement, and even if it were a loop it wouldn't change a thing. This is an unreachable statement because return is the place where you get out of the method and return the value of the expression that is next to the `return` keyword. Upvotes: 4 [selected_answer]<issue_comment>username_2: Put your System.out.println() before the return statement. The return statement takes execution out of the function, so the println() statement won't be executed at all. Upvotes: 0
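For completeness, here is username_2's fix applied to the snippet from the question: the print simply has to come before the return so that it is still reachable.

```java
if (user.equalsIgnoreCase(userFromDB) && hashedPass.equals(passFromDB)) {
    System.out.println("Login successful using username \"" + user + "\"");
    return "Correct username and password!";
}
```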
2018/03/15
345
1,147
<issue_start>username_0: I have CSV files that get updated every day; we process the files and delete those older than 30 days based on the date in the filename.

Example filename: `XXXXXXXXXXX_xx00xx_**20171001**.000000_0.csv`

I would like to schedule a crontab job to delete files older than 30 days daily. The path could be `/mount/store/ XXXXXXXXXXX_xx00xx_**20171001**.000000_0.csv`

```
if [ $(date -d '-30 days' +%Y%m%d) -gt $D ]; then
    rm -rf $D
fi
```

The script above doesn't seem to help me. Kindly help me on this; I have been trying for the last two days. Using CentOS 7. Thanks.
2018/03/15
212
830
<issue_start>username_0: Is there a way to transform the table "sris" into these 3 tables ("tbl\_student", "tbl\_records", "tbl\_subject") so that the data in the "sris" table is distributed across the 3 tables and they have the relationship shown here: [SEE DATABASE DIAGRAM](https://i.stack.imgur.com/PnIEl.png)
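Since the actual column names are only visible in the linked diagram, the sketch below only illustrates the general pattern with hypothetical column names (student\_id, student\_name, subject\_id, subject\_name, grade): fill the lookup tables from the flat table with `INSERT ... SELECT DISTINCT`, then build the linking records table.

```
-- Hypothetical column names; adjust them to match the real sris layout.
INSERT INTO tbl_student (student_id, student_name)
SELECT DISTINCT s.student_id, s.student_name
FROM sris s;

INSERT INTO tbl_subject (subject_id, subject_name)
SELECT DISTINCT s.subject_id, s.subject_name
FROM sris s;

-- Each record row references the two lookup tables.
INSERT INTO tbl_records (student_id, subject_id, grade)
SELECT s.student_id, s.subject_id, s.grade
FROM sris s;
```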
2018/03/15
507
1,753
<issue_start>username_0: I'm working with a big database (20 GB). The structure is like this:

```
`Count` int(11) NOT NULL AUTO_INCREMENT,
`Sensor Name` varchar(100) NOT NULL,
`Date` datetime NOT NULL,
`Value` decimal(18,4) DEFAULT NULL
```

Now, I'm testing whether I can reduce the size of the database by using 'SensorID' (int) instead of 'Sensor Name':

```
`Count` int(11) NOT NULL AUTO_INCREMENT,
`SensorID` smallint(5) unsigned NOT NULL,
`Date` datetime NOT NULL,
`Value` float DEFAULT NULL,
```

I have another table ('definition') to map between 'SensorID' and 'Sensor Name':

```
CREATE TABLE `definition` (
  `SensorID` smallint(5) NOT NULL AUTO_INCREMENT,
  `Sensor Name` varchar(100))
```

So, what is the most efficient way to replace the `Sensor Name` with 'SensorID'? Right now I SELECT the 'Date' and 'Value' for each sensor from the old table and insert them into the new table with the SensorID:

```
INSERT INTO newtable (SensorID, `Date`, `Value`)
SELECT 3681, `Date`, `Value` from oldtable where `Sensor Name` = 'abc';
```

where '3681' is the ID I got from the 'definition' table, but after a week this has only processed 50% of the data. A JOIN does not seem like a good idea, because with 20 GB it needs huge resources.
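For reference, the single-pass, set-based version of the copy described above would look roughly like the sketch below, assuming the source and target tables are really named `oldtable` and `newtable` as in the question. Whether it beats the per-sensor loop depends on indexes and available resources; on a 20 GB table it is usually combined with batching on the `Count` range, and an index on the `Sensor Name` columns helps the join.

```
-- One set-based pass instead of one INSERT per sensor.
-- The BETWEEN range is illustrative; repeat in batches over the Count range.
INSERT INTO newtable (SensorID, `Date`, `Value`)
SELECT d.SensorID, o.`Date`, o.`Value`
FROM oldtable o
JOIN definition d ON d.`Sensor Name` = o.`Sensor Name`
WHERE o.`Count` BETWEEN 1 AND 1000000;
```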
2018/03/15
303
1,021
<issue_start>username_0: Currently I am using SQL Server 2012, and I have to show the database size before the user backs up the database file. So I run the query:

```
sp_helpdb 'MyDB'
```

It shows me a `db_size` of 1444.93 MB, but when I back up the database the file's actual size is 70 MB. Why are the sizes so different? Please let me know which one is the actual size of the database.<issue_comment>username_1: You have backup compression turned on for the instance, which would account for the size difference. Also, the database file may contain empty space, which is not backed up.

Upvotes: 1 <issue_comment>username_2: ***To check the database size in SQL Server, for both Azure and on-premises:***

**Method 1 – Using the ‘sys.database\_files’ system view**

```
SELECT DB_NAME() AS [database_name],
       CONCAT(CAST(SUM( CAST( (size * 8.0/1024) AS DECIMAL(15,2) ) ) AS VARCHAR(20)),' MB') AS [database_size]
FROM sys.database_files;
```

**Method 2 – Using the ‘sp\_spaceused’ system stored procedure**

```
EXEC sp_spaceused;
```

Upvotes: 0
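If you want to confirm the compression point from the first answer, recent backup history, including raw and compressed sizes, is recorded in msdb. A query along these lines (the columns exist in SQL Server 2008 and later) shows both numbers side by side for the database in question:

```
-- Compare uncompressed vs. compressed size of the last few backups.
SELECT TOP (10)
       database_name,
       backup_finish_date,
       backup_size / 1024 / 1024            AS backup_size_mb,
       compressed_backup_size / 1024 / 1024 AS compressed_backup_size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'MyDB'
ORDER BY backup_finish_date DESC;
```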
2018/03/15
492
1,567
<issue_start>username_0: I have 2 tables and I left join them. I want to add an extra dummy column that indicates whether the row exists in the right-hand table.

This is table A:

```
TableID|TableName
0       table 1
1       table 2
2       table 3
```

and this is table B:

```
TableName|isSuper
table 1   0
table 2   1
```

and I use this query to left join them:

```
SELECT A.TABLE_NAME, NVL(B.IsSUper, 0)
FROM A LEFT JOIN B on A.TableName = B.TableName
```

and the result is:

```
TableName| isSuper
table1     0
table2     1
table3     0
```

I want to add an extra dummy value that shows whether the row exists in table B, like this:

```
TableName| isSuper | existedOnB
table1     0         1
table2     1         1
table3     0         0
```

How can I achieve that?<issue_comment>username_1: One option uses a `CASE` expression:

```
SELECT
    A.TABLE_NAME,
    NVL(B.IsSuper, 0),
    CASE WHEN B.IsSuper IS NULL THEN 0 ELSE 1 END AS existedOnB
FROM A
LEFT JOIN B
    ON A.TableName = B.TableName
```

The marker for a record in table `A` which did not match anything in `B` is that all of the `B` fields come back `NULL`, so we can check for this condition with a `CASE` expression.

Upvotes: 1 <issue_comment>username_2: I think a simple `case` expression is enough to check whether the row exists and return `1` or `0`:

```
SELECT a.TABLE_NAME,
       NVL(B.IsSUper, 0) isSuper,
       CASE WHEN B.IsSUper IS NOT NULL THEN 1 ELSE 0 END existedOnB
FROM tableA a
LEFT JOIN tableB b on a.TableName = b.TableName
```

Upvotes: 2
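One caveat not raised in the thread: both answers detect the missing row by testing `IsSuper` for NULL, which only works because that column never holds NULL in table B. If the flag column itself could be NULL, testing the join key is the safer way to detect a non-matching row, for example:

```
SELECT A.TABLE_NAME,
       NVL(B.IsSuper, 0) AS isSuper,
       CASE WHEN B.TableName IS NULL THEN 0 ELSE 1 END AS existedOnB
FROM A
LEFT JOIN B ON A.TableName = B.TableName
```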
2018/03/15
1,100
4,579
<issue_start>username_0: I have been struggling with the following problem for two days and cannot get my head around it. I am trying to serve a static PDF in a Spring Boot REST application. It should be very straightforward, but I just cannot get it to work.

First I simply placed the PDF in the resource folder and tried to load it directly from the JavaScript code, like this:

```
var newWindow = window.open('/pdf/test.pdf', '');
```

That resulted in a new window with a PDF not showing any content. Saving the PDF to disk from the browser and investigating the contents revealed that they were different from the original. I am showing screenshots from Atom for both (original first), in ISO-8859-1 encoding:

[![snippet from original pdf](https://i.stack.imgur.com/sx5ts.png)](https://i.stack.imgur.com/sx5ts.png)

[![same part, pdf as saved from browser](https://i.stack.imgur.com/ptl0Z.png)](https://i.stack.imgur.com/ptl0Z.png)

My conclusion so far: Spring or Tomcat somehow changed the binary content. Maybe it is encoding it? In Base64?

Then I tried to implement it on the server side, to see what is going on. I implemented a REST controller that would serve the PDF contents. An interesting find is that it initially gave the same results as the direct approach. I used ClassPathResource to get a handle to the PDF file. But when I load the PDF directly from a path using FileInputStream and File, it works. See the code below:

```
@RequestMapping(value = "/test.pdf", method = RequestMethod.GET, produces = "application/pdf")
public void getFile(HttpServletResponse response) {
    try {
        DefaultResourceLoader loader = new DefaultResourceLoader();

        /* does not work
        ClassPathResource pdfFile = new ClassPathResource("test.pdf");
        InputStream is = pdfFile.getInputStream();
        */

        /* works */
        InputStream is = new FileInputStream(new File("z:\\downloads\\test.pdf"));

        IOUtils.copy(is, response.getOutputStream());
        response.setHeader("Content-Disposition", "inline; filename=test.pdf");
        response.setContentType("application/pdf");
        response.flushBuffer();
    } catch (IOException ex) {
        throw new RuntimeException("IOError writing file to output stream");
    }
}
```

What is going on here? Why is Spring/Tomcat changing the binary data when using ClassPathResource or when serving it directly? I would be grateful for some help here. I cannot use a direct path because the PDF will eventually be inside a jar file, so I will need ClassPathResource or some other ResourceLoader.<issue_comment>username_1: Writing directly to the response output stream might be nuking your ability to set headers. I tested your code with `curl` as the user agent and `Content-Type` was missing from the response, which would cause your client to apply a transformation and mess up the content. Rewriting your method body to look like this will fix the problem:

```
@RequestMapping(value = "/test.pdf", method = RequestMethod.GET, produces = "application/pdf")
public ResponseEntity<InputStreamResource> getFile() {
    try {
        ClassPathResource pdfFile = new ClassPathResource("test.pdf");
        HttpHeaders headers = new HttpHeaders();
        headers.add(HttpHeaders.CONTENT_DISPOSITION, "inline; filename=test.pdf");
        InputStream is = pdfFile.getInputStream();
        return new ResponseEntity<>(new InputStreamResource(is), headers, HttpStatus.OK);
    } catch (IOException ex) {
        throw new RuntimeException("IOError writing file to output stream");
    }
}
```

I gather from the fact that you're streaming responses that these are potentially large and you care about efficiency. The best way to do this is to return an implementation of `StreamingResponseBody` in which you implement the `write` method using a fast NIO stream-to-stream copy.

Upvotes: 0 <issue_comment>username_2: Ok, finally I found the culprit, and it was in a completely unexpected corner. I am using IntelliJ with Maven for this project and, as it turns out, the content of the PDF was being corrupted when it was copied to the /target folder. And of course Tomcat was serving this file, not the one in the /src folder... so it had nothing to do with ClassPathResource or Spring. It was Maven. I had to disable resource filtering for (binary) pdf files in the pom.xml:

```
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <configuration>
        <nonFilteredFileExtensions>
            <nonFilteredFileExtension>pdf</nonFilteredFileExtension>
        </nonFilteredFileExtensions>
    </configuration>
</plugin>
```

which solved the issue. Now a direct request for the file (localhost:8080/test.pdf) as well as the REST controller approach work.

@username_1: thanks for the quick reply, although it didn't solve the issue.

Upvotes: 3
2018/03/15
540
1,432
<issue_start>username_0: My file contains records like this:

```
11001^1^100^2015-06-05 22:35:21.543^^0122648d-4352-4eec-9327-effae0c34ef2^2016060601
```

I am supposed to split the file on the character `^`, but I am getting an `ArrayIndexOutOfBoundsException`. Here is my program:

```
val spark = SparkSession.builder().appName("KPI 1").master("local").getOrCreate()
val data = spark.read.textFile("/some/path/to/Set_Top_Box_Data.txt").rdd

val raw = data.map{ record =>
  val rec = record.trim().toString.split("^")
  (rec(0),rec(2))
}

raw.collect().foreach(println)

spark.stop
```

And here is the associated error trace:

```none
18/03/15 13:38:33 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.ArrayIndexOutOfBoundsException: 2
    at KPI1.FilterForId1001$$anonfun$1.apply(FilterForId1001.scala:12)
    at KPI1.FilterForId1001$$anonfun$1.apply(FilterForId1001.scala:11)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:410)
    at scala.collection.Iterator$class.foreach(Iterator.scala:891)
    ...
```
<issue_comment>username_1: `^` is interpreted as a regex anchor for the start of the line, so nothing is actually split and `rec(2)` is out of bounds. You need to escape it, and you should be good:

```
val raw = data.map{ record => {
  val rec = record.trim().toString.split("\\^")
  (rec(0),rec(2))
}}
```

Upvotes: 2 <issue_comment>username_2: You should split this way:

```
.split("\\^")
```

instead of

```
split("^")
```

Upvotes: 3 [selected_answer]
2018/03/15
579
2,087
<issue_start>username_0: I tried the URL below, <https://api.xero.com/api.xro/2.0/Contacts>, but got this error:

oauth\_problem=consumer\_key\_unknown&oauth\_problem\_advice=Consumer%20key%20was%20not%20recognised

How do I solve this error? And what is the URL pattern for a GET request using the consumer key and consumer secret key?<issue_comment>username_1: The best place to start would be <https://developer.xero.com/>. There you will find documentation about how to OAuth an app, and links to SDKs with examples of making calls.

You can get your key and secret by signing up at <https://developer.xero.com/myapps>.

Upvotes: 0 <issue_comment>username_2: Access to Xero API endpoints is controlled with OAuth 1.0a - therefore each request requires several parameters to be passed and the payload to have a correct OAuth signature.

If you just want to get the data without coding, I recommend trying the API previewer - log in at <https://app.xero.com/> and set the "Accept Type" to json. Another option is Postman ... <https://developer.xero.com/documentation/tools/postman>

If you are using code - I recommend using one of the SDKs, as they handle all the gnarly bits of OAuth signatures, etc. For PHP users we just created a sample app that shows off most endpoints: <https://github.com/XeroAPI/xero-php-sample-app>

Upvotes: 0 <issue_comment>username_3: This is an authorization problem: you are not authorized. Xero has now deprecated OAuth1; using OAuth2 you need the following:

client\_id
client\_secret
redirectUri // callback URL which Xero hits

To get these, go to <https://developer.xero.com/myapps/> (to create a new app) and fill in the following:

App name (name of the app, e.g. xyz).
Company or application URL (your application's live URL, e.g. https://www.google.com).
OAuth 2.0 redirect URIs (callback URL which Xero hits on authorization, e.g. <http://localhost:10010/api/callback>).

Then generate the secret and save it.

Code example for Node: <https://github.com/XeroAPI/node-oauth2-example/blob/master/index.js>

In place of authorizationCallback, use the oauthCallback function.

Upvotes: 1
2018/03/15
414
1,404
<issue_start>username_0: I want the `TextView` in my `ViewSwitcher` to switch colors between green and white on every new line. I have figured out how to change the color, but only for the entire `TextView`, with the method:

```
SetTextColor()
```
<issue_comment>username_1: You would need to use the TextFormatted property of the TextView. You can find a sample on the Xamarin forums: <https://forums.xamarin.com/discussion/9403/setting-a-textview-to-a-spannable-string>

Upvotes: 0 <issue_comment>username_2: > I want the TextView in my ViewSwitcher to switch colors between green and white on every new line. I have figured out how to change the color, but only for the entire TextView

You can use `TextFormatted` together with `Android.Text.SpannableString` to set the color per line:

```
int lineCount = tvResult.LineCount;
SpannableString spannableText = new SpannableString(tvResult.Text);
for (int i = 0; i < lineCount; i++)
{
    int start = tvResult.Layout.GetLineStart(i);
    int end = tvResult.Layout.GetLineEnd(i);
    if (i % 2 == 1)
    {
        spannableText.SetSpan(new ForegroundColorSpan(Android.Graphics.Color.Yellow), start, end, SpanTypes.Composing);
    }
    else
    {
        spannableText.SetSpan(new ForegroundColorSpan(Android.Graphics.Color.Green), start, end, SpanTypes.Composing);
    }
}
tvResult.TextFormatted = spannableText;
```

Note: tvResult is just a TextView control.

Upvotes: 3 [selected_answer]
2018/03/15
640
3,145
<issue_start>username_0: I want to run cmd as an administrator on VSTS. Actually, I am trying to install git-tfs with the Chocolatey package manager on a VSTS hosted agent, so I am running the following command in a VSTS command-line task:

@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('<https://chocolatey.org/install.ps1>'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"

Alternatively, VSTS also provides the Chocolatey task for installation; you can see this in the screenshot:

[![enter image description here](https://i.stack.imgur.com/PM3iE.png)](https://i.stack.imgur.com/PM3iE.png)

Both of the above approaches give the same error:

[error]System.Management.Automation.RuntimeException: Installation of Chocolatey to default folder requires Administrative permissions. Please run from elevated prompt. Please see <https://chocolatey.org/install> for details and alternatives if needing to install as a non-administrator. ---> System.Management.Automation.RuntimeException: Installation of Chocolatey to default folder requires Administrative permissions. Please run from elevated prompt. Please see <https://chocolatey.org/install> for details and alternatives if needing to install as a non-administrator.<issue_comment>username_1: You generally can't. If something requires admin access and you're using the hosted agent, you can't do that thing.

For your specific problem, I'd start by looking at this resource, which the error message gave to you:

> "Please see <https://chocolatey.org/install> for details and alternatives if needing to install as a non-administrator."

Upvotes: 2 <issue_comment>username_2: I have just done a quick test, which was the following:

1. Set up a VSTS Build using the Hosted 2017 Build Agent
2. Added a PowerShell task with the following contents: choco list --local-only
3. Ran the build

This command correctly output the list of Chocolatey packages that are currently installed via Chocolatey. This tells me that Chocolatey is already installed on the Hosted 2017 Build Agent, and as a result, you shouldn't need to install it again. Instead, you should be able to install additional applications using it.

**NOTE:** The packages that you try to install will still be subject to the same permissions, though. So if you are trying to install an application that requires administrative permissions, then you will likely run into the same problems.

Upvotes: 3 [selected_answer]
2018/03/15
821
3,165
<issue_start>username_0: I am getting this error:

```
Uncaught exception 'Braintree\Exception\Configuration' with message 'Braintree\Configuration::merchantId needs to be set (or accessToken needs to be passed to Braintree\Gateway).
```

My question is: if my merchant ID is not set, how is the sub-merchant being created? I am able to see the sub-merchant account in my dashboard. But when I call this method:

```
$webhookNotification = Braintree\WebhookNotification::parse($sampleNotification['bt_signature'], $sampleNotification['bt_payload']);
```

it says:

```
Uncaught exception 'Braintree\Exception\Configuration' with message 'Braintree\Configuration::merchantId needs to be set (or accessToken needs to be passed to Braintree\Gateway).
```
<issue_comment>username_1: Full disclosure: I work at Braintree. If you have any further questions, feel free to contact [support][support].

The [merchant ID](https://articles.braintreepayments.com/control-panel/important-gateway-credentials#merchant-id) is a required API credential for all Braintree API calls, along with the public and private key. You are able to see submerchants in your dashboard without the merchant ID because our system recognizes your login to the Dashboard as valid authentication, instead of relying on the API credentials. When using our SDKs, you will need to [set up your API credentials appropriately](https://developers.braintreepayments.com/start/hello-server/php#or-use-composer). You can find the API credentials for your account by following the [instructions in our documentation](https://articles.braintreepayments.com/control-panel/important-gateway-credentials#api-keys).

We now support both [class level and instance methods](https://developers.braintreepayments.com/reference/general/class-level-vs-instance-methods/php).

**Class Level Example**

```
Braintree_Configuration::environment('sandbox');
Braintree_Configuration::merchantId('use_your_merchant_id');
Braintree_Configuration::publicKey('use_your_public_key');
Braintree_Configuration::privateKey('use_your_private_key');
```

**Instance Method Example**

```
$gateway = new Braintree_Gateway([
    'environment' => 'sandbox',
    'merchantId' => 'use_your_merchant_id',
    'publicKey' => 'use_your_public_key',
    'privateKey' => 'use_your_private_key'
]);
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: I use Laravel. In my case, the problem came from the config file cache. For some reason Laravel did not generate the config cache from the command `php artisan config:cache`. I solved it by deleting the config cache:

```
php artisan config:clear
```

But the real problem in my case was Laravel's config cache generation. I hope it's useful.

**UPDATE**

My config cache did not work because I had put the `env()` helper NOT in the configuration files, but elsewhere (in my case: AppServiceProvider). In production mode the .env parameters must be read only from config files.

> If you are using the config:cache command during deployment, you must make sure that you are only calling the env function from within your configuration files, and not from anywhere else in your application.

Upvotes: 0
2018/03/15
503
2,046
<issue_start>username_0: I read that when executing a `TOP` query without an `ORDER BY`, the results returned by SQL Server may differ between executions, because without sorting it is not guaranteed that you receive the same top x values. I wondered whether the same applies to the following sample:

In a table `person` there are columns `id`, `name` and `age`. The table contains thousands of rows, but all persons have the same age. The id is unique. I now want to page through this table with a page size of e.g. 20 and order by age.

```
SELECT *
FROM person
ORDER BY age -- ,id -- is this necessary?
OFFSET 0 ROWS FETCH NEXT 20 ROWS ONLY;
```

Does the same issue occur as with the `TOP` clause, because the column being ordered by matches far more records than the page size? Do I have to additionally order by `id` to make sure that every record occurs exactly once when going through the pages?

<http://sqlfiddle.com/#!18/93988/3><issue_comment>username_1: Yes. `ORDER BY` is *the* control you have to tell the server what ordering guarantees you want it to provide. Anything you fail to say in the `ORDER BY` clause is fair game for the system to fail to respect.

For dependable paging1, you need to ensure that you have enough expressions in your `ORDER BY` clause so that every row's position is *uniquely* defined. For a single-table `SELECT`, this is most easily achieved by including all primary key column(s) after any other ordering criteria.

---

1But note that this still assumes that the data in the table is *static*. Getting "once and only once" display of data when paging in the face of possible inserts, updates and deletes can be tricky, and you'll often wind up accepting the occasional paging mishap.

Upvotes: 3 [selected_answer]<issue_comment>username_2: Adding the Id will help to a certain extent, but regardless of how you page, SQL select statements are not deterministic, so you cannot make assumptions about uniqueness across query executions.

Upvotes: 0
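Applied to the table in the question, the accepted answer's advice amounts to appending the unique `id` after `age`, so every row has a fixed position and consecutive pages neither overlap nor skip rows (assuming, as noted in the footnote, that the data is static between requests):

```
SELECT *
FROM person
ORDER BY age, id                          -- id is unique, so the ordering is total
OFFSET 20 ROWS FETCH NEXT 20 ROWS ONLY;   -- second page of 20
```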
2018/03/15
618
2,188
<issue_start>username_0: I need to assign a new variable to the user identity, like:

```
Yii::$app->user->identity->staff_name = 'myName';
```

I have added `public $staff_name;` to the identityClass, which is usually `common\models\User` but in my case is `common\models\Person`. But when I print `Yii::$app->user->identity`, the

```
Yii::$app->user->identity->staff_name value is blank.
```

Why is it blank?<issue_comment>username_1: `Steps to follow`
-----------------

* You should add the public attribute `staff_name` to the `safe` rules, see below:

```
public function rules(){
    return[
        [['staff_name'],'safe']
    ];
}
```

* If you are using **`dektrium/yii2-user`**, use the following way to add it to the parent rules in your `Person` model:

```
public function rules()
{
    return array_merge(parent::rules(), [
        'staff_nameSafe' => ['staff_name', 'safe']
    ]);
}
```

* Then you need to load this attribute manually too; it won't get a value automatically like the other model attributes, which are actual table fields. So use `afterFind` in your `Person` model like below. Note: I am using the hardcoded string `SAMPLE STAFF NAME` for `staff_name`; adjust it according to your needs.

```
public function afterFind()
{
    parent::afterFind();
    $this->staff_name = "SAMPLE STAFF NAME";
}
```

`Test`
------

Now you can use `print_r(Yii::$app->user->identity->staff_name);` and it will print the name

```
SAMPLE STAFF NAME
```

Upvotes: 1 <issue_comment>username_2: Using rules did not work for me. Instead I use a public parameter:

```
public $staff_name = '';
```

Then you can set it via afterFind like this:

```
public function afterFind()
{
    parent::afterFind();
    $this->staff_name = \common\models\staff::findOne($this->staffid)->staffname;
}
```

This assumes you keep the staff name in a staff table, so:

* $this->staff\_name is your public parameter
* `\common\models\staff` is the model of the staff table in the common\models namespace
* `$this->staffid` is the staff id in your user identity
* staffname is the staff name field in your staff table

Then you can use it like `Yii::$app->user->identity->staff_name`.

Upvotes: 0
2018/03/15
460
1,760
<issue_start>username_0: I am using Xcode 9.2, Swift 4. After automatically signing the app and submitting it to the App Store for review, I get an email with the message: "The file libswiftCore.dylib doesn't have the correct code signature. Make sure you're using the correct signature, rebuild your app using the current public (GM) version of Xcode". Can someone help with this challenge? I am developing a native iOS app, no Xamarin.

How do I check that libswiftCore.dylib has the correct code signature? And how do I rebuild the app with the current public (GM) version of Xcode? Help and a solution are needed, please.<issue_comment>username_1: Managed to fix it somehow...

1. I deleted the packages, obj, and bin folders.
2. Restored the NuGet packages, using the latest official versions 3.0.1 and 3.1.0.
3. Used Xamarin Studio to archive.
4. From Xcode, I opened the archive and created the SwiftSupport folder.
5. Copied all the Swift dynamic library files `*.dylib` into it.
6. Exported the archive with Xcode.
7. Uploaded with Application Loader.

Hope it helps someone.

Upvotes: 1 <issue_comment>username_2: Do the following steps:

1. Delete the obj and bin folders in your iOS project.
2. Check that your NuGet packages are Xamarin.Swift3 or Xamarin.Swift4.
3. Use "Xamarin Studio" or "Visual Studio for Mac" to archive your app.
4. Open the archive with "Finder".
5. Add a folder named "SwiftSupport" in it. (Don't clone it into the archive.)
6. Clone the Swift dynamic library files `*.dylib` (which you need) into it. **Note**: They should be copied from the right toolchain of Xcode. For example, if your NuGet package is Xamarin.Swift3, they should be copied from Xcode 8.3.3 (GM).
7. Export the archive with "Xcode".
8. Upload with "Application Loader".

Upvotes: 0