2018/03/15
<issue_start>username_0: I need to work with an intricate configuration of repositories. I have 5 of them:

1. A remote central repository on machine 1.
2. My local repository on my notebook (machine 2).
3. A bare repository on machine 3.
4. A repository on machine 3.
5. A repository on machine 4 where we do code review.

So, my understanding is that it works this way:

1. On my laptop (machine 2) I clone / pull from the central repository located on machine 1.
2. I push the local repo to machine 3 (using the bare repository as an "intermediate").

Now I did some changes on machine 3 and I want to push these changes to machine 4. Here are the instructions that I need to follow:

1. On machine 3, do all work in your test-branch and commit.
2. Push to your bare repo on machine 3: `git push origin test-branch`
3. On your laptop, fetch new commits from the machine-3 repo: `git fetch machine3`
4. Check out your branch from machine 3: `git checkout -b test-branch machine-3/test-branch`
5. Fetch commits from machine 4: `git fetch origin`
6. `git rebase origin/master`
7. `git push origin HEAD:refs/for/master`

I have problems with step 4. I get the following error:

> fatal: 'machine3/test-branch' is not a commit and a branch 'test-branch' cannot be created from it

**ADDED** When I execute

```
git rev-parse machine3/test-branch
```

on my laptop (machine 2) I get:

> machine3/test-branch
>
> fatal: ambiguous argument 'machine3/test-branch': unknown revision or path not in the working tree.
>
> ```
> Use '--' to separate paths from revisions, like this:
> 'git [...] -- [...]'
> ```

<issue_comment>username_1: The question is complex and convoluted, but the answer is simple. There was a mismatch between the alias and machine3: the remote alias that was used was not the one for machine3. machine3 was registered under another alias.
Upvotes: 1 [selected_answer]<issue_comment>username_2: For those who found this searching for an answer to `fatal: 'origin/remote-branch-name' is not a commit and a branch 'local-branch-name' cannot be created from it`, you may also want to try this first:

```
git fetch --all
```

If you run `git checkout -b local-branch-name origin/remote-branch-name` without `fetch`ing first, you can run into that error.

The reason it says "is not a commit" rather than something clearer like "branch doesn't exist" is that git takes the argument where you specified `origin/remote-branch-name` and tries to resolve it to a commit hash. You can use tag names and commit hashes as an argument here, too. If they fail, it generates the same error. If git can't resolve the branch you provide to a specific commit, it's usually because it doesn't have the freshest list of remote branches. `git fetch --all` fixes that scenario.

The `--all` flag is included in case you have multiple remotes (e.g. `origin`, `buildserver`, `joespc`, etc.), because `git fetch` by itself defaults to your first remote, usually `origin`. You can also run `fetch` against a specific remote; e.g., `git fetch buildserver` will fetch only the branches from the `buildserver` remote.

To list all your remotes, run the command `git remote -v`. You can omit the `--all` flag from `git fetch` if you only see one remote name (e.g. `origin`) in the results. Upvotes: 8 <issue_comment>username_3: If you're checking out a branch from a tag (like `git checkout -b XXXX v0.1.1`), you can try `git fetch --tags` first. Upvotes: 2 <issue_comment>username_4: I managed to fix this with these settings: just update the config with the command `git config -e --global` and add this config.
```
[remote "origin"]
    url = https://git.example.com/example.git
    fetch = +refs/heads/*:refs/remotes/origin/*
```

(you can keep your existing `url` value; the line that matters is the `fetch` refspec) and then you can run `git fetch --all`. Upvotes: 4 <issue_comment>username_5: I used the git workflow in Visual Studio Code as shown in the diagram below to solve mine: [![create a feature branch using git on vscode](https://i.stack.imgur.com/rt32W.png)](https://i.stack.imgur.com/rt32W.png) Upvotes: -1 <issue_comment>username_6: The [solution](https://stackoverflow.com/a/55406283/6309) from [username_2](https://stackoverflow.com/users/3986790/j-d-mallen) involves `git fetch --all`.

Starting with VSCode 1.53 (Jan. 2021), you will have ([issue 110811](https://github.com/microsoft/vscode/issues/110811) and [PR 111090](https://github.com/microsoft/vscode/pull/111090)):

> Changing `autofetch` to a string config which has "`current`", "`all`" and "`off`".
>
> This allows users to fetch from **all remotes** instead of just their `origin`.
> For developers working on a Fork, this should make extensions work more intuitively.

So if you set the VSCode setting `git.autofetch` to "`all`", you won't even have to type `git fetch --all`.

---

[<NAME>](https://stackoverflow.com/users/1601580/charlie-parker) adds in [the comments](https://stackoverflow.com/questions/49297153/why-is-it-not-a-commit-and-a-branch-cannot-be-created-from-it?noredirect=1#comment132349480_49297153):

> What about `git pull`?

That supposes you are on a current branch with a remote-tracking branch already set up. The original question does not have such a branch in place, which means `git pull` would not know what to merge from the fetched branches. Upvotes: -1 <issue_comment>username_7: For this issue:

```
fatal: 'machine3/test-branch' is not a commit and a branch 'test-branch' cannot be created from it
```

For me, I should have checked out `test-branch` first; then it worked fine and created a new branch from `test-branch`.
Upvotes: 0 <issue_comment>username_8: We had this error:

```
fatal: 'origin/newbranch' is not a commit and a branch 'newbranch' cannot be created from it
```

because we did a minimalistic clone using:

```
git clone --depth 1 --branch 'oldbranch' --single-branch '<EMAIL>:user/repo.git'
```

For us the minimalistic fix was:

```
git remote set-branches --add 'origin' 'newbranch'
git fetch 'origin'
git checkout --track 'origin/newbranch'
```

Assuming the remote is called 'origin' and the remote branch is called 'newbranch'. Upvotes: 6 <issue_comment>username_9: We were having this exact error on a Windows machine running Git Bash in a folder synced with Google Drive. **It was caused by having the feature "Enable experimental built-in file system monitor" enabled.** After uninstalling and reinstalling Git Bash with this feature disabled, it was fixed. [![enter image description here](https://i.stack.imgur.com/BKOns.png)](https://i.stack.imgur.com/BKOns.png) Upvotes: 1 <issue_comment>username_10: I found this question while troubleshooting a simpler problem: I received the same error trying to create a local branch from a remote one. I resolved it by creating the branch from the commit hash:

```
git ls-remote origin remote-branch
# returned: <commit-hash>  refs/heads/remote-branch
git checkout -b local-branch <commit-hash>
```

Upvotes: 2 <issue_comment>username_11: I had the problem where I ran `git checkout name-branch` and the branch was created, but when I checked with `git branch` nothing showed up. After hours of trying to figure it out, I ran the command `git push GitHub-URL name-branch` and it pushed the commit directly to the branch. Hope this helps. Upvotes: 0 <issue_comment>username_12: My issue was that I had a ***space*** in my new branch name. Issue: `git checkout -b my-new-branch name` instead of `git checkout -b my-new-branch-name` Upvotes: 0 <issue_comment>username_13: The branch does not exist on the remote origin specified. Double-check: you might be trying to pull oranges from a grape tree.
Upvotes: 0 <issue_comment>username_14: Check your git config in the `.git` folder and validate this content:

```
[core]
    repositoryformatversion = 0
    fileMode = false
    bare = false
    logallrefupdates = true
    ignorecase = true
    precomposeunicode = true
[remote "origin"]
    url = https://github.com/ABC/project0001.git
    fetch = +refs/heads/*:refs/remotes/origin/*
[branch "main"]
    remote = origin
    merge = refs/heads/main
[pull]
    ff = no
[branch "develop"]
    remote = origin
    merge = refs/heads/develop
```

`[remote "origin"]` <-- this section Upvotes: 0 <issue_comment>username_15: For those who found this searching for an answer to `fatal: 'origin/remote-branch-name' is not a commit and a branch 'local-branch-name' cannot be created from it`, another possibility we just found is that the remote branch has already been merged :embarrassed: Upvotes: 0 <issue_comment>username_16: I just encountered this when I used the "copy" tool on GitHub next to the branch name of a coworker's PR. His branch was named like `coworker/ticket-number`, and `git checkout --track coworker/ticket-number` was what came out of the paste. When I went back to my usual workflow, using `git branch -r | grep ticket-number`, I found that the actual branch name I needed to use was `origin/coworker/ticket-number`. Upvotes: 0
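username_2's `git fetch --all` explanation can be reproduced end to end in a throwaway sandbox; every name below (`remote.git`, `work-a`, `work-b`, `trunk`) is made up for the demo:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

git init --bare --quiet remote.git                  # stands in for the shared remote
git --git-dir=remote.git symbolic-ref HEAD refs/heads/trunk

git clone --quiet remote.git work-a 2>/dev/null     # first working clone
cd work-a
git symbolic-ref HEAD refs/heads/trunk              # pin the unborn branch name
git -c user.name=demo -c user.email=demo@example.com \
    commit --quiet --allow-empty -m "initial commit"
git push --quiet origin trunk
cd ..

git clone --quiet remote.git work-b                 # second clone, taken BEFORE test-branch exists

cd work-a                                           # publish a new branch after work-b was cloned
git checkout --quiet -b test-branch
git push --quiet origin test-branch
cd ../work-b

# Without fetching, work-b has no origin/test-branch yet, so this fails
# with "is not a commit and a branch ... cannot be created from it":
if git checkout -b test-branch origin/test-branch >/dev/null 2>&1; then
    echo "unexpected: checkout succeeded without fetch"
else
    echo "checkout fails before fetch"
fi

git fetch --all --quiet                             # refresh remote-tracking branches
git checkout --quiet -b test-branch origin/test-branch
echo "checkout succeeds after fetch"
```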
2018/03/15
<issue_start>username_0: I am having trouble understanding why the "square" class isn't relative to the parent, "container". I.e., I set the container to have a height of 200px; the "square" class is inside the container and is floated left. When I set margin-top to 5%, it is not 5% of 200px but something else. Why?

```css
.container {
  height: 200px;
  background-color: gray;
}

.square {
  width: 20%;
  height: 70%;
  margin-left: 20px;
  margin-top: 5%; /* this guy right here */
  background-color: white;
  float: left;
}
```

```html
<div class="container">
  <div class="square"></div>
</div>
```<issue_comment>username_1: Percentage `margin` values are resolved against the width of the parent, not the height. You could use some JavaScript to create height-related margins. Upvotes: 0 <issue_comment>username_2: Percentage values for vertical padding and margin are relative to the *width* of the parent. The `top` and `bottom` properties, on the other hand, are not. Try placing a div inside another, use the top property (for example: `top: 25%`) and make sure you specify a position (e.g. `position: relative;`). I've forked and adjusted your code example to demonstrate what I mean: <https://codepen.io/anon/pen/Yaqmva>

```
top: 5%;
position: relative;
```

Upvotes: 3 [selected_answer]
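To make the selected answer concrete: with a hypothetical 1000px-wide container (the width is made up; the question never sets one), the two kinds of percentages resolve like this:

```css
.container {
  width: 1000px;     /* hypothetical, for the arithmetic below */
  height: 200px;
}

.square {
  position: relative;
  margin-top: 5%;    /* 5% of the container's 1000px WIDTH  = 50px */
  top: 5%;           /* 5% of the container's 200px HEIGHT  = 10px */
}
```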
2018/03/15
<issue_start>username_0: I have the following JSON document in MongoDB:

```
{
    "_id" : ObjectId("59de156faf75d539b47e8db3"),
    "user" : "user1",
    "item" : {
        "32a1fsd32asfd65asdf65" : {
            ...
        },
        "32a1fsd32asfd555" : {
        },
        ...
    }
}
```

I want to perform a query and delete one of the two items. As a matter of fact, my database contains several users. Therefore, in order to retrieve the specific one from MongoDB, I am performing the following:

How can I also retrieve a specific item and delete all its fields (for example `32a1fsd32asfd65asdf65`)?
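The deletion the question asks about (removing one embedded item and all its fields) is typically done with MongoDB's `$unset` update operator; a sketch in the mongo shell, where the collection name `users` is an assumption (the question does not name the collection):

```
db.users.updateOne(
    { "user": "user1" },
    { "$unset": { "item.32a1fsd32asfd65asdf65": "" } }
)
```

The dotted path removes just that one sub-document (and everything inside it) while leaving the rest of `item` intact.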
2018/03/15
<issue_start>username_0: I need to join many tables on one main table, from which I need all the rows. Should I do a left join for all of them in the same query, or should I use temp tables, physical intermediary tables, or window functions? Currently the query takes a very long time to complete for some 85K rows, even with indexes (the ones the query engine suggested) and optimized field data types. I chose to pursue temp tables. By using five of them I was able to drastically decrease query times by distributing the queries among the temp tables. I also added a temp table to hold all the subqueries, drastically reducing query times on each subquery. Here is the original query: ``` CREATE VIEW [dwh].[Facts Tickets LVL 2 V] AS SELECT /* Level 1 fields */ T.[Ticket ID] ,T.[Brand ID] ,T.[Category ID] ,T.[Channel ID] ,T.[Custom field ID] ,T.[Brand Name] ,T.[Company Group Name] ,T.[Ticket creator User ID] ,T.[Created (datetime)] ,T.[Ticket URL] ,T.[Shared URL] ,T.[Ticket type] ,T.[Status group] ,T.[Importance] ,T.[Allow channelback] ,T.[Has incidents] ,T.[Is Hidden] ,T.[Has draft-reply] ,T.[Has staff answer] ,T.[Is Assigned] ,T.[Is Assigned to bot] ,T.[Is Deleted] ,T.[Is Expired] ,T.[Is Locked] ,T.[Is Spam] ,T.[Has Attachments] ,T.[Has Satisfaction entry] ,T.[Ticket age (days)] ,T.[Ticket age (group)] ,T.[Replies (count)] ,T.[Comments (count)] ,T.[Rows (count)] ,T.[Datasync ID] ,T.[DWH Processing (datetime)] ,T.[DWH Status] /* Level 2 fields */ ,RAC.[Replies by Agent (count)] -- +2 sec ,RATC.[Replies by Ticket Creator (count)] -- +2 sec ,FCR.[First Customer-reply (datetime)] -- +2 sec ,LCR.[Last Customer-reply (datetime)] -- +1 sec ,LCR.[Due (datetime)] -- +0 sec ,FAR.[First Agent-reply (datetime)] -- +1 sec ,LAR.[Agent User ID] -- ++++ ,LAR.[Last Agent-reply (datetime)] -- ++++ ,LAR.[Updated (datetime)] -- ++++ ,TS.[Satisfaction, scored] -- ++++ ,TCWT.[Ticket creator wait time (minutes)] ,AWT.[Agent wait time (minutes)] ,ARS.[Agent total wait time (minutes)] ,ARS.[Ticket
creator total wait time (minutes)] FROM [dwh].[Facts Tickets LVL 1 T] AS T --85K in 3 sec /* ################## Ticket satisfaction queries ################## */ /* */ LEFT JOIN ( SELECT [Ticket ID], [Satisfaction, scored] -- 317 0 sec, 317 1 sec FROM [dwh].[Facts Ticket Satisfactions LVL 1 V] WHERE [Ticket Satisfaction ID] IN ( SELECT MAX([Ticket Satisfaction ID]) FROM [dwh].[Facts Ticket Satisfactions LVL 1 V] GROUP BY [Ticket ID] ) ) as TS ON T.[Ticket ID] = TS.[Ticket ID] /* ################## Response-statistics queries ################## */ /* Ticket creator wait time */ LEFT JOIN ( SELECT [Ticket ID], [Agent reply-time (seconds)] / 60 AS [Ticket creator wait time (minutes)] -- 445K in 2 sec, 445K in 3 sec, 418K in 5 sec FROM [dwh].[Facts Response-statistics LVL 1 T] WHERE [Response-statistics ID] IN ( SELECT MAX([Response-statistics ID]) -- 445K in 2 sec FROM [dwh].[Facts Response-statistics LVL 1 T] GROUP BY [Ticket ID] ) ) AS TCWT ON T.[Ticket ID] = TCWT.[Ticket ID] /* Agent wait time */ LEFT JOIN ( SELECT [Ticket ID], [Agent wait-time (seconds)] / 60 AS [Agent wait time (minutes)] -- 445K in 1 sec, 418K in 5 sec FROM [dwh].[Facts Response-statistics LVL 1 T] WHERE [Response-statistics ID] IN ( SELECT MIN([Response-statistics ID]) -- Flag: takes agent first wait, not last wait time FROM [dwh].[Facts Response-statistics LVL 1 T] GROUP BY [Ticket ID] ) ) AS AWT ON T.[Ticket ID] = AWT.[Ticket ID] /* Accumulated stats */ LEFT JOIN ( SELECT [Ticket ID], SUM([Agent reply-time (seconds)]) / 60 AS [Ticket creator total wait time (minutes)], SUM([Agent wait-time (seconds)]) / 60 AS [Agent total wait time (minutes)] --445K in 4 sec FROM [dwh].[Facts Response-statistics LVL 1 T] GROUP BY [Ticket ID]) AS ARS ON T.[Ticket ID] = ARS.[Ticket ID] /* ################## Reply queries ################## */ -- 85K in 20 sec /* [Replies by Agent (count)]. 
547K in 11 sec */ LEFT JOIN ( SELECT [Ticket ID], COUNT([Reply ID]) AS [Replies by Agent (count)] -- 575K in 3 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [By Agent (Yes/No)] = 'Yes' GROUP BY [Ticket ID] ) AS RAC ON T.[Ticket ID] = RAC.[Ticket ID] /* [Replies by Ticket Creator (count)]. 377K in 33 sec */ LEFT JOIN ( SELECT [Ticket ID], COUNT([Reply ID]) AS [Replies by Ticket Creator (count)] -- 398K in 3 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [By Agent (Yes/No)] = 'No' GROUP BY [Ticket ID] ) AS RATC ON T.[Ticket ID] = RATC.[Ticket ID] /* First Customer Reply */ LEFT JOIN ( SELECT [Ticket ID], [Creation (datetime)] AS [First Customer-reply (datetime)] -- 398K in 5 sec, 398K in 8 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [Reply ID] IN ( SELECT MIN([Reply ID]) -- 398K in 4 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [By Agent (1/0)] = 0 GROUP BY [Ticket ID] ) ) AS FCR ON T.[Ticket ID] = FCR.[Ticket ID] /* Last Customer Reply. 376K in 26 sec*/ --<<-- Bottleneck LEFT JOIN ( SELECT [Ticket ID], [Creation (datetime)] AS [Last Customer-reply (datetime)], [Due (datetime)] -- 398K in 5 sec, 8 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [Reply ID] IN ( SELECT MAX([Reply ID]) -- 398 in 4 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [By Agent (1/0)] = 0 GROUP BY [Ticket ID] ) ) AS LCR ON T.[Ticket ID] = LCR.[Ticket ID] /* First Agent Reply */ LEFT JOIN ( SELECT [Ticket ID], [Creation (datetime)] AS [First Agent-reply (datetime)] -- 6 sec, 9 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [Reply ID] IN ( SELECT MIN([Reply ID]) -- 575K in 4 sec, 550K in 12 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [By Agent (1/0)] = 1 GROUP BY [Ticket ID] ) ) AS FAR ON T.[Ticket ID] = FAR.[Ticket ID] /* Last Agent Reply */ LEFT JOIN ( SELECT [Ticket ID], [Reply User-ID] AS [Agent User ID], [Creation (datetime)] AS [Updated (datetime)], [Creation (datetime)] AS [Last Agent-reply (datetime)] -- 573K in 9 sec, 9 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [Reply ID] IN ( SELECT 
MAX([Reply ID]) -- 575K in 4 sec FROM [dwh].[Facts Replies LVL 1 T] WHERE [By Agent (1/0)] = 1 GROUP BY [Ticket ID] ) ) AS LAR ON T.[Ticket ID] = LAR.[Ticket ID] /* ################## Action queries ################## */ /* First 'Assigned the ticket to' Action. 12K in 0 sec */ LEFT JOIN ( SELECT [Ticket ID], MIN([Creation (datetime)]) AS [Initially assigned (datetime)] -- 5K in 1 sec FROM [dwh].[Facts Actions LVL 1 T] WHERE [Action Type ID] = 28 /* Assigned the ticket to */ GROUP BY [Ticket ID]) AS FAA ON T.[Ticket ID] = FAA.[Ticket ID] /* Last 'Assigned the ticket to' Action. 12K in 0 sec*/ LEFT JOIN ( SELECT [Ticket ID], MAX([Creation (datetime)]) as [Assigned (datetime)] -- 5K in 1 sec ----------- Artur FROM [dwh].[Facts Actions LVL 1 T] WHERE [Action Type ID] = 28 /* Assigned the ticket to */ GROUP BY [Ticket ID]) AS LAA ON T.[Ticket ID] = LAA.[Ticket ID] /* First 'Completed this ticket' Action. 504K in 8 sec. */ LEFT JOIN ( SELECT [Ticket ID], MIN([Creation (datetime)]) AS [First Completion time (datetime)] -- 534K in 4 sec FROM [dwh].[Facts Actions LVL 1 T] WHERE [Action Type ID] = 12 and [Action Type Value] = 'Completed' /* Completed this ticket */ GROUP BY [Ticket ID]) AS FCT ON T.[Ticket ID] = FCT.[Ticket ID] /* Last 'Completed this ticket' Action. 504K in 8 sec. */ LEFT JOIN ( SELECT [Ticket ID], MAX([Creation (datetime)]) AS [Status updated (datetime)], MAX([Creation (datetime)]) AS [Solved (datetime)], MAX([Creation (datetime)]) AS [Completion time (datetime)] -- 534K in 8 sec FROM [dwh].[Facts Actions LVL 1 T] WHERE [Action Type ID] = 12 and [Action Type Value] = 'Completed' /* Completed this ticket */ GROUP BY [Ticket ID]) AS LCT ON T.[Ticket ID] = LCT.[Ticket ID] /* [Agent touches (count)]. 558K in 6 sec. 
*/ LEFT JOIN ( SELECT [Ticket ID], COALESCE(COUNT(*),0) AS [Agent touches (count)] -- 616K in 5 sec FROM [dwh].[Facts Actions LVL 1 T] WHERE [Action User is Agent] = 'Yes' GROUP BY [Ticket ID]) AS ATC ON T.[Ticket ID] = ATC.[Ticket ID] /* Reopens (count) 504K in 7 sec. */ LEFT JOIN ( SELECT [Ticket ID], COUNT(*) -1 AS [Reopens (count)] -- 7K in 1 sec FROM [dwh].[Facts Actions LVL 1 T] WHERE [Action Type ID] = 12 and [Action Type Value] = 'Open' GROUP BY [Ticket ID] HAVING count([Ticket ID]) > 1 ) AS AL2 ON T.[Ticket ID] = AL2.[Ticket ID] WHERE YEAR([Created (datetime)]) = 2018 ```<issue_comment>username_1: This is just silly to me. You only report it if it is the `MAX([Ticket Satisfaction ID])` of **all** the tickets. Why not report the max of that ticket? Less work and more information. Use the same pattern for all the joins.

```
LEFT JOIN (
    SELECT [Ticket ID], MAX([Satisfaction, scored]) AS [Max Satisfaction]
    FROM [dwh].[Facts Ticket Satisfactions LVL 1 V]
    GROUP BY [Ticket ID]
) AS TS ON T.[Ticket ID] = TS.[Ticket ID]
```

If you only want the max of all, then:

```
LEFT JOIN (
    SELECT [Ticket ID], [Satisfaction, scored],
           DENSE_RANK() OVER (PARTITION BY [Ticket ID]
                              ORDER BY [Satisfaction, scored] DESC) AS dr
    FROM [dwh].[Facts Ticket Satisfactions LVL 1 V]
) AS TS ON T.[Ticket ID] = TS.[Ticket ID] AND TS.dr = 1
```

Upvotes: 1 <issue_comment>username_2: Using a stored procedure that spreads the calculations among different levels of temp tables with 4-6 joins each, with subqueries using a temp table shared among all the calculations, drastically reduced the query from a seemingly indefinite calculation time to about 10-15 seconds. Upvotes: 0 <issue_comment>username_3: > How to best join 10 plus tables on the same main table

I can see your derived tables are a little bit complicated. SQL Server uses statistics to choose a plan, and when you join multiple tables the way you did, estimates might be off. So I suggest: use a temp table, index it, and then run your query.

**Example:**

```
SELECT [Ticket ID], [Creation (datetime)] AS [First Customer-reply (datetime)] -- 398K in 5 sec, 398K in 8 sec
FROM [dwh].[Facts Replies LVL 1 T]
WHERE [Reply ID] IN (
    SELECT MIN([Reply ID]) -- 398K in 4 sec
    FROM [dwh].[Facts Replies LVL 1 T]
    WHERE [By Agent (1/0)] = 0
    GROUP BY [Ticket ID]
)
```

The above query's output should be inserted into a temp table, which should be indexed with the joining key as the leading column.

Also, I can see most of the queries are of the form below:

```
SELECT [Ticket ID], [Satisfaction, scored] -- 317 0 sec, 317 1 sec
FROM [dwh].[Facts Ticket Satisfactions LVL 1 V]
WHERE [Ticket Satisfaction ID] IN (
    SELECT MAX([Ticket Satisfaction ID])
    FROM [dwh].[Facts Ticket Satisfactions LVL 1 V]
    GROUP BY [Ticket ID]
)
```

You don't need to access the table twice; you can use window functions. Example query, for the query above:

```
;WITH cte AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY [Ticket ID]
                              ORDER BY [Ticket Satisfaction ID] DESC) AS rn
    FROM [dwh].[Facts Ticket Satisfactions LVL 1 V]
)
SELECT *
FROM cte
WHERE rn = 1
```

(note the `DESC`, so that `rn = 1` picks the row with the `MAX` id). You can remove `*` from my sample query, and index it like below for the query to perform well:

```
CREATE INDEX NCI_tcktid_trnsfrmid
    ON [dwh].[Facts Ticket Satisfactions LVL 1 V] ([Ticket ID], [Ticket Satisfaction ID])
    INCLUDE (/* the columns you need */)
```

Upvotes: 2 [selected_answer]
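The window-function rewrite username_3 suggests (trading the `WHERE ... IN (SELECT MAX(...) ... GROUP BY ...)` double scan for a single `ROW_NUMBER()` pass) can be sanity-checked on any engine with window functions. A minimal sketch using SQLite from Python; the table and column names are made-up stand-ins for the ticket-satisfaction view:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE satisfactions (id INTEGER PRIMARY KEY, ticket_id INTEGER, score INTEGER);
    INSERT INTO satisfactions (ticket_id, score) VALUES
        (1, 3), (1, 5),  -- ticket 1: latest entry (id 2) has score 5
        (2, 4);          -- ticket 2: single entry with score 4
""")

# Original pattern: the table is scanned twice (once for MAX(id) per ticket).
old = conn.execute("""
    SELECT ticket_id, score FROM satisfactions
    WHERE id IN (SELECT MAX(id) FROM satisfactions GROUP BY ticket_id)
    ORDER BY ticket_id
""").fetchall()

# Window-function rewrite: one pass, rank rows newest-first per ticket.
new = conn.execute("""
    WITH ranked AS (
        SELECT ticket_id, score,
               ROW_NUMBER() OVER (PARTITION BY ticket_id ORDER BY id DESC) AS rn
        FROM satisfactions
    )
    SELECT ticket_id, score FROM ranked WHERE rn = 1 ORDER BY ticket_id
""").fetchall()

print(old)  # [(1, 5), (2, 4)]
print(new)  # same rows, computed in a single scan
```

(SQLite gained window functions in 3.25, so this needs a reasonably recent Python build.)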
2018/03/15
<issue_start>username_0: I want to integrate Firebase Crashlytics into my Android game (developed using the Cocos2d-x engine). I have followed the steps mentioned in [this link](https://firebase.google.com/docs/crashlytics/get-started). I'm getting the following error when building the project using Android Studio:

```
ERROR - Crashlytics Developer Tools error.
java.lang.IllegalArgumentException: Crashlytics found an invalid API key: null.
Check the Crashlytics plugin to make sure that the application has been added successfully!
```

It looks like I am missing some API key. How can I find the **Crashlytics API** key in Firebase? How do I mention that key in the Android project? I would appreciate any suggestions and thoughts on this topic. Thank you.<issue_comment>username_1: Rolling back to `com.google.gms:google-services:4.0.1` in my project build.gradle file fixed the problem for me. The Firebase docs say to use version 4.1.0, which, if you follow, will give the error above. Upvotes: 1 <issue_comment>username_2: Please note the following things; this should solve your issue:

1. Check the Android Studio version; the suggested version is 3.1.4.
2. The classpath (in the project-level *build.gradle* file) shouldn't have any alpha or beta versions. That is because Firebase only supports the [stable version](https://developer.android.com/studio/releases/) of Android Studio.

That said, the recommended setup of your Gradle versions is:

```
classpath 'com.android.tools.build:gradle:3.1.4'
classpath 'com.google.gms:google-services:4.1.0'
```

Upvotes: 1 <issue_comment>username_3: Firebase Crashlytics requires the Google services Gradle plugin (com.google.gms:google-services) in your build script to process the google-services.json file at build time. However, Crashlytics does work on devices that do not have Google Play services. I have added this line at the end of `build.gradle`:

```
apply plugin: 'com.google.gms.google-services'
```

Upvotes: 0
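Taken together, the answers amount to a build-file setup along these lines. This is only a sketch assuming the standard two-file Gradle layout; the version numbers are the ones quoted in the answers, not a recommendation:

```
// project-level build.gradle
buildscript {
    dependencies {
        classpath 'com.android.tools.build:gradle:3.1.4'
        // username_1 reports rolling back from 4.1.0 to 4.0.1
        // fixed the "invalid API key: null" error:
        classpath 'com.google.gms:google-services:4.0.1'
    }
}

// app-level build.gradle -- at the very end of the file:
apply plugin: 'com.google.gms.google-services'
```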
2018/03/15
<issue_start>username_0: I have a loop that creates 34 buttons. How would I set a different onClick for each one?

```
while (i < 34) {
    Button btnTag = new Button(this);
    btnTag.setLayoutParams(new LinearLayout.LayoutParams(
            LinearLayout.LayoutParams.MATCH_PARENT,
            LinearLayout.LayoutParams.MATCH_PARENT));
    btnTag.setText(names[i]);
    btnTag.setId(i);
    btnTag.callOnClick();
    layout.addView(btnTag);
    btnTag.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            // HOW TO MAKE EACH ONE DO A DIFFERENT THING
        }
    });
    i++;
}
```<issue_comment>username_1: ```
public void createButtons() {
    while (i < 34) {
        Button btnTag = new Button(this);
        btnTag.setLayoutParams(new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT,
                LinearLayout.LayoutParams.MATCH_PARENT));
        btnTag.setText(names[i]);
        btnTag.setId(i);
        btnTag.setTag("TAG" + i); // set a tag
        btnTag.callOnClick();
        layout.addView(btnTag);
        btnTag.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                buttonClicked(v);
            }
        });
        i++;
    }
}

// handle button click
public void buttonClicked(View view) {
    final Object tag = view.getTag();
    if ("TAG1".equals(tag)) {
        // button 1
    } else if ("TAG2".equals(tag)) {
        // button 2
    }
    // add more if/else
}
```

Every button creates a new OnClickListener object; prefer using one click listener object:

```
public void createButtons() {
    // just need one listener object
    View.OnClickListener listener = new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            buttonClicked(v);
        }
    };
    while (i < 34) {
        Button btnTag = new Button(this);
        btnTag.setLayoutParams(new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT,
                LinearLayout.LayoutParams.MATCH_PARENT));
        btnTag.setText(names[i]);
        btnTag.setId(i);
        btnTag.setTag("TAG" + i); // set a tag
        btnTag.callOnClick();
        layout.addView(btnTag);
        btnTag.setOnClickListener(listener);
        i++;
    }
}
```

Upvotes: 1 [selected_answer]<issue_comment>username_2: You can make a different onClick event thanks to `v.getId()`:

```
while (i < 34) {
    Button btnTag = new Button(this);
    btnTag.setLayoutParams(new LinearLayout.LayoutParams(
            LinearLayout.LayoutParams.MATCH_PARENT,
            LinearLayout.LayoutParams.MATCH_PARENT));
    btnTag.setText(names[i]);
    btnTag.setId(i);
    btnTag.callOnClick();
    layout.addView(btnTag);
    btnTag.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            switch (v.getId()) {
                // Your code
            }
        }
    });
    i++;
}
```

Upvotes: 1 <issue_comment>username_3: What you are doing isn't very practical, but here is what I would suggest: make your activity `implement` `View.OnClickListener` and implement the overridden method inside your activity.

```
@Override
public void onClick(View view) {
    switch (view.getId()) {
        // You will have to understand the buttons' ids
        case R.id.buttonId1:
            MyFunction();
            break;
        case R.id.buttonId2:
            MyFunction2();
            break;
        default:
            break;
    }
}
```

While creating buttons, use `button.setOnClickListener(this);`. Your edited function:

```
public void createButtons() {
    while (i < 34) {
        Button btnTag = new Button(this);
        btnTag.setLayoutParams(new LinearLayout.LayoutParams(
                LinearLayout.LayoutParams.MATCH_PARENT,
                LinearLayout.LayoutParams.MATCH_PARENT));
        btnTag.setText(names[i]);
        btnTag.setId(i);
        btnTag.setTag("TAG" + i); // set a tag
        btnTag.callOnClick();
        layout.addView(btnTag);
        btnTag.setOnClickListener(this);
        i++;
    }
}
```

Upvotes: 0
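Outside of Android, the "one shared listener that dispatches on an id" idea from the answers can be sketched with a plain map from id to action; `Runnable` and `IntConsumer` here are stand-ins for the Android listener types, not Android API:

```java
import java.util.HashMap;
import java.util.Map;

public class Dispatch {
    // Simulates "one listener, many buttons": each id maps to its own action.
    static String run() {
        StringBuilder log = new StringBuilder();

        // One handler table instead of 34 anonymous listeners:
        Map<Integer, Runnable> actions = new HashMap<>();
        actions.put(0, () -> log.append("first;"));
        actions.put(1, () -> log.append("second;"));

        // The single shared "listener" just looks up the clicked id:
        java.util.function.IntConsumer onClick =
                id -> actions.getOrDefault(id, () -> log.append("default;")).run();

        onClick.accept(0);
        onClick.accept(1);
        onClick.accept(7); // no action registered for this id
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints: first;second;default;
    }
}
```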
2018/03/15
<issue_start>username_0: How do I set counters for 3 separate buttons that will increase by 5, 10, or 20 when clicked? Suppose:

* Button A will count 5 if clicked
* Button B will count 10 if clicked
* Button C will count 20 if clicked

and it will increase on every click.
2018/03/15
<issue_start>username_0: I have an object like this:

```
myObject: {
  items: [{
    title: '140',
    isActive: true,
  }, {
    title: '7',
    isActive: false
  }, {
    title: '10',
    isActive: false
  }]
}
```

Which I'm using like this:

```
```

This is what the component looks like:

```
export default {
  name: 'IPanel',
  props: {
    items: {
      type: Array,
      default () {
        return []
      }
    }
  },
  computed: {
    // code
    prefs () {
      return {
        items: this.items
      }
    }
  },
  methods: {
    onClick (item) {
      this.prefs.items.forEach(item => {
        if (JSON.stringify(item) === JSON.stringify(clickedItem)) {
          item.isActive = true
        }
      })
    }
  }
}
```

When I click an item (and that `item` is the same as the `clickedItem`), its `isActive` is supposed to become `true`. It does. But I have to refresh the Vue devtools or re-render the page for the change to take effect. Why isn't `item.isActive = true` reactive?<issue_comment>username_1: Change

```
```

to

```
```

Then, in your change method, go like this:

```
onClick (index) {
  Vue.set(this.items, index, true);
}
```

> <https://v2.vuejs.org/v2/guide/list.html#Object-Change-Detection-Caveats>

Upvotes: 2 [selected_answer]<issue_comment>username_2: In the code you posted, you are using a `clickedItem` object that is not defined anywhere. I don't know if this is just an artifact of writing your question or if it is your actual problem. However, when using `clickedItem` the right way, it seems to work: <https://jsfiddle.net/d5z93ygy/4/>

**HTML**

```
{{ item.isActive ? 'active' : 'inactive' }}
```

**JS**

```
new Vue({
  el: "#app",
  data: {
    items: [{
      title: '140',
      isActive: true,
    }, {
      title: '7',
      isActive: false
    }, {
      title: '10',
      isActive: false
    }]
  },
  computed: {
    // code
    prefs () {
      return {
        items: this.items
      }
    }
  },
  methods: {
    onClick (clickedItem) {
      this.prefs.items.forEach(item => {
        if (JSON.stringify(item) === JSON.stringify(clickedItem)) {
          item.isActive = true
        }
      })
    }
  }
})
```

Upvotes: 2
2018/03/15
754
2,502
<issue_start>username_0: I'm trying to Graph Excel data using ChartJS. Visual Studio is saying that `List does not contain a definition for Answers.` I can't find anything wrong with my code, though. Though, I've only been using VS for the past two days. Can someone look at my code and maybe find a mistake, or two? Thanks! ViewModel: ``` using System; using System.Collections.Generic; using System.Linq; using ReinovaGrafieken.Models; namespace ReinovaGrafieken.Models { public class GraphDataViewModel { public List GraphData { get; set; } } } ``` Graphs Model: ``` using System; using System.Collections.Generic; using System.Linq; using System.Web; using ReinovaGrafieken.Models; namespace ReinovaGrafieken.Models { public class Graphs { public string Names { get; set; } public string AnswerHeaders { get; set; } public int Answers { get; set; } public string Questions { get; set; } public string AnteOrPost { get; set; } } } ``` And a piece of the code from the View: ``` @model ReinovaGrafieken.Models.GraphDataViewModel @{ ViewBag.Title = "Dashboard"; Layout = "~/Views/Shared/_Layout.cshtml"; } @Scripts.Render("~/bundles/jquery") @Scripts.Render("~/bundles/bootstrap") Dashboard --------- ```
2018/03/15
702
2,280
<issue_start>username_0: I'm trying to use ip-api.com/php but there's a very slow response on my server and I figured out that it is because of the file\_get\_contents So basically, I have a pretty simple script (got from github I think) ``` function get_ip() { //Just get the headers if we can or else use the SERVER global if ( function_exists( 'apache_request_headers' ) ) { $headers = apache_request_headers(); } else { $headers = $_SERVER; } //Get the forwarded IP if it exists if ( array_key_exists( 'X-Forwarded-For', $headers ) && filter_var( $headers['X-Forwarded-For'], FILTER_VALIDATE_IP, FILTER_FLAG_IPV4 ) ) { $the_ip = $headers['X-Forwarded-For']; } elseif ( array_key_exists( 'HTTP_X_FORWARDED_FOR', $headers ) && filter_var( $headers['HTTP_X_FORWARDED_FOR'], FILTER_VALIDATE_IP, FILTER_FLAG_IPV4 ) ) { $the_ip = $headers['HTTP_X_FORWARDED_FOR']; } else { $the_ip = filter_var( $_SERVER['REMOTE_ADDR'], FILTER_VALIDATE_IP, FILTER_FLAG_IPV4 ); } return $the_ip; } $ip=get_ip(); $query = @unserialize(file_get_contents('http://ip-api.com/php/'.$ip)); ``` When it comes to ``` $query = @unserialize(file_get_contents('http://ip-api.com/php/'.$ip)); ``` It freezes for around 1min.<issue_comment>username_1: If you read their API you will see this: > > Deprecated Use JSON. Almost all PHP sites now support json\_decode(), > and it's faster than unserialize() > > > The reference is [here](http://ip-api.com/docs/api:serialized_php) There you will also find how to do it with Json with an example you can leverage to get your point: > > To receive the response in JSON format, send a GET request to > > > <http://ip-api.com/json> > > > You can supply an IP address or domain to lookup, or none to use your > current IP address. > > > The reference is [here](http://ip-api.com/docs/api:json) Upvotes: 2 <issue_comment>username_2: In my experience, <http://ip-api.com/json> didn't respond to the server and it takes much time. When I call from localhost it working fine. 
Now I am using <https://freegeoip.app/json/>, which allows up to 15,000 queries per hour by default. Upvotes: 0
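The underlying lesson of this thread is to never let a remote lookup block without a deadline: in PHP that means a stream-context or cURL timeout on the geo-IP call instead of a bare `file_get_contents`. The same idea sketched in JavaScript, with a simulated slow lookup standing in for the real HTTP request:

```javascript
// Reject if the lookup takes too long, instead of hanging the request
// for a minute the way the bare file_get_contents call did.
function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    new Promise((_, reject) =>
      setTimeout(() => reject(new Error('lookup timed out')), ms)
    ),
  ]);
}

// Simulated slow geo-IP lookup (stand-in for the real HTTP call)
const slowLookup = new Promise((resolve) =>
  setTimeout(() => resolve({ status: 'success' }), 1000)
);

withTimeout(slowLookup, 50).catch((e) => console.log(e.message)); // lookup timed out
```

On timeout you can then fall back to a default location or a secondary provider rather than stalling the page.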
2018/03/15
771
2,626
<issue_start>username_0: There are 2 divs. div1 and div2. Initially, div1 is shown and div2 is hidden. On click of a button, div1 has to be hidden and div2 should be displayed in the place of div1.<issue_comment>username_1: I would have done it with simple HTML and jQuery as below: ``` function ToggleDiv() { var divType = $("#DivType").val(); if (divType === "div1") { $("#div1").hide(); $("#div2").show(); $("#DivType").val("div2"); } else if (divType === "div2") { $("#div1").show(); $("#div2").hide(); $("#DivType").val("div1"); } } ``` `ToggleDiv` will be called on the button's `onclick` event. Not sure whether there is a better way to do it in React. Upvotes: 0 <issue_comment>username_2: Create a state to indicate whether `div1` is to be shown or `div2` is to be shown. Then, add an `onClick` handler function to the button. Finally, conditionally render which component is to be shown according to that state. Code: ``` class TwoDivs extends React.Component { state = { div1Shown: true, } handleButtonClick() { this.setState({ div1Shown: false, }); } render() { return ( this.handleButtonClick()}>Show div2 { this.state.div1Shown ? (Div1) : (Div2) } ); } } ``` Upvotes: 4 [selected_answer]<issue_comment>username_3: You can achieve it by adding a new property inside the component's state. Clicking the button will simply toggle that state, and the component will re-render, due to the **setState** method. Please notice that this will toggle between the two divs.
If you only want to show the second one, set the new state like this: `this.setState({firstDivIsActive: false})` ``` class MyComponent extends React.Component { constructor(props) { super(props); this.state = { firstDivIsActive: true }; } render() { let activeDiv; if (this.state.firstDivIsActive) { activeDiv = I am one; } else { activeDiv = I am two; } return ( { this.setState({ firstDivIsActive: !this.state.firstDivIsActive }); }} > Toggle Div {activeDiv} ); } } ``` Upvotes: 1 <issue_comment>username_4: ``` class ToggleDivs extends React.Component { state = { showDiv1: true, } handleButtonClick() { this.setState({ showDiv1: !this.state.showDiv1 }); } render() { const { showDiv1 } = this.state; const buttonTitle = showDiv1 ? 'Div2' : 'Div1'; const div1 = (Div1); const div2 = (Div2); return ( this.handleButtonClick()}>Show {buttonTitle} {showDiv1 ? div1 : div2} ); } } ``` Upvotes: 0
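The conditional rendering in the answers above boils down to a pure mapping from state to markup; a framework-free JavaScript sketch of that idea (the strings stand in for JSX output):

```javascript
// State -> view as a pure function, the core of the React answers above
function render(state) {
  return state.div1Shown ? '<div>Div1</div>' : '<div>Div2</div>';
}

let state = { div1Shown: true };
console.log(render(state)); // <div>Div1</div>

// Conceptually what setState({ div1Shown: false }) does: new state, re-render
state = { ...state, div1Shown: false };
console.log(render(state)); // <div>Div2</div>
```

React automates the second half: calling `setState` schedules exactly this re-run of `render` with the new state.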
2018/03/15
984
3,501
<issue_start>username_0: newbie here... my root view controller has the following code: ``` func showVC1() { let storyBoard : UIStoryboard = UIStoryboard(name: "Main", bundle:nil) let vc1 = storyBoard.instantiateViewController(withIdentifier: "first") self.present(vc1, animated:true, completion:nil) } func showVC2() { let storyBoard : UIStoryboard = UIStoryboard(name: "Main", bundle:nil) let vc2 = storyBoard.instantiateViewController(withIdentifier: "second") self.present(vc2, animated:true, completion:nil) } ``` I can call these functions in either order, the first works fine, but the second will give me an error like "Attempt to present on whose view is not in the window hierarchy!" I think I want to keep all of my code in the root vc that contains these functions. How can I make this work? I'm also concerned that each time I call these functions I'll be creating new vc instances which will use more memory. Is there a way to keep a reference to these vc's outside of these functions? And will that solve the hierarchy issue?
2018/03/15
1,201
4,189
<issue_start>username_0: Ok, so currently I've got kubernetes master up and running on AWS EC2 instance, and a single worker running on my laptop: ``` $ kubectl get nodes NAME STATUS ROLES AGE VERSION master Ready master 34d v1.9.2 worker Ready 20d v1.9.2 ``` I have created a Deployment using the following configuration: ``` apiVersion: apps/v1 kind: Deployment metadata: name: hostnames labels: app: hostnames-deployment spec: selector: matchLabels: app: hostnames replicas: 1 template: metadata: labels: app: hostnames spec: containers: - name: hostnames image: k8s.gcr.io/serve_hostname ports: - containerPort: 9376 protocol: TCP ``` The deployment is running: ``` $ kubectl get deployment NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE hostnames 1 1 1 1 1m ``` A single pod has been created on the worker node: ``` $ kubectl get pods NAME READY STATUS RESTARTS AGE hostnames-86b6bcdfbc-v8s8l 1/1 Running 0 2m ``` From the worker node, I can curl the pod and get the information: ``` $ curl 10.244.8.5:9376 hostnames-86b6bcdfbc-v8s8l ``` I have created a service using the following configuration: ``` kind: Service apiVersion: v1 metadata: name: hostnames-service spec: selector: app: hostnames ports: - port: 80 targetPort: 9376 ``` The service is up and running: ``` $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hostnames-service ClusterIP 10.97.21.18 80/TCP 1m kubernetes ClusterIP 10.96.0.1 443/TCP 34d ``` As I understand, the service should expose the pod cluster-wide and I should be able to use the service IP to get the information pod is serving from any node on the cluster. If I curl the service from the worker node it works just as expected: ``` $ curl 10.97.21.18:80 hostnames-86b6bcdfbc-v8s8l ``` But if I try to curl the service from the master node located on the AWS EC2 instance, the request hangs and gets timed out eventually: ``` $ curl -v 10.97.21.18:80 * Rebuilt URL to: 10.97.21.18:80/ * Trying 10.97.21.18... 
* connect to 10.97.21.18 port 80 failed: Connection timed out * Failed to connect to 10.97.21.18 port 80: Connection timed out * Closing connection 0 curl: (7) Failed to connect to 10.97.21.18 port 80: Connection timed out ``` **Why can't the request from the master node reach the pod on the worker node by using the Cluster-IP service?** I have read quite a few articles regarding kubernetes networking as well as the official kubernetes services documentation and couldn't find a solution.<issue_comment>username_1: Depending on which mode you are using, the details differ, but it is conceptually the same. You are trying to connect to 2 different types of addresses - the pod IP address, which is accessible from the node, and the virtual IP address, which is accessible from pods in the Kubernetes cluster. The IP address of the service is **not** an IP address of some pod or anything like that; it is a virtual address which is mapped to pod IP addresses based on the rules you define in the service, and it is managed by the `kube-proxy` daemon, which is a part of Kubernetes. That address is designed for communication inside a cluster, so that the pods behind a service can be reached without caring about how many replicas of the pod there are and where they are actually running, because the service IP is static, unlike a pod's IP. So, the service IP address is meant to be reachable from other pods, not from nodes. You can read in the [official documentation](https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies) about how Service Virtual IPs work. Upvotes: 2 <issue_comment>username_2: `kube-proxy` is responsible for setting up the IPTables rules (by default) that route cluster IPs. The Service's cluster IP should be routable from anywhere running `kube-proxy`. My first guess would be that `kube-proxy` is not running on the master. Upvotes: 2
2018/03/15
917
2,872
<issue_start>username_0: I'm trying the following tutorial. [Automatic serverless deployments with Cloud Source Repositories and Container Builder](https://cloudplatform.googleblog.com/2018/03/automatic-serverless-deployments-with-Cloud-Source-Repositories-and-Container-Builder.html) But I got the error below. ``` $ gcloud container builds submit --config deploy.yaml . BUILD Already have image (with digest): gcr.io/cloud-builders/gcloud ERROR: (gcloud.beta.functions.deploy) Error creating a ZIP archive with the source code for directory .: ZIP does not support timestamps before 1980 ERROR ERROR: build step 0 "gcr.io/cloud-builders/gcloud" failed: exit status 1 ``` I'm now trying to solve it. Do you have any idea? My gcloud is the latest version. ``` $ gcloud -v Google Cloud SDK 193.0.0 app-engine-go app-engine-python 1.9.67 beta 2017.09.15 bq 2.0.30 core 2018.03.09 gsutil 4.28 ``` Sample google cloud function code on the tutorial. ``` #index.js exports.f = function(req, res) { res.send("hello, gcf!"); }; #deploy.yaml steps: - name: gcr.io/cloud-builders/gcloud args: - beta - functions - deploy - --trigger-http - --source=. - --entry-point=f - hello-gcf # Function name #deploying without Cloud Container Builder is fine. gcloud beta functions deploy --trigger-http --source=. --entry-point=f hello-gcf ```<issue_comment>username_1: While I don't know the reason, I found a workaround. ``` (1) make src directory and move index.js into it. ├── deploy.yaml └── src └── index.js (2) deploy via Cloud Container Builder. $ gcloud container builds submit --config deploy.yaml ./src ``` Upvotes: 0 <issue_comment>username_2: Container Builder tars your source folder. Maybe something in your . directory has corrupted dates? That's why moving it to the source folder fixes it. Upvotes: 2 [selected_answer]<issue_comment>username_3: I ran into the same issue now. I could not solve it but at least I found out where it comes from. 
When you submit your build locally, a tar is created and uploaded to a bucket. In this tar the folders are dated 01.01.1970: ``` 16777221 8683238 drwxr-xr-x 8 user staff 0 256 "Jan 1 01:00:00 1970" "Jan 1 01:00:00 1970" "May 15 12:42:04 2019" "Jan 1 01:00:00 1970" 4096 0 0 test ``` This issue only occurs locally. If you have a GitHub build trigger, it works. Upvotes: 0 <issue_comment>username_4: I recently came across the same issue using Cloud Build (the successor to Container Builder). What helped was adding a step to list all the files/folders in the Cloud Build environment (default directory is `/workspace`) to identify the problematic file/folder. You can do this by overriding the `gcloud` container's entrypoint to execute the `ls` command. ``` steps: - name: gcr.io/cloud-builders/gcloud entrypoint: "ls" args: ["-la", "/workspace"] ``` Upvotes: 0
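For context on the error message itself: ZIP entries store timestamps in the MS-DOS format, where the year is an unsigned offset from 1980, so the 1970-dated folders above simply cannot be encoded. A small JavaScript sketch of that date encoding (the error string here is illustrative):

```javascript
// MS-DOS date word used inside ZIP archives: bits 9-15 hold year-1980,
// bits 5-8 the month, bits 0-4 the day. Years before 1980 don't fit.
function dosDate(year, month, day) {
  const offset = year - 1980;
  if (offset < 0) {
    throw new RangeError('ZIP does not support timestamps before 1980');
  }
  return (offset << 9) | (month << 5) | day;
}

console.log(dosDate(2018, 3, 15)); // 19567, encodes fine

try {
  dosDate(1970, 1, 1); // the epoch dates on the tar's folders
} catch (e) {
  console.log(e.message); // ZIP does not support timestamps before 1980
}
```

This is why the workarounds amount to keeping files with pre-1980 mtimes out of the archived directory (or re-dating them) before the source gets zipped.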
2018/03/15
1,055
3,268
<issue_start>username_0: Flexdashboard allows to specify three sectors for its gauges: "danger", "warning" and "success". I want to use 5 gauge sectors to show in which interval my observed value lies. I calculated confidence intervals with alpha 0.2 (80 %) and 0.01 (99 %) and use this to define 5 sectors: ``` Sector 1 = c(min(value),lower_90_ci) Sector 2 = c(lower_90_ci,lower_80_ci) Sector 3 = c(lower_80_ci, upper_80_ci) Sector 4 = c(upper_80_ci, upper_90_ci) Sector 5 = c(upper_90_ci, max(value)) ``` This is a standard-gauge in flexdashboard: ``` library(flexdashboard) gauge(42, min = 0, max = 100, symbol = '%', gaugeSectors( success = c(80, 100), warning = c(40, 79), danger = c(0, 39) )) ```<issue_comment>username_1: I don't think it can be done out of the box. Digging into the `resolveSectors` function shows that it expects three sectors and is quite inflexible: ``` function (sectors, min, max) { if (is.null(sectors)) { sectors = sectors(success = c(min, max), warning = NULL, danger = NULL, colors = c("success", "warning", "danger")) } if (is.null(sectors$success) && is.null(sectors$warning) && is.null(sectors$danger)) { sectors$success <- c(min, max) } if (is.null(sectors$colors)) sectors$colors <- c("success", "warning", "danger") customSectors <- list() addSector <- function(sector, color) { if (!is.null(sector)) { if (!is.numeric(sector) || length(sector) != 2) stop("sectors must be numeric vectors of length 2", call. = FALSE) customSectors[[length(customSectors) + 1]] <<- list(lo = sector[[1]], hi = sector[[2]], color = color) } } sectors$colors <- rep_len(sectors$colors, 3) addSector(sectors$success, sectors$colors[[1]]) addSector(sectors$warning, sectors$colors[[2]]) addSector(sectors$danger, sectors$colors[[3]]) customSectors } ``` Nevertheless, you could build your own `gauge` function that uses a custom built `resolveSectors` function (using the current function as a template) that expects five sectors. 
Upvotes: 0 <issue_comment>username_2: If the intention is to have the optimum range in the middle, with warning and danger on both the higher and lower sides, I tried this: ``` gauge(value = 95, # For example min = 0, max = 100, sectors = gaugeSectors( success = c(20, 80), warning = c(10, 90), danger = c(0, 100) ) ) ``` You may want to make sure that the sectors cover the whole range (min-max). Any value within the range but not belonging to any sector will use the default color (success). Upvotes: 1 <issue_comment>username_3: Like Niels proposes, a workaround is to make the color in gaugeSectors() depend on the input. For example, 5 colors on the interval 0-100: ``` gaugecol <- c("#9ceb34","#D2D64C","#ebb134","#FF8C00","purple") names(gaugecol) <- c("L","ML","M","MH","H") quintile <- cut(input$x, breaks = seq(0,100,length = 6),c("L","ML","M","MH","H") ) gauge(input$x, min = 0, max = 100, sectors = gaugeSectors(success = c(0,100), colors = gaugecol[quintile]) ) ``` Upvotes: 0
2018/03/15
1,092
4,549
<issue_start>username_0: Why can't I make second constructor using switch case statements that refer input to the first constructor? It shows error "Constructor call must be the first statement in a constructor using this". So it seems that I have to retype assignments from 1st constructor for every case statement in the second. ``` public class Card { public static final String CLUBS = "Clubs"; public static final String DIAMONDS = "Diamonds"; public static final String HEARTS = "Hearts"; public static final String SPADES = "Spades"; public static final int ACE = 1; public static final int JACK = 11; public static final int QUEEN = 12; public static final int KING = 13; public Card(int rank, String suit) { this.rank = rank; this.suit = suit; } public Card(String rank, String suit) { if (!isCorrectSuit(suit)) throw new IllegalArgumentException("incorrect suit"); switch(rank) { case ACE: this(1, suit); case JACK: this(11, suit); case QUEEN: this(12, suit); case KING : this(13, suit); default: throw new IllegalArgumentException("incorrect rank"); } } private boolean isCorrectSuit(String suit) { return (suit.equals(CLUBS) || suit.equals(DIAMONDS) || suit.equals(HEARTS) || suit.equals(SPADES)); } private boolean isCorrectRank(int rank) { return rank == 1 || rank == 11 || rank == 12 || rank == 13; } private int rank; private String suit; } ```<issue_comment>username_1: If you want invoke one constructor from another within the class you invoke it through `this(args)` key word with adequate arguments. 
It's not fully clear which constructor you want to call from which, but I think this is the more likely intent: ``` public Card(int rank, String suit) { this.rank = rank; this.suit = suit; } public static Card create(String rank, String suit) { if (!isCorrectSuit(suit)) throw new IllegalArgumentException("incorrect suit"); switch(Integer.parseInt(rank)) { case ACE: return new Card(1, suit); case JACK: return new Card(11, suit); case QUEEN: return new Card(12, suit); case KING: return new Card(13, suit); default: throw new IllegalArgumentException("incorrect rank"); } } ``` you have a static method that lets you create an object of type `Card`. Upvotes: 0 <issue_comment>username_2: You are probably looking for a static factory method where `this(...)` can be replaced with `new Card(...)`: ``` class Card { ... private Card(int rank, String suit) { this.rank = rank; this.suit = suit; } public static Card of(String rank, String suit) { if (!isCorrectSuit(suit)) { throw new IllegalArgumentException("incorrect suit"); } final Card card; switch (Integer.valueOf(rank)) { case ACE: card = new Card(1, suit); break; case JACK: card = new Card(11, suit); break; case QUEEN: card = new Card(12, suit); break; case KING: card = new Card(13, suit); break; default: { throw new IllegalArgumentException("incorrect rank"); } } return card; } } ``` These four instances can be predefined, you needn't create them over again. --- I wonder why you didn't apply the same approach to the `rank` value: ``` public Card(int rank, String suit) { if (!isCorrectSuit(suit)) { throw new IllegalArgumentException("incorrect suit"); } if (!isCorrectRank(rank)) { throw new IllegalArgumentException("incorrect rank"); } this.rank = rank; this.suit = suit; } public Card(String rank, String suit) { this(Integer.valueOf(rank), suit); } ``` Note that you've got 2 public constructors, and only one of them has a kind of validation. I can create a `new Card(2, "Unknown")` and will get no exceptions.
Other options for consideration might be: * writing enums instead of primitive values * writing sets of values to replace the `isCorrectX` methods with `Set#contains` Upvotes: 3 [selected_answer]<issue_comment>username_3: just add a static routine that converts your String to an int Upvotes: -1
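username_3's suggestion (a routine that converts the String to an int) is language-agnostic; here is a minimal lookup-table sketch in JavaScript, with the rank table mirroring the Java constants (the function name is illustrative):

```javascript
// Replace the switch with a lookup table, then build the card in one place
const RANKS = { ACE: 1, JACK: 11, QUEEN: 12, KING: 13 };

function makeCard(rank, suit) {
  const value = RANKS[rank];
  if (value === undefined) throw new Error('incorrect rank');
  return { rank: value, suit: suit };
}

console.log(makeCard('QUEEN', 'Hearts')); // { rank: 12, suit: 'Hearts' }
```

In Java the same table could be a `Map<String, Integer>` or, cleaner still, an enum, as the accepted answer suggests.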
2018/03/15
1,198
4,410
<issue_start>username_0: I looked through the internet and couldn't really find an answer. If I have a process that is heavy on CPU usage and I would like to present a progress bar while it's running (so the program will not get into a "not responding" state), probably on another thread, how do I do that?<issue_comment>username_1: The solution should be something like: You have at least 2 threads, **threadA** and **threadB**. * **ThreadA** is the thread that calls `mainloop()` and where the progress bar lives. * **ThreadB** is the thread where your heavy process does its stuff. When everything starts: * **ThreadB** puts a message in a shared queue. + (this way ThreadB tells ThreadA it is not in a "not responding" state) * **ThreadA** gets the message from the shared queue and updates the progress bar + (this way ThreadA tells the user that ThreadB is not in a "not responding" state) This is a very general solution pattern. If you try to write down some code that implements this pattern, post it and I will try to help. Here are some references: * for the threads: <https://docs.python.org/3/tutorial/stdlib2.html#multi-threading> * for the shared queue: <https://docs.python.org/3/library/queue.html#module-queue> **EDIT** Here is a complete example of the general solution. I hope this helps.
``` import tkinter as tk from tkinter import ttk import threading import queue import time shared_queue = queue.Queue() def thread1_main(): tot_time = 5 # seconds elapsed_time = 0 while(True): time.sleep(1) shared_queue.put("I'm working") elapsed_time += 1 if(elapsed_time > tot_time): break shared_queue.put("task done") def updates_progress_bar(): try: msg = shared_queue.get(block=False) except queue.Empty: msg = None else: # do update the progress bar here delta = 1 val = bar_value.get()+delta val = val if val <= bar["maximum"] else 0 bar_value.set(val) if(msg != "task done"): root.after(500, updates_progress_bar) else: print(msg) if(__name__ == "__main__"): root = tk.Tk() root.wm_geometry("200x200") root.rowconfigure(0, weight=1) root.columnconfigure(0, weight=1) bar_value = tk.IntVar() bar = ttk.Progressbar( root, maximum=3, length=100, variable=bar_value) bar.grid(row=0, column=0, sticky="we", padx=5) thread1 = threading.Thread(target=thread1_main) thread1.daemon = True # "daemon", not "deamon": a misspelled attribute is silently ignored thread1.start() root.after(500, updates_progress_bar) root.mainloop() print("that's all folks") ``` Upvotes: 1 <issue_comment>username_2: This is a primitive progress bar implementation with an indeterminate mode of operation. It does not run in a thread as it really should. If you integrate this in a thread that is NOT doing the heavy lifting, and send a stop message to this thread, then the process will stop. This example uses a timer to stop the progress bar, but you can use any event just as well.
``` import tkinter as tk from tkinter import ttk import sys class MyProgress(tk.Frame): def __init__(self, parent, **kwargs): tk.Frame.__init__(self, parent) self.pack() label = tk.Label(self, text="Progress:", anchor='w') label.pack(side='left') self.progress = ttk.Progressbar(self) for attribute,value in kwargs.items(): try: self.progress[attribute] = value except: print("Attribute error:", attribute) print("Try one of:") print(self.progress.configure()) sys.exit() self.progress.pack(side='left') self.running = False self.parent = parent def start(self): self.running = True self.increaseProgress() def stop(self): self.running = False print("End of Job") def increaseProgress(self): if not self.running: return self.progress['value'] += 1 self.parent.after(100, self.increaseProgress) if __name__ == '__main__': root = tk.Tk() app = MyProgress(root, mode='indeterminate', maximum=10, length=100, value=0) app.start() # Trigger a stop after 5 seconds root.after(5000, app.stop) root.mainloop() ``` Upvotes: 0
2018/03/15
593
2,094
<issue_start>username_0: I am using react router and I want to disable the `to` attribute in a certain state. I passed an empty string, but that doesn't disable the link; instead it takes you to the base route of the page. I even tried to pass null but that breaks the code. Is it even possible to do so? ``` ```<issue_comment>username_1: You could try disabling the button with a custom click handler. ``` handleClick = (e) => { const { linkDisabled } = this.state if(linkDisabled) e.preventDefault() } render() { return ( ... ) } ``` You might want to add some css for when the button is disabled. Alternatively, you could just not show the button at all: ``` { this.state.linkDisabled ? null : } ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: ``` { this.state.role !== 4 ? : } ``` Upvotes: 2 <issue_comment>username_3: I used the same approach as username_1, but I changed it to a functional component; only my confirmation link is disabled. My styled link is the same as NavLink: NavBar --> index.js ``` export default function NavBar() { const handleClick = (e) => { e.preventDefault() } return ( CART PAYMENT CONFIRMATION ) } ``` NavBar ---> Styles.js ``` export const StyledLink = styled(NavLink)` text-decoration: none; color: #d6d6d6; display: flex; cursor: pointer; ${(props) => props.disabled && ` cursor: default;`} &.${(props) => props.activeClassName} { color: #fe8d3b; } &:focus, &:hover, &:visited, &:link, &:active { text-decoration: none; } ` ``` Upvotes: 2 <issue_comment>username_4: Another option would be to create your custom link wrapper component and to render the `NavLink` or not conditionally. In the following example, the property `active` determines if the link will be rendered or simply the text of the link. ```js function HeaderLink(props) { if(props.active) { return {props.children} } return {props.children} } ``` Usage with e.g. a state dependency within a navigation element: ``` * Personal * Contact * Signup ``` Upvotes: 0
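The accepted answer's guard boils down to: run the click handler first, and cancel navigation with `preventDefault` when the link should be inert. A framework-free sketch with a stand-in event object (names are illustrative):

```javascript
// Returns a click handler that cancels the event while the link is disabled
function makeClickHandler(isDisabled) {
  return function handleClick(event) {
    if (isDisabled()) event.preventDefault();
  };
}

// Minimal stand-in for a DOM event, just to observe the cancellation
function fakeEvent() {
  return { prevented: false, preventDefault() { this.prevented = true; } };
}

const disabledClick = makeClickHandler(() => true);
const enabledClick = makeClickHandler(() => false);

const e1 = fakeEvent();
disabledClick(e1);
console.log(e1.prevented); // true, navigation cancelled

const e2 = fakeEvent();
enabledClick(e2);
console.log(e2.prevented); // false, navigation proceeds
```

Pairing the guard with a disabled visual style, as the styled-components answer does, keeps the behavior and the appearance in sync.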
2018/03/15
1,238
4,792
<issue_start>username_0: I need to save user information from a register form to mongodb. Everything works well except the image file. I am using Multer for uploading images from form to server. Also to show that image in the profile page once the user logged in. Below is the code: signup.ejs: ``` Profile Picture Name Email Password ``` routes.js: ``` var multer = require('multer'); var storage = multer.diskStorage({ destination: function (req, file, cb) { cb(null, './uploads'); }, filename: function (req, file, cb) { cb(null, file.originalname); } }); var upload = multer({ storage: storage }); module.exports = function(app, passport) { app.post('/signup', upload.single('image'), passport.authenticate('local-signup', { successRedirect : '/login', failureRedirect : '/', failureFlash : true })); ``` The image is uploaded in /uploads folder. But how to get that image and save it in mongodb. I am using passport.js and below is the code for saving post data. UserModel.js: ``` var mongoose = require('mongoose'); var Schema = mongoose.Schema; var userSchema = new Schema({ image: { data : Buffer, contentType : String }, name: { type: String, }, email: { type: String, }, password: { type: String, } }); ``` passport.js: ``` passport.use('local-signup', new LocalStrategy({ usernameField : 'email', passwordField : '<PASSWORD>', passReqToCallback : true }, function(req, email, password, done) { User.findOne({ 'email' : email }, function(err, user) { if (err) { //errorLogger.error(err); return done(err); } if (user) { return done(null, false, req.flash('signupMessage', 'Email already exists.')); } else { var newUser = new User(); //newUser.image = "dont know how to get image from /uploads" newUser.name = req.body.name; newUser.email = req.body.email; newUser.password = <PASSWORD>; newUser.save(function(err) { if (err) throw err; return done(null, newUser, req.flash('signupMessage', 'User created')); }); } }); ```<issue_comment>username_1: You can create your own middleware that 
will handle the upload and capture the filename. I would also suggest appending a random string to the uploaded image name as protection against name collisions. ``` app.post('/signup', middleware , passport.authenticate('local-signup', { successRedirect : '/login', failureRedirect : '/', failureFlash : true })); ``` middleware ``` function middleware(req, res, next) { var imageName; var uploadStorage = multer.diskStorage({ destination: function (req, file, cb) { cb(null, './uploads'); }, filename: function (req, file, cb) { imageName = file.originalname; //imageName += "_randomstring" cb(null, imageName); } }); var uploader = multer({storage: uploadStorage}); var uploadFile = uploader.single('image'); uploadFile(req, res, function (err) { req.imageName = imageName; req.uploadError = err; next(); }) } ``` and then you can use req.imageName ``` passport.use('local-signup', new LocalStrategy({ usernameField : 'email', passwordField : '<PASSWORD>', passReqToCallback : true }, function(req, email, password, done) { User.findOne({ 'email' : email }, function(err, user) { if (err) { //errorLogger.error(err); return done(err); } if (user) { return done(null, false, req.flash('signupMessage', 'Email already exists.')); } else { var newUser = new User(); newUser.image = req.imageName; newUser.name = req.body.name; newUser.email = req.body.email; newUser.password = <PASSWORD>; newUser.save(function(err) { if (err) throw err; return done(null, newUser, req.flash('signupMessage', 'User created')); }); } }); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: If you are using images in 'jpeg / png' format and they are less than 16 MB, you can go with this GitHub repo; it's a module that helps save images to MongoDB easily, without the complexity of GridFS. In case your files are greater than 16 MB, you need to use GridFS. This is the link to the GitHub repo for images less than 16 MB (it also works well with React)
<https://github.com/saran-surya/Mongo-Image-Converter> Upvotes: 0
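The accepted answer stores only the uploaded file's name in the user document. Stripped of Express, Multer and Mongoose, that handoff can be sketched with plain objects (all names below — `uploadMiddleware`, `buildNewUser`, the `req` shape — are illustrative stand-ins, not the real framework API):

```javascript
// Minimal sketch of the middleware -> strategy handoff, with plain objects
// standing in for Express's req object (hypothetical names throughout).
function uploadMiddleware(req, next) {
  // Multer would write the file to ./uploads; here we only record its name.
  req.imageName = req.file ? req.file.originalname : null;
  next();
}

function buildNewUser(req) {
  // The schema's image field holds the stored filename, not raw bytes.
  return {
    image: req.imageName,
    name: req.body.name,
    email: req.body.email,
  };
}
```

With this shape, the profile page can later serve the stored picture from the uploads folder (e.g. via a static file route).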
2018/03/15
998
4,080
<issue_start>username_0: I am using nodeJS and mongodb in one of my project. I am trying to save data in multiple collection in one save button. The code which I am using to achieve this is as follow: ``` var lastInsertId; loginData={ userName: req.body.txtUsername, password: <PASSWORD>, active:1, createdOn:new Date(), updatedOn:new Date() }; var dbo = db.db("test"); dbo.collection("login").insertOne(loginData, function(err, result) { if (err) throw err; lastInsertId=result.insertedId; }); if(lastInsertId){ usersData={ email: req.body.txtEmail, firstName: req.body.txtFirstName, lastName:req.body.txtLastName, mobileNumber:req.body.txtMobileNumber, login_id:lastInsertId, active:1, createdOn:new Date(), updatedOn:new Date() }; dbo.collection("users").insertOne(usersData, function(err, result) { if (err) throw err; console.log('saved to users'); }); } ``` Could you please tell what is wrong in the above code? Thank you. Regards, Saloni<issue_comment>username_1: I think move IF block inside callback of insert login function like this should work ``` var lastInsertId; loginData = { userName: req.body.txtUsername, password: <PASSWORD>, active: 1, createdOn: new Date(), updatedOn: new Date() }; var dbo = db.db("test"); dbo.collection("login").insertOne(loginData, function (err, result) { if (err) throw err; lastInsertId = result.insertedId; if (lastInsertId) { usersData = { email: req.body.txtEmail, firstName: req.body.txtFirstName, lastName: req.body.txtLastName, mobileNumber: req.body.txtMobileNumber, login_id: lastInsertId, active: 1, createdOn: new Date(), updatedOn: new Date() }; dbo.collection("users").insertOne(usersData, function (err, result) { if (err) throw err; console.log('saved to users'); }); } }); ``` Upvotes: 0 <issue_comment>username_2: I want to give an explanation regarding the above issue with the code. ``` var lastInsertId; //it is undefined right now //The following function is an asynchronous function. 
var dbo = db.db("test"); dbo.collection("login").insertOne(loginData, function(err, result) { if (err) throw err; lastInsertId=result.insertedId; }); /*NodeJs doesn't run synchronously. So, while it is running above function i.e. insertOne, without it being completed it reaches here. Since, the above function is not completed executing, lastInsertId will be undefined still. So, the following block of code doesn't run */ if(lastInsertId){ //this block won't run usersData={ //key-value pairs }; dbo.collection("users").insertOne(usersData, function(err, result) { if (err) throw err; console.log('saved to users'); }); } /*The solution to the above problem can be achieved by putting the code block inside 'if(lastInsertId)' in callback of insert login. So it would run only after the execution of insertOne function.*/ //Therefore the correct code would be: var dbo = db.db("test"); dbo.collection("login").insertOne(loginData, function(err, result) { if (err) throw err; lastInsertId=result.insertedId; if(lastInsertId){ //this block will run usersData={ //key-value pairs }; dbo.collection("users").insertOne(usersData, function(err, result) { if (err) throw err; console.log('saved to users'); }); } }); ``` Upvotes: 3 [selected_answer]
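The callback nesting in the accepted answer can also be flattened with promises: the MongoDB driver's `insertOne` returns a promise when no callback is passed, so `async`/`await` expresses the same ordering constraint directly. A minimal sketch with a stubbed `insertOne` (the stub and the id value are illustrative, not the real driver):

```javascript
// Stub that resolves asynchronously, like the real driver's insertOne.
function insertOne(doc) {
  return new Promise(resolve =>
    setTimeout(() => resolve({ insertedId: 'abc123' }), 0));
}

async function signup(loginData, usersData) {
  // await guarantees the login insert has finished before we read insertedId;
  // without it, lastInsertId would still be undefined (the original bug).
  const loginResult = await insertOne(loginData);
  usersData.login_id = loginResult.insertedId;
  await insertOne(usersData);
  return usersData;
}
```

The two inserts still run strictly in order, but without nesting the second inside the first's callback.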
2018/03/15
1,191
4,025
<issue_start>username_0: I'm currently in the process of updating some apps to the latest Swift 4 syntax and have run into a problem, and I'm sure I'm just missing something really obvious. Initially my code was as follows: ``` let indices : [Int] = [0,1,2,3] //let newSequence = shuffle(indices) let newSequence = indices.shuffle() var i : Int = 0 for(i = 0; i < newSequence.count; i++) { let index = newSequence[i] if(index == 0) { // we need to store the correct answer index currentCorrectAnswerIndex = i } ``` Initially I had three error with the above code, namely: 1. `Value of type '[Int]' has no member 'shuffle'` 2. `C-style for statement has been removed in Swift 3` 3. `Unary operator '++' cannot be applied to an operand of type '@lvalue Int'` To address these I changed the code so that it's now as follows: ``` let indices : [Int] = [0,1,2,3] //let newSequence = shuffle(indices) let newSequence = indices.shuffle() var i : Int = 0 while i < newSequence.count {i += 1 let index = newSequence[i] if(index == 0) { // we need to store the correct answer index currentCorrectAnswerIndex = i } ``` After changing the code and running a Product > Clean within Xcode, I no longer have any errors. However, when I use the app in the iOS Simulator it hangs at a certain point and checking within Xcode gives me the `Thread 1: Fatal error: Index out of range` error on the following line of code: ``` let index = newSequence[I] ``` I've read through other questions about the same error in Swift (see: [1](https://stackoverflow.com/q/44616460/4353060), [2](https://stackoverflow.com/q/47355509/4353060), [3](https://stackoverflow.com/q/44731707/4353060) and [4](https://stackoverflow.com/q/46863774/4353060)) but these don't seem to apply in my scenario. I know I'm missing something really obvious here, but at present it's just escaping me. Any thoughts?<issue_comment>username_1: You put a `i += 1` in there that isn't in the original code. 
Upvotes: 0 <issue_comment>username_2: Change the position of the increment of `i` as shown in the code below. (The value of `i` is incremented before it is used in the loop operations, hence during the final/last iteration of the loop the value of `i` becomes larger than the last array index.) ``` let indices : [Int] = [0,1,2,3] //let newSequence = shuffle(indices) let newSequence = indices.shuffle() var i : Int = 0 while i < newSequence.count { let index = newSequence[i] if(index == 0) { // we need to store the correct answer index currentCorrectAnswerIndex = i } i += 1 // Update (increment) value for i at last } ``` ### Update (As suggested by MartinR) This would be better than what you did: ``` let indices : [Int] = [0,1,2,3] let newSequence = indices.shuffle() for (i, newSeqIndex) in newSequence.enumerated() { if (newSeqIndex == 0) { currentCorrectAnswerIndex = i } } ``` Upvotes: 2 <issue_comment>username_3: The reason for the crash is that you increment `i`, perform some operations with it, and only then check if it is not out of range. To get rid of this error just move `i += 1` to the end of your closure. But why not use fast enumeration? You're still trying to do a loop the old way, but with new syntax. Instead of doing it the C way ``` var i : Int = 0 while i < newSequence.count { let index = newSequence[i] if(index == 0) { // we need to store the correct answer index currentCorrectAnswerIndex = i } i += 1 } ``` do it the correct Swift way ``` for (index, value) in newSequence.enumerated() { if value == 0 { currentCorrectAnswerIndex = index } } ``` Upvotes: 2 <issue_comment>username_4: The Swift 3+ equivalent of the code is ``` let indices : [Int] = [0,1,2,3] let newSequence = indices.shuffle() // this seems to be a custom function for i in 0..<newSequence.count { let index = newSequence[i] if index == 0 { // we need to store the correct answer index currentCorrectAnswerIndex = i } } ``` Upvotes: 3 [selected_answer]
2018/03/15
546
2,316
<issue_start>username_0: Dear Stackoverflow Users, I haven't seen any examples regarding this question, but I should probably mention my plan first. I have three different pages that basically share many FormGroups. Their differences are basically minimal, maybe one or two FormControls. Of course, I could basically just copy-paste the entire component and merely add the control I need, but I feel that it is a fairly impractical and messy solution. How would I properly approach this problem? And are there any good resources regarding inheritance in Angular in general? The official page doesn't seem to make any mention from what I have seen in the hero course.<issue_comment>username_1: You can do it with content projection. Let's assume you have a FormGroup component with two inputs. This component will also have `ng-content` so that other components can inject their own html. ``` @Component({ selector: 'my-form-group', template: ` Type something : <input type="text"> Select something: <select> ... </select> Insert your own stuff: <ng-content></ng-content> ` }) export class FormGroupComponent {} @Component({ selector: 'my-form-group-radio', template: ` <my-form-group> <input type="radio"> </my-form-group> ` }) export class FromGroupRadioComponent {} ``` In this way, you can reuse `FormGroupComponent` as many times as you want. You can create different kinds of components and project different content. Upvotes: 0 <issue_comment>username_2: I don't know why I assumed that it wasn't as simple as I thought it to be. First, I create a FormBuilder class that sets up all the controls that are shared between all the other forms I use. Now, any component that is extending the FormBuilder class is able to create a form like this.
``` import { Component } from '@angular/core'; import { FormControl } from '@angular/forms'; import { FormBuilder } from '../form'; @Component({ selector: 'app-invoices', templateUrl: './invoices.component.html', styleUrls: ['./invoices.component.scss'] }) export class InvoicesComponent extends FormBuilder { constructor() { super(); this.createForm(); } } ``` So, the FormControls that I needed could be easily included by just adding ``` this.queryForm.addControl('newControl', new FormControl('')) ``` So the html code of the component required almost no changes whatsoever. Upvotes: 2 [selected_answer]
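Stripped of the Angular specifics, the pattern in this answer is just a base class that builds the shared controls plus subclasses that add their one or two extras. A framework-free TypeScript sketch (a `Map` stands in for the real `FormGroup`; all class and control names are illustrative):

```typescript
// Base class owns the controls shared by every page's form.
class SharedFormBuilder {
  controls = new Map<string, string>();

  createForm(): void {
    this.controls.set('name', '');
    this.controls.set('email', '');
  }
}

// A page-specific form inherits the shared setup and adds one control,
// mirroring queryForm.addControl(...) in the answer above.
class InvoiceForm extends SharedFormBuilder {
  constructor() {
    super();
    this.createForm();
    this.controls.set('invoiceNumber', ''); // the page-specific extra
  }
}
```

Each of the three pages then differs only in its constructor, not in the shared setup.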
2018/03/15
1,341
4,816
<issue_start>username_0: Inspecting memory leakage in one of my apps, I've found that the following code "behaves strangely". ``` public String DoTest() { String fileContent = ""; String fileName = ""; String[] filesNames = System.IO.Directory.GetFiles(logDir); List<String> contents = new List<String>(); for (int i = 0; i < filesNames.Length; i++) { fileName = filesNames[i]; if (fileName.ToLower().Contains("aud")) { contents.Add(System.IO.File.ReadAllText(fileName)); } } fileContent = String.Join("", contents); return fileContent; } ``` Before running this piece of code, the memory used by the object was approximately 1.4 MB. Once this method was called, it used 70 MB. After waiting some minutes, nothing changed (the original object had been released a long time ago). Calling ``` GC.Collect(); GC.WaitForFullGCComplete(); ``` decreased memory to 21 MB (yet far more than the 1.4 MB at the beginning). Tested with a console app (infinite loop) and a winform app. Happens even on a direct call (no need to create more objects). Edit: full code (console app) to show the problem ``` using System; using System.Collections.Generic; using System.Diagnostics; using System.Threading; namespace memory_tester { /// <summary> /// Class to show losing of memory /// </summary> class memory_leacker { // path to folder with 250 text files, total of 80MB of text const String logDir = @"d:\http_server_test\http_server_test\bin\Debug\logs\"; /// <summary> /// Collecting all text from files in folder logDir and returns it. /// </summary> /// <returns></returns> public String DoTest() { String fileContent = ""; String fileName = ""; String[] filesNames = System.IO.Directory.GetFiles(logDir); List<String> contents = new List<String>(); for (int i = 0; i < filesNames.Length; i++) { fileName = filesNames[i]; if (fileName.ToLower().Contains("aud")) { //using string builder directly into fileContent shows same results.
contents.Add(System.IO.File.ReadAllText(fileName)); } } fileContent = String.Join("", contents); return fileContent; } /// <summary> /// demo call to see that no memory leaks here /// </summary> /// <returns></returns> public String DoTestDemo() { return ""; } } class Program { /// <summary> /// Get current proc's private memory /// </summary> /// <returns></returns> public static long GetUsedMemory() { String procName = System.AppDomain.CurrentDomain.FriendlyName; long mem = Process.GetCurrentProcess().PrivateMemorySize64; return mem; } static void Main(string[] args) { const long waitTime = 10; //was 240 memory_leacker mleaker = new memory_leacker(); for (int i=0; i< waitTime; i++) { Console.Write($"Memory before {GetUsedMemory()} Please wait {i}\r"); Thread.Sleep(1000); } Console.Write("\r\n"); mleaker.DoTestDemo(); for (int i = 0; i < waitTime; i++) { Console.Write($"Memory after demo call {GetUsedMemory()} Please wait {i}\r"); Thread.Sleep(1000); } Console.Write("\r\n"); mleaker.DoTest(); for (int i = 0; i < waitTime; i++) { Console.Write($"Memory after real call {GetUsedMemory()} Please wait {i}\r"); Thread.Sleep(1000); } Console.Write("\r\n"); mleaker = null; for (int i = 0; i < waitTime; i++) { Console.Write($"Memory after release object {GetUsedMemory()} Please wait {i}\r"); Thread.Sleep(1000); } Console.Write("\r\n"); GC.Collect(); GC.WaitForFullGCComplete(); for (int i = 0; i < waitTime; i++) { Console.Write($"Memory after GC {GetUsedMemory()} Please wait {i}\r"); Thread.Sleep(1000); } Console.Write("\r\n...pause..."); Console.ReadKey(); } } } ```<issue_comment>username_1: I refactored a version of your code below; here I have removed the need for the list of strings named 'contents' in your original question.
``` public String DoTest() { string fileContent = ""; IEnumerable<string> filesNames = System.IO.Directory.GetFiles(logDir).Where(x => x.ToLower().Contains("aud")); foreach (var fileName in filesNames) { fileContent += System.IO.File.ReadAllText(fileName); } return fileContent; } ``` Upvotes: -1 <issue_comment>username_2: I believe that if you use a StringBuilder for fileContent instead of a string, you can improve your performance and memory usage. ``` public String DoTest() { var fileContent = new StringBuilder(); String fileName = ""; String[] filesNames = System.IO.Directory.GetFiles(logDir); for (int i = 0; i < filesNames.Length; i++) { fileName = filesNames[i]; if (fileName.ToLower().Contains("aud")) { fileContent.Append(System.IO.File.ReadAllText(fileName)); } } return fileContent.ToString(); } ``` Upvotes: 2
2018/03/15
845
2,917
<issue_start>username_0: See the code snippet below: ``` #include <vector> using std::vector; int main() { vector<int> a; auto &it = --a.end(); // what? the hint tells me it deduces an lvalue reference type! } ``` We all know that a self-defined class with `operator --` returns an **xvalue** when the operand itself is an rvalue, like `--Myclass()`. Obviously `Myclass()` is a **prvalue**, so the return value of `--Myclass()` should also be an **rvalue** (precisely, an xvalue), too. from [cppref](http://en.cppreference.com/w/cpp/language/value_category) > > a.m, the member of object expression, where a is an rvalue and m is a non-static data member of non-reference type; > > > So why does the `auto` deduce an lvalue reference in this circumstance? What's more, **the code snippet could be compiled without any error!** **Why can an rvalue be bound to an lvalue reference?** And I've come across a confusing error (not the same as the code snippet above; I'm sure the vector is not empty): later when I use the `it`, a **segmentation fault** happens! [The code that causes the **segmentation fault**](https://paste.ubuntu.com/p/5BSTwW2Ssv/) (in the last three lines) This code is the answer for a Chinese online judge [PAT](https://www.patest.cn/contests/pat-a-practise/1068); when I submit the answer, it raises a segmentation fault. Is it **undefined behavior** to bind an lvalue reference to `--Myclass()` and use it later?<issue_comment>username_1: This is an ["universal reference"](http://isocpp.org/blog/2012/11/universal-references-in-c11-scott-meyers). The variable is not an rvalue reference, but an lvalue into which the rvalue is moved. This is necessary because a reference would not extend the life time of the temporary object. Upvotes: -1 <issue_comment>username_2: > > We all know that a self-defined class with `operator --` returns an rvalue when we write something like `--Myclass()` > > > No, that's not something we know. Technically, a user defined prefix decrement operator can return either an object, or a reference.
In practice, the prefix decrement operator typically returns an lvalue reference. > > Obviously `Myclass()` is prvalue, so the return value of `--Myclass()` should also be prvalue ,too. > > > That's not how value categories propagate. You can call a function on a prvalue, and the function can return an lvalue (possibly to `*this`). > > So why does the auto deduce lvalue reference in this circumstance? > > > Because the decrement operator of the iterator returns an lvalue reference. --- Use `auto it` to fix the problem. --- > > Is it undefined behavior to use lvalue reference to bind a `--Myclass()` ,and use it later? > > > Depends on how `Myclass::operator--()` is declared. If it returns an lvalue reference, then that is UB. If it returns an object, then there is no UB. It is possible to provide both variants overloaded by ref-qualifier. Upvotes: 4 [selected_answer]
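The accepted answer's point — that prefix `--` applied to a prvalue can yield an lvalue — is easy to reproduce with a tiny self-defined class (illustrative, not the `vector` iterator from the question):

```cpp
#include <cassert>

struct Counter {
    int v;
    // Typical prefix decrement: returns an lvalue reference to *this.
    // It is callable on a temporary, and the call expression is an lvalue.
    Counter& operator--() { --v; return *this; }
};

// Using the result inside the same full expression is fine; storing the
// returned reference past the full expression would dangle (UB on later use).
int decremented_value(int start) {
    return (--Counter{start}).v;
}
```

Declaring the operator lvalue-ref-qualified, `Counter& operator--() &;`, would instead reject `--Counter{3}` at compile time.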
2018/03/15
459
1,733
<issue_start>username_0: My Xamarin.Forms app was running like 5 minutes ago, then all of a sudden it stopped working. The app closes instantly after it starts. Below is the message displayed in the Visual Studio output: ``` Assembly Loader probing location: 'System.Buffers'. Could not load assembly 'System.Buffers' during startup registration. This might be due to an invalid debug installation. A common cause is to 'adb install' the app directly instead of doing from the IDE. ``` Things that I tried but that failed to resolve the problem: 1. Reinstall System.Buffer from the NuGet package. 2. Clean bin and obj folder of android 3. Unchecked use shared runtime at Android options 4. Clean solution and rebuild solution Forums that I referred to: 1. <https://forums.xamarin.com/discussion/115983/stuck-on-error-could-not-load-assembly-system-buffers-during-startup-registration> 2. <https://forums.xamarin.com/discussion/63584/android-could-not-load-assembly-xxx-during-startup-registration> 3. <https://bugzilla.xamarin.com/show_bug.cgi?id=44518> Can anyone help me? Thanks in advance<issue_comment>username_1: I have found the solution in one of the [forums](https://forums.xamarin.com/discussion/113718/android-error-with-signalr-core-could-not-load-assembly-system-memory-during-startup-registration). Uninstall `System.Buffer` and install `System.Memory` via the NuGet package manager in your project and it's done. Hope it helps you. Upvotes: 3 [selected_answer]<issue_comment>username_2: I have solved this issue in a slightly different manner. I installed `System.Memory` from the NuGet package manager without uninstalling `System.Buffer`. Only then was the error gone. If I uninstall `System.Buffer`, I still face the same issue. Upvotes: 0
2018/03/15
477
1,657
<issue_start>username_0: I'm working on a simple comments removal script in PHP. What I'd like to do is loop this entire script over all the files in the same directory. At this point I'm not concerned with subdirectories or subfolders. This example only has 1 file. I know we need to do a do..while loop, but what would be the proper syntax? ``` <?php $file='home.html'; $page = file_get_contents($file); $current = preg_replace('<!--[^\[](.*?)-->', '', $page); file_put_contents($file, $current); echo $current; ?> ``` I have a folder filled with *.html files, so this script would read each file with the *.html extension and apply the preg_replace. Thanks in advance! V<issue_comment>username_1: I wrote a function similar to your needs not long ago: ``` private function getFilesByPattern($pattern = '*.html', $flags = 0) { $files = glob($pattern, $flags); foreach (glob(dirname($pattern) . '/*', GLOB_ONLYDIR | GLOB_NOSORT) as $dir) { $files = array_merge($files, $this->getFilesByPattern($dir . '/' . basename($pattern), $flags)); } return $files; } ``` This function also searches recursively through subdirectories! For `$flags` look into the flags of glob: <http://php.net/manual/de/function.glob.php> Can be called like this: `getFilesByPattern('../src/directory/*.html')` Upvotes: 2 <issue_comment>username_2: ``` foreach (glob("*.html") as $file) { $page = file_get_contents($file); $current = preg_replace('<!--[^\[](.*?)-->', '', $page); file_put_contents($file, $current); echo $file . " "; } ``` This works. It removes all the html comments from every html file in a folder. Thanks for pointing me in the right direction! Upvotes: 1
2018/03/15
994
2,923
<issue_start>username_0: Analysing the project with this command ``` .\gradlew sonarqube \ -Dsonar.host.url=http://my.url \ -Dsonar.login=login --stacktrace ``` Getting this error > > org.gradle.execution.TaskSelectionException: Task '\' not found in root project 'JavaLint' > > > And here is my gradle file ``` plugins { id "org.sonarqube" version "2.6.2" } apply plugin: 'application' apply plugin: 'java' apply plugin: 'eclipse' archivesBaseName = 'JavaLint' version = '0.1-SNAPSHOT' mainClassName = 'Main' repositories { mavenCentral() } jar { manifest { attributes 'Main-Class': 'com.test.Run' } } sourceSets { main { java { srcDirs 'src' } } } dependencies { compile group: 'commons-io', name: 'commons-io', version: '2.6' compile group: 'commons-lang', name: 'commons-lang', version: '2.6' compile group: 'org.jsoup', name: 'jsoup', version: '1.11.2' compile group: 'junit', name: 'junit', version: '4.12' compile group: 'log4j', name: 'log4j', version: '1.2.16' } ``` Stack trace [![stacktrace](https://i.stack.imgur.com/Onjcd.png)](https://i.stack.imgur.com/Onjcd.png) I don't understand what I am supposed to do with it.<issue_comment>username_1: Use it like this in the Windows terminal: .\gradlew sonarqube -Dsonar.host.url=<http://my.url> -Dsonar.login=login --stacktrace After that, you will find the project in the SonarQube server. Upvotes: 1 [selected_answer]<issue_comment>username_2: It looks like you're using [cmder](https://cmder.net/) (it's so beautiful) which means you're on Windows. For Windows you'll want to provide the arguments to `gradle` as follows ``` ./gradlew sonarqube -D "sonar.projectKey=" -D "sonar.host.url=http://localhost:9000" -D "sonar.login=" ``` Upvotes: 4 <issue_comment>username_3: Thanks to the accepted answer by @username_1 above, I got my answer too. Ok, let me mention the details of what I was doing for an Android project in Android Studio and what I figured out after my research.
From the command generated by the Sonar Dashboard, it is `./gradlew sonarqube \-Dsonar.projectKey=LetsGo \-Dsonar.host.url=http://localhost:9000 \-Dsonar.login=<PASSWORD>` But, for a Windows machine it should actually be: `gradlew sonarqube -Dsonar.projectKey=LetsGo -Dsonar.host.url=http://localhost:9000 -Dsonar.login=<PASSWORD>` This worked like a charm for me. Notice the backslashes `\` (Unix line-continuation characters), which are not needed on a Windows machine. Also, `./gradlew` works on Mac and `gradlew` on Windows Upvotes: 2 <issue_comment>username_4: First make sure the directory contains `build.gradle`. The command for Windows: ``` ./gradlew sonarqube -D "sonar.projectKey=your_sonarqube_key" -D "sonar.host.url=http://localhost:9000" -D "sonar.login=your_sonarqube_token" ``` This has worked for me as expected. Upvotes: 1
2018/03/15
687
2,134
<issue_start>username_0: I have a Javascript array filled with mean values and I want to insert them into a collection with a field named "mean". The field "mean" already exists and already has values in it, and now I want to update them with the values of the array. To be more specific: I want the first value of the array to be in the first document under the field "mean" and so on. I have 98 documents and the array also has a length of 98. The collection, named "cmean", looks like this: ``` { "_id" : "000", "mean" : 33.825645389680915 } { "_id" : "001", "mean" : 5.046005719077798 } ``` and the array: ``` [ 33.89923155012405, 5.063347068609219 ] ```<issue_comment>username_1: ``` update({ "_id": id }, { $set: { "mean": myArray } }, function(err, res) { ... }); ``` If you are using mongoose, you also have to change the data model from string to array. Upvotes: 0 <issue_comment>username_2: You can use the **[`forEach`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach)** method on the array to iterate it and update the collection. Use the index to get the `_id` to be used in the update query, something like the following: ``` meansArray.forEach(function(mean, idx) { var id = db.cmean.find({}).skip(idx).limit(1).toArray()[0]["_id"]; db.cmean.updateOne( { "_id": id }, { "$set": { "mean": mean } }, { "upsert": true } ); }); ``` --- For large collections, you can streamline your db performance using **[`bulkWrite`](https://docs.mongodb.com/manual/reference/method/db.collection.bulkWrite/)** as follows: ``` var ops = []; meansArray.forEach(function(mean, idx) { var id = db.cmean.find({}).skip(idx).limit(1).toArray()[0]["_id"]; ops.push({ "updateOne": { "filter": { "_id": id }, "update": { "$set": { "mean": mean } }, "upsert": true } }); if (ops.length === 1000) { db.cmean.bulkWrite(ops); ops = []; } }) if (ops.length > 0) db.cmean.bulkWrite(ops); ``` Upvotes: 3 [selected_answer]
2018/03/15
1,270
3,955
<issue_start>username_0: I have a dataframe with columns: 1. `diff` - difference between registration date and payment date,in days 2. `country` - country of user 3. `user_id` 4. `campaign_id` -- another categorical column, we will use it in groupby I need to calculate count distinct users for every `country`+`campaign_id` group who has `diff`<=n. For example, for `country` 'A', `campaign` 'abc' and `diff` 7 i need to get count distinct users from `country` 'A', `campaign` 'abc' and `diff` **<=** 7 My current solution(below) works too long ``` import pandas as pd import numpy as np ## generate test dataframe df = pd.DataFrame({ 'country':np.random.choice(['A', 'B', 'C', 'D'], 10000), 'campaign': np.random.choice(['camp1', 'camp2', 'camp3', 'camp4', 'camp5', 'camp6'], 10000), 'diff':np.random.choice(range(10), 10000), 'user_id': np.random.choice(range(1000), 10000) }) ## main result_df = pd.DataFrame() for diff in df['diff'].unique(): tmp_df = df.loc[df['diff']<=diff,:] tmp_df = tmp_df.groupby(['country', 'campaign'], as_index=False).apply(lambda x: x.user_id.nunique()).reset_index() tmp_df['diff'] = diff tmp_df.columns=['country', 'campaign', 'unique_ppl', 'diff'] result_df = pd.concat([result_df, tmp_df],ignore_index=True, axis=0) ``` Maybe there is better way to do this?<issue_comment>username_1: One alternative below, but [@username_2's solution](https://stackoverflow.com/a/49298071/9209546) is optimal. 
**Performance benchmarking** ``` %timeit original(df) # 149ms %timeit jp(df) # 81ms %timeit jez(df) # 47ms def original(df): result_df = pd.DataFrame() for diff in df['diff'].unique(): tmp_df = df.loc[df['diff']<=diff,:] tmp_df = tmp_df.groupby(['country', 'campaign'], as_index=False).apply(lambda x: x.user_id.nunique()).reset_index() tmp_df['diff'] = diff tmp_df.columns=['country', 'campaign', 'unique_ppl', 'diff'] result_df = pd.concat([result_df, tmp_df],ignore_index=True, axis=0) return result_df def jp(df): result_df = pd.DataFrame() lst = [] lst_append = lst.append for diff in df['diff'].unique(): tmp_df = df.loc[df['diff']<=diff,:] tmp_df = tmp_df.groupby(['country', 'campaign'], as_index=False).agg({'user_id': 'nunique'}) tmp_df['diff'] = diff tmp_df.columns=['country', 'campaign', 'unique_ppl', 'diff'] lst_append(tmp_df) result_df = result_df.append(pd.concat(lst, ignore_index=True, axis=0), ignore_index=True) return result_df def jez(df): df1 = pd.concat([df.loc[df['diff']<=x].assign(diff=x) for x in df['diff'].unique()]) df2 = (df1.groupby(['diff','country', 'campaign'], sort=False)['user_id'] .nunique() .reset_index() .rename(columns={'user_id':'unique_ppl'}) .reindex(columns=['country', 'campaign', 'unique_ppl', 'diff'])) return df2 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: First use list comprehension with [`concat`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html) and [`assign`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html) for join all together and then `groupby` with [`nunique`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.SeriesGroupBy.nunique.html) with adding column `diff`, last rename columns and if necessary add [`reindex`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html) for custom columns order: ``` df1 = pd.concat([df.loc[df['diff']<=x].assign(diff=x) for x in df['diff'].unique()]) df2 = 
(df1.groupby(['diff','country', 'campaign'], sort=False)['user_id'] .nunique() .reset_index() .rename(columns={'user_id':'unique_ppl'}) .reindex(columns=['country', 'campaign', 'unique_ppl', 'diff'])) ``` Upvotes: 3
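To make the concat/assign trick concrete, here is a minimal run on a tiny hypothetical frame (four rows, two thresholds), small enough to verify by hand:

```python
import pandas as pd

# Tiny illustrative frame (not the 10000-row benchmark frame above)
df = pd.DataFrame({
    'country':  ['A', 'A', 'B', 'A'],
    'campaign': ['c1', 'c1', 'c1', 'c1'],
    'diff':     [0, 1, 0, 1],
    'user_id':  [1, 1, 2, 3],
})

# Duplicate each row once per threshold x it satisfies (diff <= x), then a
# single groupby counts distinct users per (diff, country, campaign).
df1 = pd.concat([df.loc[df['diff'] <= x].assign(diff=x) for x in df['diff'].unique()])
out = (df1.groupby(['diff', 'country', 'campaign'], sort=False)['user_id']
          .nunique()
          .reset_index()
          .rename(columns={'user_id': 'unique_ppl'}))
# For diff <= 1, country A has distinct users {1, 3} -> unique_ppl == 2
```

The duplication trades memory for a single vectorized `groupby`, which is exactly why it beats the per-threshold loop in the benchmark.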
2018/03/15
1,525
5,941
<issue_start>username_0: Suppose I'm trying to create a collector that aggregates data into a resource that has to be closed after usage. Is there any way to implement something similar to a `finally` block in a `Collector`? In the successful case this could be done in the `finisher` method, but there does not seem to be any method invoked in case of exceptions. The goal would be to implement an operation like the following in a clean way and without having to collect the stream into an in-memory list first. ``` stream.collect(groupingBy(this::extractFileName, collectToFile())); ```<issue_comment>username_1: Ok, I have taken a look at the `Collectors` implementation; you need `CollectorImpl` to create a custom collector, but it's not public. So I implemented a new one using its copy (the last 2 methods are the ones you might be interested in): ``` public class CollectorUtils<T, A, R> implements Collector<T, A, R> { static final Set<Collector.Characteristics> CH_ID = Collections .unmodifiableSet(EnumSet.of(Collector.Characteristics.IDENTITY_FINISH)); private final Supplier<A> supplier; private final BiConsumer<A, T> accumulator; private final BinaryOperator<A> combiner; private final Function<A, R> finisher; private final Set<Characteristics> characteristics; CollectorUtils(Supplier<A> supplier, BiConsumer<A, T> accumulator, BinaryOperator<A> combiner, Function<A, R> finisher, Set<Characteristics> characteristics) { this.supplier = supplier; this.accumulator = accumulator; this.combiner = combiner; this.finisher = finisher; this.characteristics = characteristics; } CollectorUtils(Supplier<A> supplier, BiConsumer<A, T> accumulator, BinaryOperator<A> combiner, Set<Characteristics> characteristics) { this(supplier, accumulator, combiner, castingIdentity(), characteristics); } @Override public BiConsumer<A, T> accumulator() { return accumulator; } @Override public Supplier<A> supplier() { return supplier; } @Override public BinaryOperator<A> combiner() { return combiner; } @Override public Function<A, R> finisher() { return finisher; } @Override public Set<Characteristics> characteristics() { return characteristics; } @SuppressWarnings("unchecked") private static <I, R> Function<I, R>
castingIdentity() { return i -> (R) i; } public static <R extends List<File>> Collector<String, R, R> toFile() { return new CollectorUtils<>((Supplier<R>) ArrayList::new, (c, t) -> { c.add(toFile(t)); }, (r1, r2) -> { r1.addAll(r2); return r1; }, CH_ID); } private static File toFile(String fileName) { try (Closeable type = () -> System.out.println("Complete! closing file " + fileName);) { // stuff System.out.println("Converting " + fileName); return new File(fileName); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } throw new RuntimeException("Failed to create file"); } } ``` Then I call the stream as below: ``` public static void main(String[] args) { Stream.of("x.txt", "y.txt", "z.txt").collect(CollectorUtils.toFile()); } ``` Output: ``` Converting x.txt Complete! closing file x.txt Converting y.txt Complete! closing file y.txt Converting z.txt Complete! closing file z.txt ``` Upvotes: 0 <issue_comment>username_2: The only way I think you could fulfil your requirement would be by means of a close handler supplied to the [`Stream.onClose`](https://docs.oracle.com/javase/9/docs/api/java/util/stream/BaseStream.html#onClose-java.lang.Runnable-) method. Suppose you have the following class: ``` class CloseHandler implements Runnable { List<Runnable> children = new ArrayList<>(); void add(Runnable ch) { children.add(ch); } @Override public void run() { children.forEach(Runnable::run); } } ``` Now, you'd need to use your stream as follows: ``` CloseHandler closeAll = new CloseHandler(); try (Stream<Something> stream = list.stream().onClose(closeAll)) { // Now collect stream.collect(Collectors.groupingBy( this::extractFileName, toFile(closeAll))); } ``` This uses the `try-with-resources` construct, so that the stream is automatically closed either when consumed or if an error occurs. Note that we're passing the `closeAll` close handler to the `Stream.onClose` method.
Here's a sketch of your downstream collector, which will collect/write/send elements to the `Closeable` resource (note that we're also passing the `closeAll` close handler to it):

```
static Collector<Something, ?, Void> toFile(CloseHandler closeAll) {

    class Acc {

        SomeResource resource; // this is your closeable resource

        Acc() {
            try {
                resource = new SomeResource(...); // create closeable resource
                closeAll.add(this::close);        // this::close is a Runnable
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        void add(Something elem) {
            try {
                // TODO write/send to closeable resource here
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        Acc merge(Acc another) {
            // TODO left as an exercise
        }

        // This is the close handler for this particular closeable resource
        private void close() {
            try {
                // Here we close our closeable resource
                if (resource != null)
                    resource.close();
            } catch (IOException ignored) {
            }
        }
    }

    return Collector.of(Acc::new, Acc::add, Acc::merge, a -> null);
}
```

So, this uses a local class (named `Acc`) to wrap the closeable resource, and declares methods to `add` an element of the stream to the closeable resource, and also to `merge` two `Acc` instances in case the stream is parallel (left as an exercise, in case it's worth the effort). `Collector.of` is used to create a collector based on the `Acc` class' methods, with a finisher that returns `null`, as we don't want to put anything in the map created by `Collectors.groupingBy`.

Finally, there's the `close` method, which closes the wrapped closeable resource in case it has been created.

When the stream is implicitly closed by means of the `try-with-resources` construct, the `CloseHandler.run` method will be automatically executed, and this will in turn execute all the child close handlers previously added when each `Acc` instance was created.

Upvotes: 2 [selected_answer]
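The close-handler pattern in the accepted answer — register every resource's `close()` up front, then run them all when the pipeline ends, even on error — is not Java-specific. A rough Python illustration of the same idea (the grouping keys, sink type, and file names are invented for this sketch; in-memory `StringIO` sinks stand in for real files):

```python
import io
from contextlib import ExitStack

buffers = {}  # key -> StringIO, kept around so we can inspect them afterwards

def open_buffer(key):
    # stand-in for opening a real file; StringIO is a context manager,
    # so ExitStack can close it like any other resource
    buf = io.StringIO()
    buffers[key] = buf
    return buf

def group_to_sinks(lines, key_fn, open_sink):
    # ExitStack plays the role of the CloseHandler: every sink registered
    # with enter_context() is closed when the with-block exits, whether
    # the "stream" finished normally or raised an exception.
    sinks = {}
    with ExitStack() as stack:
        for line in lines:
            key = key_fn(line)
            if key not in sinks:
                sinks[key] = stack.enter_context(open_sink(key))
            sinks[key].write(line + "\n")
    return sorted(sinks)

keys = group_to_sinks(["a.txt:1", "b.txt:2", "a.txt:3"],
                      key_fn=lambda s: s.split(":")[0],
                      open_sink=open_buffer)
print(keys)  # ['a.txt', 'b.txt']
```

After the call, every sink has been closed exactly once, which is what the `try-with-resources` + `onClose` combination guarantees in the Java answer.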
2018/03/15
1,246
4,278
<issue_start>username_0: I am running the following query:

```
SELECT *
INTO dbo.2015_10_2_cs
FROM dbo.2015_10_2

IF NOT EXISTS (SELECT type FROM sys.indexes
               WHERE object_id = object_id('dbo.2015_10_2_cs')
                 AND NAME = 'cci' AND type = 5)
BEGIN
    CREATE CLUSTERED COLUMNSTORE INDEX cci ON dbo.2015_10_2_cs
    DROP TABLE dbo.2015_10_2
    EXEC sp_rename "dbo.2015_10_2_cs", "dbo.2015_10_2"
END
```

and I want to make sure that the part where I am renaming the table dbo.2015\_10\_2\_cs to dbo.2015\_10\_2 is done successfully (without losing any data). The steps inside the `BEGIN ... END` block should be wrapped in a SQL transaction to keep the process safe and reliable (in case any step fails).

Could anyone help with this? Thanks in advance.<issue_comment>username_1: You can use an `EXISTS` check on the table name and schema.

```
IF NOT EXISTS (SELECT 'table does not exist' FROM INFORMATION_SCHEMA.TABLES
               WHERE TABLE_NAME = N'2015_10_2' AND TABLE_SCHEMA = 'dbo')
BEGIN
    RAISERROR('The table doesn''t exist!!!!', 16, 1)
END
```

`sp_rename` won't make you lose table contents; it will just change the table's name and update all of its constraint and index references. It will also raise an error if the table to rename does not exist.

Maybe what you want is to wrap your process in a transaction and roll it back if something fails.

**EDIT:** For basic transaction handling you can use the following. Please read the [documentation](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/transactions-transact-sql) on using transactions; it might take a while to learn how they work correctly.
```
IF OBJECT_ID('tempdb..#Test') IS NOT NULL
    DROP TABLE #Test

CREATE TABLE #Test (Number INT)

SELECT AmountRecords = COUNT(1) FROM #Test -- AmountRecords = 0

BEGIN TRY
    BEGIN TRANSACTION

    -- Do your statements here
    INSERT INTO #Test (Number) VALUES (1)

    DECLARE @errorVariable INT = CONVERT(INT, 'NotAnInteger!!') -- Example of error: can't convert

    COMMIT
END TRY
BEGIN CATCH
    -- If something goes wrong
    IF @@TRANCOUNT > 0 -- ... and transaction is still open
        ROLLBACK -- Revert statements from the BEGIN TRANSACTION onwards
END CATCH

SELECT AmountRecords = COUNT(1) FROM #Test -- AmountRecords = 0 (the transaction was rolled back and the INSERT reverted)
```

Basically you use `BEGIN TRANSACTION` to initiate a restore point to go back to if something fails. Then use a `COMMIT` once you know everything is OK (from that point onwards, other users will see the changes and modifications will be persisted). If something fails (you need a `TRY/CATCH` block to handle errors) you can issue a `ROLLBACK` to revert your changes.

Upvotes: 2 [selected_answer]<issue_comment>username_2: 

> 
> EXEC sp\_rename "dbo.2015\_10\_2\_cs" , "dbo.2015\_10\_2"
> 
> 

This will not do what you expect. The new table will be named [dbo].[dbo.2015\_10\_2] if you specify the schema name in the new table name. Renamed tables are implicitly in the existing table's schema, since one must use `ALTER SCHEMA` instead of `sp_rename` to move an object between schemas.

There are a number of other problems with your script. Because the table name starts with a number, it doesn't conform to [regular identifier naming rules](https://learn.microsoft.com/en-us/sql/relational-databases/databases/database-identifiers) and must be enclosed in square brackets or double quotes. The literal parameters passed to `sp_rename` should be enclosed in single quotes. You can also check the stored procedure return code to ascertain success or failure.

The example below performs these tasks in a transaction with structured error handling.
```
DECLARE @rc int;
BEGIN TRY
    BEGIN TRAN;
    IF NOT EXISTS (SELECT type FROM sys.indexes
                   WHERE object_id = object_id(N'dbo.2015_10_2_cs')
                     AND NAME = 'cci' AND type = 5)
    BEGIN
        CREATE CLUSTERED COLUMNSTORE INDEX cci ON dbo.[2015_10_2_cs];
        DROP TABLE dbo.[2015_10_2];
        EXEC @rc = sp_rename 'dbo.[2015_10_2_cs]', '2015_10_2';
        IF @rc <> 0
        BEGIN
            RAISERROR('sp_rename returned return code %d', 16, 1);
        END;
    END;
    COMMIT;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK;
    THROW;
END CATCH;
```

Upvotes: 2
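The commit-or-rollback flow in both answers is not specific to T-SQL. The same shape, sketched in Python against an in-memory SQLite database (table and statements are made up for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
conn.commit()

def run_in_transaction(conn, statements):
    # Apply all statements or none: roll back on the first failure,
    # like the BEGIN TRY / BEGIN CATCH + ROLLBACK blocks above.
    try:
        for sql in statements:
            conn.execute(sql)
        conn.commit()
        return True
    except sqlite3.Error:
        conn.rollback()  # revert everything since the last commit
        return False

# The second statement fails, so the first INSERT must be rolled back too.
ok = run_in_transaction(conn, [
    "INSERT INTO t VALUES (1)",
    "INSERT INTO t VALUES (1, 2, 3)",  # too many values -> error
])
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(ok, count)  # False 0
```

The point carries over directly: nothing inside the failed batch becomes visible, exactly as with `@@TRANCOUNT`-guarded `ROLLBACK` in the T-SQL example.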
2018/03/15
1,314
2,896
<issue_start>username_0: I have a list of data frames and I want to add a column to each data frame; this column would be the concatenation of the row number and another variable. I have managed to do that using a for loop, but it takes a lot of time on a large dataset. Is there a way to avoid the for loop?

```
my_data_vcf <- lapply(my_vcf_files, read.table, stringsAsFactors = FALSE)

for (i in 1:length(my_data_vcf)) {
  for (j in 1:length(my_data_vcf[[i]])) {
    my_data_vcf[[i]] <- cbind(my_data_vcf[[i]],
                              "Id" = paste(c(variable, j), collapse = "_"))
  }
}
```<issue_comment>username_1: You can use `lapply`; since you don't provide a minimal sample dataset, I'm generating some sample data.

```
# Sample list of data.frame's
lst <- list(
    data.frame(one = letters[1:10], two = 1:10),
    data.frame(one = letters[11:20], two = 11:20))

# Concatenate row number with entries in second column
lapply(lst, function(x) {
    x$three <- paste(1:nrow(x), x$two, sep = "_");
    x })
#[[1]]
#   one two three
#1    a   1   1_1
#2    b   2   2_2
#3    c   3   3_3
#4    d   4   4_4
#5    e   5   5_5
#6    f   6   6_6
#7    g   7   7_7
#8    h   8   8_8
#9    i   9   9_9
#10   j  10 10_10
#
#[[2]]
#   one two three
#1    k  11  1_11
#2    l  12  2_12
#3    m  13  3_13
#4    n  14  4_14
#5    o  15  5_15
#6    p  16  6_16
#7    q  17  7_17
#8    r  18  8_18
#9    s  19  9_19
#10   t  20 10_20
```

Upvotes: 2 <issue_comment>username_2: One way we can do this is to create a nested data frame using `enframe` from the `tibble` package.
Once we've done that, we can `unnest` the data and use `mutate` to concatenate the row number and a column:

```
library(tidyverse)

# using username_1's data, with stringsAsFactors = FALSE
lst <- list(
  data.frame(one = letters[1:10], two = 1:10, stringsAsFactors = F),
  data.frame(one = letters[11:20], two = 11:20, stringsAsFactors = F)
)

lst %>%
  enframe() %>%
  unnest(value) %>%
  group_by(name) %>%
  mutate(three = paste(row_number(), two, sep = "_")) %>%
  nest()
```

Returns:

> 
> ```
> # A tibble: 2 x 2
>    name data
>   <int> <list>
> 1     1 <tibble [10 x 3]>
> 2     2 <tibble [10 x 3]>
> ```
> 

If we `unnest` the data, we can see that var `three` is the concatenation of var `two` and the row number:

```
lst %>%
  enframe() %>%
  unnest(value) %>%
  group_by(name) %>%
  mutate(three = paste(row_number(), two, sep = "_")) %>%
  nest() %>%
  unnest(data)
```

Returns:

> 
> ```
> # A tibble: 20 x 4
>     name one     two three
>    <int> <chr> <int> <chr>
>  1     1 a         1 1_1
>  2     1 b         2 2_2
>  3     1 c         3 3_3
>  4     1 d         4 4_4
>  5     1 e         5 5_5
>  6     1 f         6 6_6
>  7     1 g         7 7_7
>  8     1 h         8 8_8
>  9     1 i         9 9_9
> 10     1 j        10 10_10
> 11     2 k        11 1_11
> 12     2 l        12 2_12
> 13     2 m        13 3_13
> 14     2 n        14 4_14
> 15     2 o        15 5_15
> 16     2 p        16 6_16
> 17     2 q        17 7_17
> 18     2 r        18 8_18
> 19     2 s        19 9_19
> 20     2 t        20 10_20
> ```
> 

Upvotes: 0
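For readers who don't use R: the transformation both answers implement — walk a list of tables and derive a new column from the row number plus an existing column — looks like this in plain Python (stand-in rows, no data-frame machinery):

```python
tables = [
    [{"one": "a", "two": 1}, {"one": "b", "two": 2}],
    [{"one": "k", "two": 11}, {"one": "l", "two": 12}],
]

def add_id_column(tables):
    # enumerate(..., start=1) supplies the row number; dict(row, three=...)
    # copies the row and adds the derived column, analogous to
    # paste(1:nrow(x), x$two, sep = "_") applied via lapply
    return [
        [dict(row, three=f"{i}_{row['two']}") for i, row in enumerate(tbl, start=1)]
        for tbl in tables
    ]

result = add_id_column(tables)
print(result[1][0]["three"])  # 1_11
```

As in the `lapply` answer, no explicit index bookkeeping is needed and each table is processed independently.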
2018/03/15
1,646
3,373
<issue_start>username_0: I am trying to extract some data from the `$i` variable (the IP and the port number) into other variables using the `cut` command, so that I can store both the IP and the port in a database. However, the output below is hard to work with because of how the lines come out. Please help.

```
foo=( $(grep logs data.txt) )
for i in "${foo[@]}"
do
echo "$i" | sed 's/Failed//g' | sed "s/logs//g" | sed "s/for//g" | sed "s/delmum//g" | sed "s/from//g" | sed "s/port//g" | sed "s/invalid//g" | sed "s/user//g" | sed "s/castis//g" | sed "s/guest//g" | sed '/^$/d'
done
```

Output:

```
Mar 4 03:08:15 sshd[96487]: 172.16.31.10 62445
Mar 4 03:08:15 sshd[65741]: 172.16.31.10 62445
Mar 4 03:08:15 sshd[34595]: 172.16.31.10 43321
Mar 4 03:08:16 sshd[25485]: 356.214.857.246 12445
Mar 4 03:08:16 sshd[25245]: 324.684.723.857 24875
```

Expected output:

```
Mar 4 03:08:15 sshd[96487]: 172.16.31.10 62445
Mar 4 03:08:15 sshd[34595]: 172.16.31.10 43321
Mar 4 03:08:16 sshd[25245]: 324.684.723.857 24875
```

data.txt:

```
Mar 4 03:08:15 delmum sshd[96487]: Failed logs for root from 172.16.31.10 port 62445 ssh2
Mar 4 03:08:06 perfmum sshd[33799]: Connection closed by
Mar 4 03:08:15 delmum sshd[65741]: Failed logs for root from 172.16.31.10 port 62445 ssh2
Mar 4 03:08:15 delmum sshd[34595]: Failed logs for root from 172.16.31.10 port 43321 ssh2
Mar 4 03:08:06 delmum sshd[12485]: Connection closed by
Mar 4 03:08:06 delmum sshd[85468]: Connection closed by
Mar 4 03:08:06 delmum sshd[51396]: Connection closed by
Mar 4 03:08:16 delmum sshd[25485]: Failed logs for invalid user castis from 356.214.857.246 port 12445 ssh2
Mar 4 03:08:16 delmum sshd[25245]: Failed logs for invalid user castis from 324.684.723.857 port 24875 ssh2
Mar 4 03:08:06 delmum sshd[23541]: Connection closed by
```

The content of data.txt is one entry per line, so why is the loop breaking the lines up?<issue_comment>username_1: You can process the whole log line by line without reading each line into a variable:

```
grep logs data.txt \
  | sed 's/Failed\|logs\|for\|delmum\|from\|port\|invalid\|user\|castis\|guest//g'
```

It's also much faster to run just one `sed` instead of many, and to use just one expression rather than many in it.

Upvotes: 0 <issue_comment>username_2: With a single **`awk`** command:

```
awk '/logs/{
         if (/from/ && /port/) {
             sub(/: .* from/, "");
             tail = ":" OFS $6 OFS $8
         }
         print $1, $2, $3, $4, $5 tail;
         tail = ""
     }' data.txt
```

Sample output:

```
Mar 4 03:08:15 delmum sshd[96487]: 172.16.31.10 62445
Mar 4 03:08:15 delmum sshd[65741]: 172.16.31.10 62445
Mar 4 03:08:15 delmum sshd[34595]: 172.16.31.10 43321
Mar 4 03:08:16 delmum sshd[25485]: 356.214.857.246 12445
Mar 4 03:08:16 delmum sshd[25245]: 324.684.723.857 24875
```

Upvotes: 1 <issue_comment>username_3: This is working for me:

```
grep logs data.txt > tmp.txt
while read -r line
do
echo "$line" | sed 's/Failed//g;s/logs//g;s/for//g;s/delmum//g;s/from//g;s/port//g;s/invalid//g;s/user//g;s/castis//g;s/guest//g;/^$/d;s/ssh2//g;s/root//g;s/ */ /g' > ./sort.txt
done < ./tmp.txt
rm ./tmp.txt
```

Upvotes: 0 <issue_comment>username_4: with sed

```
sed -E '
  /logs/!d
  h
  s/.*from ([^ ]*) .*port ([^ ]*).*/\1 \2/
  x
  s/(.*:).*/\1/;s/([^ ]* )//5
  G
  y/\n/ /
  ' data.txt
```

Upvotes: 0
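A different angle on the same problem, for comparison: instead of deleting every word you don't want, extract only the fields you do want. A hedged Python sketch (the regex assumes the field layout of the sample data.txt above; adjust it for a different log format):

```python
import re

# Assumed layout: "<Mon> <day> <time> <host> sshd[pid]: ... from <ip> port <port> ..."
LINE = re.compile(
    r"^(\w+ +\d+ +[\d:]+) +\S+ +(sshd\[\d+\]):.*?from +(\S+) +port +(\d+)"
)

def failed_logins(lines):
    out = []
    for line in lines:
        m = LINE.search(line)
        if m:  # lines like "Connection closed by" simply don't match
            ts, proc, ip, port = m.groups()
            out.append(f"{ts} {proc}: {ip} {port}")
    return out

sample = [
    "Mar 4 03:08:15 delmum sshd[96487]: Failed logs for root from 172.16.31.10 port 62445 ssh2",
    "Mar 4 03:08:06 perfmum sshd[33799]: Connection closed by",
]
print(failed_logins(sample))
```

Extracting keeps the interesting values (IP and port) as separate capture groups, which is exactly what you need before inserting them into a database.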
2018/03/15
601
2,284
<issue_start>username_0: I have a function that uses the params keyword to allow me to pass multiple objects to it:

```
public IEnumerable GetMyObjects(params FilterObject[] filters)
{
    return service.GetObjectsFromDatabase(filters);
}
```

So it can be called like:

```
GetMyObjects(filter1, filter2, filter3);
```

My issue is that I need to convert each object passed to the function into a different type before calling a second function which also uses the params keyword. The second function looks like:

```
public IEnumerable GetObjectsFromDatabase(params AnotherType[] filters)
{
    _database.GetObjectsFromDatabase(filters);
}
```

I assume I need to loop through each parameter and call my method which performs the conversion; however, how do I then store each converted object so that I can pass them all to the next function? I imagined it would be something like:

```
public IEnumerable GetMyObjects(params FilterObject[] filters)
{
    List<AnotherType> myList = new List<AnotherType>();
    foreach (FilterObject f in filters)
    {
        AnotherType a = f.ConvertToAnotherType();
        myList.Add(a);
    }
    return service.GetMyObjects(myList);
}
```

But that didn't work, as the second function doesn't accept a List. What is the correct way of passing these objects?<issue_comment>username_1: A [`params`](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/params)-array is an array, not a list, so use:

```
public IEnumerable GetMyObjects(params FilterObject[] filters)
{
    AnotherType[] atArray = filters.Select(f => f.ConvertToAnotherType()).ToArray();
    return service.GetMyObjects(atArray);
}
```

Instead of LINQ you could also use `Array.ConvertAll`:

```
AnotherType[] atArray = Array.ConvertAll(filters, f => f.ConvertToAnotherType());
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: The simplest solution is to call `.ToArray()` on the list, but it'll iterate the collection twice, which adds unnecessary overhead.
You can do something like this: ``` public IEnumerable GetMyObjects(params FilterObject[] filters) { var myArray = new AnotherType[filters.Length]; for (int i = 0; i < myArray.Length; i++) { myArray[i] = filters[i].ConvertToAnotherType(); } return service.GetMyObjects(myArray); } ``` Or use LINQ .Select as suggested by username_1 Upvotes: 1
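For what it's worth, the convert-then-forward pattern from the accepted answer maps directly onto variadic functions in other languages; here it is sketched in Python, whose `*args` plays the role of C#'s `params` (all type names are invented for the sketch):

```python
class Filter:
    # stand-in for FilterObject; to_another_type plays ConvertToAnotherType()
    def __init__(self, value):
        self.value = value

    def to_another_type(self):
        return f"converted:{self.value}"

def get_objects_from_database(*converted):
    # pretend this queries a database with the already-converted filters
    return list(converted)

def get_my_objects(*filters):
    # convert each element, then re-spread with * -- the analogue of
    # passing an AnotherType[] to a params parameter
    converted = [f.to_another_type() for f in filters]
    return get_objects_from_database(*converted)

print(get_my_objects(Filter(1), Filter(2)))  # ['converted:1', 'converted:2']
```

The `*converted` re-spread is the step that corresponds to materialising the array with `.ToArray()` in C#: the inner variadic function receives individual arguments again, not a single collection.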
2018/03/15
948
4,112
<issue_start>username_0: **I am trying to merge the videos from an array of AVAssets, but I get only the first and the last video. The videos in between show a black area. Here is the code I am using.**

```
func mergeVideoArray() {

    let mixComposition = AVMutableComposition()
    for videoAsset in videoURLArray {
        let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video,
                                                        preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            try videoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
                                            of: videoAsset.tracks(withMediaType: AVMediaType.video).first!,
                                            at: totalTime)
            videoSize = (videoTrack?.naturalSize)!
        } catch let error as NSError {
            print("error: \(error)")
        }

        let trackArray = videoAsset.tracks(withMediaType: .audio)
        if trackArray.count > 0 {
            let audioTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.audio,
                                                            preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
            do {
                try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration),
                                                of: videoAsset.tracks(withMediaType: AVMediaType.audio).first!,
                                                at: audioTime)
                audioTime = audioTime + videoAsset.duration
            } catch {
            }
        }

        totalTime = totalTime + videoAsset.duration

        let videoInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack!)
        if videoAsset != videoURLArray.last {
            videoInstruction.setOpacity(0.0, at: videoAsset.duration)
        }
        layerInstructionsArray.append(videoInstruction)
    }

    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, totalTime)
    mainInstruction.layerInstructions = layerInstructionsArray

    let mainComposition = AVMutableVideoComposition()
    mainComposition.instructions = [mainInstruction]
    mainComposition.frameDuration = CMTimeMake(1, 30)
    mainComposition.renderSize = CGSize(width: videoSize.width, height: videoSize.height)

    let url = "merge_video".outputURL
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter!.outputURL = url
    exporter!.outputFileType = AVFileType.mov
    exporter!.shouldOptimizeForNetworkUse = false
    exporter!.videoComposition = mainComposition
    exporter!.exportAsynchronously {
        let video = AVAsset(url: url)
        let playerItem = AVPlayerItem(asset: video)
        let player = AVPlayer(playerItem: playerItem)
        self.playerViewController.player = player
        self.present(self.playerViewController, animated: true) {
            self.playerViewController.player!.play()
        }
    }
}
```

Please help me resolve this issue. Thanks in advance.

***Note***: I am able to create a video from the array, but only the first and last index values show up in the video; for the rest of the values only a blank screen is shown.<issue_comment>username_1: I just solved my question; I only needed to update one line in the code. Please have a look:

```
if videoAsset != videoURLArray.last {
    videoInstruction.setOpacity(0.0, at: totalTime)
}
```

**Note**: you just need to change the `at:` position to the running total of the timeline for every value of the array.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The right way to do it is to change

```
if videoAsset != videoURLArray.last {
    videoInstruction.setOpacity(0.0, at: videoAsset.duration)
}
```

to:

```
videoInstruction.setOpacity(0.0, at: totalTime)
```

I want to emphasize here that adding

```
totalTime = totalTime + videoAsset.duration
```

*before* setting the opacity on the layer instruction makes all the difference. Otherwise the videos show up as a black screen.

Upvotes: 0
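The bug both answers fix is pure timeline bookkeeping: each clip's opacity change must land at the running total of all previous durations on the shared composition timeline, not at the clip's own duration. Stripped of AVFoundation, the arithmetic looks like this (the durations are made-up numbers):

```python
from itertools import accumulate

durations = [5.0, 3.0, 4.0]  # seconds per clip, invented for the sketch

# Start time of each clip on the composition timeline = sum of all
# durations before it (this is what `totalTime` accumulates in the loop).
starts = [0.0] + list(accumulate(durations))[:-1]

# Each clip should turn transparent exactly when the next one starts,
# i.e. at its own end on the *shared* timeline -- `totalTime`, not
# `videoAsset.duration`.
opacity_off_at = [s + d for s, d in zip(starts, durations)]

print(starts)          # [0.0, 5.0, 8.0]
print(opacity_off_at)  # [5.0, 8.0, 12.0]
```

Using the per-clip duration instead would switch every clip after the first off before it was ever shown, which is exactly the black-screen symptom in the question.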
2018/03/15
1,226
4,594
<issue_start>username_0: I am new to SAPUI5. Currently I am trying to create a simple app showing table-based reporting with data from an SAP system (OData). I use the SAP Web IDE as the environment.

In the first attempt, I used the standard control sap.m.Table. That worked fine for me. Then I wanted to use a simple sort function for my columns, so I changed to sap.ui.table (it seems a bit better suited for that need). When starting the application, only the header part (the title) is shown, but my table doesn't appear. Debugging the application shows me correct data binding: the variable oTable gets the data that is in the variable oJsonModel. What is missing to show my table, and why? The console in Chrome doesn't throw any error.

My application consists of one xml-view:

```
```

My controller coding is as follows:

```
sap.ui.define([
    "sap/ui/core/mvc/Controller",
    "sap/ui/table/SortOrder",
    "sap/ui/model/Sorter"
], function(Controller, SortOrder, Sorter) {
    "use strict";

    return Controller.extend("Z_HCM_CICO_REP_2.controller.zbkTable", {
        /**
         * Called when a controller is instantiated and its View controls (if available) are already created.
         * Can be used to modify the View before it is displayed, to bind event handlers and do other one-time initialization.
         * @memberOf Z_HCM_CICO_REP_2.view.zbkTable
         */
        onInit: function() {
            //load cicos
            var oJsonModel = new sap.ui.model.json.JSONModel();
            var oView = this.getView();
            var oModel = this.getOwnerComponent().getModel();
            var oTable = oView.byId("zbkTable");
            oTable.setModel(oJsonModel);
            // var that = this;
            oModel.read("/TimeEntrySet", {
                success: function(oData, oResponse) {
                    oJsonModel.setData(oData);
                    var aTimeList = oJsonModel.getData().results;
                    $.each(aTimeList, function(index, value) {
                    });
                    //that.getOwnerComponent().setModel(oJsonModel);
                },
                error: function(oError) {
                }
            });
            oView.setModel(oJsonModel);
            //this.getOwnerComponent().setModel(oJsonModel);
            //oTable.bindItems("/TimeEntrySet");

            //Initial sorting
            var oDateColumn = oView.byId("date");
            //oView.byId("zbkTable").sort(oDateColumn, SortOrder.Ascending);
        }
    });
});
```

My component coding is:

```
sap.ui.generic.app.AppComponent.extend("Z_HCM_CICO_REP_2.Component", {
    metadata: {
        "manifest": "json"
    }
});

sap.ui.define([
    "sap/ui/core/UIComponent",
    "sap/ui/Device"
], function(UIComponent, Device) {
    "use strict";

    return UIComponent.extend("Z_HCM_CICO_REP_2.Component", {
        /**
         * The component is initialized by UI5 automatically during the startup of the app and calls the init method once.
         * @public
         * @override
         */
        init: function() {
            // call the base component's init function
            UIComponent.prototype.init.apply(this, arguments);
            var oResourceModel = new sap.ui.model.resource.ResourceModel({
                bundleName: "Z_HCM_CICO_REP_2.i18n.i18n"
            });
            sap.ui.getCore().setModel(oResourceModel, "i18n");
            //load cicos
            var sServiceUrl = "/sap/opu/odata/sap/ZHCM_CICO_REP_SRV/";
            var oModel = new sap.ui.model.odata.v2.ODataModel(sServiceUrl, true);
            this.setModel(oModel);
        }
    });
});
```

thanks in advance and best regards

Christian<issue_comment>username_1: I didn't read all your code, but you are defining your OData model in your component and then creating another JSON model in your controller.
The JSON model (the one set on the table and the view) has the data you read *inside* `/TimeEntrySet`; therefore I guess there is no `/TimeEntrySet` node inside it — just the children nodes. So the binding you are doing with `rows="/TimeEntrySet"` can't be resolved.

I think you don't need to replicate your data by creating a JSON model in your controller. You should be able to access the OData model you defined in the component. Could you please delete all the code in the `onInit()` function and try again?

Upvotes: 1 <issue_comment>username_2: Actually, Rafael's answer is right, because the data should be sorted on the server side, not the client side; that way you would not need a JSON model. But anyway, in addition to Rafael's answer, this should do the trick:

```
oModel.read("/TimeEntrySet", {
    success: function(oData, oResponse) {
        oJsonModel.setProperty("/TimeEntrySet", oData.results);
```

Your row binding should work now.

Upvotes: 0
2018/03/15
1,075
3,374
<issue_start>username_0: How can I find, using Vim, how many times a specific word repeats in a text file? For example, I want to see how many times the word "name" repeats in this code:

```
"collection": "animals",
"fields": [
    {
        "field": "id",
        "name": ""
    },
    {
        "field": "age",
        "name": ""
    },
    {
        "field": "origin",
        "name": ""
    }
```

Result - 3<issue_comment>username_1: You can count the number of matches using the `n` flag in the substitute command. Use the following to show the number of times that `some-word` matches text in the current buffer:

```
:%s/some-word//gn
```

You can read all the details on the [vim tips wiki](http://vim.wikia.com/wiki/Count_number_of_matches_of_a_pattern)

Upvotes: 7 [selected_answer]<issue_comment>username_2: I have a count-word function that you can use in a number of ways:

```
fun! CountWordFunction()
    try
        let l:win_view = winsaveview()
        exec "%s/" . expand("<cword>") . "//gn"
    finally
        call winrestview(l:win_view)
    endtry
endfun
command! -nargs=0 CountWord :call CountWordFunction()
```

So, you can map it:

```
nnoremap :CountWord
```

Even mouse double click...

```
" When double clicking a word vim will highlight all other occurrences
" see CountWordFunction()
" [I shows lines with word under the cursor
nnoremap <2-LeftMouse> :let @/='\V\<'.escape(expand('<cword>'), '\').'\>'<CR>:set hls<CR>:CountWord<CR>
nnoremap * :let @/='\V\<'.escape(expand('<cword>'), '\').'\>'<CR>:set hls<CR>:CountWord<CR>
```

Upvotes: 1 <issue_comment>username_3: My [SearchPosition plugin](http://www.vim.org/scripts/script.php?script_id=2634) would print the following output (with the cursor on the first match, and the mapping or `:SearchPosition name` command):

```
On sole match in this line, 2 in following lines within 5,13 for /\<name\>/
```

It builds on the other solutions given here, and was expressly written for obtaining such statistics easily.
Upvotes: 0 <issue_comment>username_4: Nowadays (as of Vim [8.1.1270](https://github.com/vim/vim/releases/tag/v8.1.1270)) you can use: ``` set shortmess-=S ``` to enable a count of `[x/y]` showing in the bottom right corner every time you do a `/` or `?` search. [![native search count](https://i.stack.imgur.com/9XJw2.png)](https://i.stack.imgur.com/9XJw2.png) [**Relevant section from the Vim help**](https://vimhelp.org/options.txt.html#%27shortmess%27) Note that if the search finds more than 99 results, Vim unfortunately just shows `[x/>99]`: [![native search count over 99](https://i.stack.imgur.com/8I5Hp.png)](https://i.stack.imgur.com/8I5Hp.png) For this reason, I personally use the [google/vim-searchindex](https://github.com/google/vim-searchindex) plugin, which works for any amount of results: [![vim-search-index search count](https://i.stack.imgur.com/hFBxp.png)](https://i.stack.imgur.com/hFBxp.png) (By default the plugin is limited to files with “only” less than one million lines; this can be adjusted with `let g:searchindex_line_limit=2000000`.) Upvotes: 5 <issue_comment>username_5: Use the following commands. 1. Multiple matches per line ``` :%s/pattern//ng ``` `g` stands for global, it will match multiple occurrences per line. *Outputs* are shown as ``` 13 matches on 10 lines ``` 2. Line matches ``` :%s/pattern//n ``` *Outputs* are shown as ``` 10 matches on 10 lines ``` Upvotes: 1
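As a quick cross-check of the counts above outside Vim: `\<name\>` in a Vim search is a whole-word match, which corresponds to `\bname\b` in most other regex engines. A small Python sketch using the JSON snippet from the question:

```python
import re

text = '''"field": "id", "name": ""
"field": "age", "name": ""
"field": "origin", "name": ""'''

# \bname\b mirrors Vim's \<name\>: match "name" only as a whole word,
# so "filename" or "names" would not be counted
matches = re.findall(r"\bname\b", text)
print(len(matches))  # 3
```

This gives the same `3` that `:%s/\<name\>//gn` reports in Vim for the sample.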
2018/03/15
1,044
3,714
<issue_start>username_0: ```
$scope.populateMap = [{name: "ABC", code: "123"}, {name: "XYZ", code: "345"}]
```

//Want to send the model name + the value of the model

Currently sending ngObject.MainObj.specificFormatObj

HTML

```
{{i.name}}
```

JS

```
// CONTROLLER CODE
// JSON parse the object to get name and code; got parsedObj
$scope.genericSetLookups = function (Obj) {
    // want to do something like get the ng-model string + the value; currently only the value comes in
    Obj.code = parsedObj.code;
    Obj.name = parsedObj.name
};
```

More explanation (ngObject.MainObj.specificFormatObj):

* I want my model to store the value of lookups in a specific way, with name and code. On the UI I populate the options using ng-repeat, so when I select a particular value I could take i.name as the display text and set the value to i.code.
* But if I do that, my ngObject.MainObj.specificFormatObj.name will be null and only ngObject.MainObj.specificFormatObj.code will get set via ng-model. That is the reason I take the whole i as the value, not i.code or i.value; that way the map gives me both code and name.
* I send it to a function and parse it, and set ngObject.MainObj.specificFormatObj.code = inputTofunc.code, and likewise for name. In this case, in ng-change I pass on ngObject.MainObj.specificFormatObj.code; rather, I want to set i from the map to ngObject.MainObj.specificFormatObj, send it to the function, and also send the model string, which in this case would be "ngObject.MainObj.specificFormatObj".
* So for 10 lookups I can write generic code where the model name and model value are sent as parameters to the function and set there; the way I am doing it above probably hardcodes the values that I want to set on the model in a specific format.<issue_comment>username_1: I didn't really understand your problem and the comments didn't make it clearer for me.
I would loop over the options with `ng-options` and show the selected option by putting `{{selected.name}}` in the template. Of course, if you want to format the selected value in any way, or react to a change, you can use `ng-change`. Hope it helps.

[Here is my JSFiddle](https://jsfiddle.net/cy1z8hjh/)

Upvotes: 0 <issue_comment>username_2: I'm not sure if I understood your question. If you want to save in your model the value code + name, maybe this code can help you:

```
```

[jsfiddle](https://jsfiddle.net/rd340byj/1/)

Upvotes: 0 <issue_comment>username_3: Since you need to pass the model name as a parameter, pass it as a string like this from the HTML:

```
ng-change="genericSetLookups('ngObject.SomeObject.abc', ngObject.SomeObject.abc)"
```

And in the controller, as the model name contains ".", we cannot use the name directly as a key; we need to parse the model name. I have cooked something up after searching a bit. Hope it works.

Controller code:

```
$scope.genericSetLookups = function(modelName, value) {
    Object.setValueByString($scope, modelName, value);
}

Object.setValueByString = function(o, s, val) {
    s = s.replace(/\[(\w+)\]/g, '.$1'); // convert indexes to properties
    s = s.replace(/^\./, '');           // strip a leading dot
    var a = s.split('.');
    for (var i = 0, n = a.length; i < n; ++i) {
        var k = a[i];
        if (k in o) {
            if (i != n - 1) {
                o = o[k];
            } else {
                o[k] = val;
            }
        } else {
            return;
        }
    }
    return o;
}
```

Credit must also go to @Alnitak for the answer [here](https://stackoverflow.com/questions/6491463/accessing-nested-javascript-objects-with-string-key)

Upvotes: 2 [selected_answer]
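The accepted answer's `Object.setValueByString` is a generic "set a value at a dotted path" helper, and the logic ports to any language with dictionaries. A Python sketch of the same walk (bracket indexes are normalised to dots first, as in the JS version; the sample path is the one from the answer):

```python
import re

def set_by_path(obj, path, value):
    # same normalisation as the JS helper: a[0].b -> a.0.b, strip a leading dot
    path = re.sub(r"\[(\w+)\]", r".\1", path)
    keys = path.lstrip(".").split(".")

    def as_key(k):
        # unlike JS, Python needs ints to index lists and strs to index dicts
        return int(k) if k.isdigit() else k

    for key in keys[:-1]:
        obj = obj[as_key(key)]  # walk down to the parent container
    obj[as_key(keys[-1])] = value

scope = {"ngObject": {"SomeObject": {"abc": None}}}
set_by_path(scope, "ngObject.SomeObject.abc", 42)
print(scope["ngObject"]["SomeObject"]["abc"])  # 42
```

The only real difference from the JS version is the explicit int conversion, since JavaScript coerces `"0"` to an array index automatically.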
2018/03/15
1,999
8,101
<issue_start>username_0: Django provides a really nice feature called `makemigrations` where it will create migration files based on the changes in models. We are developing a module where we will want to generate custom migrations. I haven't found much info about creating custom migrations in the Django docs. There is documentation on the various `Operation` classes that can yield migrations but there's nothing about creating custom `Operation` classes that can yield custom migrations. The `autodetector` module for generating new migrations also doesn't seem to leave much room for adding custom `Operation` classes: <https://github.com/django/django/blob/master/django/db/migrations/autodetector.py#L160> It seems this is completely static. Are there other ways to generate custom migrations, perhaps by using existing classes with a custom management command?<issue_comment>username_1: You can create a custom class to hook into the makemigrations class and add your custom migrations stuff then execute using the "runscript" command. 
Below is a sample module where the file is named custom\_migrations.py and located in a "scripts" folder off one of your apps:

```
from django.core.management.commands.makemigrations import Command

"""
To invoke this script use:
manage.py runscript custom_migrations --script-args [app_label [app_label ...]] name=my_special_migration verbosity=1
"""

class MyMigrationMaker(Command):
    '''
    Override the write method to add more stuff before finishing
    '''
    def write_migration_files(self, changes):
        print("Do some stuff to \"changes\" object here...")
        super().write_migration_files(changes)


def run(*args):
    nargs = []
    kwargs = {}
    # Preload some options with defaults that can then be overridden in the args parsing
    kwargs['empty'] = True
    kwargs['verbosity'] = 1
    kwargs['interactive'] = True
    kwargs['dry_run'] = False
    kwargs['merge'] = False
    kwargs['name'] = 'custom_migration'
    kwargs['check_changes'] = False
    for arg in args:
        kwarg = arg.split('=', 1)
        if len(kwarg) > 1:
            val = kwarg[1]
            if val == "True":
                arg_val = True
            elif val == "False":
                arg_val = False
            elif val.isdigit():
                arg_val = int(val)
            else:
                arg_val = val
            kwargs[kwarg[0]] = arg_val
        else:
            nargs.append(arg)
    MyMigrationMaker().handle(*nargs, **kwargs)
```

Upvotes: 2 <issue_comment>username_1: An alternative, if you are not willing to tango with Django internals, is to use the script below to generate a migration file: it invokes a method of your choosing that produces the Python code you want to run in the migration, and inserts it into a valid migration file that will then be part of the standard Django migrations.

If you had an app named "xxx" and a method in a file xxx/scripts/test.py looking like this:

```
def run(*args, **kwargs):
    return "print(\"BINGO!!!!!!!!! {} :: {}\".format(args[0], kwargs['name']))"
```

...
and you invoked the script shown at the bottom of this post stored in xxx/scripts/custom\_migrations.py with the following command ``` manage.py runscript custom_migrations --script-args xxx name=say_bingo callable=xxx.scripts.test.run ``` Then you would end up with a migration file in xxx/migrations with the appropriate number sequence (something like 0004\_say\_bingo.py) looking like this: ``` Generated by Django 2.1.2 on 2018-12-14 08:54 from django.db import migrations def run(*args, **kwargs): print("BINGO!!!!!!!!! {} :: {}".format(args[0], kwargs['name'])) class Migration(migrations.Migration): dependencies = [ ] operations = [ migrations.RunPython(run,) ] ``` The script is as follows: ``` from django.core.management.base import no_translations from django.core.management.commands.makemigrations import Command from django.db.migrations import writer from django import get_version from django.utils.timezone import now import os import sys """ To invoke this script use: manage.py runscript custom_migrations --script-args [app_label [app_label ...]] callable=my_migration_code_gen name=my_special_migration -- the "name" argument will be set as part of the migration file generated -- the "callable" argument will be the function that is invoked to generate the python code you want to execute in the migration the callable will be passed all the args and kwargs passed in to this script from the command line as part of --script-args Only app names are allowed in args so use kwargs for custom arguments """ class LazyCallable(object): def __init__(self, name): self.n, self.f = name, None def __call__(self, *a, **k): if self.f is None: modn, funcn = self.n.rsplit('.', 1) if modn not in sys.modules: __import__(modn) self.f = getattr(sys.modules[modn], funcn) return self.f(*a, **k) class MyMigrationMaker(Command): ''' Override the write method to provide access to script arguments ''' @no_translations def handle(self, *app_labels, **options): self.in_args = app_labels 
self.in_kwargs = options super().handle(*app_labels, **options) ''' Override the write method to add more stuff before finishing ''' def write_migration_files(self, changes): code = LazyCallable(self.in_kwargs['callable'])(self.in_args, self.in_kwargs) items = { "replaces_str": "", "initial_str": "", } items.update( version=get_version(), timestamp=now().strftime("%Y-%m-%d %H:%M"), ) items["imports"] = "from django.db import migrations\n\ndef run(*args, **kwargs):\n " items["imports"] += code.replace("\n", "\n ") + "\n\n" items["operations"] = " migrations.RunPython(run,)\n" directory_created = {} for app_label, app_migrations in changes.items(): for migration in app_migrations: # Describe the migration my_writer = writer.MigrationWriter(migration) dependencies = [] for dependency in my_writer.migration.dependencies: dependencies.append(" %s," % my_writer.serialize(dependency)[0]) items["dependencies"] = "\n".join(dependencies) + "\n" if dependencies else "" # Write the migrations file to the disk. 
migrations_directory = os.path.dirname(my_writer.path) if not directory_created.get(app_label): if not os.path.isdir(migrations_directory): os.mkdir(migrations_directory) init_path = os.path.join(migrations_directory, "__init__.py") if not os.path.isfile(init_path): open(init_path, "w").close() # We just do this once per app directory_created[app_label] = True migration_string = writer.MIGRATION_TEMPLATE % items with open(my_writer.path, "w", encoding='utf-8') as fh: fh.write(migration_string) if self.verbosity >= 1: self.stdout.write("Migration file: %s\n" % my_writer.filename) def run(*args): glob_args = [] glob_kwargs = {} # Preload some options with defaults and then can be overridden in the args parsing glob_kwargs['empty'] = True glob_kwargs['verbosity'] = 1 glob_kwargs['interactive'] = True glob_kwargs['dry_run'] = False glob_kwargs['merge'] = False glob_kwargs['name'] = 'custom_migration' glob_kwargs['check_changes'] = False for arg in args: kwarg = arg.split('=', 1) if len(kwarg) > 1: glob_kwargs[kwarg[0]] = kwarg[1] else: glob_args.append(arg) MyMigrationMaker().handle(*glob_args, **glob_kwargs) ``` Upvotes: 1
2018/03/15
596
2,091
<issue_start>username_0: I have a problem with `HWIOAuthBundle` and `vk.com`. Everything used to work, but some time ago I came face to face with the following error after OAuth with `vk.com`: ``` 'Cannot register an account.' ``` I found this place `vendor/hwi/oauth-bundle/Controller/ConnectController.php:106` ``` if (!($error instanceof AccountNotLinkedException) || (time() - $key > 300)) { // todo: fix this throw new \Exception('Cannot register an account.'); } ``` But I do not understand why it happens. I tried using other apps, but that did not help. Does anybody know why I am getting this error?<issue_comment>username_1: It looks like the problem is caused by a recent update: <https://vk.com/dev/version_update> You should add the "v" parameter here: <https://github.com/hwi/HWIOAuthBundle/blob/6401d53617816257208b8df384bdc47d2d701bf5/OAuth/ResourceOwner/VkontakteResourceOwner.php#L46> Upvotes: 0 <issue_comment>username_2: You must specify API version 3.0 for VK in vendor/hwi/oauth-bundle/OAuth/ResourceOwner/VkontakteResourceOwner.php ``` public function getUserInformation(array $accessToken, array $extraParameters = array()) { $url = $this->normalizeUrl($this->options['infos_url'], array( 'access_token' => $accessToken['access_token'], 'fields' => $this->options['fields'], 'name_case' => $this->options['name_case'], 'v' => '3.0' )); $content = $this->doGetUserInformationRequest($url)->getContent(); $response = $this->getUserResponse(); $response->setResponse($content); $response->setResourceOwner($this); $response->setOAuthToken(new OAuthToken($accessToken)); $content = $response->getResponse(); $content['email'] = isset($accessToken['email']) ? $accessToken['email'] : null; if (isset($content['screen_name'])) { $content['nickname'] = $content['screen_name']; } else { $content['nickname'] = isset($content['nickname']) ? $content['nickname'] : null; } $response->setResponse($content); return $response; } ``` Upvotes: 2 [selected_answer]
2018/03/15
2,730
9,160
<issue_start>username_0: I have a complex model written in Matlab. The model was not written by us and is best thought of as a "black box" i.e. in order to fix the relevant problems from the inside would require rewritting the entire model which would take years. If I have an "embarrassingly parallel" problem I can use an array to submit X variations of the same simulation with the option `#SBATCH --array=1-X`. However, clusters normally have a (frustratingly small) limit on the maximum array size. Whilst using a PBS/TORQUE cluster I have got around this problem by forcing Matlab to run on a single thread, requesting multiple CPUs and then running multiple instances of Matlab in the background. An example submission script is: ``` #!/bin/bash #PBS -l nodes=1:ppn=5,walltime=30:00:00 #PBS -t 1-600 # define Matlab options options="-nodesktop -noFigureWindows -nosplash -singleCompThread" for sub\_job in {1..5} do matlab ${options} -r "run\_model(${arg1}, ${arg2}, ..., ${argN}); exit" & done wait ``` Can anyone help me do the equivalent on a SLURM cluster? * The `par` function will not run my model in a parallel loop in Matlab. * The PBS/TORQUE language was very intuitive but SLURM's is confusing me. Assuming a similarly structured submission script as my PBS example, here is what I think certain commands will result in. + --ncpus-per-task=5 seems like the most obvious one to me. Would I put srun in front of the matlab command in the loop or leave it as it is in the PBS script loop? + --ntasks=5 I would imagine would request 5 CPUs but will run in serial unless a program specifically requests them (i.e. MPI or Python-Multithreaded etc). Would I need to put srun in front of the Matlab command in this case?<issue_comment>username_1: I am not a big expert on array jobs but I can help you with the inner loop. 
I would always use [GNU parallel](https://www.gnu.org/software/parallel/) to run several serial processes in parallel, within a single job that has more than one CPU available. It is a simple `perl` script, so not difficult to 'install', and its syntax is extremely easy. What it basically does is to run some (nested) loop in parallel. Each iteration of this loop contains a (long) process, like your Matlab command. In contrast to your solution it does not submit all these processes at once, but it runs only `N` processes at the same time (where `N` is the number of CPUs you have available). As soon as one finishes, the next one is submitted, and so on until your entire loop is finished. It is perfectly fine that not all processes take the same amount of time, as soon as one CPU is freed, another process is started. Then, what you would like to do is to launch 600 jobs (for which I substitute 3 below, to show the complete behavior), each with 5 CPUs. To do that you could do the following (whereby I have not included the actual run of `matlab`, but that trivially can be included): ``` #!/bin/bash #SBATCH --job-name example #SBATCH --out job.slurm.out #SBATCH --nodes 1 #SBATCH --ntasks 1 #SBATCH --cpus-per-task 5 #SBATCH --mem 512 #SBATCH --time 30:00:00 #SBATCH --array 1-3 cmd="echo matlab array=${SLURM_ARRAY_TASK_ID}" parallel --max-procs=${SLURM_CPUS_PER_TASK} "$cmd,subjob={1}; sleep 30" ::: {1..5} ``` Submitting this job using: ``` $ sbatch job.slurm ``` submits 3 jobs to the queue. For example: ``` $ squeue | grep tdegeus 3395882_1 debug example tdegeus R 0:01 1 c07 3395882_2 debug example tdegeus R 0:01 1 c07 3395882_3 debug example tdegeus R 0:01 1 c07 ``` Each job gets 5 CPUs. These are exploited by the `parallel` command, to run your inner loop in parallel. Once again, the range of this inner loop may be (much) larger than 5, `parallel` takes care of the balancing between the 5 available CPUs within this job. 
Let's inspect the output: ``` $ cat job.slurm.out matlab array=2,subjob=1 matlab array=2,subjob=2 matlab array=2,subjob=3 matlab array=2,subjob=4 matlab array=2,subjob=5 matlab array=1,subjob=1 matlab array=3,subjob=1 matlab array=1,subjob=2 matlab array=1,subjob=3 matlab array=1,subjob=4 matlab array=3,subjob=2 matlab array=3,subjob=3 matlab array=1,subjob=5 matlab array=3,subjob=4 matlab array=3,subjob=5 ``` You can clearly see the 3 times 5 processes run at the same time now (as their output is mixed). No need in this case to use `srun`. SLURM will create 3 jobs. Within each job everything happens on individual compute nodes (i.e. as if you were running on your own system). --- Installing GNU Parallel - option 1 ---------------------------------- To 'install' GNU parallel into your home folder, for example in `~/opt`. 1. [Download the latest GNU Parallel](http://ftp.gnu.org/gnu/parallel/). 2. Make the directory `~/opt` if it does not yet exist ``` mkdir $HOME/opt ``` 3. 'Install' GNU Parallel: ``` tar jxvf parallel-latest.tar.bz2 cd parallel-XXXXXXXX ./configure --prefix=$HOME/opt make make install ``` 4. Add `~/opt` to your path: ``` export PATH=$HOME/opt/bin:$PATH ``` (To make it permanent, add that line to your `~/.bashrc`.) --- Installing GNU Parallel - option 2 ---------------------------------- Use `conda`. 1. (Optional) Create a new environment ``` conda create --name myenv ``` 2. Load an existing environment: ``` conda activate myenv ``` 3. Install GNU parallel: ``` conda install -c conda-forge parallel ``` Note that the command is available only when the environment is loaded. Upvotes: 3 <issue_comment>username_2: While Tom's suggestion to use GNU Parallel is a good one, I will attempt to answer the question asked. 
If you want to run 5 instances of the `matlab` command with the same arguments (for example if they were communicating via MPI) then you would want to ask for `--cpus-per-task=1`, `--ntasks=5`, and you should preface your `matlab` line with `srun` and get rid of the loop. In your case, as each of your 5 calls to `matlab` is independent, you want to ask for `--cpus-per-task=5`, `--ntasks=1`. This will ensure that you allocate 5 CPU cores per job to do with as you wish. You can preface your `matlab` line with `srun` if you wish, but it will make little difference since you are only running one task. Of course, this is only efficient if each of your 5 `matlab` runs takes the same amount of time, since if one takes much longer then the other 4 CPU cores will be sitting idle, waiting for the fifth to finish. Upvotes: 3 [selected_answer]<issue_comment>username_3: You can do it with Python and subprocess. In what I describe below, you just set the number of nodes and tasks and that is it: no need for an array, no need to match the size of the array to the number of simulations, etc. It will just execute Python code until it is done; more nodes means faster execution. Also, it is easier to decide on variables as everything is being prepared in Python (which is easier than bash). It does assume that the Matlab scripts save the output to a file; nothing is returned by this function (it can be changed..)
In the sbatch script you need to add something like this: ```sh #!/bin/bash #SBATCH --output=out_cluster.log #SBATCH --error=err_cluster.log #SBATCH --time=8:00:00 #SBATCH --nodes=36 #SBATCH --exclusive #SBATCH --cpus-per-task=2 export IPYTHONDIR="`pwd`/.ipython" export IPYTHON_PROFILE=ipyparallel.${SLURM_JOBID} whereis ipcontroller sleep 3 echo "===== Beginning ipcontroller execution ======" ipcontroller --init --ip='*' --nodb --profile=${IPYTHON_PROFILE} --ping=30000 & # --sqlitedb echo "===== Finish ipcontroller execution ======" sleep 15 srun ipengine --profile=${IPYTHON_PROFILE} --timeout=300 & sleep 75 echo "===== Beginning python execution ======" python run_simulations.py ``` depending on your system, read more here:<https://ipyparallel.readthedocs.io/en/latest/process.html> and run\_simulations.py should contain something like this: ```py import os from ipyparallel import Client import sys from tqdm import tqdm import subprocess from subprocess import PIPE def run_sim(x): import os import subprocess from subprocess import PIPE # send job! params = [str(i) for i in x] p1 = subprocess.Popen(['matlab','-r',f'"run_model({x[0]},{x[1]})"'], env=dict(**os.environ)) p1.wait() return ##load ipython parallel rc = Client(profile=os.getenv('IPYTHON_PROFILE')) print('Using ipyparallel with %d engines', len(rc)) lview = rc.load_balanced_view() view = rc[:] print('Using ipyparallel with %d engines', len(rc)) sys.stdout.flush() map_function = lview.map_sync to_send = [] #prepare variables <-- here you should prepare the arguments for matlab #################### for param_1 in [1,2,3,4]: for param_2 in [10,20,40]: to_send.append([param_1, param_2]) ind_raw_features = lview.map_async(run_sim,to_send) all_results = [] print('Sending jobs');sys.stdout.flush() for i in tqdm(ind_raw_features,file=sys.stdout): all_results.append(i) ``` You also get a progress bar in the stdout, which is nice... you can also easily add a check to see if the output files exist and ignore a run. 
Upvotes: 1
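Pulling the accepted answer together with the PBS script from the question, a translated SLURM submission script looks roughly like the sketch below (note the actual sbatch spelling of the option is `--cpus-per-task`). `run_model` and its arguments are placeholders from the question, and the Matlab launch is shown as a dry-run `echo` so the looping pattern itself can be tried without Matlab installed:

```shell
#!/bin/bash
#SBATCH --job-name=matlab-subjobs
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=5
#SBATCH --time=30:00:00
#SBATCH --array=1-600

# Same single-thread Matlab options as in the PBS version of the script.
options="-nodesktop -noFigureWindows -nosplash -singleCompThread"

# Launch 5 background instances and wait for all of them -- the direct
# equivalent of the PBS inner loop.  Swap the echo for the real call:
#   matlab ${options} -r "run_model(${SLURM_ARRAY_TASK_ID}, ${sub_job}); exit" &
for sub_job in 1 2 3 4 5
do
    echo "sub-job ${sub_job}: matlab ${options} -r 'run_model(...)'" &
done
wait
```

Submitted with `sbatch`, each array element gets one task with 5 cores, and the five background processes share those cores, mirroring the original PBS behaviour.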
2018/03/15
482
1,830
<issue_start>username_0: I'm doing research about how memory is managed in RTEMS using an ARM-based Xilinx Zynq. The program runs on two cores with SMP. I have read about memory barriers and the out-of-order execution paradigm, and I concluded that a barrier or a fence is a hardware mechanism rather than a software one. RAM is divided into several sections; however, there are some sections called barriers which share areas with other sections. I have attached a capture. [![start-text-barrier](https://i.stack.imgur.com/FikvW.png)](https://i.stack.imgur.com/FikvW.png) xbarrier starts where the next section begins and ends where the previous section ends. Another example: [![rwbarrier](https://i.stack.imgur.com/H5qJ9.png)](https://i.stack.imgur.com/H5qJ9.png) In this one, the barrier starts at the same address as the previous section and it ends before the next section starts. Are these memory sections related to barrier instructions? Why are these memory sections implemented? Thanks in advance,<issue_comment>username_1: It looks like this is not linked to barrier instructions at all. Could it be a section of memory which is named like this just to separate a region which is read-write from a region which is read-only (vector)? The barrier instructions are used to force ordering in a multiprocessor system; they are never tied to an address. The barrier instruction is used to split the visibility (for other CPUs or threads) between: * Load and store instructions before the barrier * Load and store instructions after the barrier. Upvotes: 1 <issue_comment>username_2: Googling "section .rwbarrier" will get you to <https://lists.rtems.org/pipermail/users/2015-May/028893.html>, which says: > > This section helps to protect the code and read-only sections from write access via the MMU. > > > Upvotes: 2
2018/03/15
467
1,519
<issue_start>username_0: My current url is: <https://localhost:8080/MyApp/manager/dashboard> From the above URL I want to fetch only the <https://localhost:8080/MyApp/> part. Is there any way to fetch this without hard-coding anything?<issue_comment>username_1: If you want to get the domain name and application name only, rely on the router location and on the window location. ``` import { Location } from '@angular/common'; constructor(private loc: Location) {} ngOnInit() { const angularRoute = this.loc.path(); const url = window.location.href; const domainAndApp = url.replace(angularRoute, ''); } ``` Upvotes: 4 <issue_comment>username_2: **window.location** contains lots of information. Suppose the URL is <https://example.com:80/a/b/c>; you can get the following information easily in Angular: ``` window.location.hostname = "example.com" window.location.host = "example.com:80" window.location.protocol = "https:" window.location.port = "80" window.location.pathname = "/a/b/c" window.location.href = "https://example.com:80/a/b/c" ``` Upvotes: 5 <issue_comment>username_3: If you want to get the hostname and port from the URL, you can simply do `console.log(location);` This will return each of these values separately: ``` host: "localhost:4200", hostname: "localhost", href: "http://localhost:4200/order-form/2", origin: "http://localhost:4200", pathname: "/order-form/2", port: "4200", protocol: "http:", ``` In your case this should be `console.log(location.origin);` Upvotes: 0
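The accepted answer's `url.replace(angularRoute, '')` trick can be seen at the string level, written as a plain function so it runs without a browser. `baseUrl` and its arguments are illustrative; in the app they would come from `window.location.href` and `Location.path()`:

```typescript
// Strip the Angular router path off the full href, leaving the
// deployment base URL.
function baseUrl(href: string, routerPath: string): string {
  return href.endsWith(routerPath)
    ? href.slice(0, href.length - routerPath.length)
    : href;
}

console.log(baseUrl("https://localhost:8080/MyApp/manager/dashboard",
                    "/manager/dashboard"));
// → https://localhost:8080/MyApp
```

Note the result has no trailing slash, because `Location.path()` returns the route with a leading `/`; append one if your use case needs it.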
2018/03/15
876
2,378
<issue_start>username_0: The following below is a portion of our logs from our application in production. We are using **log4j** With configuration: ``` [INFO ] 2018-03-10 15:47:16.905 [WebContainer : 23] Logger - [INFO ] 2018-03-10 15:47:16.905 [WebContainer : 23] Logger - [INFO ] 2018-03-10 15:49:27.320 [WebContainer : 19] Logger - [INFO ] 2018-03-10 15:49:27.320 [WebContainer : 19] Logger - [INFO ] 2018-03-10 15:49:27.320 [WebContainer : 19] Logger - [INFO ] 2018-03-10 15:49:27.320 [WebContainer : 19] Logger - [INFO ] 2018-03-10 15:49:27.320 [WebContainer : 19] Logger - [INFO ] 2018-03-10 15:49:27.320 [WebContainer : 19] Logger - [INFO ] 2018-03-10 15:54:23.703 [WebContainer : 23] Logger - [INFO ] 2018-03-10 15:54:23.703 [WebContainer : 23] Logger - [INFO ] 2018-03-10 15:54:23.703 [WebContainer : 23] Logger - [INFO ] 2018-03-10 15:54:23.703 [WebContainer : 23] Logger - ``` From there you can see there is a thread name "WebContainer : 19" and "WebContainer : 23". We are confused with the thread name. Does having the same thread name mean they come from the same request? and with that we can say 15:54:23.703 happened after 2018-03-10 15:47:16.905 and belong to the same request? or are these thread names mean just one process group of the same class? and all the lines above can belong to the same request but just different thread names for different classes that executed in the request? Please enlighten me.<issue_comment>username_1: Each webcontainer typically has a pool of threads. All those threads keep listening for incoming TCP connections on the socket, and accepting them. Once accepted, data can be read from the newly established TCP connection, parsed, and turned into a HTTP request. "WebContainer : 19" and "WebContainer : 23" you see in these logs are two such threads, belonging to the webcontainer thread pool. And yes, they are different. 
Whenever a thread gets its chance to execute, it shows up in the log; hence the interleaving you observe in these log statements. To handle this nondeterministic behaviour gracefully, and to make the application behave correctly, a solid grasp of multi-threading concepts is a must when building such applications. Upvotes: 1 <issue_comment>username_2: WebContainer : 23 is thread A, and WebContainer : 19 is thread B. A and B are two requests processed by two different WebContainer threads. Upvotes: 0
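As an aside on where the `[WebContainer : 23]` token comes from: it is the thread name, stamped into each line by the thread-name conversion character (`%t`) of the log4j layout. The exact configuration is not shown in the question, but a pattern producing lines of this shape would look something like:

```none
[%-5p] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %m%n
```

One caveat for interpreting the logs: container threads are pooled and reused, so lines sharing `[WebContainer : 19]` were written by the same thread, but over a long time window that thread may have served several different requests in sequence. Matching thread names alone therefore does not prove two log lines belong to the same request.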
2018/03/15
952
3,060
<issue_start>username_0: I'm trying to work which IP addresses comes from a certain company. There are 4 different groups of IP addresses with different ranges for each company. On sheet 1, in column (A) has all of the IP addresses listed and on sheet 2 there are 4 columns with each company name and their range of IP addresses (Columns A-D). On sheet 1, I have a column called IP Location (Column B). In this column I want it to list what the company that should be for using the list of IP addresses in column A. Below is the formula that I'm using: ``` =IF(A2,'IP Addresses'!A:A,IF"COMPANY NAME",A2,'IP Addresses'!A:A,IF"COMPANY NAME2",A2,'IP Addresses'!B:B,IF"COMPANY NAME3",A2,'IP Addresses'!C:C,IF"COMPANY NAME4", N/A")))) ``` In this formula if the ip location isn't listed I want it to print out N/A. On the second sheet, I have IP addresses with 4 columns, for example: ``` Company Name: Company Name2: Company Name3: Company Name4: 192.168.3.11 172.16.31.10 192.168.3.11 192.168.127.12 172.16.58.3 172.16.31.10 192.168.3.11 172.16.17.32 ``` If it's an IP address in company Name column 1 I want it to print out that company name, if it's an ip address in Company Name2 in Column 2 I want it to print out Company Name2 etc. If anyone could advise where I am going wrong, I'd really appreciate it. Thank you<issue_comment>username_1: Try a series of MATCH statements that work progressively through the columns. ``` =if(isnumber(MATCH(A2, 'IP Addresses'!A:A, 0)), 'IP Addresses'!A$1, if(isnumber(MATCH(A2, 'IP Addresses'!B:B, 0)), 'IP Addresses'!B$1, if(isnumber(MATCH(A2, 'IP Addresses'!C:C, 0)), 'IP Addresses'!C$1, if(isnumber(MATCH(A2, 'IP Addresses'!D:D, 0)), 'IP Addresses'!D$1, "N/A")))) ``` [![enter image description here](https://i.stack.imgur.com/W9hVD.png)](https://i.stack.imgur.com/W9hVD.png) Upvotes: 3 [selected_answer]<issue_comment>username_2: I used `Tables` with structured references, but you can replace them with absolute or named ranges. 
By using structured references, your data table reference will auto-expand if you add columns or rows. It also assumes that the data table starts in column 1. If it starts in a different column, the formula will need to be adjusted. This formula must be entered as an array formula by holding down `ctrl` + `shift` while hitting `enter`. If you do this correctly, Excel will place braces `{...}` around the formula. The formula returns the error `#N/A` but you can see in the error test part where to change it to the text string `N/A` if that is preferable to you. ``` B2: =IF(OR(MAX(--(A2=Table1))=0,A2=""), NA(),INDEX(Table1[#Headers],1,MAX((A2=Table1)*COLUMN(Table1)))) ``` [![enter image description here](https://i.stack.imgur.com/gwfEQ.png)](https://i.stack.imgur.com/gwfEQ.png) [![enter image description here](https://i.stack.imgur.com/PIWs6.png)](https://i.stack.imgur.com/PIWs6.png) Upvotes: 1
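The series-of-MATCH formula in the accepted answer boils down to: find the first company column whose values contain the IP, return that column's header, otherwise return "N/A". For readers who want to sanity-check that logic outside Excel, here is the same rule sketched in Python (column names and sample IPs are taken from the question):

```python
def ip_company(ip, columns):
    """Return the header of the first column whose values contain `ip`,
    or "N/A" when no column matches -- the same rule as the nested
    MATCH/ISNUMBER formula above."""
    for header, ips in columns.items():
        if ip in ips:
            return header
    return "N/A"

columns = {
    "Company Name":  ["192.168.3.11", "172.16.58.3"],
    "Company Name2": ["172.16.31.10", "172.16.31.10"],
    "Company Name3": ["192.168.3.11", "192.168.3.11"],
    "Company Name4": ["192.168.127.12", "172.16.17.32"],
}

print(ip_company("172.16.58.3", columns))  # → Company Name
print(ip_company("10.0.0.1", columns))     # → N/A
```

Like the formula, the first matching column wins when an IP appears in more than one column (as `192.168.3.11` does in the sample data).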
2018/03/15
355
1,170
<issue_start>username_0: This is something that's been bugging me for a while... So far I've been working around it by committing and pushing via terminal, but it would be nice to get it sorted. In Xcode, there are two remotes listed in the source control navigator of my project. [![](https://i.stack.imgur.com/euvKE.png)](https://i.stack.imgur.com/euvKE.png) In terminal, there aren't: [![](https://i.stack.imgur.com/8r1ab.png)](https://i.stack.imgur.com/8r1ab.png) Where is Xcode storing this remote so I can remove it? I've seen the answer [here](https://stackoverflow.com/questions/36363328/how-do-i-remove-a-second-remote-git-repo-from-an-xcode-project) before, but it doesn't seem to be stored in the same way in Xcode 9.<issue_comment>username_1: If you don't have repo on remote and only showing in XCode then you can try this so that you won't lose any data, Where you can delete the remote it self. Not sure if it will work or not. [![enter image description here](https://i.stack.imgur.com/hay9S.png)](https://i.stack.imgur.com/hay9S.png) Upvotes: 0 <issue_comment>username_2: Solution was to manually delete derived data. Upvotes: 4 [selected_answer]
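For anyone wanting the concrete steps behind "manually delete derived data": quit Xcode, remove the DerivedData folder, and reopen the project. A shell sketch, assuming the default DerivedData location (an assumption; check Xcode > Preferences > Locations if you have customised it):

```shell
# Default DerivedData path; adjust if you have changed it in
# Xcode > Preferences > Locations.
DERIVED_DATA="${HOME}/Library/Developer/Xcode/DerivedData"
rm -rf "${DERIVED_DATA}"
echo "removed ${DERIVED_DATA}"
```

Xcode will rebuild DerivedData (indexes, build products, and per-project state) the next time the project is opened.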
2018/03/15
1,805
6,781
<issue_start>username_0: In natural language it is quite common to have quotation marks within quotation marks, as in, for example: (1) "This sentence is not a "sentence" " (2) "‘S’ is true in language L iff S" (2) is in fact important for any Tarskian truth-definition (Tarski argued the truth of every instance of (2) was necessary for the material adequacy of a truth definition). Other examples arise in the study of paradoxes; famously, in [Quine's paradox](https://en.wikipedia.org/wiki/Quine%27s_paradox) we are led to consider (3): (3) "The statement "'yields falsehood when preceded by its quotation' yields falsehood when preceded by its quotation" is false" I am interested in how quotations within quotations can be implemented in Haskell. What I want to know is how, in Haskell, with appropriate technical trickery, we can represent (1)-(3) as they are written above and as they would appear in a book (i.e, not modified orthographically in any way), without receiving an error message. A simple type search for (3), `:t ("The statement "'yields falsehood when preceded by its quotation' yields falsehood when preceded by its quotation" is false")` yields (no pun intended!) ``` :1:18: error: • Syntax error on 'yields Perhaps you intended to use TemplateHaskell or TemplateHaskellQuotes • In the Template Haskell quotation 'yields ```<issue_comment>username_1: As had been initially hinted in a comment below the question, but which was not clear to me at the time (although it may have been utterly obvious to others), there is a simple way of representing (1)-(3) above using \ and `putStrLn`. 
We just have to use the following code: ``` putStrLn "This sentence is not a \"sentence\" " This sentence is not a "sentence" putStrLn "\"S\" is true in language L iff S" "S" is true in language L iff S putStrLn "The statement \"'yields falsehood when preceded by its quotation' yields falsehood when preceded by its quotation\" is false" The statement "'yields falsehood when preceded by its quotation' yields falsehood when preceded by its quotation" is false ``` Upvotes: 1 <issue_comment>username_2: This answer will mostly repeat what is said in the comments (and in your own answer), but with some elaboration that might be useful. In what follows, I will abuse keyboard input formatting, as in `case`, when I want to talk about a string as it really is. You may take "as it really is" to mean a certain sequence of bytes in memory or something more Platonic -- feel free to pick your favourite ontology. What matters is that, if I want to refer to `case` in a Haskell program, I need some way to specify I'm really talking about a string, as Haskell source code is literally a string of characters which is interpreted in a certain manner -- and, in particular, in that interpretation `case` would be taken as a keyword introducing a case-statement. The solution adopted by Haskell syntax is using double quotes to introduce string literals, so that `"case"` represents `case`. That's all well and good, but what about `This sentence is not a "sentence"`? The problem is that Haskell syntax confiscates, among others, the character `"`, giving it special meaning, which means `"This sentence is not a "sentence""` doesn't actually represent `This sentence is not a "sentence"` (rather, it represents `This sentence is not a` applied, like a function, to a variable called `sentence` and the empty string -- which is approximately always nonsense). 
The solution to this problem is confiscating `\` in order to use it as an *escape character* which, when used within a string literal, indicates the following character should be taken literally, and not according to its usual role in Haskell syntax (that is not the only way `\` is used as an escape character, but let's not get bogged down in detail). That being so, `This sentence is not a "sentence"` is represented by `"This sentence is not a \"sentence\""`, and all is well in the World. All of the above probably is, after the long discussion in the comments, very obvious. There are a few interesting things to add, though, about `putStrLn`, `show` and `print`. Starting with `putStrLn`, its role can be abstractly described as *drawing in your screen* something that looks sufficiently like how you would represent a string if you weren't concerned about the mundane constraints of Haskell syntax. There is no fundamental difference between `putStrLn` and an hypothetical `cartoonifyStrLn`, which would draw a pink Comic Sans string all over your desktop, or even a `seeStrLn`, which would plant a visual impression of the string directly in your retina. The point is that `putStrLn "This sentence is not a \"sentence\""` represents `This sentence is not a "sentence"` in a way that is *completely external to Haskell*. This fact is, fortunately enough, expressed by the type system: ``` GHCi> -- Actually a string. GHCi> :t "This sentence is not a \"sentence\"" "This sentence is not a \"sentence\"" :: [Char] GHCi> -- Note how there won't be any mention of String (or [Char]). GHCi> :t putStrLn "This sentence is not a \"sentence\"" putStrLn "This sentence is not a \"sentence\"" :: IO () ``` Values with `IO` in their types stand in for things that are, in part or wholly, external to Haskell. `IO` makes it possible to refer to and make use of such things without losing track of this crucial fact about them. 
The `show` function produces string -- or, more precisely, `String` -- representations of Haskell values ``` GHCi> show 99 "99" GHCi> show (Just 99) "Just 99" ``` That however, is an incomplete description of what `show` does. The `String` it produces is supposed to represent, as a string literal, a valid representation of the value it is given according to Haskell syntax. That is why `show`ing a `String` produces a string with extra quotes tacked on... ``` GHCi> show "case" "\"case\"" ``` ... not to mention all the extra backslashes: ``` GHCi> show "This sentence is not a \"sentence\"" "\"This sentence is not a \\\"sentence\\\"\"" ``` As for `print`, it is just `show` followed by `putStrLn`, so it tends to do what you expect for most types... ``` GHCi> print 99 99 GHCi> print (Just 99) Just 99 ``` ... but not for `String`s -- in their case, you should just use `putStrLn` instead: ``` GHCi> print "This sentence is not a \"sentence\"" "This sentence is not a \"sentence\"" ``` P.S.: Possibly interesting extra reading about technicalities: [*How to convert Unicode Escape Sequence to Unicode String in Haskell*](https://stackoverflow.com/q/24953125/2751851) and [*Semantics of show w.r.t. escape characters*](https://stackoverflow.com/q/48846437/2751851). P.P.S.: I welcome criticism of my usage of quotation in this answer. Upvotes: 2
2018/03/15
1,938
5,934
<issue_start>username_0: C# 7.2 introduced [reference semantics with value-types](https://learn.microsoft.com/en-us/dotnet/csharp/reference-semantics-with-value-types), and alongside this Microsoft have developed types like [`Span` and `ReadOnlySpan`](https://github.com/dotnet/corefxlab/blob/master/docs/specs/span.md) to potentially improve performance for apps that need to perform operations on contiguous regions of memory. According to the docs, one way of potentially improving performance is to pass immutable structs by reference by adding an `in` modifier to parameters of those types: ``` void PerformOperation(in SomeReadOnlyStruct value) { } ``` What I'm wondering is whether I ought to do this with types like `ReadOnlySpan`. Should I be declaring methods that accept a read-only span like this: ``` void PerformOperation(in ReadOnlySpan value) { } ``` or simply like: ``` void PerformOperation(ReadOnlySpan value) { } ``` Will the former offer any performance benefits over the latter? I couldn't find any documentation that explicitly advises in either direction, but I did find [this example](https://blogs.msdn.microsoft.com/mazhou/2018/03/02/c-7-series-part-9-ref-structs/) where they demonstrated a method that accepts a `ReadOnlySpan` and *did not* use the `in` modifier.<issue_comment>username_1: A key factor here is size; `Span` / `ReadOnlySpan` are deliberately *very small*, so the difference between a span and a reference-to-a-span is tiny. One key usage for `in` here is for *larger* readonly structs, to avoid a significant stack copy; note that there's a trade-off: the `in` is really a `ref`, so you're adding an extra layer of indirection to all access, unless the JIT sees what you're doing and works some voodoo. And of course: if the type *doesn't* declare itself as `readonly` then a stack copy is automatically added *before* the call to preserve the semantics. Upvotes: 4 [selected_answer]<issue_comment>username_2: Marc's answer seems spot-on. 
I'm posting this just to supplement his own answer with some benchmarks that confirm what he's saying. I set up the following benchmark class: ``` public class SpanBenchmarks { private const int Iterations = 100_000; private byte[] _data; private LargeStruct _control; [GlobalSetup] public void GlobalSetup() { _data = new byte[1000]; new Random().NextBytes(_data); _control = new LargeStruct(_data[0], _data[1], _data[2], _data[3], _data[4], _data[5]); } [Benchmark] public void PassSpanByValue() { for (int i = 0; i < Iterations; i++) AcceptSpanByValue(_data); } [Benchmark] public void PassSpanByRef() { for (int i = 0; i < Iterations; i++) AcceptSpanByRef(_data); } [Benchmark] public void PassLargeStructByValue() { for (int i = 0; i < Iterations; i++) AcceptLargeStructByValue(_control); } [Benchmark] public void PassLargeStructByRef() { for (int i = 0; i < Iterations; i++) AcceptLargeStructByRef(_control); } private int AcceptSpanByValue(ReadOnlySpan span) => span.Length; private int AcceptSpanByRef(in ReadOnlySpan span) => span.Length; private decimal AcceptLargeStructByValue(LargeStruct largeStruct) => largeStruct.A; private decimal AcceptLargeStructByRef(in LargeStruct largeStruct) => largeStruct.A; private readonly struct LargeStruct { public LargeStruct(decimal a, decimal b, decimal c, decimal d, decimal e, decimal f) { A = a; B = b; C = c; D = d; E = e; F = f; } public decimal A { get; } public decimal B { get; } public decimal C { get; } public decimal D { get; } public decimal E { get; } public decimal F { get; } } } ``` I repeated the same benchmark job three times with this and got similar results each time: ```none BenchmarkDotNet=v0.10.13, OS=Windows 10 Redstone 3 [1709, Fall Creators Update] (10.0.16299.248) Intel Core i7-4790 CPU 3.60GHz (Haswell), 1 CPU, 8 logical cores and 4 physical cores Frequency=3507500 Hz, Resolution=285.1033 ns, Timer=TSC .NET Core SDK=2.1.300-preview2-008354 [Host] : .NET Core 2.0.6 (CoreCLR 4.6.26212.01, CoreFX 4.6.26212.01), 
64bit RyuJIT
  DefaultJob : .NET Core 2.0.6 (CoreCLR 4.6.26212.01, CoreFX 4.6.26212.01), 64bit RyuJIT

                 Method |      Mean |     Error |    StdDev |
----------------------- |----------:|----------:|----------:|
        PassSpanByValue | 641.71 us | 0.1758 us | 0.1644 us |
          PassSpanByRef | 642.62 us | 0.1524 us | 0.1190 us |
 PassLargeStructByValue | 390.78 us | 0.2633 us | 0.2463 us |
   PassLargeStructByRef |  35.33 us | 0.3446 us | 0.3055 us |
``` Using a large struct as a control, I confirm there are significant performance advantages when passing them by reference rather than by value. However, there are no significant performance differences between passing a `Span` by reference or value. --- **September 2019 Update** Out of curiosity, I ran the same benchmarks again using .NET Core 2.2. There seem to have been some clever optimisations introduced since last time to reduce the overhead of implicitly casting an array to a `Span`: ```none
BenchmarkDotNet=v0.11.5, OS=Windows 10.0.17134.984 (1803/April2018Update/Redstone4)
Intel Core i7-4700HQ CPU 2.40GHz (Haswell), 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=2.2.106
  [Host]     : .NET Core 2.2.4 (CoreCLR 4.6.27521.02, CoreFX 4.6.27521.01), 64bit RyuJIT
  DefaultJob : .NET Core 2.2.4 (CoreCLR 4.6.27521.02, CoreFX 4.6.27521.01), 64bit RyuJIT

|                 Method |      Mean |     Error |    StdDev |
|----------------------- |----------:|----------:|----------:|
|        PassSpanByValue |  39.78 us | 0.1873 us | 0.1660 us |
|          PassSpanByRef |  41.21 us | 0.2618 us | 0.2186 us |
| PassLargeStructByValue | 475.41 us | 1.3104 us | 1.0943 us |
|   PassLargeStructByRef |  39.75 us | 0.1001 us | 0.0937 us |
``` Upvotes: 4
2018/03/15
2,689
8,211
<issue_start>username_0: I am new to Hadoop and learning few mapreduce program. I was trying to read a CSV file using a Mapper class. The CSV contains header and the values till 20 columns. The strange thing is while reading the CSV file the program was working fine till the point where I am reading the 17th index but getting ArrayOutOfBondException. I am not able to understand even though 18th index is present it is throwing the exception. Here is my code: ``` package org.apress.prohadoop.c3; import java.io.IOException; import java.util.Iterator; import org.apache.hadoop.fs.Path; import org.apache.hadoop.io.IntWritable; import org.apache.hadoop.io.LongWritable; import org.apache.hadoop.io.Text; import org.apache.hadoop.mapred.FileInputFormat; import org.apache.hadoop.mapred.FileOutputFormat; import org.apache.hadoop.mapred.JobClient; import org.apache.hadoop.mapred.JobConf; import org.apache.hadoop.mapred.MapReduceBase; import org.apache.hadoop.mapred.Mapper; import org.apache.hadoop.mapred.OutputCollector; import org.apache.hadoop.mapred.Reducer; import org.apache.hadoop.mapred.Reporter; import org.apache.hadoop.mapred.TextInputFormat; import org.apache.hadoop.mapred.TextOutputFormat; import org.apache.log4j.Logger; import org.apress.prohadoop.c3.CSVFileProcessor.LastFmConstants; public class CSVFileProcessorNewAPI { protected static Logger logger = Logger.getLogger(CSVFileProcessorNewAPI.class); public class LastFmConstants { public static final int match_id = 0; public static final int inning_id= 1; public static final int batting_team = 2; public static final int bowling_team = 3; public static final int over = 4; public static final int ball = 5; public static final int batsman = 6; public static final int non_striker = 7; public static final int bowler = 8; public static final int is_super_over = 9; public static final int wide_runs = 10; public static final int total_runs_inOver=17; public static final int player_dismissed=18; public static final int 
dismissal_kind=19; } public static class MyMapper extends MapReduceBase implements Mapper { public void map(LongWritable key, Text value, OutputCollector output, Reporter reporter) throws IOException { logger.info("Vibhas Logger Started"); try { if ((key).get() == 0 && value.toString().contains("header") /\*Some condition satisfying it is header\*/) return; } catch (Exception e) { e.printStackTrace(); } String[] parts = value.toString().split("[,]"); String inning\_id=parts[LastFmConstants.inning\_id]; String match\_id\_=parts[LastFmConstants.match\_id]; String batting\_team=parts[LastFmConstants.batting\_team]; String bowling\_team=parts[LastFmConstants.bowling\_team]; String over=parts[LastFmConstants.over]; String ball=parts[LastFmConstants.ball]; String batsman=parts[LastFmConstants.batsman]; String non\_striker=parts[LastFmConstants.non\_striker]; String bowler=parts[LastFmConstants.bowler]; String wide\_runs=parts[LastFmConstants.wide\_runs]; String total\_runs\_inOver=parts[LastFmConstants.total\_runs\_inOver]; String player\_Dismissed=parts[LastFmConstants.player\_dismissed]; String dismissal\_kind=parts[LastFmConstants.dismissal\_kind]; if(!bowler.isEmpty() && bowler.trim().contains("Chahal") && dismissal\_kind.equalsIgnoreCase("S Dhawan")){ int runs=Integer.parseInt(total\_runs\_inOver); output.collect(new Text("Match-->"+match\_id\_), new IntWritable(runs)); } } } public static class MyReducer extends MapReduceBase implements Reducer { public void reduce(Text key, Iterator values, OutputCollector output, Reporter reporter) throws IOException { logger.info("Vibhas Reducer Started"); int sum = 0; while (values.hasNext()) { sum += values.next().get(); } output.collect(key, new IntWritable(sum)); } } public static void main(String[] args) throws Exception { JobConf conf = new JobConf(CSVFileProcessorNewAPI.class); conf.setJobName("CSVFileProcessorNewAPI Job"); conf.setOutputKeyClass(Text.class); conf.setOutputValueClass(IntWritable.class); 
conf.setMapperClass(MyMapper.class); conf.setCombinerClass(MyReducer.class); conf.setReducerClass(MyReducer.class); conf.setNumReduceTasks(1); conf.setInputFormat(TextInputFormat.class); conf.setOutputFormat(TextOutputFormat.class); FileInputFormat.setInputPaths(conf, new Path(args[0])); FileOutputFormat.setOutputPath(conf, new Path(args[1])); JobClient.runJob(conf); } } ``` CSV File format: ``` match_id,inning,batting_team,bowling_team,over,ball,batsman,non_striker,bowler,is_super_over,wide_runs,bye_runs,legbye_runs,noball_runs,penalty_runs,batsman_runs,extra_runs,total_runs,player_dismissed,dismissal_kind,fielder 1,1,<NAME>,Royal Challengers Bangalore,1,1,<NAME>,S Dhawan,TS Mills,0,0,0,0,0,0,0,0,0,,, 1,1,<NAME>,Royal Challengers Bangalore,1,2,<NAME>,S Dhawan,TS Mills,0,0,0,0,0,0,0,0,0,,, ``` Exception: ``` hadoop jar /home/cloudera/Downloads/pro-apache-hadoop-master/prohadoop.jar org.apress.prohadoop.c3.CSVFileProcessorNewAPI /Input/test.csv /outPutCSV 18/03/15 02:19:19 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 18/03/15 02:19:20 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 18/03/15 02:19:20 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
18/03/15 02:19:20 INFO mapred.FileInputFormat: Total input paths to process : 1 18/03/15 02:19:20 INFO mapreduce.JobSubmitter: number of splits:2 18/03/15 02:19:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1520413460063_0042 18/03/15 02:19:21 INFO impl.YarnClientImpl: Submitted application application_1520413460063_0042 18/03/15 02:19:21 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1520413460063_0042/ 18/03/15 02:19:21 INFO mapreduce.Job: Running job: job_1520413460063_0042 18/03/15 02:19:27 INFO mapreduce.Job: Job job_1520413460063_0042 running in uber mode : false 18/03/15 02:19:27 INFO mapreduce.Job: map 0% reduce 0% 18/03/15 02:19:43 INFO mapreduce.Job: map 50% reduce 0% 18/03/15 02:19:45 INFO mapreduce.Job: Task Id : attempt_1520413460063_0042_m_000001_0, Status : FAILED Error: java.lang.ArrayIndexOutOfBoundsException: 18 at org.apress.prohadoop.c3.CSVFileProcessorNewAPI$MyMapper.map(CSVFileProcessorNewAPI.java:77) at org.apress.prohadoop.c3.CSVFileProcessorNewAPI$MyMapper.map(CSVFileProcessorNewAPI.java:1) at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:459) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) ``` Kindly help me to solve this issue.<issue_comment>username_1: It is dangerous to do `String[] parts = value.toString().split("[,]");` and then assume that every single record you get will have the correct number of columns. Especially when dealing with large amounts of data, the likelihood of receiving "dirty" data is something that cannot be ignored. 
It only takes one bad row and your entire job dies. You should instead do a check (note the sample data has 21 columns, counting the trailing empty ones): ``` String[] parts = value.toString().split(",", -1); if (parts != null && parts.length == 21) { //your normal logic } else { logger.warn("Unparseable record identified: " + value); } ``` Upvotes: 1 <issue_comment>username_2: You should split the `String` with the command below, and you will be all set: ``` String[] parts = value.toString().split(",", -1); ``` Passing `-1` as the second parameter ensures that the trailing empty strings at the end of the line are retained, so `parts[18]` and `parts[19]` exist even when those columns are empty. Upvotes: -1 [selected_answer]
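To make the failure mode concrete, here is a small self-contained sketch (class name and sample row are illustrative, not part of the original job) showing how `split(",")` silently drops the trailing empty columns that `split(",", -1)` keeps:

```java
public class SplitDemo {
    // Returns the number of fields the given split strategy produces.
    static int fieldCount(String line, boolean keepTrailingEmpties) {
        return keepTrailingEmpties
                ? line.split(",", -1).length   // limit -1: trailing empty strings kept
                : line.split(",").length;      // default: trailing empty strings removed
    }

    public static void main(String[] args) {
        // Shaped like the question's data: 21 columns, the last three empty.
        String row = "1,1,Sunrisers Hyderabad,RCB,1,1,DA Warner,S Dhawan,TS Mills,"
                + "0,0,0,0,0,0,0,0,0,,,";
        System.out.println(fieldCount(row, false)); // 18 -> parts[18] throws AIOOBE
        System.out.println(fieldCount(row, true));  // 21 -> indices up to 20 all exist
    }
}
```

Java's `split(String)` removes trailing empty strings by default, which is exactly why `parts[18]` fails on rows that end in `,,,`.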
2018/03/15
1,042
3,996
<issue_start>username_0: Is there any way of seeing the full logcat log? I have an error in my app but I don't have time to see it because lots of text appears after the error and goes past the limit of lines in the logcat. So even if I go to the top the error is already gone.<issue_comment>username_1: Ok, so while I was doing some testing I found the solution. I did it by selecting so it only shows errors, that way I don't see the other messages. Edit: To change to error mode select LogCat at the bottom of the screen (or press Alt + 6) and then in the third drop down select the "Error" option. [![enter image description here](https://i.stack.imgur.com/EpBn2.png)](https://i.stack.imgur.com/EpBn2.png) Upvotes: 1 <issue_comment>username_2: This is another idea to store your logcat detail for testing purpose Try below code ``` public static void extractLogToFile() { PackageManager manager = getContext().getPackageManager(); PackageInfo info = null; try { info = manager.getPackageInfo(getContext().getPackageName(), 0); } catch (NameNotFoundException e2) { } String model = Build.MODEL; if (!model.startsWith(Build.MANUFACTURER)) model = Build.MANUFACTURER + " " + model; String filePath = Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + "your App name" + File.separator; File folder = new File(filePath); if (!folder.exists()) { folder.mkdir(); } String fileName = "Logcat-" + (new SimpleDateFormat("dd-MM-yyyy").format(Calendar.getInstance().getTime())) + ".txt"; File file = new File(folder, fileName); InputStreamReader reader = null; FileWriter writer = null; try { // For Android 4.0 and earlier, you will get all app's log output, // so filter it to // mostly limit it to your app's output. In later versions, the // filtering isn't needed. String cmd = (Build.VERSION.SDK_INT <= Build.VERSION_CODES.ICE_CREAM_SANDWICH_MR1) ? 
"logcat -d -v time MyApp:v dalvikvm:v System.err:v *:s" : "logcat -d -v time"; // get input stream Process process = Runtime.getRuntime().exec(cmd); reader = new InputStreamReader(process.getInputStream()); // write output stream writer = new FileWriter(file, true); writer.write("Android version: " + Build.VERSION.SDK_INT + "\n"); writer.write("Device: " + model + "\n"); writer.write("App version: " + (info == null ? "(null)" : info.versionCode) + "\n"); char[] buffer = new char[10000]; do { int n = reader.read(buffer, 0, buffer.length); if (n == -1) break; writer.write(buffer, 0, n); } while (true); reader.close(); writer.close(); } catch (IOException e) { if (writer != null) try { writer.close(); } catch (IOException e1) { } if (reader != null) try { reader.close(); } catch (IOException e1) { } } } ``` hope this will work By this code, you can store logcat detail in your phone storage. make sure storage permission mentioned in your manifest file as well as implement runtime permission. Upvotes: 0 <issue_comment>username_3: You should use the filter in Android Studio. Here are some options you have: 1. Limit the output only to your app: [![enter image description here](https://i.stack.imgur.com/CfObX.png)](https://i.stack.imgur.com/CfObX.png) 2. Limit the output only to error messages: [![enter image description here](https://i.stack.imgur.com/ZDrtV.png)](https://i.stack.imgur.com/ZDrtV.png) 3. Add custom TAG to the place where the error is thrown and filter by that: `Log.e(custom_tag, message, throwable);` Also, if you don't want the logcat to be scrolled automatically, just select a piece of output in the middle (or scroll up). Upvotes: 3 [selected_answer]
2018/03/15
494
1,849
<issue_start>username_0: The way we are used to debugging a running "IISExpress" process in .NET Framework is by attaching to the process named "iisexpress.exe" in Visual Studio. But attaching to the iisexpress process does not work in .NET Core. It shows the message: "The breakpoint will not currently be hit. No symbols have been loaded for this document."<issue_comment>username_1: To debug a running iisexpress instance in .NET Core, you have to attach to the "dotnet.exe" process in Visual Studio. \*Update 1: If dotnet.exe does not work for you, check "{projectName}.exe" as well. \*Update 2: Attaching "iisexpress.exe" works with .NET 5. Upvotes: 6 [selected_answer]<issue_comment>username_2: In my case, using ASP.NET Core 2.2 with the `InProcess` hosting model, the accepted answer doesn't work. Instead, I choose from the menu `Debug` > `Attach to process`, then select the **"Show processes from all users"** checkbox (below the list of processes) and select **"iisexpress.exe"**. It works properly and hits the breakpoints. Upvotes: 1 <issue_comment>username_3: > > Visual Studio does not load all DLLs (I don't know why). Try this: > > > 1) First attach to w3wp.exe (for InProcess hosting) or dotnet.exe (for OutOfProcess hosting). 2) Load symbols manually: go to Debug > Windows > Modules (or try the shortcut Ctrl + Alt + U), search for and select the DLL (and dependent DLLs) you want to debug, then right click > Load Symbols. Upvotes: 1 <issue_comment>username_4: In Visual Studio 2019 using "dotnet watch run" I could no longer attach to the "dotnet.exe" process; I would get the "symbols could not be loaded" error. It took me a while to figure out, but instead I attached to [project name].exe, which is in the list of running processes, and I could successfully attach the debugger with symbols loaded and debug. Upvotes: 4
2018/03/15
563
2,044
<issue_start>username_0: What's the difference between storing files (images) in the `public/images` folder and storing them in `storage/app/public`?<issue_comment>username_1: `public/images` is inside the webroot directory. This means it can be accessed directly by a web browser (Mozilla, Chrome, etc.). `storage/app` is where your application stores its own files, and nothing there is reachable from the web on its own; `storage/app/public` is the subdirectory meant for files you do want to expose, via the `storage:link` symlink. **Where to place my files?** Rule of thumb: if you need to control who can view those files, put them under `storage/app` and serve them through your application; otherwise put them in `public/images` (or in `storage/app/public` with the symlink). **EDIT** As other answers pointed out, any user can see files in the `public` webroot directory, even users who are not logged in. Upvotes: 3 <issue_comment>username_2: A similar question was asked [here](https://laracasts.com/discuss/channels/laravel/storage-v-public-folder) and was answered stating: > > public is a "WEBROOT" directory. It consists of files which can be accessed from a browser. Your `index.php` file, which takes the role of your entry point, is there, as are your `css` and `javascript` files. > > > `storage` is a folder for cache, logs, etc. > > > Upvotes: 2 <issue_comment>username_3: Public folder means files will be publicly accessible. For example, an image stored in `public/images/my-image.jpeg` can be viewed by anyone by going to `mysite.com/images/my-image.jpeg`. However, files stored in the `storage` directory are only available to your app. Laravel has a `php artisan storage:link` command that adds a symlink in `public` pointing to `storage/app/public`. The reason for this is that your `storage` may not be your local filesystem, but rather an Amazon S3 bucket or a Rackspace CDN (or anything else). You will need to set up your filesystem configurations by following the docs: <https://laravel.com/docs/5.6/filesystem>. Once this is done you can get/store files to/from the storage place rather than have everything on your server.
There are 2 helper methods for `public` and `storage` to show files: storage: `storage_path('my-file.jpg')` public: `asset('my-file.jpg')` Upvotes: 6 [selected_answer]
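Under the hood, `php artisan storage:link` simply creates a symlink from `public/storage` to `storage/app/public`. A plain-shell sketch of the same idea (directory names are Laravel's defaults; the `demo/` prefix and file name are only there to keep the illustration self-contained):

```shell
set -e
# Recreate Laravel's default layout
mkdir -p demo/storage/app/public demo/public

# What storage:link effectively does: public/storage -> ../storage/app/public
ln -s ../storage/app/public demo/public/storage

# A file saved under storage/app/public is now reachable through the webroot
echo "hello" > demo/storage/app/public/avatar.txt
cat demo/public/storage/avatar.txt   # prints: hello
```

This is why files under `storage/app/public` become web-visible after linking, while the rest of `storage/` stays private to the application.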
2018/03/15
1,265
4,717
<issue_start>username_0: I have created a new project including c++ ndk. So it automatically created a `CMakeLists.txt` file and a `cpp` directory. But When I created a new directory named `jni` and tried to put `test.c`, `Android.mk` and `Application.mk` files on it, I got this error message when I wanted to sync my project: > > This file is not part of the proejct.please include it to appropriaten > build file > > > Android.mk : ``` LOCAL_PATH := $(call my-dir) include $(CLEAR_VARS) LOCAL_MODULE := ndktest LOCAL_SRC_FILES := test.c ``` Application.mk : ``` APP_ABI := all ``` test.c : ``` #include #include extern "C" JNIEXPORT jstring JNICALL Java\_com\_example\_m\_ndktest\_MainActivity\_testJNI(JNIEnv \*env, jobject instance) { return env->NewStringUTF(hello.c\_str()); } JNIEXPORT jstring JNICALL Java\_com\_example\_m\_ndktest\_MainActivity\_testJNI2(JNIEnv \*env, jobject instance) { return (\*env)->NewStringUTF(env, "hellooooo"); } ``` How should I include my `.mk` files inside the `CMakeLists.txt` ? CMakeLists.txt : ``` # For more information about using CMake with Android Studio, read the # documentation: https://d.android.com/studio/projects/add-native-code.html # Sets the minimum version of CMake required to build the native library. cmake_minimum_required(VERSION 3.4.1) # Creates and names a library, sets it as either STATIC # or SHARED, and provides the relative paths to its source code. # You can define multiple libraries, and CMake builds them for you. # Gradle automatically packages shared libraries with your APK. add_library( # Sets the name of the library. native-lib # Sets the library as a shared library. SHARED # Provides a relative path to your source file(s). src/main/cpp/native-lib.cpp ) # Searches for a specified prebuilt library and stores the path as a # variable. Because CMake includes system libraries in the search path by # default, you only need to specify the name of the public NDK library # you want to add. 
CMake verifies that the library exists before # completing its build. find_library( # Sets the name of the path variable. log-lib # Specifies the name of the NDK library that # you want CMake to locate. log ) # Specifies libraries CMake should link to your target library. You # can link multiple libraries, such as libraries you define in this # build script, prebuilt third-party libraries, or system libraries. target_link_libraries( # Specifies the target library. native-lib # Links the target library to the log library # included in the NDK. ${log-lib} ) ``` [![enter image description here](https://i.stack.imgur.com/sOk1K.png)](https://i.stack.imgur.com/sOk1K.png) > > \*\*\*Also I have a question, where can I find the tutorial about the STRANGE syntax that we should write in Android-ndk? (I mean the test.c > syntax that I copied from the default `native-lib.c`) > > ><issue_comment>username_1: Android Studio has the concept of external native builds. And for these external native builds, you have [two](https://developer.android.com/studio/projects/add-native-code.html) options: * ndk-build * CMake CMake seems to be the more modern one, so is probably more future-proof. I advice using that one. This means that you write a `CMakeLists.txt` file (which is typically in your `jni/` directory.) You no longer need any of the .mk files, they are used by the ndk-build external native build, not the CMake one. To indicate which of the external native build systems you use, you use the app's gradle file. ``` android { defaultConfig { externalNativeBuild { cmake { cppFlags "..." arguments "..." } } } } ``` In the CMakeLists.txt file, you describe how to build the native .so library. This includes listing all the `.c` and `.cpp` source files that are required to build it. This is done with CMake's [add\_library()](https://cmake.org/cmake/help/v3.0/command/add_library.html) macro. 
So to summarize, your new project definition will consist of gradle files and one (or more) CMakeLists.txt files, but you will no longer need any .mk files. Upvotes: 2 <issue_comment>username_2: > > If I want to build a library with ndk-build, like tesseract, shouldn't I fill the check box *including support for c++* at the project wizard? > > > The check box is not a problem: you simply edit your **build.gradle** script and replace the `cmake` block that the wizard inserted with `ndkBuild`. Upvotes: 1
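For reference, the `Android.mk` from the question translates to CMake along these lines. This is only a sketch: the source path assumes the `jni/` directory mentioned in the question, and the `log` linkage is copied from the template `CMakeLists.txt` shown above rather than from the original `.mk` file (which linked nothing):

```cmake
cmake_minimum_required(VERSION 3.4.1)

# Equivalent of LOCAL_MODULE := ndktest / LOCAL_SRC_FILES := test.c
add_library(ndktest SHARED src/main/jni/test.c)

# Matches the find_library/target_link_libraries pattern from the template
find_library(log-lib log)
target_link_libraries(ndktest ${log-lib})
```

You would then point `externalNativeBuild { cmake { path "CMakeLists.txt" } }` in the module's build.gradle at this file instead of referencing the `.mk` files.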
2018/03/15
380
1,208
<issue_start>username_0: I have an SCNText node displayed in the AR screen. I am trying to animate it using an SCNAction, but it is not working properly. Is there any other way to animate the node? Also, if I want to insert an image in place of the text, what should I do? I know I can't add it as a geometry. So is there any other way? Thank you ``` func displayText() { let textGeo = SCNText(string: "HELLO WORLD", extrusionDepth: 1.0) textGeo.firstMaterial?.diffuse.contents = UIColor.black let textNode = SCNNode(geometry: textGeo) textNode.position = SCNVector3(0,0.1,-0.5) textNode.scale = SCNVector3(0.002,0.002,0.002) self.sceneView.scene.rootNode.addChildNode(textNode) let action = SCNAction.fadeOpacity(by: 10.0, duration: 5.0) textNode.runAction(action) ```<issue_comment>username_1: An SCNNode's opacity ranges from 0 to 1. As your node starts with an opacity of 1, fading the opacity by 10 will have no effect. If you want the node to fade out, use a value of -1. Upvotes: 2 <issue_comment>username_2: Please set the opacity of your text node to 0 before you add it, and then run your opacity action with a value of 1.0, not 10.0. Upvotes: 0
2018/03/15
676
3,227
<issue_start>username_0: I have a javascript code that i use to change the contents of a jsp page dynamically.But the code i giving a **not a function error** when using the javascript function.Event i tried to use the jquery expression but i am getting error for that to.i am including the script at the end to the body. My code is : ``` $(document).ready(function() { alert("dom ready"); $(document).on('click' , '#generatePdf' , function(event){ event.preventDefault(); var intervalId ; var statusText = document.getElementById("progresstext"); var statusbar = document.getElementById("progressbar"); $.ajax({ url : 'ExportLogs' , type : 'GET' , success : function(response) { alert("calling process"); trackprogress(response); } , error : function() { alert("error"); } }); function trackprogress(response) { $.ajax({ url : 'LogExportingStatus' , type : 'GET' , success : function(response) { var val = parseInt(response); alert(val) ; if( val < 15) { statusText.textContent("completed"); // I get the error here and also for all subsuquent textContent function call. } else if(val < 30) { statusText.textContent("completed"); } else if(val < 60) { statusText.textContent("completed"); } else if(val == 100) { clearTimeout(intervalId); statusText.textContent("completed"); } statusbar.style.width = val + "%" ; }, error : function() { alert("error"); } }) ; intervalId = setTimeout(trackprogress , 500 ); } event.preventDefault(); }); }); ``` The error i am getting is ``` generatelist.js:55 Uncaught TypeError: statusText.textContent is not a function at Object.success (generatelist.js:55) at i (jquery.min.js:2) at Object.fireWith [as resolveWith] (jquery.min.js:2) at A (jquery.min.js:4) at XMLHttpRequest. 
(jquery.min.js:4) ``` I could not figure out what mistake i am making.Please any help.Thanks in advance<issue_comment>username_1: [textContent](https://developer.mozilla.org/en-US/docs/Web/API/Node/textContent) is not a function, but rather a property, the right code should be: `statusText.textContent = "completed";` Upvotes: 2 [selected_answer]<issue_comment>username_2: Use [innerText](https://developer.mozilla.org/en-US/docs/Web/API/Node/innerText) like, ``` statusText.innerText="completed"; ``` Upvotes: 0
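A minimal illustration of the property-versus-method distinction (a plain object stands in for the DOM element here, since this snippet runs outside a browser):

```javascript
// textContent is a data property, not a method: assign to it, don't call it.
const statusText = { textContent: "" }; // stand-in for document.getElementById(...)

statusText.textContent = "completed";   // correct usage
console.log(statusText.textContent);    // prints: completed

try {
  statusText.textContent("completed");  // the bug from the question
} catch (e) {
  console.log(e instanceof TypeError);  // prints: true ("... is not a function")
}
```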
2018/03/15
365
1,229
<issue_start>username_0: I'm having trouble creating a vector of vectors of objects. The goal is to get something like a chessboard, with each of the fields beeing an object which can then be accessed by using the vector. ``` #include using namespace std; class raster; class field{ int x, y; public: field(int x, int y) : x(x), y(y) {}; friend class raster; }; class raster{ const size\_t s=12; vector > game\_row(s,vectorgame\_column(s,0)); //here I get the second error public: friend class field; }; ``` So I want the vector of vectors to be a member of a different class, but each time I compile using g++ it says error: 's' is not a type error: expected ',' or '...' before '(' token Sorry if i mixed up the formatting of the post, I'm pretty new to c++ and this is my first post here.
2018/03/15
405
1,352
<issue_start>username_0: I have two versions of python ( 2.7 and 3.6.4) installed on `ubuntu 16.04`. Now I only want to work on 3.6. I want to install few packages like `pandas` and `psycopg2`. I tried to install it with `pip3 install pandas` and it shows successfully installed. However, when I am trying to import from python 3.6 it is throwing me this error: ``` ModuleNotFoundError: No module named 'pandas'. ``` Now the question is how to tell installer (`pip/pip3`) to install all packages under python3.6 so that I can import the installed packages?<issue_comment>username_1: Have you tried virtual environments? [revelant documentation](http://docs.python-guide.org/en/latest/dev/virtualenvs/) Create a python3.6 virtual environment and activate it and work on that Upvotes: 0 <issue_comment>username_2: You can install it for python 3.6 using: ``` pip3.6 install pandas ``` But a better way to work with different versions of python would be to use virtual environments. Look into anaconda: <https://conda.io/docs/> or virtualenv: <http://docs.python-guide.org/en/latest/dev/virtualenvs/> Upvotes: 0 <issue_comment>username_3: `python3 -m pip install pandas` This is [advice](https://twitter.com/raymondh/status/968634031842603008) from [<NAME>](https://stackoverflow.com/users/1001643/raymond-hettinger). Upvotes: 1
2018/03/15
360
1,176
<issue_start>username_0: I'm implementing a button to refresh a value on my HTML page and I put a refresh icon inside the button. And by doing this I saw that i could use a glyphicon : ``` ``` or use an image : CSS: ``` .buttn{ background:transparent url(/reload.png) no-repeat left; } ``` HTML: ``` ``` What is the big difference between the two since they are both working ?
2018/03/15
345
1,218
<issue_start>username_0: I made a tableview with some labels, the labels all have a border so it looks like a nice table. However, if the label text is aligned to the right, the text is directly placed against the border. I've tried adding a space after the text, but it seems like Android just trims these spaces. Is there any way I can create some whitespace between the text and the border? (For example with some character?)
2018/03/15
393
1,306
<issue_start>username_0: Why after moving dev server to production all authentication requests begun returning with 401 status: ``` // $request contains 'passport' and 'username' fields $request->request->add([ "client_id" => 'env('PASSPORT_CLIENT_ID')', "client_secret" => 'env('PASSPORT_CLIENT_SECRET')', "grant_type" => 'password', "scope" => '', ]); $tokenRequest = $request->create('/oauth/token', 'POST', $request->all()); $passportResponse = Route::dispatch($tokenRequest); ```
2018/03/15
297
970
<issue_start>username_0: Can someone say how to push data from hive to hdfs. I could find how to export data from hdfs to hive but not regarding hive to hdfs. Any help great thanx..
2018/03/15
453
1,348
<issue_start>username_0: I am using the code below to send mail with Cc and Bcc. ``` $headers="From: $name <{$fromAddress}>\r\n". "Reply-To: <EMAIL>\r\n". "Cc: <EMAIL>\r\n". "BCC: <EMAIL>\r\n". "MIME-Version: 1.0\r\n". "Content-type: text/html; charset=UTF-8"; @mail($toAddres,$subject,$message,$headers,$parameters); ``` Everything is working fine, including Reply-To and Cc, but Bcc is not. The mailbox headers do not show the Bcc mail address. What do I need to do to fix it?<issue_comment>username_1: You could try with **ICC** (**I**nvisible **C**arbon **C**opy), which is another name for **BCC**. By the way, **BCC** stands for **B**lind **C**arbon **C**opy, so if you don't see those addresses in your mail, that's normal. Upvotes: 2 [selected_answer]<issue_comment>username_2: Without having a close look at your code, I think the problem is that the BCC recipients do not receive your e-mail. The reason might be that you used `BCC` instead of `Bcc`. Corrected $headers: ``` $headers="From: $name <{$fromAddress}>\r\n". "Reply-To: <EMAIL>\r\n". "Cc: <EMAIL>\r\n". "Bcc: <EMAIL>\r\n". "MIME-Version: 1.0\r\n". "Content-type: text/html; charset=UTF-8"; ``` I have changed the BCC: to Bcc: Upvotes: 0
2018/03/15
4,117
11,078
<issue_start>username_0: I am looking for the fastest way to convert a stream of integers into a list that counts consecutive ones and zeros. For example the integers `[4294967295,4194303,3758096384]` are at bit level: ``` 11111111111111111111111111111111 11111111111111111111110000000000 00000000000000000000000000000111 ``` (each string of bits is in little-endian order) So the program should output three values: [54 39 3] There are 54 ones, followed by 39 zeros, and finally 3 ones. I have been looking into these algorithms: <http://graphics.stanford.edu/~seander/bithacks.html#ZerosOnRightLinear> Probably I need to write something along these lines ``` i=(the first bit of the first integer) repeat till the end find the number of consecutive i's in this integer if we reach the end of the integer, continue with the next else i = (not)i ``` But I was wondering if someone can think of a better way to do it. At the moment the function is build in Matlab like this: ``` %get all bits in a long vector data = uint32([4294967295,4194303,3758096384]); logi = false([1,length(data)*32]); for ct = 1:length(data) logi(1+32*(ct-1):ct*32)=bitget(data(1+(ct-1)),1:32); end %count consecutive 1s and 0s Lct=1; L=1;i = logi(1); for ct = 2:length(logi) if logi(ct)==i L(Lct)=L(Lct)+1; else i=logi(ct); Lct=Lct+1; L(Lct)=1; end end >> L = 54 39 3 ``` Note: It took me some time to make the problem clear. Hence the comments about language and the exact nature of the problem. Hopefully (after many edits) this question is now in a form where it can be found and the answer can be useful to others as well.<issue_comment>username_1: Earlier I had misunderstood the question. Now i know what You were asking. 
This should work, I've tested it: ``` #include #include using namespace std; //old version for whole collection void ConsecutiveOnesAndZeros(deque values, deque &outCount) { int i; if (!values.empty()) { uint8\_t count = 0, lastBit = (values[0] & 1); for (uint32\_t &value : values) { for (i = 0; (i < 32) && (value != 0); i++) { if (lastBit != uint8\_t((value >> i) & 1)) { outCount.push\_back(count); count = 0; lastBit = !lastBit; } count++; } if (i < 32) count += (32 - i); } outCount.push\_back(count); } } //stream version for receiving integer void ConsecutiveOnesAndZeros(uint32\_t value, uint8\_t &count, uint8\_t &lastBit, deque &outCount) { int i; for (i = 0; (i < 32) && (value != 0); i++) { if (lastBit != uint8\_t((value >> i) & 1)) { if(count) outCount.push\_back(count); count = 0; lastBit = !lastBit; } count++; } if (i < 32) count += (32 - i); } int main() { deque outCount; deque stream = { 4294967295u,4194303u,3758096384u }; ConsecutiveOnesAndZeros(stream, outCount); for (auto res : outCount) { printf\_s("%d,", res); } printf\_s("\n"); uint8\_t count = 0, bit = 0; outCount.clear(); for (auto val : stream) ConsecutiveOnesAndZeros(val, count, bit, outCount); if (count) outCount.push\_back(count); for (auto res : outCount) { printf\_s("%d,", res); } printf\_s("\n"); system("pause"); } ``` **UPDATE** - I've made a little optimisation of checking value != 0. I've also divided ConsecutiveOnesAndZeros to two functions for giving next integer from received stream. Upvotes: 3 [selected_answer]<issue_comment>username_2: Well, you could try to make it faster by splitting the first part into threads. For example, if you have a function that you described you would call several of them as `std::thread` or `std::future` depending on how you wish to approach it. 
After they all finish you could compare the two border bits (one at the end of the previous, and one at the start of the next) and either add the first result count to the last result count or push the result onto the result of the previous; all other parts of the result get pushed onto the previous without any comparison. This of course will be overkill if your input is quite short. Upvotes: 1 <issue_comment>username_3: First of all, note that your sample numbers are wrong: as the second has the most significant bit set to one, it should be larger than `2147483643`, but it is only `4194303`, and the third one should be `7`, so I guess you have inverted the bit positions when converting them to decimal. See my complete code at the end for a comment at the beginning of `main()` on how the numbers have been determined (to look like your example). The numbers corresponding to your bit pattern are (hex/dec): ``` [0xffffffff/4294967295][0xfffffc00/4294966272][0x00000007/7] ``` (if we put the higher-weight digits at the left, why don't we do so in binary too?) To solve your problem, you can consider that when you have `n` consecutive **ones in the LSB part of a number** and you increment that value by one, then all those consecutive ones are switched to zeros (by means of carry propagation) up to the next 0 past the last one, and if you have `n` consecutive zeros and decrement the value, then all these zeros are converted into ones... well, with one more bit, as the carry cascades one bit further.
``` if we now xor this value with the previous: ``` 0000000111111111 => 9 "1" bits, that indicate that 8 "1" consecutive bits were present ``` if we prepare a `switch` statement with all the possible values we can get from this function, you can get a very effective way to the following result: ``` int get_consecutive_bits(unsigned value) { unsigned next = value; switch (value) { case 0: case ~0: return 32; /* these are special cases, see below */ } switch (value & 1) { /* get the lower bit */ case 0: next--; break; /* decrement */ case 1: next++; break; /* increment */ } switch (value ^ next) { /* make the xor */ case 0x00000003: return 1; case 0x00000007: return 2; case 0x0000000f: return 3; case 0x0000001f: return 4; case 0x0000003f: return 5; case 0x0000007f: return 6; /* ... */ case 0xffffffff: return 31; } /* switch */ } ``` Now you have to accumulate that value in case the next array cell begins with the same bit value as you finished the previous. The reason we never have a `case` statement of `0x00000001` is that we are forcing a carry in the second bit, so we always have a value of `1` or more, with two bits changed (`...0000001 => ...0000010 => ...0000011` and `...11111110 => ...11111101 => ...00000011`) and this also means that for values `0000...0000` and `1111...1111` we should get one bit more than the word length, making these values special (as they make the carry go to the next bit to the msb, the 33rd) so we check for those values first. This is a very efficient way to do the task in chunks of one array cell. You have to accumulate, when the value you get includes the MSB, as the next word can start with that same bit you ended before. The next code should illustrate the algorithm: pru\_49297910.c =============== ``` /* pru_49297910.c -- answer to https://stackoverflow.com/questions/49297910/ * Author: username_3 \* Date: Wed Apr 24 11:12:21 EEST 2019 \* Copyright: (C) username_3. All rights reserved. \* License: BSD. Open source. 
\*/ #include #include #define BITS\_PER\_ELEMENT 32 int get\_consecutive\_bits(unsigned value) { switch (value) { case 0: case ~0: /\* these are special cases, see below \*/ return BITS\_PER\_ELEMENT; } unsigned next = value; switch (value & 1) { /\* get the lower bit \*/ case 0: next--; break; /\* decrement \*/ case 1: next++; break; /\* increment \*/ } switch (value ^ next) { /\* make the xor \*/ case 0x00000003: return 1; case 0x00000007: return 2; case 0x0000000f: return 3; case 0x0000001f: return 4; case 0x0000003f: return 5; case 0x0000007f: return 6; case 0x000000ff: return 7; case 0x000001ff: return 8; case 0x000003ff: return 9; case 0x000007ff: return 10; case 0x00000fff: return 11; case 0x00001fff: return 12; case 0x00003fff: return 13; case 0x00007fff: return 14; case 0x0000ffff: return 15; case 0x0001ffff: return 16; case 0x0003ffff: return 17; case 0x0007ffff: return 18; case 0x000fffff: return 19; case 0x001fffff: return 20; case 0x003fffff: return 21; case 0x007fffff: return 22; case 0x00ffffff: return 23; case 0x01ffffff: return 24; case 0x03ffffff: return 25; case 0x07ffffff: return 26; case 0x0fffffff: return 27; case 0x1fffffff: return 28; case 0x3fffffff: return 29; case 0x7fffffff: return 30; case 0xffffffff: return 31; } /\* switch \*/ assert(!"Impossible"); return 0; } #define FLUSH() do{ \ runlen(accum, state); \ state ^= 1; \ accum = 0; \ } while (0) void run\_runlen\_encoding(unsigned array[], int n, void (\*runlen)(int, unsigned)) { int state = 0; /\* always begin in 0 \*/ int accum = 0; /\* accumulated bits \*/ while (n--) { /\* see if we have to change \*/ if (state ^ (array[n] & 1)) /\* we changed state \*/ FLUSH(); int nb = BITS\_PER\_ELEMENT; /\* number of bits to check \*/ int w = array[n]; while (nb > 0) { int b = get\_consecutive\_bits(w); if (b < nb) { accum += b; FLUSH(); w >>= b; nb -= b; } else { /\* b >= nb, we only accumulate nb \*/ accum += nb; nb = 0; } } } if (accum) FLUSH(); } /\* run\_runlen\_encoding \*/ void 
output\_runlen(int n, unsigned kind) { if (n) { /\* don't print for n == 0 \*/ static int i = 0; std::cout << "[" << n << "/" << kind << "]"; if (!(++i % 10)) std::cout << std::endl; } } /\* output\_runlen \*/ int main() { /\* 0b1111\_1111\_1111\_1111\_1111\_1111\_1111\_1111, 0b1111\_1111\_1111\_1111\_1111\_1100\_0000\_0000, 0b0000\_0000\_0000\_0000\_0000\_0000\_0000\_0111 \*/ /\* 0xf\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f, 0xf\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_c\_\_\_\_0\_\_\_\_0, 0x0\_\_\_\_0\_\_\_\_0\_\_\_\_0\_\_\_\_0\_\_\_\_0\_\_\_\_0\_\_\_\_7 \*/ /\* 0xffffffff, 0xfffffc00, 0x00000007 \*/ unsigned int array[] = #if 1 { 0xffffffff, 0xfffffc00, 0x00000007 }; /\* correct values for your example \*/ #else { 4294967295, 4194303, 3758096384 }; /\* original values, only first matches. \*/ #endif size\_t array\_n = sizeof array / sizeof array[0]; run\_runlen\_encoding(array, array\_n, output\_runlen); std::cout << std::endl; } /\* main \*/ ``` Note: ===== As we needed to compute how far the carry bit jumps in one increment, we have to go from the less significant bit to the most, making the output just the reverse order than you tried, but I'm sure you will be able to change the order to make it appear as you stated in the question. The program output shows: ``` $ pru_49297910 [3/1][39/0][54/1] ``` Upvotes: 1
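All three answers implement the same run-length idea over an LSB-first bit stream. For reference, the straightforward O(total bits) version the question sketches can be written in a few lines of Python (using the question's sample input):

```python
def bit_runs(values, width=32):
    """Run-length encode the bit stream formed by reading each
    integer's bits least-significant-first (the question's order)."""
    runs = []  # list of [bit, count] pairs
    for value in values:
        for i in range(width):
            bit = (value >> i) & 1
            if runs and runs[-1][0] == bit:
                runs[-1][1] += 1
            else:
                runs.append([bit, 1])
    return runs

print(bit_runs([4294967295, 4194303, 3758096384]))
# [[1, 54], [0, 39], [1, 3]]
```

This is the naive per-bit scan; the switch-table and carry tricks in the answers above trade it for per-word work, which is what makes them faster on long inputs.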
2018/03/15
3,588
9,890
<issue_start>username_0: I am doing facebook login in iOS(swift) . I have already done all the stuff related to pod install carefully, but while importing 'FacebookLogin' I am getting error like "No Such Module FacebookLogin". I had gone through all the possible options, but I couldn't get a satisfactory solution yet. So , someone please tell me why I am facing this kind of problem and what is the exact solution can be? [Code error screenshot for reference](https://i.stack.imgur.com/ngDSw.png)<issue_comment>username_1: Earlier I had misunderstood the question. Now i know what You were asking. This should work, I've tested it: ``` #include #include using namespace std; //old version for whole collection void ConsecutiveOnesAndZeros(deque values, deque &outCount) { int i; if (!values.empty()) { uint8\_t count = 0, lastBit = (values[0] & 1); for (uint32\_t &value : values) { for (i = 0; (i < 32) && (value != 0); i++) { if (lastBit != uint8\_t((value >> i) & 1)) { outCount.push\_back(count); count = 0; lastBit = !lastBit; } count++; } if (i < 32) count += (32 - i); } outCount.push\_back(count); } } //stream version for receiving integer void ConsecutiveOnesAndZeros(uint32\_t value, uint8\_t &count, uint8\_t &lastBit, deque &outCount) { int i; for (i = 0; (i < 32) && (value != 0); i++) { if (lastBit != uint8\_t((value >> i) & 1)) { if(count) outCount.push\_back(count); count = 0; lastBit = !lastBit; } count++; } if (i < 32) count += (32 - i); } int main() { deque outCount; deque stream = { 4294967295u,4194303u,3758096384u }; ConsecutiveOnesAndZeros(stream, outCount); for (auto res : outCount) { printf\_s("%d,", res); } printf\_s("\n"); uint8\_t count = 0, bit = 0; outCount.clear(); for (auto val : stream) ConsecutiveOnesAndZeros(val, count, bit, outCount); if (count) outCount.push\_back(count); for (auto res : outCount) { printf\_s("%d,", res); } printf\_s("\n"); system("pause"); } ``` **UPDATE** - I've made a little optimisation of checking value != 0. 
I've also divided ConsecutiveOnesAndZeros to two functions for giving next integer from received stream. Upvotes: 3 [selected_answer]<issue_comment>username_2: Well, you could try to make it faster by splitting the first part into threads. For example, if you have a function that you described you would call several of them as `std::thread` or `std::future` depending on how you wish to approach it. After they all finish you could compare the two border bits (one at the end of the previous, and one at the start of the next) and either add the first result count to the last result count or push the result onto the result of the previous, all other parts of the result get pushed onto previous without any comparison. This of course will be overdoing it if your input is quite short. Upvotes: 1 <issue_comment>username_3: First of all, to say that your sample numbers are wrong, as the second has the most significant bit at one, it should be larger than `2147483643`, but it is only `4194303`, and the third one should be `7`, so I guess you have inverted the bit positions when converting them to decimal. See my last complete code for a comment at the beginning of `main()`, on how the numbers have been determined (to look like in your example) The numbers corresponding to your bit pattern are (hex/dec): ``` [0xffffffff/4294967295][0xfffffc00/4294966272][0x00000007/7] ``` (if we put the more weight digits at the left, why don't we do also in binary?) To solve your problem, you can consider that when you have `n` consecutive **ones in the LSB part of a number**, and you increment that value by one, then you have all those consecutive ones switched to zeros (by means of carry propagation) until the next o the last one you have, and if you have `n` consecutive zeros and decrement the value, then you have all these zeros convert into ones... well, with one more bit, as carry cascades again one bit further. 
The idea is to check what bit do we have in the LSB, and depending on this, increment or decrement the value and XOR it with the original value.... the result you will get is a number that has as many ones in the LSBs as equal bits to the LSB, plus one, e.g: ``` 1100100011111111 ``` as the LSB is 1, we increment it: ``` 1100100100000000 ^^^^^^^^^ changed bits. ``` if we now xor this value with the previous: ``` 0000000111111111 => 9 "1" bits, that indicate that 8 "1" consecutive bits were present ``` if we prepare a `switch` statement with all the possible values we can get from this function, you can get a very effective way to the following result: ``` int get_consecutive_bits(unsigned value) { unsigned next = value; switch (value) { case 0: case ~0: return 32; /* these are special cases, see below */ } switch (value & 1) { /* get the lower bit */ case 0: next--; break; /* decrement */ case 1: next++; break; /* increment */ } switch (value ^ next) { /* make the xor */ case 0x00000003: return 1; case 0x00000007: return 2; case 0x0000000f: return 3; case 0x0000001f: return 4; case 0x0000003f: return 5; case 0x0000007f: return 6; /* ... */ case 0xffffffff: return 31; } /* switch */ } ``` Now you have to accumulate that value in case the next array cell begins with the same bit value as you finished the previous. The reason we never have a `case` statement of `0x00000001` is that we are forcing a carry in the second bit, so we always have a value of `1` or more, with two bits changed (`...0000001 => ...0000010 => ...0000011` and `...11111110 => ...11111101 => ...00000011`) and this also means that for values `0000...0000` and `1111...1111` we should get one bit more than the word length, making these values special (as they make the carry go to the next bit to the msb, the 33rd) so we check for those values first. This is a very efficient way to do the task in chunks of one array cell. 
You have to accumulate, when the value you get includes the MSB, as the next word can start with that same bit you ended before. The next code should illustrate the algorithm: pru\_49297910.c =============== ``` /* pru_49297910.c -- answer to https://stackoverflow.com/questions/49297910/ * Author: username_3 \* Date: Wed Apr 24 11:12:21 EEST 2019 \* Copyright: (C) username_3. All rights reserved. \* License: BSD. Open source. \*/ #include #include #define BITS\_PER\_ELEMENT 32 int get\_consecutive\_bits(unsigned value) { switch (value) { case 0: case ~0: /\* these are special cases, see below \*/ return BITS\_PER\_ELEMENT; } unsigned next = value; switch (value & 1) { /\* get the lower bit \*/ case 0: next--; break; /\* decrement \*/ case 1: next++; break; /\* increment \*/ } switch (value ^ next) { /\* make the xor \*/ case 0x00000003: return 1; case 0x00000007: return 2; case 0x0000000f: return 3; case 0x0000001f: return 4; case 0x0000003f: return 5; case 0x0000007f: return 6; case 0x000000ff: return 7; case 0x000001ff: return 8; case 0x000003ff: return 9; case 0x000007ff: return 10; case 0x00000fff: return 11; case 0x00001fff: return 12; case 0x00003fff: return 13; case 0x00007fff: return 14; case 0x0000ffff: return 15; case 0x0001ffff: return 16; case 0x0003ffff: return 17; case 0x0007ffff: return 18; case 0x000fffff: return 19; case 0x001fffff: return 20; case 0x003fffff: return 21; case 0x007fffff: return 22; case 0x00ffffff: return 23; case 0x01ffffff: return 24; case 0x03ffffff: return 25; case 0x07ffffff: return 26; case 0x0fffffff: return 27; case 0x1fffffff: return 28; case 0x3fffffff: return 29; case 0x7fffffff: return 30; case 0xffffffff: return 31; } /\* switch \*/ assert(!"Impossible"); return 0; } #define FLUSH() do{ \ runlen(accum, state); \ state ^= 1; \ accum = 0; \ } while (0) void run\_runlen\_encoding(unsigned array[], int n, void (\*runlen)(int, unsigned)) { int state = 0; /\* always begin in 0 \*/ int accum = 0; /\* accumulated bits \*/ 
while (n--) { /\* see if we have to change \*/ if (state ^ (array[n] & 1)) /\* we changed state \*/ FLUSH(); int nb = BITS\_PER\_ELEMENT; /\* number of bits to check \*/ int w = array[n]; while (nb > 0) { int b = get\_consecutive\_bits(w); if (b < nb) { accum += b; FLUSH(); w >>= b; nb -= b; } else { /\* b >= nb, we only accumulate nb \*/ accum += nb; nb = 0; } } } if (accum) FLUSH(); } /\* run\_runlen\_encoding \*/ void output\_runlen(int n, unsigned kind) { if (n) { /\* don't print for n == 0 \*/ static int i = 0; std::cout << "[" << n << "/" << kind << "]"; if (!(++i % 10)) std::cout << std::endl; } } /\* output\_runlen \*/ int main() { /\* 0b1111\_1111\_1111\_1111\_1111\_1111\_1111\_1111, 0b1111\_1111\_1111\_1111\_1111\_1100\_0000\_0000, 0b0000\_0000\_0000\_0000\_0000\_0000\_0000\_0111 \*/ /\* 0xf\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f, 0xf\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_f\_\_\_\_c\_\_\_\_0\_\_\_\_0, 0x0\_\_\_\_0\_\_\_\_0\_\_\_\_0\_\_\_\_0\_\_\_\_0\_\_\_\_0\_\_\_\_7 \*/ /\* 0xffffffff, 0xfffffc00, 0x00000007 \*/ unsigned int array[] = #if 1 { 0xffffffff, 0xfffffc00, 0x00000007 }; /\* correct values for your example \*/ #else { 4294967295, 4194303, 3758096384 }; /\* original values, only first matches. \*/ #endif size\_t array\_n = sizeof array / sizeof array[0]; run\_runlen\_encoding(array, array\_n, output\_runlen); std::cout << std::endl; } /\* main \*/ ``` Note: ===== As we needed to compute how far the carry bit jumps in one increment, we have to go from the less significant bit to the most, making the output just the reverse order than you tried, but I'm sure you will be able to change the order to make it appear as you stated in the question. The program output shows: ``` $ pru_49297910 [3/1][39/0][54/1] ``` Upvotes: 1
2018/03/15
854
2,434
<issue_start>username_0: I am trying to create a calendar table using this PL/SQL script from the Oracle site: [Date Dimension Data Generator](https://livesql.oracle.com/apex/livesql/file/content_D483FJ633JTDU61TWX2N71EC3.html) Everything works well, but I cannot run the statement from section 5: ``` SELECT * FROM TABLE(UDF_CALENDAR_GENERATOR(CAST('1-JAN-2016' AS DATE), CAST('31-DEC-2016' AS DATE))); ``` I get the error: > > 1843. 00000 - "not a valid month" > > > \*Cause: > > \*Action: > > > I tried to use the `to_date` function (based on [this answer](https://stackoverflow.com/a/13002605/6898608)): ``` SELECT * FROM TABLE(UDF_CALENDAR_GENERATOR(TO_DATE('14-Apr-2015', 'DD-MON-YYYY'), TO_DATE('14-May-2015', 'DD-MON-YYYY'))); ``` But it doesn't work. What is wrong with this statement?<issue_comment>username_1: It must be a date format issue. Be sure that your date format is `DD-MON-YYYY` and, since you're using English, also be sure that your date language is English: ``` ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY'; ALTER SESSION SET NLS_DATE_LANGUAGE = 'American'; ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` SELECT * FROM TABLE( UDF_CALENDAR_GENERATOR( TO_DATE('14-Apr-2015', 'DD-MON-YYYY', 'NLS_DATE_LANGUAGE = American'), TO_DATE('14-May-2015', 'DD-MON-YYYY', 'NLS_DATE_LANGUAGE = American') ) ); ``` Upvotes: 3 <issue_comment>username_3: Someone should tell the author that in Oracle, [date literals](https://docs.oracle.com/database/121/SQLRF/sql_elements003.htm#SQLRF51062) are written like `date '2016-01-01'`, so the query should be: ``` select * from table(udf_calendar_generator(date '2016-01-01', date '2016-12-31')); ``` The `CAST()` expression is effectively `TO_DATE()` with no date format or language specified, requiring an element of luck for it to work in any given environment.
Upvotes: 3 <issue_comment>username_4: Find the default date format: ``` SELECT * FROM NLS_SESSION_PARAMETERS; ``` In SQL Developer, at the beginning of the query: ``` ALTER SESSION SET NLS_DATE_FORMAT = 'MM/DD/YYYY'; ``` At the end of the query (to return to the default format): ``` ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-RR'; ``` But a stored procedure is different. After the BEGIN: ``` execute immediate 'alter session set NLS_DATE_FORMAT = ''MM/DD/YYYY'''; ``` At the end of the stored procedure, just before END: ``` execute immediate 'alter session set NLS_DATE_FORMAT = ''DD-MON-RR'''; ``` Upvotes: 1
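The language dependence that username_1 and username_2 point out is not unique to Oracle; Python's `strptime` behaves analogously, since `%b` matches locale-dependent month abbreviations (a rough analogue of `NLS_DATE_LANGUAGE`). A small sketch of the same parse:

```python
from datetime import datetime

# 'DD-MON-YYYY' in Oracle roughly corresponds to '%d-%b-%Y' here;
# '%b' only matches English abbreviations under the default C locale.
d = datetime.strptime("14-Apr-2015", "%d-%b-%Y")
print(d.date())  # 2015-04-14
```

Under a non-English locale the same string would raise a `ValueError`, which is the same failure mode as Oracle's "not a valid month" when the session's date language doesn't match the literal.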
2018/03/15
734
2,470
<issue_start>username_0: The `File::Temp` module is not deleting the file on exit. I create a temp file and pass that file to other functions in which the file is opened for reading and writing. Here is the code : ``` #!/usr/bin/perl use strict ; use warnings ; use Data::Dumper; use File::Temp qw/ tempfile tempdir /; use sigtrap qw(die normal-signals error-signals); sub temp{ my $decrypted_file_path = "/home/programming/perl"; my $file = new File::Temp(DIR => $decrypted_file_path, SUFFIX => '.tmp',UNLINK=>1)->filename; print Dumper $file; writeFile($file); my @arr = parse($file); return ; } sub writeFile{ my ($file) = @_ ; print $file ; open(my $fh,'>', $file) or die "cannot open : $!"; print $fh 'this is test' ; close $fh ; } sub parse{ my ($file) = @_ ; open(my $fh,'<', $file) or die "cannot open : $!"; my @arr = <$fh> ; close $fh ; return @arr ; } temp(); ``` The problem is when the program terminates, the files are still present. How can I automatically remove the files? Perl version: v5.10.1<issue_comment>username_1: Use the option `CLEANUP => 1`. See more in the [documentation](http://perldoc.perl.org/File/Temp.html). Upvotes: -1 <issue_comment>username_2: I observe the same behavior on version 5.16.3. Using `new` and `filename` on the same line looks odd to me. If I separate the two functions, the file is automatically removed: ``` my $tmp = File::Temp->new(DIR => $decrypted_file_path, SUFFIX => '.tmp', UNLINK => 1); my $file = $tmp->filename(); ``` Upvotes: 2 <issue_comment>username_3: You are using [File::Temp](https://metacpan.org/pod/File::Temp) the wrong way. It will already give you an object which contains the filehandle and filename. If you do ``` my $filename = new File::Temp(...)->filename; ``` then the object returned by File::Temp containing the filehandle will be destroyed immediately. 
It's similar to: ``` my $file = new File::Temp(...); my $filename = $file->filename; undef $file; ``` So it creates a file and immediately deletes it, and all you are left with is the filename. Then you open this file yourself and never delete it. Use it like this: ``` my $temp = File::Temp->new(...); # is already a filehandle print $temp $content; # explicitly remove it, otherwise it will be removed when it falls out of scope undef $temp; ``` Upvotes: 4 [selected_answer]
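username_3's point — that the `File::Temp` object, not the filename string, controls the file's lifetime — has a direct parallel in Python's `tempfile` module, which may make the destruction order easier to see (this sketch assumes a POSIX-style platform where the file is removed on close):

```python
import os
import tempfile

tmp = tempfile.NamedTemporaryFile(suffix=".tmp")  # the object owns the file
path = tmp.name
assert os.path.exists(path)

tmp.close()  # like the Perl object falling out of scope (or being undef'd)
assert not os.path.exists(path)  # file gone; only the name string survives
```

Keeping only `tmp.name` and letting `tmp` be garbage-collected reproduces the bug in the question exactly: the cleanup fires as soon as the owning object dies.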
2018/03/15
439
1,273
<issue_start>username_0: My gitlab CI is as follows: ``` stages: - test cache: key: ${CI_COMMIT_SHA} paths: - mymark test: stage: test script: - ls -l - echo "hi" > mymark - ls -l tags: - myrunner only: - dev ``` The file `mymark` is created by the build scripts: ``` $ ls -l total 76 -rw-r--r-- 1 root root 3 Mar 15 10:48 mymark ``` But GitLab does not see it: ``` Creating cache 122f151d6b0a9d37cfa2172941d642e5c48287fc... WARNING: mymark: no matching files Created cache Job succeeded ``` This seems to happen randomly: sometimes the file is found, sometimes not: ``` Creating cache 63d295dad175370aa61d13c4d2f3149e050df5e0... mymark: found 1 matching files Created cache Job succeeded ```<issue_comment>username_1: This can happen when you `cd` out of the original directory in any `before_script` or `script` directive. Make sure you `cd` back to the original directory you were in so the `cache` code can see the paths its looking for. Upvotes: 0 <issue_comment>username_2: This is because `cache` can only be used for files inside your project, so even ``` cache: paths: - /my_root_path ``` will let GitLab CI look for /home/user/yourproject/my\_root\_path. Upvotes: 2
2018/03/15
2,141
5,345
<issue_start>username_0: I am looking to count consecutive day spells for each individual person. My tables: ``` CREATE TABLE Absence( Date Date, Code varchar(10), Name varchar(10), Type varchar(10) ); INSERT INTO Absence (Date, Code, Name, Type) VALUES ('01-10-18', 'S', 'Sam', 'Sick'), ('01-11-18','S', 'Sam', 'Sick'), ('01-12-18','S', 'Sam', 'Sick'), ('01-21-18','S', 'Sam', 'Sick'), ('01-26-18','S', 'Sam', 'Sick'), ('01-27-18','S', 'Sam', 'Sick'), ('02-12-18','S', 'Sam', 'Holiday'), ('02-13-18','S', 'Sam', 'Holiday'), ('02-18-18','S', 'Sam', 'Holiday'), ('02-25-18','S', 'Sam', 'Holiday'), ('02-10-18','S', 'Sam', 'Holiday'), ('02-13-18','F', 'Fred', 'Sick'), ('02-14-18','F', 'Fred', 'Sick'), ('02-17-18','F', 'Fred', 'Sick'), ('02-25-18','F', 'Fred', 'Sick'), ('02-28-18','F', 'Fred', 'Sick'); ``` This is the code i currently have: ``` WITH CTE AS ( SELECT Date, Name, Type ,GroupingSet = DATEADD(DAY, ROW_NUMBER() OVER (PARTITION BY [Name], [Type] ORDER BY [Date]), [Date]) FROM Absence ) SELECT Name, StartDate = MIN(Date), EndDate = MAX(Date), Result = COUNT(Name), min(Type) AS [Type] FROM CTE GROUP BY Name, GroupingSet -- HAVING COUNT(NULLIF(Code, 0)) > 1 ORDER BY Name, StartDate ``` Which produces the result: ``` | Name | StartDate | EndDate | Result | Type | |------|------------|------------|--------|---------| | Fred | 2018-02-13 | 2018-02-13 | 1 | Sick | | Fred | 2018-02-14 | 2018-02-14 | 1 | Sick | | Fred | 2018-02-17 | 2018-02-17 | 1 | Sick | | Fred | 2018-02-25 | 2018-02-25 | 1 | Sick | | Fred | 2018-02-26 | 2018-02-28 | 1 | Sick | | Sam | 2018-01-10 | 2018-01-10 | 1 | Sick | | Sam | 2018-01-11 | 2018-01-11 | 1 | Sick | | Sam | 2018-01-12 | 2018-01-12 | 1 | Sick | | Sam | 2018-01-21 | 2018-01-21 | 1 | Sick | | Sam | 2018-01-26 | 2018-01-26 | 1 | Sick | | Sam | 2018-01-27 | 2018-01-27 | 1 | Sick | | Sam | 2018-02-10 | 2018-02-10 | 1 | Holiday | | Sam | 2018-02-12 | 2018-02-12 | 1 | Holiday | | Sam | 2018-02-13 | 2018-02-13 | 1 | Holiday | | Sam | 2018-02-18 | 
2018-02-18 | 1 | Holiday | | Sam | 2018-02-25 | 2018-02-25 | 1 | Holiday | ``` Whereas I am looking for a result set like this: ``` | Name | Date | Result | Type | |------|------------|---------|---------| | Fred | 2018-02-13 | 2 | Sick | | Sam | 2018-01-27 | 2 | Sick | | Sam | 2018-02-10 | 1 | Holiday | ``` I need to count the consecutive days where there is more than 1 day in a row, and then have this as a total of how many consecutive spells someone had; e.g. Fred had 2 consecutive sick spells during that time period. I also need this to cover the case where someone had a Friday and a Monday off: that should count as one consecutive spell. I'm a bit lost as to how to get there. Any help would be appreciated. PLEASE SEE: <http://sqlfiddle.com/#!18/88612/16><issue_comment>username_1: You can get the periods of absence using: ``` select name, min(date), max(date), count(*) as numdays, type from (select a.*, row_number() over (partition by name, type order by date) as seqnum_ct from absence a ) a group by name, type, dateadd(day, -seqnum_ct, date); ``` [Here](http://www.sqlfiddle.com/#!18/472d2/8) is a SQL Fiddle for this. You can add `having count(*) > 1` to get periods of more than one day. This seems useful. I don't understand what the ultimate output is. The description just doesn't make sense to me.
If you want the number of absences that are 2 or more days, then use this as a subquery/CTE: ``` select name, count(*), type from (select name, min(date) as mindate, max(date) as maxdate, count(*) as numdays, type from (select a.*, row_number() over (partition by name, type order by date) as seqnum_ct from absence a ) a group by name, type, dateadd(day, -seqnum_ct, date) ) b where numdays > 1 group by name, type; ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this: ``` select name, min([date]) [date], count(*) [result], [type] from ( select *, SUM(isConsecutive) over (partition by name,[type] order by [date] rows between unbounded preceding and current row) [isConsecutiiveId] from ( select *, case when dateadd(day, -1, [date]) = LAG([date]) over (partition by name,[type] order by [date]) then 0 else 1 end [isConsecutive] from #Absence ) a ) a group by name,[type],isConsecutiiveId ``` It results more periods of absence than your expected result, as in your data there's more periods of absence. It obviously includes your results, but there's more :) Upvotes: 1 <issue_comment>username_3: ``` SELECT s.[Name], 'Last 12 Months' as [Date], s.[Type], COUNT(s.[numdays]) AS [Consecutive Spells] FROM ( select name, min(date) AS [Date], max(date) AS [Date2], count(*) as numdays, type from (select a.*, row_number() over (partition by name, type order by date) as seqnum_ct from absence a ) a group by name, type, dateadd(day, -seqnum_ct, date) HAVING count(*) > 1 )S GROUP BY s.[Name], s.[Type] ``` This is more the result i was looking for. - <http://sqlfiddle.com/#!18/472d2/23> But thanks for your help Gordon, it pushed me in the right direction! Upvotes: -1
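The `date - row_number` trick in username_1's answer (often called "gaps and islands") is easy to check outside SQL; here is a small Python sketch using Sam's sick dates from the question's sample data:

```python
from datetime import date, timedelta

sam_sick = [date(2018, 1, 10), date(2018, 1, 11), date(2018, 1, 12),
            date(2018, 1, 21), date(2018, 1, 26), date(2018, 1, 27)]

# Subtracting the row number from each date yields a constant key
# within every run of consecutive days -- the "island" identifier,
# the same value dateadd(day, -seqnum_ct, date) computes in SQL.
islands = {}
for seq, d in enumerate(sorted(sam_sick)):
    islands.setdefault(d - timedelta(days=seq), []).append(d)

spells = [run for run in islands.values() if len(run) > 1]
print(len(spells))  # 2 -- the 10-12 Jan and 26-27 Jan runs
```

Counting only the islands longer than one day gives Sam's 2 consecutive sick spells, matching the expected `Sam | 2 | Sick` row.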
2018/03/15
410
1,556
<issue_start>username_0: I'm making a C# application for the Universal Windows Platform (UWP) and I would like to bind manually to the XAML element from the properties (Create Data Binding... after clicking on the rectangle next to e.g. Content in the Button properties). Binding is of course working when I type `Content={Binding Text}` from the keyboard. This happens only in UWP projects; it works fine for normal WPF applications. Here is the example: [![UWP binding example](https://i.stack.imgur.com/5oxef.png)](https://i.stack.imgur.com/5oxef.png) And code behind: ``` public sealed partial class MainPage : Page { public MainPage() { this.InitializeComponent(); this.DataContext = this; } public string Text { get; private set; } = "Hello"; } ``` Has anyone solved this problem? Because it is slightly annoying.<issue_comment>username_1: You would need to give the page a name and then you can bind to this property through element binding: ``` <Page x:Name="MainPage"> ... <Button Content="{Binding Path=Text, ElementName=MainPage}" /> </Page> ``` Upvotes: -1 <issue_comment>username_2: I had [a similar problem](https://github.com/MicrosoftDocs/windows-uwp/pull/295) with Visual Studio. The response from Microsoft was > > ...due to updates to the XAML designer, some features are currently disabled, including adding Styles through the property marker menu. This functionality will return at a later date. You can find more info at <https://blogs.msdn.microsoft.com/visualstudio/2017/09/11/a-significant-update-to-the-xaml-designer/> > > > Upvotes: 2
2018/03/15
381
1,110
<issue_start>username_0: Can anyone tell me how to format a date from the JSON date format? I want to display it like e.g. Wed, 07 Mar 2018. My code is, ``` {{item.date}} ``` my json is, ``` "date": "Wed, 07 Mar 2018 00:00:00 PST", ```<issue_comment>username_1: use the pipe "|" here is the link to the documentation <https://docs.angularjs.org/api/ng/filter/date> Upvotes: 0 <issue_comment>username_2: Apply a date filter with your requirements for the format: ```js var app = angular.module('myApp', []); app.controller('datCtrl', function($scope) { var json = { "date": "Wed, 07 Mar 2018 00:00:00 PST" }; $scope.item = new Date(json.date); }); ``` ```html {{ item | date: 'dd MMM y'}} ``` Note that you need to cast it to a date type first: `new Date()` Upvotes: 2 <issue_comment>username_3: You have to use the AngularJS date filter, passing the correct format; in your case the format is 'dd MMM yyyy'. ``` {{item.date | date: 'dd MMM yyyy'}} ``` And you also have to transform your date into a valid JavaScript date: ``` $scope.item.date = new Date($scope.item.date); ``` Upvotes: 0
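Independent of AngularJS, the JSON value in the question is an RFC 2822 date string, so any RFC 2822 parser can confirm what the 'dd MMM yyyy' format should produce. A quick stdlib Python sketch:

```python
from email.utils import parsedate_to_datetime

raw = "Wed, 07 Mar 2018 00:00:00 PST"   # the JSON value from the question
dt = parsedate_to_datetime(raw)          # handles RFC 2822 zone names like PST

short = dt.strftime("%d %b %Y")          # day, abbreviated month, year
full = dt.strftime("%a, %d %b %Y")       # with the weekday name
```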
2018/03/15
642
2,379
<issue_start>username_0: ``` [Vue warn]: Property or method "onlyNumbers" is not defined on the instance but referenced during render. Make sure that this property is reactive, either in the data option, or for class-based components, by initializing the property. {{ input.placeholder}} {{ select.placeholder }} ``` Hi all, I want to filter the value from an input text. I created the filter in 2 ways, but each time I get this error. How do I fix it? 1st try ``` Vue.filter('onlyNumbers',function (value) { return +value.replace(/[^\d.]/g, ''); }); ``` 2nd try ``` export default { name: "profile-add-inputs", props: ['category'], data() { return { inputs: { text : {}, select: {} }, } }, methods: { getCategories(){ ///// }, selectChange(select){ /// }, inputChange(){ /// some code what i have } }, filters : { onlyNumbers: function (value) { return +value.replace(/[^\d.]/g, ''); } }, watch: { category : function () { this.getCategories(); }, } } ``` But I always get this error. My error is: add.js:32380 [Vue warn]: Property or method "onlyNumbers" is not defined on the instance but referenced during render. Make sure that this property is reactive, either in the data option, or for class-based components, by initializing the property
2018/03/15
1,497
6,077
<issue_start>username_0: I'm using Laravel 5.6 and yajira datatables plugin. I want to display user status in one of the columns which is a number from 0-5 in the database but I want to show and display it in the column as words, (New, Updated, Initial, etc.) Method to make the datatable: ``` public function usersDatatable() { $query = User::with('jobrole')->select([ 'users.id', 'users.first_name', 'users.last_name', 'users.email', 'users.postcode', 'users.preferred_role_id', 'users.status', ]); return Datatables::of($query) ->addColumn('jobrole', function (User $user) { return $user->jobrole ? str_limit($user->jobrole->role, 30, '...') : ''; }) ->addColumn('status', function (User $user) { return $user->status_name; }) ->addColumn('action', function (User $user) { return ''; }) ->make(true); } ``` As you can see, the `status` is returned as `$user->status_name` which is an Accessor method on my User Model: ``` public function getStatusNameAttribute() { return UserStatus::getDescription($this->status); } ``` And the UserStatus Enum class has the logic for the status translation from digits to strings: ``` namespace App\Enums; use BenSampo\Enum\Enum; final class UserStatus extends Enum { const Initial = 0; const New = 1; const Updated = 2; const Synced = 3; const Ignore = 4; /** * Get the description for an enum value * * @param int $value * @return string */ public static function getUserStatus(int $value): string { switch ($value) { case self::Initial: return 'Initial'; break; case self::New: return 'New'; break; case self::Updated: return 'Updated'; break; case self::Synced: return 'Synced'; break; case self::Ignore: return 'Ignore'; break; default: return self::getKey($value); } } } ``` In the view, I fetch the data via jQuery Ajax and datatables my code in the view is here: ``` $('#users-table').DataTable({ processing: true, serverSide: true, ajax: '{!! 
route('users') !!}', columns: [ { data: 'id', width: '10', name: 'users.id' }, { data: null, render:function (data, type, row) { return data.last_name+', '+data.first_name; }, name: 'users.last_name' }, { data: 'email', name: 'users.email' }, { data: 'postcode', name: 'users.postcode' }, { data: 'jobrole', name: 'jobrole.role' }, { data: 'status', name: 'user.status' }, { data: 'action', width: '10', name: 'action', orderable: false, searchable: false} ] }); ``` Now, because of `name:user.status` the search and ordering will be based on the `user.status` column which is just digits. Is there a way to force it to use displayed data for search and ordering? Please point me in the right direction.<issue_comment>username_1: ``` public function usersDatatable() { $query = User::with('jobrole')->select([ 'users.id', 'users.first_name', 'users.last_name', 'users.email', 'users.postcode', 'users.preferred_role_id', 'users.status', ]); return Datatables::of($query) ->addColumn('jobrole', function (User $user) { return $user->jobrole ? str_limit($user->jobrole->role, 30, '...') : ''; }) ->addColumn('status', function (User $user) { return $user->status_name; }) ->addColumn('status', function ($user) { if ($user->status == 1) { return "New"; } elseif ($user->status == 2) { return "Updated"; } else { return "Pending"; } }) ->addColumn('action', function (User $user) { return ''; }) ->make(true); } ``` Upvotes: 0 <issue_comment>username_2: ### SOLUTION 1 It would be the best if you had a lookup table for user statuses (similar to `jobrole`) which you could have joined to get status names. ### SOLUTION 2 In your case I would have the code generate raw `CASE ... WHEN` clause. 
For example: ``` $query = User::with('jobrole')->select([ 'users.id', 'users.first_name', 'users.last_name', 'users.email', 'users.postcode', 'users.preferred_role_id', \DB::raw(" ( CASE WHEN status=0 THEN 'Initial' WHEN status=1 THEN 'New' WHEN status=2 THEN 'Updated' WHEN status=3 THEN 'Synced' WHEN status=4 THEN 'Ignore' ELSE '' END ) AS status_name ") ]); ``` Then remove `addColumn('status')` statement. Then use `{ data: 'status_name', name: 'status_name' }` in JavaScript. Please note that using `CASE ... WHEN` will probably affect performance. ### SOLUTION 3 Alternatively you could use [`filterColumn()`](https://yajrabox.com/docs/laravel-datatables/master/filter-column) method to filter numeric column based on given keyword. You can only do ordering by numerical status with this solution. ### SOLUTION 4 Alternatively you can use client-side processing by removing `serverSide: true` option and let the jQuery DataTables do the sorting/filtering. You don't need yajra datatables for that, simply return array of objects as JSON string. That may be simpler but it will work for small datasets under a few thousand records. Upvotes: 3 [selected_answer]
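Solution 2's `CASE ... WHEN` is just an inline lookup table; the mapping it encodes can be sketched as a plain dictionary (mirroring the `UserStatus` enum values from the question):

```python
# Status codes from the question's UserStatus enum.
STATUS_NAMES = {0: "Initial", 1: "New", 2: "Updated", 3: "Synced", 4: "Ignore"}

def status_name(code):
    """Return the display name for a numeric status, or '' for unknown
    codes, matching the ELSE '' branch of the SQL CASE expression."""
    return STATUS_NAMES.get(code, "")
```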
2018/03/15
1,342
5,595
<issue_start>username_0: I followed [this](https://github.com/Grishu/BackgroundLocationUpdate/blob/master/app/src/main/java/com/google/android/gms/location/sample/locationupdates/MainActivity.java) to Get Location Coordinates For Every 10 Seconds So Now Here I want to Change the Activity to Service So that I will start it as Background Service to Get Updates on Location for Every 10 Seconds... But Here I modified `UpdateUI()` method ``` private void updateUI() { mLatitudeTextView.setText(String.format("%s: %f", mLatitudeLabel, mCurrentLocation.getLatitude())); mLongitudeTextView.setText(String.format("%s: %f", mLongitudeLabel, mCurrentLocation.getLongitude())); mLastUpdateTimeTextView.setText(String.format("%s: %s", mLastUpdateTimeLabel, mLastUpdateTime)); } ``` to ``` private void updateUI() { try{ mlatitude = String.valueOf(mCurrentLocation.getLatitude()); mlongitude = String.valueOf(mCurrentLocation.getLongitude()); } catch(Exception e){ Toast.makeText(this,"Location Not Found",Toast.LENGTH_LONG).show(); } } ``` And I am Toasting those values at `onStartCommand()` and I have Given On start Command like this ``` @Override public int onStartCommand(Intent intent, int flags, int startId) { buildGoogleApiClient(); mRequestingLocationUpdates = true; updateUI(); return mStartMode; } ``` But here it runs only for 1st time and it's not Displaying Values and it's not getting updates or not running for every 10 seconds. However in Activity its working Fine I am getting Location Updates for every 10 seconds... Can Anyone suggest how to Give the same in Service... so that I will get Updates **Update 1:** To Run a Service for every 10 seconds I have added this... ``` private final Integer TEN_SECONDS = 10000; public void sendSendLocation() { new Handler().postDelayed(new Runnable() { public void run () { updateUI(); new Handler().postDelayed(this, TEN_SECONDS); } },TEN_SECONDS); } ``` But Here Location Displays Null... Can anyone suggest me... 
**Update 2:** Actually, I need to add `updateValuesFromBundle(savedInstanceState)` in `OnStartCommand` if I add it... the Service will work. But hence its a service I am Unable to Give `Bundle savedInstanceState`... ``` private void updateValuesFromBundle(Bundle savedInstanceState) { Log.i(TAG, "Updating values from bundle"); if (savedInstanceState != null) { // Update the value of mRequestingLocationUpdates from the Bundle, and make sure that // the Start Updates and Stop Updates buttons are correctly enabled or disabled. if (savedInstanceState.keySet().contains(REQUESTING_LOCATION_UPDATES_KEY)) { mRequestingLocationUpdates = savedInstanceState.getBoolean( REQUESTING_LOCATION_UPDATES_KEY); setButtonsEnabledState(); } // Update the value of mCurrentLocation from the Bundle and update the UI to show the // correct latitude and longitude. if (savedInstanceState.keySet().contains(LOCATION_KEY)) { // Since LOCATION_KEY was found in the Bundle, we can be sure that mCurrentLocation // is not null. mCurrentLocation = savedInstanceState.getParcelable(LOCATION_KEY); } // Update the value of mLastUpdateTime from the Bundle and update the UI. if (savedInstanceState.keySet().contains(LAST_UPDATED_TIME_STRING_KEY)) { mLastUpdateTime = savedInstanceState.getString(LAST_UPDATED_TIME_STRING_KEY); } updateUI(); } } ``` So I am Running `UpdateUI()` Directly at `OnStartCommand()` in service Anyone Suggest me On this kind?<issue_comment>username_1: Try this code ``` private void startProcess(){ Timer timer = new Timer(); timer.schedule(new TimerTask() { @Override public void run() { updateUI(); } }, 0, 10000); } ``` This startProcess method write in startCommand method Upvotes: 1 <issue_comment>username_2: You can use handler or timertask. 
Some of reported problems with TimerTask: * Can't update the UI thread * Memory leaks * Unreliable (doesn't always work) * Long running tasks can interfere with the next scheduled event Handler: ``` private final Integer TEN_SECONDS = 10000; public void sendSendLocation() { new Handler().postDelayed(new Runnable() { public void run () { updateUI(); new Handler().postDelayed(this, TEN_SECONDS); } },TEN_SECONDS); } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: Follow [this](https://stackoverflow.com/a/44151866)..... your code is correct but `onConnected()` is not used... if your are using it in service.. you must add this line...... In Your Code you can Use `googleApiclient.connect();` So in your code ``` protected synchronized void buildGoogleApiClient() { Log.i(TAG, "Building GoogleApiClient"); mGoogleApiClient = new GoogleApiClient.Builder(this) .addConnectionCallbacks(this) .addOnConnectionFailedListener(this) .addApi(LocationServices.API) .build(); mGoogleApiClient.connect(); **// Add this line** createLocationRequest(); } ``` You missing this I also faced same problem but after adding this its worked for me background service... Upvotes: 1
2018/03/15
670
2,184
<issue_start>username_0: I am trying to importTA-Lib in python but getting an error: > > Traceback (most recent call last): > File "", line 1, in > File "/home/arque/anaconda3/lib/python3.6/site-packages/talib/**init**.py", >line 43, in > from .\_ta\_lib import ( > ModuleNotFoundError: No module named 'talib.\_ta\_lib' > > > Steps that I follow to install: ``` 1. wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz 2. tar -xzf ta-lib-0.4.0-src.tar.gz 3. cd ta-lib/ 4. ./configure --prefix=/usr 5. make 6. sudo make install 7. pip install TA-Lib ``` `TA-Lib` successfully installed but when I'm trying to import the same, getting the error mentioned above. Also tried the solutions given [here](https://stackoverflow.com/questions/45406213/unable-to-install-ta-lib-on-ubuntu) but didn't work. Using **Linux Ubuntu 32bit and python 3.6** Any help and suggestions are welcome.<issue_comment>username_1: This is the process I used to fix the problem when I got this error - however I'm using a 64-bit Windows installation, so you'll need to make some changes along the way! 
* Go to <https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib> and download the relevant \*.whl file (depending on which python version you’re using, and depending on whether you have a 32 bit or 64 bit operating system) * Copy the whl file into the C:\Users\your\_name directory * Go to Anaconda and click on the Environments tab and open the Terminal window (from the Base(root) environment arrow) * In the Terminal window type “conda list” to see if TA-Lib is already available and continue if not * Type “conda update conda” to ensure you have the latest version installed * You’ll install TA-Lib using pip so ensure you have the latest version of pip by typing “python -m pip install --upgrade pip” * Then install the TA-Lib package by typing “pip install TA\_Lib-0.4.17-cp37-cp37m-win\_amd64.whl” (or whatever the name of your \*.whl file is) * Check to see that it’s installed by typing “conda list” and finding it on the list Hope this helps :) Upvotes: 3 [selected_answer]<issue_comment>username_2: !pip install ta For installing ta on kaggle platform Upvotes: -1
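When an installed package still raises `ModuleNotFoundError`, it can help to ask Python where (or whether) it can locate the module at all. A small stdlib sketch, shown with stand-in module names rather than `talib` itself:

```python
import importlib.util

def locate(module_name):
    """Return the module's origin path if Python can find it, else None."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec is not None else None

# Stand-in names: 'json' exists in every stdlib, the other does not.
found = locate("json")
missing = locate("no_such_mod_xyz")
```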
2018/03/15
995
3,457
<issue_start>username_0: I'm writing a method that returns a `Set`. The set may contain 0, 1, or 2 objects. The string keys are also quite small (maximum 8 characters). The set is then used in a tight loop with many iterations calling `contains()`. For 0 objects, I would return `Collections.emptySet()`. For 1 object, I would return `Collections.singleton()`. For 2 objects (the maximum possible number), a `HashSet` seems overkill. Isn't there a better structure? Maybe a `TreeSet` is slightly better? Unfortunately, I'm still using Java 7 :-( so can't use modern things like `Set.of()`. An array of 2 strings would probably give the best performance, but that's not a Set. I want the code to be self-documenting, so I really want to return a Set as that is the logical interface required.<issue_comment>username_1: Just wrap an array with an `AbstractSet`. You only have to implement 2 methods, assuming you want an unmodifiable set: ``` class SSet extends AbstractSet<String> { private final String[] strings; SSet(String[] strings) { this.strings = strings; } @Override public Iterator<String> iterator() { return Arrays.asList(strings).iterator(); } @Override public int size() { return strings.length; } } ``` If you want, you can store the `Arrays.asList(strings)` in the field instead of a `String[]`. You can also provide 0, 1 and 2-arg constructors if you want to constrain the array only to be that length. You can also override `contains`: ``` public boolean contains(Object obj) { for (int i = 0; i < strings.length; ++i) { if (Objects.equals(obj, strings[i])) return true; } return false; } ``` If you don't want to create a list simply to create an iterator, you can trivially implement one as an inner class: ``` class ArrayIterator implements Iterator<String> { int index; public String next() { // Check if index is in bounds, throw if not. return strings[index++]; } public boolean hasNext() { return index < strings.length; } // implement remove() too, throws UnsupportedOperationException().
} ``` Upvotes: 2 <issue_comment>username_2: You can achieve this by * Make a class that implements the Set interface * Override the add and remove methods * Add values upon class initialisation by `super.add(E element)` * Use that class instead Upvotes: 0 <issue_comment>username_3: > > The set is then used in a tight loop with many iterations calling contains(). > > > I would probably streamline it for this. Perhaps something like: ``` public static class TwoSet<T> extends AbstractSet<T> { T a = null; T b = null; @Override public boolean contains(Object o) { return o.equals(a) || o.equals(b); } @Override public boolean add(T t) { if(contains(t)){ return false; } if ( a == null ) { a = t; } else if ( b == null ) { b = t; } else { throw new RuntimeException("Cannot have more than two items in this set."); } return true; } @Override public boolean remove(Object o) { if(o.equals(a)) { a = null; return true; } if(o.equals(b)) { b = null; return true; } return false; } @Override public int size() { return (a == null ? 0 : 1) + (b == null ? 0 : 1); } @Override public Iterator<T> iterator() { List<T> list; if (a == null && b == null) { list = Collections.emptyList(); } else { if (a == null) { list = Arrays.asList(b); } else if (b == null) { list = Arrays.asList(a); } else { list = Arrays.asList(a, b); } } return list.iterator(); } } ``` Upvotes: 2 [selected_answer]
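For comparison, the same "tiny set backed by a flat array" idea is easy to express in other languages. A Python sketch (an illustration, not a port of the Java above) where membership is a linear scan over at most two elements:

```python
class TinySet:
    """Immutable set-like container for 0 to 2 items; membership is a
    linear scan, which is cheap at such small sizes."""

    def __init__(self, *items):
        if len(items) > 2:
            raise ValueError("TinySet holds at most two items")
        # Drop duplicates while keeping a flat tuple for fast scans.
        self._items = tuple(dict.fromkeys(items))

    def __contains__(self, value):
        return value in self._items

    def __len__(self):
        return len(self._items)

    def __iter__(self):
        return iter(self._items)
```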
2018/03/15
636
2,336
<issue_start>username_0: I have a view that I have written in HTML and PHP that is like the following: ``` Please select <?php foreach ($categories as $contents ) { ?> <?php echo $contents->name; ?> <?php } ?> ``` Now, I want to get `<?php echo $contents->name; ?>` when the onChange function is called. I tried to resolve it unsuccessfully in JS, and here is the JS code that I attempted: ``` function submitTheForm () { var selectElement = document.getElementById('_category'); var selected = selectedElem.options[selectedElem.selectedIndex]; console.log(selected.text); } ``` I want to console.log the selected `<?php echo $contents->name; ?>` when the onchange function is called.<issue_comment>username_1: ```html Please select First Second function submitTheForm(sel) { console.log(sel.options[sel.selectedIndex].text); } ``` You can do as per below ``` // select box code which you already have Please select <?php foreach ($categories as $contents ) { ?> <?php echo $contents->name; ?> <?php } ?> ``` //Javascript code to get the selected option's text value ``` function submitTheForm(sel) { console.log(sel.options[sel.selectedIndex].text); } ``` Upvotes: 1 <issue_comment>username_2: Use the function `addEventListener` to bind the event `change`. *This is an alternative.* You can get the `value` and `text` using the context `this`. ```js document.getElementById('_category').addEventListener('change', function() { console.log(this.value); console.log(this.options[this.selectedIndex].text); }); ``` ```html Please select username_2 ``` Upvotes: 1 <issue_comment>username_3: You can use jQuery easily rather than plain JavaScript: ``` function submitTheForm () { var optionName = jQuery('#_category option:selected').text(); alert(optionName); } ``` Upvotes: 0 <issue_comment>username_4: There is a missing `"` after the javascript function and your code could be modified a little like this - to use `event` rather than rely upon names or ids ``` Please select <?php foreach ($categories as $contents ) { ?> <?php echo $contents->name; ?> <?php } ?> function submitTheForm( event ) { var value=event.target.value; var text=event.target.options[event.target.options.selectedIndex].text; console.log('%s -> %s',value,text); } ``` Upvotes: 2 [selected_answer]
2018/03/15
519
1,863
<issue_start>username_0: Trying to get a reference to a radio group defined in the XML file. Usually I use `findViewById(R.id.idAll);` to get a reference to a UI object. Can't figure out how to do it for a `RadioGroup`.
2018/03/15
582
2,154
<issue_start>username_0: I'm having a requirement of draw a border around a parent layout in `Xamarin.Android` it can be any layout like `FrameLayout`, `LinearLayout` etc. I have achieved this by using GradientDrawable - Which is just like setting a background for the layout with a shape. So my requirement will be achieved [![enter image description here](https://i.stack.imgur.com/Y0VlY.png)](https://i.stack.imgur.com/Y0VlY.png) Query When setting corner radius for the border, it is not clipping the children. How to clip the children or any other way alternative to do the both? Kindly share your suggestion on this.? Note: I have tried with `ClipChildren`, `ClipToPadding` for the layout. [![enter image description here](https://i.stack.imgur.com/e4wee.png)](https://i.stack.imgur.com/e4wee.png) PS: Above images are mentioned for illustration purpose, they are not the exact output. TIA.<issue_comment>username_1: You can clip the view using canvas. Use ClipPath(path) method of Canvas to clip the required area. Syntax: ``` Path path = new Path(); path.AddCircle(200,200,100,Direction.CW); canvas.ClipPath(path); ``` Upvotes: 1 <issue_comment>username_2: You can make use of [`ViewOutlineProvider`](https://developer.android.com/reference/android/view/ViewOutlineProvider.html) API. As an example usage see [ClippingBasic](https://github.com/googlesamples/android-ClippingBasic) project. Having defined outline provider class as such: ``` private class OvalOutlineProvider extends ViewOutlineProvider { @Override public void getOutline(View view, Outline outline) { outline.setOval(0, 0, view.getWidth(), view.getHeight()); } } ``` Then apply view outline to parent: ``` View parent = findViewById(R.id.parent); parent.setOutlineProvider(new OvalOutlineProvider()); parent.setClipToOutline(true); ``` `ViewOutlineProvider` is accessible starting from API 21. Not sure, whether the solution is applicable for Xamarin (hope it does). 
Upvotes: 1 <issue_comment>username_3: Setting `ClipToOutline = true;` for the parent, resolves the issue. Upvotes: 3 [selected_answer]
2018/03/15
752
2,398
<issue_start>username_0: Trying to deploy a Flask app on Heroku, used to work perfectly with other apps, but now when I try to upgrade the db with: ``` >:heroku run python manage.py db upgrade ``` I get the following error: ``` Traceback (most recent call last): File "manage.py", line 121, in manager.run() File "/app/.heroku/python/lib/python3.6/site-packages/flask\_script/\_\_init\_\_.py", line 417, in run result = self.handle(argv[0], argv[1:]) File "/app/.heroku/python/lib/python3.6/site-packages/flask\_script/\_\_init\_\_.py", line 386, in handle res = handle(\*args, \*\*config) File "/app/.heroku/python/lib/python3.6/site-packages/flask\_script/commands.py", line 216, in \_\_call\_\_ return self.run(\*args, \*\*kwargs) File "/app/.heroku/python/lib/python3.6/site-packages/flask\_migrate/\_\_init\_\_.py", line 259, in upgrade command.upgrade(config, revision, sql=sql, tag=tag) File "/app/.heroku/python/lib/python3.6/site-packages/alembic/command.py", line 254, in upgrade script.run\_env() File "/app/.heroku/python/lib/python3.6/site-packages/alembic/script/base.py", line 425, in run\_env util.load\_python\_file(self.dir, 'env.py') File "/app/.heroku/python/lib/python3.6/site-packages/alembic/util/pyfiles.py", line 85, in load\_python\_file raise ImportError("Can't find Python file %s" % path) ImportError: Can't find Python file migrations/env.py ``` The file migrations/env.py is there on my local machine, and in the git repository: ``` >: git status migrations\env.py On branch master nothing to commit, working tree clean ``` Locally I work on python 3.6.3, tried also to move to python 3.6.4 on Heroku because it says 3.6.3 is not supported, but with no luck. Locally, it works without problem, I can migrate and upgrade the DB (postgres both locally and on heroku). Thanks<issue_comment>username_1: This happened to me ! 
Here is how I solved the problem: I logged into my dyno using ``` heroku ps:exec ``` Then I went to my migrations folder, and the env.py was not there. So I checked my .gitignore, and there was obviously a `env*` line in it. So just delete this line, and commit your migrations/env.py on Heroku. Upvotes: 3 [selected_answer]<issue_comment>username_2: Try to init it first.I also encountered this problem when I implemented the examples in the book.Then I noticed that I had missed initalization. Upvotes: 0
2018/03/15
177
683
<issue_start>username_0: When the user swipes my app away from the recent apps screen, how can we show a notification? Any help appreciated.
2018/03/15
477
1,463
<issue_start>username_0: How do I check if a MongoDB id is a valid BSON? This is what I have at the moment ``` public function findOne($id=""){ if($BSON_id = new \MongoDB\BSON\ObjectId($id)){ $user = $this->collection->findOne([ '_id'=> new \MongoDB\BSON\ObjectId($id) ]); if(!$user){ return false; } return $user; } } (new User)->findOne("invalidBson123") ``` However, the above throws a fatal error: > > Uncaught MongoDB\Driver\Exception\InvalidArgumentException: Error > parsing ObjectId string > > ><issue_comment>username_1: <http://php.net/manual/en/mongodb-bson-objectid.construct.php> reads: > > id (string) > > > A 24-character hexadecimal string. If not provided, the driver will generate an ObjectId. > > > So the check can be a simple regex: ``` if(preg_match('/^[0-9a-f]{24}$/i', $id) === 1) { ..... ``` Or if you prefer to stick to the ObjectId constructor and make it future-proof, do it with try-catch: ``` try { $user = $this->collection->findOne([ '_id'=> new \MongoDB\BSON\ObjectId($id) ]); if(!$user){ return false; } return $user; } catch (\MongoDB\Driver\Exception\InvalidArgumentException $e) { return false; } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Thanks @Alex for pointing out that a MongoDB ObjectId is a 24-character hexadecimal string. So, now all I had to do is check that it's hexadecimal and its length is 24. ``` if(strlen($id) == "24" && ctype_xdigit($id)){ // It's a valid BSON id // Do your thing } ``` Upvotes: 1
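The rule from the accepted answer, exactly 24 hexadecimal characters, is easy to verify in any language. The same check sketched in Python (the sample ids below are made up):

```python
import re

# A MongoDB ObjectId string is exactly 24 hex characters.
OBJECT_ID_RE = re.compile(r"\A[0-9a-fA-F]{24}\Z")

def is_valid_object_id(candidate):
    """True if the string looks like a MongoDB ObjectId."""
    return isinstance(candidate, str) and bool(OBJECT_ID_RE.match(candidate))
```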
2018/03/15
413
1,583
<issue_start>username_0: Some days ago I wrote here asking why the onclick event wasn't activating when I clicked in a specific area of the browser. After checking a guide I found the problem, and the representation of my simple test is trivial; I now know the mechanism for activating a particular function in a browser text area. Now that it works... I'd like to add more detail to my question... what is the mechanism that activates my function, from the HTML template to the script tag? ``` ### Check is palindrome TEST //WHAT IS THE MECHANISM IN THIS POINT FOR ACTIVATE check()? function check() { var x = document.getElementById("input1").value; var y = document.getElementById("input2").value; if(x===y){ document.getElementById("output").innerHTML=" E' PALINDROMA " }else document.getElementById("output").innerHTML=" NON E' PALINDROMA "; } function myFunction() { location.reload(true); } ```<issue_comment>username_1: From what I understand, move your `check()` function to the `input` field ``` ``` and remove from `body` ``` ``` Upvotes: 1 <issue_comment>username_2: Everything about your code is wrong. Like... literally everything. Here's a suggested alternative: ```js var out = document.getElementById('output'); document.getElementById('input').oninput = function() { if( this.value.split('').reverse().join('') === this.value) { out.innerHTML = "The input is a palindrome!"; } else { out.innerHTML = "The input is not a palindrome..."; } }; ``` ```html Type something... ``` Upvotes: 3 [selected_answer]
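The reverse-and-compare trick from the accepted answer is the standard palindrome test. The same logic in Python, with an optional normalisation step (lower-casing and dropping non-alphanumerics; that step is an extra, the answer above compares the raw input):

```python
import re

def is_palindrome(text, normalize=False):
    """Return True if text reads the same forwards and backwards.

    With normalize=True, case and non-alphanumeric characters are
    ignored, so 'never odd or even' also passes.
    """
    if normalize:
        text = re.sub(r"[^a-z0-9]", "", text.lower())
    return text == text[::-1]
```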
2018/03/15
235
903
<issue_start>username_0: I wrote the following code to toggle between showing and hiding content. This works fine. ```html Simple collapsible Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. ``` Is it possible to replace the **div** content with a couple of **tds**? ``` | | | | ```<issue_comment>username_1: Have you tried? Seems to be working fine. ```html Simple Collapsible ------------------ Click on the button to toggle between showing and hiding content. Simple collapsible test | test | test | test | ``` Upvotes: 1 <issue_comment>username_2: Yes, you can replace the div content with a couple of tds: ``` | | | --- | | Content | | Content | | Content | ``` Upvotes: -1
2018/03/15
482
1,953
<issue_start>username_0: I have only one edit text field to accept an email or phone number. How do I change the input type based on the first character? Based on the user input I need to perform different operations. The main thing is that I should identify whether the input is an email or a phone number. How can I do this?<issue_comment>username_1: Gaurav Mall's answer is correct. My Views : First thing is that you can't know the input type using the first letter, because some email addresses begin with numbers. On button click you can identify the type by checking whether the text contains only numbers or alphanumeric characters. For more info about code you can search it on Google. I will try to add code after some time. Good Luck Upvotes: 0 <issue_comment>username_2: In your `EditText`, set a `TextWatcher` which calls a function to check if the text is an email or a phone number. Get the text from your TextView like : ``` String text = textView.getText().toString(); ``` Adding the listener to your TextView : ``` textView.addTextChangedListener(new TextWatcher() { @Override public void onTextChanged(....){} @Override public void beforeTextChanged(...){} @Override public void afterTextChanged(Editable s) { if(isEmail(text)){//do your stuff} if(isPhone(text)){//do your stuff} } }); ``` Your methods would look something like this: ``` public static boolean isEmail(String text) { String expression = "^[\\w\\.-]+@([\\w\\-]+\\.)+[A-Z]{2,4}$"; Pattern p = Pattern.compile(expression, Pattern.CASE_INSENSITIVE); Matcher m = p.matcher(text); return m.matches(); } public static boolean isPhone(String text) { if(!TextUtils.isEmpty(text)){ return TextUtils.isDigitsOnly(text); } else{ return false; } } ``` This might not be possible with just checking the first digit. Though you can put validations inside the `onTextChanged` method. Upvotes: 2
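The accepted answer's detection rule (digits-only means phone, otherwise try the email pattern) is language-independent. A standalone sketch follows, written in JavaScript rather than Android Java for brevity; `classifyInput` is an illustrative name, and the email pattern mirrors the expression from the answer:

```javascript
// Classify a single input field's text as phone, email, or unknown.
function classifyInput(text) {
  if (/^\d+$/.test(text)) return 'phone';                        // digits only
  if (/^[\w.-]+@(?:[\w-]+\.)+[a-z]{2,4}$/i.test(text)) return 'email';
  return 'unknown';
}

console.log(classifyInput('0123456789'));       // 'phone'
console.log(classifyInput('jane@example.com')); // 'email'
```

As username_1 notes, checking only the first character is unreliable, since email addresses may start with digits — which is why the whole string is inspected here.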
2018/03/15
367
1,597
<issue_start>username_0: What I need to do is share location from one android device and show it on another device(not necessarily android, even web would be fine). I can think of following solutions as of now. 1. Use GPS to get latitude and longitude from android device and send it to a server after fixed intervals(eg. 5 sec) which will store it in database(Can we use firebase for this purpose instead of writing full server side code from scratch?). The client which needs to show the location can now request data from server every 5 secs and plot the location on google map. My question is, is this approach scalable? 2. Is there some Google API that allows real-time location sharing out of the box? I tried searching for it but couldn't find anything like that. Does something like that exist?<issue_comment>username_1: 1. Yes, you can use [Firebase realtime database](https://firebase.google.com/docs/database/android/start/) for sharing the location. 2. No, there is no such thing as far as I know. Upvotes: 3 [selected_answer]<issue_comment>username_2: You may use the Google Places API for Android. For example, if you wanted to get a latitude/longitude based on a user inputted address, you could use the Autocomplete service: <https://developers.google.com/places/android-api/autocomplete> I don't think you will have the scalability issue you imagine, because the strain for finding coordinates would be on a Google server, not yours, as well as the mobile device of your users. You would only need to worry about storing coordinates in a central database somewhere. Upvotes: 0
2018/03/15
364
1,275
<issue_start>username_0: I have try to use php to get Github release's tag\_name,but in vain. The link is [Latest Release](https://api.github.com/repos/carry0987/Messageboard/releases/latest) ``` php ini_set('user_agent', 'Mozilla/4.0 (compatible; MSIE 6.0)'); $json =file_get_contents("https://api.github.com/repos/carry0987/Messageboard/releases/latest") ; $myArray = json_decode($json); foreach( $myArray as $key = $value ){ echo $key."\t=>\t".$value."\n"; } ?> ```
2018/03/15
566
2,102
<issue_start>username_0: Using this JSON object as an example: ``` { data: [ { type: "animal" name: "dog" consumes: "dog food" }, { type: "plant" name: "cactus" environment: "desert" } ] } ``` Note the `animal` and `plant` types have some different properties and some shared properties. How would `JSONDecoder` be used in Swift to convert these into the following structs: ``` struct Animal: Decodable { let name: String let consumes: String } struct Plant: Decodable { let name: String let environment: String } ```<issue_comment>username_1: Try to make consistent array structure ``` { data: { animals:[ { type: "animal" name: "Cat" consumes: "cat food" }, { type: "animal" name: "dog" consumes: "dog food" } ], plants:[ { type: "plant" name: "cactus" environment: "desert" }, { type: "plant" name: "Plam" environment: "hill" } ] } } ``` Upvotes: -1 <issue_comment>username_2: You can change your structs a bit, like this ``` enum ItemType { case animal(consumes: String) case plant(environment: String) case unknown } struct Item: Decodable { let name: String let type: ItemType enum CodingKeys: CodingKey { case name case type case consumes case environment } init(from decoder: Decoder) throws { let values = try decoder.container(keyedBy: CodingKeys.self) self.name = try values.decode(String.self, forKey: .name) let type = try values.decode(String.self, forKey: .type) switch type { case "animal": let food = try values.decode(String.self, forKey: .consumes) self.type = .animal(consumes: food) case "plant": let environment = try values.decode(String.self, forKey: .environment) self.type = .plant(environment: environment) default: self.type = .unknown } } } ``` Upvotes: 1
2018/03/15
536
2,130
<issue_start>username_0: I am trying to launch chromium browser with selenium python on windows 8. Added `binary_location` as chromium binary location which is appdata. But still chromedriver starts google chrome instead of chromium. If I uninstall google chrome, then chromedriver by default launches chromium. But with chrome installed it always launches chrome regardless. Does anyone have an idea on how to start chromium with selenium while chrome installed? Please do not mark it as duplicate. The other one was about unix and solution provided to selenium java while this one is about windows and python.
2018/03/15
1,369
4,023
<issue_start>username_0: I want to count the average value of the odd numbers from a list of numbers. I have a starting code to count the average, but I don't know how can I choose only the odd numbers from the list? Here is my code: ``` var numberArray = [1,2,3,4,5,6], thisTotal=0,thisAverage=0; for (var i=0; i ```<issue_comment>username_1: ``` var acc = 0, oddCount = 0; for(var i = 0; i < numberArray.length; i++) { if(numberArray[i] % 2 !== 0) { acc += numberArray[i]; oddCount++; } } return acc / oddCount; ``` Upvotes: -1 <issue_comment>username_2: ```js var numberArray=[1,2,3,4,5,6], thisAverage=0,oddlength=0; for(var i=0;i ``` Upvotes: 0 <issue_comment>username_3: You can use a [filter](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter) function to return only the odd numbers: ``` var oddArray = numberArray.filter(function(val) { return val % 2 !== 0; }); ``` Full example: ```js var numberArray = [1, 2, 3, 4, 5, 6]; var thisTotal = 0; var thisAverage = 0; var oddArray = numberArray.filter(function(val) { return val % 2 !== 0; }); console.log(oddArray); // [1, 3, 5] var thisTotal = oddArray.reduce(function(accumulator, currentValue) { return accumulator + currentValue; }); console.log(thisTotal); // 1 + 3 + 5 => 9 var thisAverage = thisTotal / oddArray.length; console.log(thisAverage); // 9 / 3 => 3 ``` Upvotes: 1 [selected_answer]<issue_comment>username_4: Well, you can get what you want by this. ``` var numberArray=[1,2,3,4,5,6], thisTotal=0,thisAverage=0; for(var i=0; i < 3; i++) { thisTotal += numberArray[i * 2]; thisAverage= (thisTotal/numberArray.length); } console.log(thisAverage); ``` or if you want **general solution**, use this. 
``` var numberArray=[1,2,3,4,5,6,7,8], /* ...on and on */ thisTotal=0, thisAverage=0, oddCount=Math.ceil(numberArray.length / 2); for(var i=0; i < oddCount; i++) { thisTotal += numberArray[i * 2]; } thisAverage = thisTotal / oddCount; console.log(thisAverage); ``` hope my code is helpful :) Upvotes: 0 <issue_comment>username_5: You can use the function `reduce` to add and count. ```js var numberArray = [1, 2, 3, 4, 5, 6], result = numberArray.reduce((a, n) => { if (n % 2 !== 0) { a.sum += n; a.count++; } return a; }, {sum: 0, count: 0}), average = result.sum / result.count; console.log(average); ``` Upvotes: 0 <issue_comment>username_6: Assuming the numbers are in an array, you can do this: ``` var numbers = [1, 2, 3, 4, 5, 6]; var info = numbers.filter(function(n) { return n % 2 !== 0}) .reduce(function(acc, item) { return {sum: acc.sum + item, count: acc.count + 1} }, {sum: 0, count: 0}); var avg = info.sum / info.count; ``` This example uses the `filter` and `reduce` methods, which are declarative and more clear. `filter` returns a new array with the items for which the function returns `true`, and then `reduce`, for each item, updates an 'accumulator'. The accumulator can be anything, and in this case is an object with the sum of the numbers and their count. For each item, we add the current number to the `sum` property and add 1 to `count`. Finally, we just divide `sum` by `count` and done. Upvotes: 0 <issue_comment>username_7: You can create a new array and store odd values in that array and after that you can apply your logic to that array. ```js var a=[1,2,3,4,5,6,10,11]; var ar=[]; for (var i = 0; i < a.length; i++) { if(a[i] % 2 !== 0) { ar.push(a[i]); } } console.log(ar); ``` Upvotes: -1 <issue_comment>username_8: ``` var numberArray=[1,2,3,4,5,6,7,8]; var count = 0; var result = 0; for (let i = 0; i <= (numberArray.length-1); i++) { if (numberArray[i] % 2 != 0) { result += numberArray[i]; count++; } } alert(result / count); ``` Upvotes: -1
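The filter-plus-reduce approaches above can be folded into one reusable function. A sketch, where `averageOfOdds` is an illustrative name and returning 0 for an input with no odd values is an assumption about the desired behaviour:

```javascript
// Average of the odd values in an array, via filter + reduce.
function averageOfOdds(numbers) {
  const odds = numbers.filter(n => n % 2 !== 0);
  if (odds.length === 0) return 0; // guard against division by zero
  const sum = odds.reduce((acc, n) => acc + n, 0);
  return sum / odds.length;
}

console.log(averageOfOdds([1, 2, 3, 4, 5, 6])); // 3 (the odds are 1, 3, 5)
```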
2018/03/15
548
2,103
<issue_start>username_0: Using `For Each` loop I measure the size of "X" labels from a `FlowLayoutPanel` control: ``` For Each _Label As Label In _FlowLayoutPanel.Controls Dim _TextSize As System.Drawing.Size = TextRenderer.MeasureText(_Label.Text, _Label.Font) _Label.Size = New Size(_TextSize.Width, _TextSize.Height) Next ``` How can I get to a variable the biggest `Width` value of all those `Label`s widths?<issue_comment>username_1: Did you try to use a variable and only change the value of that variable when the size of the current label is bigger than the variable's value? Something like that: ``` Dim maxWidth as Integer = -1 For Each ... currentWidth = ... If currentWidth > maxWidth Then maxWidth = currentWidth End If Next MsgBox(maxWidth) ``` --- Edit: If you are familiar with LINQ or if you like to get to know LINQ you could even get your wanted information in one line ``` Dim maxWidth As Integer = _FlowLayoutPanel.Controls.OfType(Of Label) _ .Select(Of Integer)(Function(x As Label) TextRenderer.MeasureText(x.Text, x.Font).Width) _ .Max ``` Although the possible functions might vary depending on the data type of the `Controls` collection. Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` Dim LabelMaxWidth As Integer = 0 For Each _Label As Label In _FlowLayoutPanel.Controls Dim _TextSize As System.Drawing.Size = TextRenderer.MeasureText(_Label.Text, _Label.Font) _Label.Size = New Size(_TextSize.Width, _TextSize.Height) If _Label.Width > LabelMaxWidth Then LabelMaxWidth = _Label.Width Next ``` Upvotes: 2 <issue_comment>username_3: You can get the maximum `Width` also by using some [LINQ](https://learn.microsoft.com/en-us/dotnet/visual-basic/programming-guide/language-features/linq/introduction-to-linq): ``` MsgBox(FlowLayoutPanel.Controls. OfType(Of Label). Max(Function(l) TextRenderer.MeasureText(l.Text, l.Font).Width)) ``` Upvotes: 1
2018/03/15
676
2,397
<issue_start>username_0: I am having a dataframe like ``` x y _ _ 1 10 2 30 3 50 4 24 5 36 6 45 ``` I want to append another column z which will be depending on the value of y. So i created a function ``` def GiveNumVal(col: Column) => Integer = { if(Column>=0 && Column<15){ return 1; } else if(Column>=15 && Column<30){ return 2; } else if(Column>=30 && Column<45){ return 3; } else if (Column>=45 && Column<=59){ return 4; } else{ return 0; } } ``` And I call by ``` val new_df=df.withColumn("z",GiveNumVal($"y")); ``` It cant even compile. I am not sure where is the wrong part.Any help is appreciated.
2018/03/15
1,209
4,004
<issue_start>username_0: I want to make a portable `opencv` application which the dependency is added to maven file `pom.xml`. Simplified code is : ``` import org.opencv.core.Mat; public class Builder { public static void main(String[] args) { nu.pattern.OpenCV.loadShared(); System.loadLibrary(org.opencv.core.Core.NATIVE_LIBRARY_NAME); Mat mat = new Mat(4,3,1); System.out.println(mat.dump()); } } ``` I added this to `pom.xml`: ``` org.openpnp opencv 3.2.0-0 compile ``` It works with the following warning **for java 9**: ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by nu.pattern.OpenCV$SharedLoader (file:/home/martin/.m2/repository/org/openpnp/opencv/3.2.0-0/opencv-3.2.0-0.jar) to field java.lang.ClassLoader.usr_paths WARNING: Please consider reporting this to the maintainers of nu.pattern.OpenCV$SharedLoader WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /tmp/opencv_openpnp6598848942071423284/nu/pattern/opencv/linux/x86_64/libopencv_java320.so which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'. 101, 115, 47; 108, 105, 98; 47, 108, 105; 98, 111, 112] ``` **UPDATE:** And the following warning for java 8: ``` Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library/tmp/opencv_openpnp3835511967220002519/nu/pattern/opencv/linux/x86_64/libopencv_java320.so which might have disabled stack guard. The VM will try to fix the stack guard now. It's highly recommended that you fix the library with 'execstack -c ', or link it with '-z noexecstack'. 
``` Collapsing the opencv jar in maven dependency folder shows that all the libraries are in grayed color as depicted here: [![maven dependency folder for opencv 3.2](https://i.stack.imgur.com/AtA2A.png)](https://i.stack.imgur.com/AtA2A.png) What if I ignore this warning and don't link the libraries with the mentioned command in the warning message? Because it was very simple to make the opencv portable with java and I want to realize if there is no problem then continue this method in future. I use JDK 9, Eclipse Oxygen.2, Fedora 27. In future, the application's target might be windows.<issue_comment>username_1: This occurred because of Java 9 new checks of illegal access, and it is common to have such kind of security warnings after the release of java 9. The permanent solution to this is to report the bug to the maintainers and wait for them to release a patch update. However only at your own risk, yes you can continue **without** security features i.e stack guard, depending on your use and scope of this package under discussion. Upvotes: 3 <issue_comment>username_2: Since the Java update 9, the "illegal reflective access operation has occurred" warning occurs. I have resolved this from Maven Build and Maven Install by modifying my pom.xml file in multiple projects when I did upgrade to from jdk1.8 to jdk 9+ (jdk11 has more support). Following are 2 examples that get rid of the "illegal reflective access operation has occurred" build message: Change version from: ``` org.apache.maven.plugins maven-war-plugin 2.4 WebContent WebContent\WEB-INF\web.xml ``` To: ``` org.apache.maven.plugins maven-war-plugin 3.3.1 WebContent WebContent\WEB-INF\web.xml ``` And also changed the artifactId and version from: ``` org.apache.maven.plugins maven-compiler-plugin 3.5.1 1.8 1.8 UTF-8 ``` To: ``` 1.8 1.8 UTF-8 .. org.apache.maven.plugins maven-war-plugin 3.3.1 .. 
``` When I re-run Maven Build (clean compile package) or Maven Install, the "illegal reflective access operation has occurred" is gone. Upvotes: 2
2018/03/15
157
484
<issue_start>username_0: I am using the `^(?= %)(.*?)(?=%)` regex to find all the elements between `%`. The test string is `"Hey %firstName% %lastName%"`. It works fine on chrome but fails on mozilla and ie.<issue_comment>username_1: If the only thing you need is to get the text between the %, you can try this: ``` \%(.*?)\% ``` Upvotes: 0 <issue_comment>username_2: Here we go : ``` /%(\S.*?)%/g ``` Input : `Hey %firstName% %lastName%` Output : `firstName,lastName` Upvotes: 1
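username_2's global pattern can be driven with `exec` in a loop to collect every captured group. A minimal sketch using the question's test string:

```javascript
// Collect the text between each pair of % signs.
const template = 'Hey %firstName% %lastName%';
const pattern = /%(\S.*?)%/g;

const names = [];
let match;
while ((match = pattern.exec(template)) !== null) {
  names.push(match[1]); // capture group 1: the text between the % signs
}

console.log(names); // [ 'firstName', 'lastName' ]
```

`String.prototype.match` with the `g` flag would return the full matches including the `%` delimiters, which is why the capture group plus `exec` loop is used here.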
2018/03/15
835
2,991
<issue_start>username_0: I'm using PowerMockito and @PrepareForTest annotation for my test class. When I do this, Sonar says none of the branches have been covered. However, my other test classes that don't use PowerMockito works well. here is my pom looks like ``` org.powermock powermock-api-mockito 1.6.2 test org.powermock powermock-module-junit4 1.6.2 test org.codehaus.mojo sonar-maven-plugin 3.2 org.jacoco jacoco-maven-plugin 0.7.6.201602180812 default-prepare-agent prepare-agent default-report test report ``` Do I need to add anything in this pom ? Any help is Appreciated, Thanks in Advance<issue_comment>username_1: yes right powermock doesn't work with sonar in the first step, the cause of the problem is the combination of powermock and jacoco, there is already an issue related to this problem (<https://github.com/powermock/powermock/issues/727>), see here for further informations: <https://github.com/powermock/powermock/wiki/Code-coverage-with-JaCoCo> in the meantime you can try a workaround, as described here: <https://www.geekality.net/2013/09/12/maven-package-runtime-dependencies-in-lib-folder-inside-packaged-jar/> Upvotes: 2 <issue_comment>username_2: You just need to add the delegate the Runner to SpringRunner; ``` @RunWith(PowerMockRunner.class) @PowerMockRunnerDelegate(SpringRunner.class) ``` Upvotes: -1 <issue_comment>username_3: I've found that if you have only one class in the @PrepareForTest annotation e.g. `@PrepareForTest(ClassA.class)` then jacoco coverage works. It's only when there is more than one class e.g. 
`@PrepareForTest({ClassA.class, ClassB.class})` that jacoco coverage is reported as 0% Upvotes: 1 <issue_comment>username_4: * **PowerMock** doesn't work with sonar in the first step, the cause of the problem is the combination of powermock and jacoco * We will not face this issue locally, but if we use sonar for test coverage calculations, the error is as follows ``` Could not initialize plugin: interface org.mockito.plugins.MockMaker (alternate: null) ``` * In PowerMock 2.0.9 version, a workaround for this problem is provided they have added an annotation `@PrepareOnlyThisForTest`, this provides the ability for the isolation of a particular Test class. * Apart from that, we need to add delegate the Runner to SpringRunner using `@PowerMockRunnerDelegate(SpringRunner.class)` Add Power Mock Dependencies in pom.xml ``` org.powermock powermock-module-junit4 2.0.2 test org.powermock powermock-api-mockito2 2.0.9 test ``` Use the PowerMock Junit Runner for test Class and declare the test class that we’re mocking and delegate SpringRunner ``` @RunWith(PowerMockRunner.class) @PrepareOnlyThisForTest(ServiceApplication.class) @PowerMockRunnerDelegate(SpringRunner.class) public class ExampleControllerTest { PowerMockito.mockStatic(ServiceApplication.class); Mockito.when(ServiceApplication.getStatic()).thenReturn(""); ``` Upvotes: 2
2018/03/15
510
1,883
<issue_start>username_0: How to change the time zone from UTC to Arabian Standard Time in Azure virtual machine running the windows server?<issue_comment>username_1: I'm afraid that you would not be able to change the time Zone actually. Even if you changed it onece, the time zone would revert to UTC after restarting. Also, changing the time zone for Azure IaaS VM is not recommended. > > While it may be very tempting to change the server time on the Azure > Virtual Machines using a startup task, it is not recommended, you > should rather use methods like [TimeZoneInfo.ConvertTimeFromUTCTime](http://msdn.microsoft.com/en-us/library/system.timezoneinfo.converttimefromutc.aspx) in > your code. > > > You can also check [this blog](https://blogs.msdn.microsoft.com/cie/2013/07/29/manage-timezone-for-applications-on-windows-azure/) for more details about the TimeZone on Azure. Hope this helps! Upvotes: 4 [selected_answer]<issue_comment>username_2: Azure simply offers to set regional (timezone) selection during VM Provisioning! one can refer this <https://learn.microsoft.com/en-us/previous-versions/azure/dn495299(v=azure.100)> Furthermore, if you want to change time after VM provisioning then can use `Set-TimeZone -Id "Time Zone Id"` To get your time zone use `Get-TimeZone -ListAvailable` powershell command. Although, time would never set back itself to UTC time zone. But if it is so then you must use **Azure Desired State Configuration**. This helps to make changes persistent. please refer [Azure Desired State Configuration](https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview) for more detail. Upvotes: 3 <issue_comment>username_3: You can set the timezone now using powershell and the change will stick even if the VM is restarted, refer this <https://www.dcac.com/blog/changing-the-time-zine-of-azure-vms> Upvotes: 0
2018/03/15
450
1,738
<issue_start>username_0: I need a HTML division `div` that is visible only when print is given i.e. `Ctrl`+`P` is pressed and not visible in the normal page view ... I tried the below method. But it is not working. Is there anybody who know about it ```css @media print { .print_only { display: block; visibility: visible; } } ``` ```html THIS IS THE MOST DIFFICULT QUESTION OF THE CENTURY ```<issue_comment>username_1: It doesn't show because the inline style `display:none` overrides the style specified for the class `print_only`. See [this](https://css-tricks.com/specifics-on-css-specificity/) excellent article for details on CSS specificity. You need to put ``` .print_only { display: none; } ``` in your CSS (outside the `@media print` block) and remove the inline style from the HTML. Upvotes: 2 <issue_comment>username_2: You need to put !important after the display: block;. The inline style overwrite the style on the class. You can also do this: ``` .print_only { display: none; } ``` Hope it helps you! Upvotes: 1 <issue_comment>username_3: I did it using this method .... Thank you all ... Especially @pokeybit ```css .print_only { display: none; } @media print { .print_only { display: block; visibility: visible; } body * { visibility: hidden; } #print_area, #print_area * { visibility: visible; } #print_area { position: absolute; left: 0; top: -100px; } } ``` Upvotes: 2 [selected_answer]
2018/03/15
1,701
6,321
<issue_start>username_0: I am working on an android app. I want to retrieve json data into an ArrayList of my data model called Expert. Here is the code on the onCreate() method ``` mExpertsList = new ArrayList(); loadData(); Log.v("RESPONSE", Integer.toString(mExpertsList.size())); ``` Here is the code in the function retrieving json data. ``` private void loadData(){ RequestQueue queue = Volley.newRequestQueue(MainActivity.this); JsonObjectRequest jsonObjectRequest = new JsonObjectRequest(Request.Method.GET, "http://microblogging.wingnity.com/JSONParsingTutorial/jsonActors", null, new Response.Listener() { @Override public void onResponse(JSONObject response) { try { JSONArray jarray = response.getJSONArray("actors"); for (int i = 0; i < jarray.length(); i++) { JSONObject object = jarray.getJSONObject(i); Expert expert = new Expert(); expert.setImageUrl(object.getString("image")); expert.setName(object.getString("name")); expert.setTopic(object.getString("country")); expert.setRating(random.nextFloat() \* ((maxRating - minRating) + minRating)); expert.setDescription(object.getString("description")); mExpertsList.add(expert); } } catch(JSONException e){ e.printStackTrace(); } } }, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError error) { } }); queue.add(jsonObjectRequest); } ``` Inside loadData() method the mExpertsList shows size of 6 at end of for loop. Checked this using log messages. But a Log statement immediately after loadData() shows that mExpertsList has size of zero. What am I missing here?<issue_comment>username_1: ``` mExpertsList = new ArrayList(); // You are initializing your list with size 0 loadData(); // Calling the API which will run in background ``` So whenever you are calling `Log.v("RESPONSE", Integer.toString(mExpertsList.size()));`just after `loadData()` method your `mExpertsList` is still of size 0. Size will change only after you get your API response. 
Upvotes: 1 <issue_comment>username_2: It looks like you are making some kind of asynchronous call when you call `loadData()`: ``` mExpertsList = new ArrayList(); loadData(); Log.v("RESPONSE", Integer.toString(mExpertsList.size())); ``` When you check the size, it is possible that `onResponse()`, the handler for the web call, has not yet been called. If so, then the size would still appear to be zero. You should only be relying on the contents of `mExpertsList` being there if `onResponse` has been successfully called with actual content. Upvotes: 0 <issue_comment>username_3: `JsonObjectRequest` is an asynchronous request, so your list will be updated on another thread. And you logged the size synchronously, so it will always print 0. Access the list inside `onResponse(JSONObject response)`: ``` JsonObjectRequest jsonObjectRequest = new JsonObjectRequest(Request.Method.GET, "http://microblogging.wingnity.com/JSONParsingTutorial/jsonActors", null, new Response.Listener() { @Override public void onResponse(JSONObject response) { try { JSONArray jarray = response.getJSONArray("actors"); for (int i = 0; i < jarray.length(); i++) { JSONObject object = jarray.getJSONObject(i); Expert expert = new Expert(); expert.setImageUrl(object.getString("image")); expert.setName(object.getString("name")); expert.setTopic(object.getString("country")); expert.setRating(random.nextFloat() \* ((maxRating - minRating) + minRating)); expert.setDescription(object.getString("description")); mExpertsList.add(expert); } Log.v("RESPONSE", String.valueOf(mExpertsList.size())); } catch(JSONException e){ e.printStackTrace(); } } }, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError error) { } }); ``` Upvotes: 2 [selected_answer]<issue_comment>username_4: This is an asynchronous operation; Volley will not wait for your response before executing your next code.
If you want to print the ArrayList size then print it just after the for loop, ``` JsonObjectRequest jsonObjectRequest = new JsonObjectRequest(Request.Method.GET, "http://microblogging.wingnity.com/JSONParsingTutorial/jsonActors", null, new Response.Listener() { @Override public void onResponse(JSONObject response) { try { JSONArray jarray = response.getJSONArray("actors"); for (int i = 0; i < jarray.length(); i++) { JSONObject object = jarray.getJSONObject(i); Expert expert = new Expert(); expert.setImageUrl(object.getString("image")); expert.setName(object.getString("name")); expert.setTopic(object.getString("country")); expert.setRating(random.nextFloat() \* ((maxRating - minRating) + minRating)); expert.setDescription(object.getString("description")); mExpertsList.add(expert); } Log.v("RESPONSE", String.valueOf(mExpertsList.size())); } catch(JSONException e){ e.printStackTrace(); } } }, new Response.ErrorListener() { @Override public void onErrorResponse(VolleyError error) { } }); ``` It will print the exact size of your list.
Upvotes: 1 <issue_comment>username_5: Do this change in part 1:

```
mExpertsList = new ArrayList<>();
loadData();
//Log.v("RESPONSE", Integer.toString(mExpertsList.size())); // ---- comment it out
```

Change in part 2:

```
private void loadData() {
    RequestQueue queue = Volley.newRequestQueue(MainActivity.this);
    JsonObjectRequest jsonObjectRequest = new JsonObjectRequest(Request.Method.GET,
            "http://microblogging.wingnity.com/JSONParsingTutorial/jsonActors", null,
            new Response.Listener<JSONObject>() {
        @Override
        public void onResponse(JSONObject response) {
            try {
                JSONArray jarray = response.getJSONArray("actors");
                for (int i = 0; i < jarray.length(); i++) {
                    JSONObject object = jarray.getJSONObject(i);
                    Expert expert = new Expert();
                    expert.setImageUrl(object.getString("image"));
                    expert.setName(object.getString("name"));
                    expert.setTopic(object.getString("country"));
                    expert.setRating(random.nextFloat() * ((maxRating - minRating) + minRating));
                    expert.setDescription(object.getString("description"));
                    mExpertsList.add(expert);
                    Log.v("RESPONSE", Integer.toString(mExpertsList.size())); // ----- newly added
                }
            } catch (JSONException e) {
                e.printStackTrace();
            }
        }
    }, new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
        }
    });
    queue.add(jsonObjectRequest);
}
```

Upvotes: 0
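The race all three answers describe is not Volley-specific. A minimal, hypothetical Python sketch (a plain thread stands in for Volley's request queue; none of these names come from the thread above) shows why a size check issued immediately after starting the asynchronous call reads 0:

```python
import threading
import time

results = []

def load_data(on_response):
    # Simulates Volley dispatching the request on a worker thread.
    def worker():
        time.sleep(0.1)          # stand-in for network latency
        on_response(["a", "b"])  # like onResponse(JSONObject response)
    threading.Thread(target=worker).start()

def on_response(items):
    results.extend(items)
    print("size inside callback:", len(results))  # the correct place to read

load_data(on_response)
print("size right after load_data():", len(results))  # almost always 0
```

The second `print` runs before the worker finishes, mirroring the `Log.v` call right after `loadData()` in the question.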
2018/03/15
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/zaXa3.png)](https://i.stack.imgur.com/zaXa3.png) I have created a 2D project. The images I used for sprites are all high resolution. However, I do not know what settings I changed today: the sprites in the game window started to look terrible when I tested the game (they look fine in the scene window). And this happens in all of the projects I created earlier. But when I build the exe file to play, the graphics all look good, with the original quality. Could anyone tell me how to fix this issue?<issue_comment>username_1: Scale the game view back to 1x and change the size of your camera instead, or set your sprites to their original size if they have been resized. [![enter image description here](https://i.stack.imgur.com/FRaLn.png)](https://i.stack.imgur.com/FRaLn.png) Upvotes: 4 [selected_answer]<issue_comment>username_2: If you have a Retina screen, [make sure you uncheck](https://youtu.be/4SFf3oXw9MU) the following checkbox. [![enter image description here](https://i.stack.imgur.com/HNmdD.png)](https://i.stack.imgur.com/HNmdD.png) **[Watch this screencast](https://youtu.be/4SFf3oXw9MU)** Upvotes: 0 <issue_comment>username_3: For a 2D game, also make sure the Projection property in the Camera component is set to Orthographic. Upvotes: 0
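A note on the accepted answer's "change the size of your camera" advice: for a pixel-perfect orthographic camera, Size is half the vertical view height in world units, i.e. screen height in pixels divided by twice the sprite's Pixels Per Unit. A small illustrative calculation (the resolution and PPU values below are assumptions, not from the question):

```python
def ortho_camera_size(screen_height_px: int, pixels_per_unit: float) -> float:
    """Orthographic Size = half the vertical view height in world units."""
    return screen_height_px / (2.0 * pixels_per_unit)

# 1080p screen, sprites imported at Unity's default 100 Pixels Per Unit:
print(ortho_camera_size(1080, 100))  # -> 5.4
```

With the camera Size set this way, one texture pixel maps to one screen pixel, so sprites are not resampled.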
2018/03/15
<issue_start>username_0: I want to write a query that would return data if and only if it has all the `ids` that the subquery would return. What I've tried:

```
SELECT *
FROM main_table
WHERE id IN (SELECT id FROM sub_table WHERE SOMECONDITION)
```

but it returns rows as soon as it finds at least one matching row in the subquery, which I do not want. Mainly, I want to be able to check whether the `main_table` has all the `ids` that `sub_table` returns. EDIT: the `SOMECONDITION` has nothing to do with the main query<issue_comment>username_1: This is rather painful in MySQL. The following gets the number of matches:

```
SELECT COUNT(DISTINCT id)
FROM main_table
WHERE id IN (SELECT id FROM sub_table WHERE SOMECONDITION);
```

If you only want to select the rows, I would suggest that you get the count in PHP and use conditional logic there. But you can also do that in SQL:

```
SELECT mt.*
FROM main_table mt
WHERE mt.id IN (SELECT st.id FROM sub_table st WHERE SOMECONDITION)
  AND (SELECT COUNT(DISTINCT mt.id)
       FROM main_table mt
       WHERE mt.id IN (SELECT st.id FROM sub_table st WHERE SOMECONDITION)
      ) = (SELECT COUNT(DISTINCT st.id) FROM sub_table st WHERE SOMECONDITION);
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: You need an `inner join` in this case.

```
SELECT m.*
FROM main_table m
INNER JOIN sub_table s ON m.id = s.id
WHERE SOMECONDITION
```

I assume `SOMECONDITION` contains only conditions against `sub_table`. With `SOMECONDITION`, it's guaranteed that the result from `sub_table` must meet the condition, and with `INNER JOIN`, all ids from `sub_table` must be found in `main_table` in order to return something from `main_table`. Upvotes: 0 <issue_comment>username_3: You can achieve that by using a `left join` as well. 
For instance, take a small example: `main_table` has ids `1,2,3` and `sub_table` has ids `1,2,5,6` (that match `SOMECONDITION`), so the following query

```
SELECT s.id sid, m.id mid
FROM sub_table s
LEFT JOIN main_table m ON s.id = m.id
WHERE SOMECONDITION
```

would result in:

```
sid   mid
1     1
2     2
5     null
6     null
```

In this case the main table does not have all the ids that the subquery returns. You can make use of `COUNT` to fetch results only if `count(sid)` equals `count(mid)`, since `COUNT` doesn't consider `null` values. So your query would become (note that each derived table needs its own alias in MySQL):

```
SELECT *
FROM main_table
WHERE id IN (SELECT id FROM sub_table WHERE SOMECONDITION)
  AND (SELECT COUNT(DISTINCT sid)
       FROM (SELECT s.id sid, m.id mid
             FROM sub_table s
             LEFT JOIN main_table m ON s.id = m.id
             WHERE SOMECONDITION) t1
      ) = (SELECT COUNT(DISTINCT mid)
           FROM (SELECT s.id sid, m.id mid
                 FROM sub_table s
                 LEFT JOIN main_table m ON s.id = m.id
                 WHERE SOMECONDITION) t2
          )
```

Upvotes: 2
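Both answers rest on the same count-comparison trick, which is easy to verify end to end. The sketch below (SQLite via Python, invented table contents, with `flag = 1` standing in for `SOMECONDITION`) returns rows from `main_table` only when it contains every id the subquery yields:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main_table (id INTEGER);
    CREATE TABLE sub_table  (id INTEGER, flag INTEGER);
    INSERT INTO main_table VALUES (1), (2), (3);
    INSERT INTO sub_table  VALUES (1, 1), (2, 1), (5, 1), (6, 0);
""")

# flag = 1 plays the role of SOMECONDITION; it selects ids {1, 2, 5}.
query = """
    SELECT m.* FROM main_table m
    WHERE m.id IN (SELECT id FROM sub_table WHERE flag = 1)
      AND (SELECT COUNT(DISTINCT id) FROM main_table
           WHERE id IN (SELECT id FROM sub_table WHERE flag = 1))
        = (SELECT COUNT(DISTINCT id) FROM sub_table WHERE flag = 1)
"""
print(conn.execute(query).fetchall())  # [] -- id 5 is missing from main_table

conn.execute("INSERT INTO main_table VALUES (5)")
print(conn.execute(query).fetchall())  # [(1,), (2,), (5,)] -- all ids present
```

The equality of the two counts is exactly what gates the result: 2 matched ids vs 3 required ids at first, 3 vs 3 after the insert.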
2018/03/15
<issue_start>username_0: Below is my HTTP post request to get the auth token, the headers are not getting set, in chrome I don't see the headers under request headers. [![enter image description here](https://i.stack.imgur.com/AHqXf.jpg)](https://i.stack.imgur.com/AHqXf.jpg) What I also observed is, if I add only the content-type header I can see the header and body being sent in the request, if I add authorization header then no header or body being sent in the request and I see Response to preflight has invalid HTTP status code 401. error on console. I also tried enabling "CORS" still the same issue. have tried all the commented options. ``` let headers = new HttpHeaders({ 'content-type': 'application/x-www-form-urlencoded; charset=UTF-8', 'authorization': 'Basic ' + btoa("infoarchive.custom:mysecret") }); //headers = headers.append("content-type","application/x-www-form-urlencoded"); //headers = headers.append("authorization","Basic " + btoa("infoarchive.custom:mysecret")); //headers = headers.set("Access-Control-Expose-Headers", "Authorization") //headers = headers.set(Content-Type', 'application/x-www-form-urlencoded') //headers = headers.set('Authorization', 'Basic aW5mb2FyY2hpdmUuY3VzdG9tOm15c2VjcmV0'); console.log(headers) const body = JSON.stringify({ client_id: 'infoarchive.custom', username: userName, password: <PASSWORD>, grant_type: '<PASSWORD>', scope: 'search' }); return this.http.post('http://localhost:8080/oauth/token', body, { headers: headers }) .map( (response: Response) => { const data = response.json(); console.log("data:" + data) // this.saveJwt(data.json().id_token); } ) .catch( (error: Response) => { console.log(error.status) return Observable.throw('Something went wrong'); } ); ```<issue_comment>username_1: Here is an example of how I have setup my `http.service.ts` with `HttpClient`. This works flawlessly for me and I can see my headers being set accordingly. 
```
import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable } from 'rxjs/Observable';

import { RequestOptions } from '../models/http.models';

@Injectable()
export class HttpService {
  domain: string = 'https://example-production.systems/';

  options: RequestOptions = {
    withCredentials: true,
    headers: new HttpHeaders({ 'X-CSRFToken': '<PASSWORD>' }),
  };

  constructor(
    public httpClient: HttpClient,
  ) {}

  post = (path: string, payload: any, query?: string): Observable<any> =>
    this.httpClient.post(`${this.domain}/${path}/${query}`, payload, this.options)
}
```

If you're curious - my `RequestOptions` interface looks like this:

```
export interface RequestOptions {
  headers: HttpHeaders;
  withCredentials: boolean;
}
```

I built this interface off of the list of accepted `HttpClient` options, which can be found on the various `HttpClient` `client.d.ts` methods. For example - the accepted `.post()` options are as follows:

```
post(url: string, body: any | null, options?: {
  headers?: HttpHeaders | { [header: string]: string | string[]; };
  observe?: 'body';
  params?: HttpParams | { [param: string]: string | string[]; };
  reportProgress?: boolean;
  responseType?: 'json';
  withCredentials?: boolean;
}): Observable<Object>;
```

Upvotes: 0 <issue_comment>username_2: What you should do is use an [HttpInterceptor](https://angular.io/api/common/http/HttpInterceptor) for this purpose. Once it is configured, it will automatically add the header information to every HTTP request, and you will not have to add headers manually for each request. The following is an example of how I get a **user** object from storage and send its `authentication_token` as part of a header. I hope this helps. 
```
import {Injectable} from '@angular/core';
import {HttpEvent, HttpInterceptor, HttpHandler, HttpRequest} from '@angular/common/http';
import {Observable} from "rxjs";
import { Storage } from "@ionic/storage";
import "rxjs/add/operator/mergeMap";

@Injectable()
export class AuthInterceptorProvider implements HttpInterceptor {
  constructor(private storage: Storage) {
    console.log("Hello AuthInterceptorProvider Provider");
  }

  intercept(
    request: HttpRequest<any>,
    next: HttpHandler
  ): Observable<HttpEvent<any>> {
    return this.getUser().mergeMap(user => {
      if (user) {
        // clone and modify the request
        request = request.clone({
          setHeaders: {
            Authorization: `Bearer ${user.authentication_token}`
          }
        });
      }
      return next.handle(request);
    });
  }

  getUser(): Observable<any> {
    return Observable.fromPromise(this.storage.get("user"));
  }
}
```

I would also recommend reading [this](https://medium.com/@ryanchenkie_40935/angular-authentication-using-the-http-client-and-http-interceptors-2f9d1540eb8) article, as it will give you more insight into this. Upvotes: 1
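As a sanity check on the `'Basic ' + btoa("infoarchive.custom:mysecret")` value the question builds: it is nothing more than base64 of `user:password`, which a quick Python round-trip (outside the browser) confirms:

```python
import base64

# Credentials taken from the question's btoa("infoarchive.custom:mysecret") call.
creds = "infoarchive.custom:mysecret"
header = "Basic " + base64.b64encode(creds.encode("ascii")).decode("ascii")
print(header)

# Round-trip: the server just base64-decodes and splits on ':'.
user, _, password = base64.b64decode(header[len("Basic "):]).decode("ascii").partition(":")
print(user, password)
```

Note that the browser's preflight `OPTIONS` request never carries this `Authorization` header, so a server that demands authentication on `OPTIONS` produces exactly the 401-on-preflight the question reports.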