2018/03/14
<issue_start>username_0: I have a simple jenkins pipeline build, this is my jenkinsfile: ```groovy pipeline { agent any stages { stage('deploy-staging') { when { branch 'staging' } steps { sshagent(['my-credentials-id']) { sh('git push joe@repo:project') } } } } } ``` I am using sshagent to push to a git repo on a remote server. I have created credentials that point to a private key file in Jenkins master ~/.ssh. When I run the build, I get this output (I replaced some sensitive info with \*'s): ``` [ssh-agent] Using credentials *** (***@*** ssh key) [ssh-agent] Looking for ssh-agent implementation... [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) $ ssh-agent SSH_AUTH_SOCK=/tmp/ssh-cjbm7oVQaJYk/agent.11558 SSH_AGENT_PID=11560 $ ssh-add *** Identity added: *** [ssh-agent] Started. [Pipeline] { [Pipeline] sh $ ssh-agent -k unset SSH_AUTH_SOCK; unset SSH_AGENT_PID; echo Agent pid 11560 killed; [ssh-agent] Stopped. [TDBNSSBFW6JYM3BW6AAVMUV4GVSRLNALY7TWHH6LCUAVI7J3NHJQ] Running shell script + git push joe@repo:project Host key verification failed. fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. ``` As you can see, the ssh-agent starts, stops immediately after and then runs the git push command. The weird thing is it did work correctly once but that seemed completely random. I'm still fairly new to Jenkins - am I missing something obvious? Any help appreciated, thanks. edit: I'm running a multibranch pipeline, in case that helps.<issue_comment>username_1: I recently had a similar issue though it was inside a docker container. The logs gave the impression that ssh-agent exits too early but actually the problem was that I had forgotten to add the git server to known hosts. I suggest ssh-ing onto your jenkins master and trying to do the same steps as the pipeline does with ssh-agent (the cli). Then you'll see where the problem is. 
E.g: ``` eval $(ssh-agent -s) ssh-add ~/yourKey git clone ``` As explained [on help.github.com](https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/) Update: Here a util to add knownHosts if not yet added: ``` /** * Add hostUrl to knownhosts on the system (or container) if necessary so that ssh commands will go through even if the certificate was not previously seen. * @param hostUrl */ void tryAddKnownHost(String hostUrl){ // ssh-keygen -F ${hostUrl} will fail (in bash that means status code != 0) if ${hostUrl} is not yet a known host def statusCode = sh script:"ssh-keygen -F ${hostUrl}", returnStatus:true if(statusCode != 0){ sh "mkdir -p ~/.ssh" sh "ssh-keyscan ${hostUrl} >> ~/.ssh/known_hosts" } } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I was using this inside docker, and adding it to my Jenkins master's `known_hosts` felt a bit messy, so I opted for something like this: 1. In Jenkins, create a new credential of type "Secret text" (let's call it `GITHUB_HOST_KEY`), and set its value to be the host key, e.g.: ```sh # gets the host for github and copies it. You can run this from # any computer that has access to github.com (or whatever your # git server is) ssh-keyscan github.com | clip ``` 2. In your Jenkinsfile, save the string to `known_hosts` ``` pipeline { agent { docker { image 'node:12' } } stages { stage('deploy-staging') { when { branch 'staging' } steps { withCredentials([string(credentialsId: 'GITHUB_HOST_KEY', variable: 'GITHUB_HOST_KEY')]) { sh 'mkdir ~/.ssh && echo "$GITHUB_HOST_KEY" >> ~/.ssh/known_hosts' } sshagent(['my-credentials-id']) { sh 'git push joe@repo:project' } } } } } ``` This ensures you're using a "trusted" host key. Upvotes: 0
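The `tryAddKnownHost` helper above leans on the exit status of `ssh-keygen -F`; the membership test it performs can be sketched as a pure function (hypothetical helper name, handling only plain unhashed `known_hosts` entries, not the hashed form):

```javascript
// Sketch of the membership test that `ssh-keygen -F <host>` performs against
// ~/.ssh/known_hosts. Simplified: only plain (unhashed) host fields are handled.
function hostIsKnown(knownHostsContent, host) {
  return knownHostsContent
    .split('\n')
    .filter((line) => line.trim() && !line.startsWith('#'))
    .some((line) => {
      // The first whitespace-separated field holds the host names,
      // possibly comma-separated (e.g. "github.com,140.82.121.3").
      const hostField = line.split(/\s+/)[0];
      return hostField.split(',').includes(host);
    });
}
```

If this returns `false` for your git server, the pipeline's `git push` will fail with exactly the "Host key verification failed" error from the question.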
2018/03/14
<issue_start>username_0: I'm trying to develop a library that tries to detect auto-click on a page. The library will be imported on several different pages; some will have jQuery, some others will not, or will have other libraries, so my solution should be **vanilla javascript**. The goal is to have several security layers, and the first one will be in javascript; this library will not be the only countermeasure against auto-click, but should provide as much information as possible. The idea is to intercept all **click** and **touch events** that occur on the page, and if those events are script generated, something will happen (it could be an ajax call, or setting a value on a form, or setting a cookie, or something else; this is not important at this stage). I've written a very simple script that checks for computer-generated clicks: ``` (function(){ document.onreadystatechange = function () { if (document.readyState === "interactive") { try{ document.querySelector('body').addEventListener('click', function(evt) { console.log("which", evt.which); console.log("isTrusted", evt.isTrusted); }, true); // Use Capturing }catch(e){ console.log("error on addeventlistener",e); } } } }()); ``` I saw this working on an HTML page without any other js in it, but since I added this javascript to test the auto-click detection, simply "nothing" happens, and by nothing I mean both the autoclick and the detection. The same code as follows, if used in the console, works fine, and events are intercepted and evaluated. 
this is the script used: ``` document.onreadystatechange = function () { if (document.readyState === "interactive") { //1 try el = document.getElementById('target'); if (el.onclick) { el.onclick(); } else if (el.click) { el.click(); } console.log("clicked") } //2 try var d = document.createElement('div'); d.style.position = 'absolute'; d.style.top = '0'; d.style.left = '0'; d.style.width = '200px'; d.style.height = '200px'; d.style.backgroundColor = '#fff'; d.style.border = '1px solid black'; d.onclick = function() {console.log('hello');}; document.body.appendChild(d); } ``` the html page is very simple: ``` Hello, world! ============= aaaaa ``` and for test purposes I added the detection library in head, while the "autoclick" code is just behind the tag. I guess the problem is in "how I attach the event handler", or "when", so what I'm asking is what can I do to intercept clicks events "for sure", the idea is to intercept clicks on every element, present and future, I don't want to prevent them, just be sure to intercept them somehow. Of course I cannot intercept those events that has been prevented and do not bubble, but I'd like to "try" to have my js "before" any other. Do you have some **idea** about this? [jsfiddle of example](https://jsfiddle.net/hmz759oe/)<issue_comment>username_1: Using `document.onreadystatechange` will only work as expected in simple scenerios when no other third party libraries are included. Wrap you code inside the native `DOMContentLoaded` event. ```js document.addEventListener("DOMContentLoaded",function(){ document.body.addEventListener('click', function(evt) { if (evt.target.classList.contains('some-class')) {} // not used console.log("which", evt.which); console.log("isTrusted", evt.isTrusted); }, true); //this is the autoclick code! el = document.getElementById('target'); if (el.onclick) { el.onclick(); } else if (el.click) { el.click(); } console.log("clicked") }); ``` ```html Hello, world! Hello, world! 
============= aaaaa ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: If you look at the event param passed to the function on a click, or whatever other event, you can look for the following, which is a telltale sign that the clicker ain't human... ``` event.originalEvent === undefined ``` From what you've said I'd use the following to track clicks... ``` $(document).on("click", function(event){ if(event.originalEvent && event.originalEvent.isTrusted){ //human }else{ //not human } }); ``` Upvotes: 3 <issue_comment>username_3: Can you check if both a `click` event and either a `mouseup` or `touchend` event happen within 100 ms of each other? If they don't, it's likely an automated event. ``` let mouseuportouchend = false; let click = false; let timer = null; const regMouseupOrTouchend = function(){ mouseuportouchend = true; if (!timer) setTimer(); } const regClick = function(){ click = true; if (!timer) setTimer(); } const setTimer = function(){ timer = setTimeout(()=>{ if (click && mouseuportouchend) console.log("Manual"); else console.log ("Auto"); click=false; mouseuportouchend=false; timer=null; }, 100) } let el = document.getElementById('target'); el.addEventListener("click",regClick); el.addEventListener("mouseup", regMouseupOrTouchend); el.addEventListener("touchend", regMouseupOrTouchend); ``` Upvotes: 1
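The `isTrusted` checks in the answers above boil down to a one-line classifier. A minimal sketch, using plain objects in place of DOM events so the logic can be exercised outside a browser (in a real page, `event.isTrusted` is set by the engine and is `true` only for events generated by an actual user gesture):

```javascript
// Minimal sketch of an isTrusted-based classifier. In a browser, script-created
// events (el.click(), dispatchEvent) carry isTrusted === false, while events
// from real input devices carry isTrusted === true.
function classifyClick(event) {
  return event.isTrusted ? 'human' : 'scripted';
}

// In a page this would be wired up during the capture phase, e.g.:
// document.addEventListener('click', (evt) => {
//   if (classifyClick(evt) === 'scripted') { /* report it */ }
// }, true);
```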
2018/03/14
<issue_start>username_0: I want to change the height and width of glyphicon-comment, and to display text inside the glyphicon-comment. I am using the predefined comment glyphicon from Bootstrap. When I try to change the height and width using an inline style it does not work, but changing the color of the comment does work. Here is my code. ```html Bootstrap Example Learn by doing Each computer has a built-in instruction set that it knows how to execute by design. True False × **✔** This alert box could indicate a successful or positive action. × **✘** This alert box could indicate a dangerous or potentially negative action. computer uses intelligence to execute instructions. True False Reset function radiotruehintbox() { document.getElementById("false").checked = false; document.getElementById('falsebox').style.display='none'; document.getElementById('truebox').style.display='block'; } function radiofalsehintbox() { document.getElementById("true").checked = false; document.getElementById('truebox').style.display='none'; document.getElementById('falsebox').style.display='block'; } function ResetClick() { document.getElementById("myForm").reset(); document.getElementById("truebox").reset(); document.getElementById("falsebox").reset(); } ```
2018/03/14
<issue_start>username_0: ``` list = [(u'SFG2',), (u'FG2',), (u'FG3',), (u'SFG1',), (u'RM1',), (u'RM2',), (u'RM3',), (u'FG1',)] ``` expected output: ``` u'SFG2' u'FG2' u'FG3' ```<issue_comment>username_1: Iterate over your list and use `index`. **ex:** ``` my_list = [(u'SFG2',), (u'FG2',), (u'FG3',), (u'SFG1',), (u'RM1',), (u'RM2',), (u'RM3',), (u'FG1',)] for i in my_list: print i[0] ``` **Output:** ``` SFG2 FG2 FG3 SFG1 RM1 RM2 RM3 FG1 ``` Upvotes: 1 <issue_comment>username_2: ``` In [1]: list1 = [(u'SFG2',), (u'FG2',), (u'FG3',), (u'SFG1',), (u'RM1',), (u'RM2',), (u'RM3',), (u'FG1',)] In [2]: list2 = [x for tup in list1 for x in tup] In [3]: list2 Out[3]: ['SFG2', 'FG2', 'FG3', 'SFG1', 'RM1', 'RM2', 'RM3', 'FG1'] ``` NB: I am using python 3.x and you should too! Upvotes: 0
2018/03/14
<issue_start>username_0: I am new to Swift, having moved from Java, and some implementations of design patterns confuse me. For example, I have a pseudo observer pattern (callback) in Java code (example below). Namely, the UI passes its own listener to the Manager class and listens for the isConnected and isDisconnected callbacks. When a callback is executed, the UI class shows the corresponding message "isConnected" or "isDisconnected" ``` public class UI{ private Manager mManager; void createManager(){ mManager = new Manager(mManagerLister); } public void showMessage(String aMsg){ print(aMsg) } private final IManagerListener mManagerLister = new IManagerListener{ void isConnected(){ this.showMessage("isConnected") } void isDisconnected(){ this.showMessage("isDisconnected") } } } public class Manager{ interface IManagerListener{ void isConnected(); void isDisconnected(); } private final mListener; public Manager(IManagerListener aListener){ mListener = aListener; } } ``` How do I correctly port this Java code to Swift? I tried to port it, but the error message ***Value of type 'UI' has no member 'showMessage'*** is shown ``` public class UI{ var manager: Manager? var managerListener: IManagerListener? func createManager(){ managerListener = ManagerListenerImp(self) manager = Manager(managerListener) } public func showMessage(msg: String){ print(msg) } class ManagerListenerImp: IManagerListener{ weak var parent: UI init(parent : UI ){ self.parent = parent } func isConnected(){ parent.showMessage("isConnected") // Value of type 'UI' has no member 'showMessage' } .......... } } ``` Perhaps there is a more graceful way to use callbacks and my way is not correct?<issue_comment>username_1: There are multiple ways to achieve it. 1. Delegate Pattern (Using Protocols, which are nothing but interfaces in Java) 2. Using Blocks/Closures 3. Using KVO Because you have used interfaces, I am elaborating on the Delegate pattern below. 
Modify your code as below **Declare a protocol** ``` @objc protocol ManagerListenerImp { func isConnected() } ``` **Declare a variable in Manager class** ``` class Manager { weak var delegate : ManagerListenerImp? = nil } ``` **Confirm to `ManagerListenerImp` in your UI class** ``` extension UI : ManagerListenerImp { func isConnected () { //your isConnected implementation here } } ``` **Pass UI instance (self in swift and this in JAVA to manager class)** ``` func createManager(){ manager = Manager() manager?.delegate = self } ``` Finally, whenever you wanna trigger `isConnected` from `Manager` class simply say ``` self.delegate?.isConnected() ``` in your Manager class Hope it helps Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm a bit confused by which class has a reference to which, but that shouldn't be too hard to change in the following example. You might be looking for an Observer Pattern. This can have multiple objects listening to the same changes: 1. ManagerStateListener Protocol -------------------------------- Protocol to be implemented by any class that should react to changes to the state of the Manager ```swift protocol ManagerStateListener: AnyObject { func stateChanged(to state: Manager.State) } ``` 2. Manager Class ---------------- The Manager class contains: 1. Its state 2. A list with listeners 3. Methods for adding, removing and invoking the listeners 4. 
An example class that implements the ManagerStateListener protocol ```swift class Manager { /// The possible states of the Manager enum State { case one case two case three } /// The variable that stores the current state of the manager private var _currentState: State = .one var currentState: State { get { return _currentState } set { _currentState = newValue /// Calls the function that will alert all listeners /// that the state has changed invoke() } } /// The list with all listeners var listeners: [ManagerStateListener] = [] /// A specific listener that gets initialised here let someListener = SomeListener() init() { addListener(someListener) /// Add the listener to the list } /// Method that invokes the stateChanged method on all listeners func invoke() { for listener in listeners { listener.stateChanged(to: currentState) } } /// Method for adding a listener to the list of listeners func addListener(_ listener: ManagerStateListener) { listeners.append(listener) } /// Method for removing a specific listener from the list of listeners func removeListener(_ listener: ManagerStateListener) { if let index = listeners.firstIndex(where: { $0 === listener }) { listeners.remove(at: index) } } } ``` 3. SomeListener Class --------------------- An example listener that implements the ManagerStateListener protocol, held by the Manager class ```swift class SomeListener : ManagerStateListener { func stateChanged(to state: Manager.State) { /// Do something based on the newly received state switch state { case .one: print("State changed to one") case .two: print("State changed to two") case .three: print("State changed to three") } } } ``` I hope this is of any help. Upvotes: 1
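The listener-registry mechanics in the `Manager` above are not Swift-specific; a compact JavaScript sketch of the same observer pattern, with illustrative names only:

```javascript
// Observer-pattern sketch: listeners register themselves with the Manager,
// and every state change is broadcast to all registered listeners.
class Manager {
  constructor() {
    this.listeners = [];
    this.state = 'one';
  }
  addListener(listener) {
    this.listeners.push(listener);
  }
  removeListener(listener) {
    this.listeners = this.listeners.filter((l) => l !== listener);
  }
  setState(next) {
    this.state = next;
    // Broadcast the new state to every registered listener.
    this.listeners.forEach((l) => l.stateChanged(next));
  }
}
```

Any object exposing a `stateChanged` method can subscribe, mirroring the `ManagerStateListener` protocol conformance in the Swift version.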
2018/03/14
<issue_start>username_0: I have HashMap where key is bird specie and value is number of perceptions. Here is my code: ``` public class Program { public static void main(String[] args) { HashMap species = new HashMap<>(); Scanner reader = new Scanner(System.in); species.put("hawk (buteo jamaicensis)", 0); species.put("eagle (aquila chrysaetos)", 0); species.put("sparrow (passeridae)", 0); System.out.println("Add perception"); System.out.println("What was perceived?"); //output should be "hawk"/"eagle"/"sparrow" String perception = reader.nextLine(); // Change here the value of hashmap key. ArrayList list = new ArrayList<>(); for (HashMap.Entry entry: species.entrySet()) { System.out.println((entry.getKey()+" : "+entry.getValue()+" perception")); } } ``` My goal is to change key value to from 0 to 1, when scanner is asking what was perceived. For example: Scanner is asking "What was perceived?" and output is "hawk". Then the program should change key "hawk (buteo jamaicensis)" value from 0 to 1. So the goal output would be now: ``` sparrow (passeridae) : 0 perception eagle (aquila chrysaetos) : 0 perception hawk (buteo jamaicensis) : 1 perception ```<issue_comment>username_1: Use `String.indexOf` check if the input string is substring of the key, and if it is, set the new value: ``` // Change here the value of hashmap key. for (HashMap.Entry entry: species.entrySet()) { if (entry.getKey().indexOf(perception) >= 0) { entry.setValue(entry.getValue() + 1); } ``` Upvotes: 2 <issue_comment>username_2: ``` for (HashMap.Entry entry: species.entrySet()) { if (entry.getKey().equals(perception)) { entry.setValue(entry.getValue() + 1); } } ``` Upvotes: 0
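For comparison outside Java, the same increment-by-matching-key update can be sketched in JavaScript with a `Map` (the helper name is hypothetical; the species keys mirror the question):

```javascript
// Walk the entries and bump the count of every key that contains the
// observed species name -- the Map analogue of iterating entrySet() and
// calling entry.setValue(entry.getValue() + 1).
function recordPerception(species, observed) {
  for (const [key, count] of species) {
    if (key.includes(observed)) {
      species.set(key, count + 1);
    }
  }
  return species;
}
```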
2018/03/14
<issue_start>username_0: There is something wrong with the page I want to test. My first try: when I clicked manually on a button, I was forwarded normally to the next page. When I tried to click on the same button with Selenium, I got an error page "Sorry...something gone wrong...blabla". I think this problem can only be solved by the development team of the page. ``` By book = By.cssSelector("#button\\.buchung\\.continue"); //By book = By.cssSelector("button.buchung.continue"); //By book = By.xpath("//*[@id='button.buchung.continue']"); WebElement element= ConfigClass.driver.findElement(book); element.click(); ``` But I want to try a workaround: I clicked on the same button with jQuery. I opened my Chrome console and executed the button's click with: ``` jQuery('#button\\.buchung\\.continue').click() ``` **How can I execute this jQuery expression in my Selenium code?** I tried this, but without success: ``` JavascriptExecutor je = (JavascriptExecutor) driver; je.executeScript("jQuery('#button\\.buchung\\.continue').click()"); ```
2018/03/14
<issue_start>username_0: Ok so there are a number of questions like this but after having experimented with the code in some of the answers given to other similar questions, I'm still stuck! I've managed to get 2 flex rows working in a flex column, with the brand image vertically centered, but I'm having trouble with the horizontal spacing. On the first row of my navbar I have a list of nav-items and also an inline form with a search bar. I want the search bar to be right aligned, while the nav-items stay left aligned. I've tried using justify-content-between on various elements but with no luck and I've also tried m\*-auto classes but I just can't keep the nav-items and search bar on the same row while separating them horizontally! ### Here's my code ```css .navbar { padding-top: 0; padding-bottom: 0; /* box-shadow: 0 5px 5px rgba(0, 0, 0, 0.12), 0 10px 10px rgba(0, 0, 0, 0.03); */ font-weight: 300; } .navbar-dark { background: linear-gradient(to right, rgba(0, 45, 165, 0.97), rgba(10, 88, 157, 0.97), rgba(10, 88, 157, 0.97), rgba(0, 45, 165, 0.97)); } .navbar-brand { margin-right: 20px; } .nav-item { font-family: 'Raleway', sans-serif; font-weight: 300; font-size: 80%; padding: 0 .4rem; } .navbar .navbar-nav .nav-link { transition: all .05s ease-in-out; } .navbar-dark .navbar-nav .nav-link.active { border-bottom: 1px solid white; } .navbar-dark .navbar-nav .nav-link:hover { border-bottom: 1px solid white; } .navbar-toggler:hover { cursor: pointer; } #search-bar { background-color: #5c87af; color: white; font-size: 14px; width: 200px; height: 30px; transition: all .2s; border: none; } #search-bar:hover { background-color: #779ec1; } #search-bar:focus { background-color: white; color: #212529; width: 400px; } #search-bar::-webkit-input-placeholder { color: white !important; } #search-bar:-moz-placeholder { /* Firefox 18- */ color: white !important; } #search-bar::-moz-placeholder { /* Firefox 19+ */ color: white !important; } #search-bar:-ms-input-placeholder { 
color: white !important; } ``` ```html [![](/images/MW-logo-white.png)](#) * [PROPERTY](#property-tab) * [UNITS](#units-tab) * [TENANCIES](#tenancies-tab) * [PDFs](#pdfs-tab) * [CONTACTS](#contacts-tab) * [ALL](#) * [CURRENT](#) * [PAST](#) ```<issue_comment>username_1: Just make sure both `navbar-nav` are full width. You can use `w-100` for this... <https://www.codeply.com/go/DGmjwI79yy> ``` [![](//placehold.it/100x30)](#) * [PROPERTY](#property-tab) * [UNITS](#units-tab) * [TENANCIES](#tenancies-tab) * [PDFs](#pdfs-tab) * [CONTACTS](#contacts-tab) * [ALL](#) * [CURRENT](#) * [PAST](#) ``` Then the `ml-auto` will work as expected to push the form right. --- Related question: [Bootstrap 4 navbar with 2 rows](https://stackoverflow.com/questions/42635126/bootstrap-4-navbar-with-2-rows/42635243) Upvotes: 3 [selected_answer]<issue_comment>username_2: You can achieve this layout with only two `class`. > > 1 - Add `w-100` here > > > 2 - Add `ml-auto` here > > > **Here is the working Demo** ```css .navbar { padding-top: 0; padding-bottom: 0; /* box-shadow: 0 5px 5px rgba(0, 0, 0, 0.12), 0 10px 10px rgba(0, 0, 0, 0.03); */ font-weight: 300; } .navbar-dark { background: linear-gradient(to right, rgba(0, 45, 165, 0.97), rgba(10, 88, 157, 0.97), rgba(10, 88, 157, 0.97), rgba(0, 45, 165, 0.97)); } .navbar-brand { margin-right: 20px; } .nav-item { font-family: 'Raleway', sans-serif; font-weight: 300; font-size: 80%; padding: 0 .4rem; } .navbar .navbar-nav .nav-link { transition: all .05s ease-in-out; } .navbar-dark .navbar-nav .nav-link.active { border-bottom: 1px solid white; } .navbar-dark .navbar-nav .nav-link:hover { border-bottom: 1px solid white; } .navbar-toggler:hover { cursor: pointer; } #search-bar { background-color: #5c87af; color: white; font-size: 14px; width: 200px; height: 30px; transition: all .2s; border: none; } #search-bar:hover { background-color: #779ec1; } #search-bar:focus { background-color: white; color: #212529; width: 400px; } 
#search-bar::-webkit-input-placeholder { color: white !important; } #search-bar:-moz-placeholder { /* Firefox 18- */ color: white !important; } #search-bar::-moz-placeholder { /* Firefox 19+ */ color: white !important; } #search-bar:-ms-input-placeholder { color: white !important; } ``` ```html [![](/images/MW-logo-white.png)](#) * [PROPERTY](#property-tab) * [UNITS](#units-tab) * [TENANCIES](#tenancies-tab) * [PDFs](#pdfs-tab) * [CONTACTS](#contacts-tab) * [ALL](#) * [CURRENT](#) * [PAST](#) ``` Upvotes: 1 <issue_comment>username_3: Simply use `ml-auto` to your search bar class. Basically Ml is coming under **Auto Margins** Another thing which can be applied to single flex items are margins. The following margin classes are available: **mr-auto:** add margin to the right side of the item **ml-auto:** add margin to the left side of the item **mt-auto:** add margin to the top of the item **mb-auto:** add margin to the bottom of the item Here is your code snippet ```css .navbar { padding-top: 0; padding-bottom: 0; /* box-shadow: 0 5px 5px rgba(0, 0, 0, 0.12), 0 10px 10px rgba(0, 0, 0, 0.03); */ font-weight: 300; } .navbar-dark { background: linear-gradient(to right, rgba(0, 45, 165, 0.97), rgba(10, 88, 157, 0.97), rgba(10, 88, 157, 0.97), rgba(0, 45, 165, 0.97)); } .navbar-brand { margin-right: 20px; } .nav-item { font-family: 'Raleway', sans-serif; font-weight: 300; font-size: 80%; padding: 0 .4rem; } .navbar .navbar-nav .nav-link { transition: all .05s ease-in-out; } .navbar-dark .navbar-nav .nav-link.active { border-bottom: 1px solid white; } .navbar-dark .navbar-nav .nav-link:hover { border-bottom: 1px solid white; } .navbar-toggler:hover { cursor: pointer; } #search-bar { background-color: #5c87af; color: white; font-size: 14px; width: 200px; height: 30px; transition: all .2s; border: none; } #search-bar:hover { background-color: #779ec1; } #search-bar:focus { background-color: white; color: #212529; width: 400px; } #search-bar::-webkit-input-placeholder { 
color: white !important; } #search-bar:-moz-placeholder { /* Firefox 18- */ color: white !important; } #search-bar::-moz-placeholder { /* Firefox 19+ */ color: white !important; } #search-bar:-ms-input-placeholder { color: white !important; } ``` ```html [![](/images/MW-logo-white.png)](#) * [PROPERTY](#property-tab) * [UNITS](#units-tab) * [TENANCIES](#tenancies-tab) * [PDFs](#pdfs-tab) * [CONTACTS](#contacts-tab) * [ALL](#) * [CURRENT](#) * [PAST](#) ``` Upvotes: 1
2018/03/14
<issue_start>username_0: I have a problem where the order of the index of the loop keeps changing in the callback. See the code below for what I've tried; the goal I'm trying to achieve is to add markers based on the available layers. However, the order in which I loop over the layers has to be the same order in which I add the markers. Later on, when I fix this problem, the goal is that users can click on a generated marker, which will result in a layer that opens with information. Does anyone know what the solution is? I've experimented with the comments of @patrickRoberts and @stian. ``` $(function(){ // I've tried the following: // – async/await with promises // – JavaScript Closures // – Closures with IIFE (see example below) 'use strict'; //Cache DOM let $win = $(window), $doc = $(document), $body = $('body'), $layer = $('.layer'), $drawingImage = $('.drawings__image'), $markerContainer = $('.markers'); //Init _addMarkers(); function _addMarkers(){ for(var i = 0; i < $layer.length; i++){ //Right order console.log(i); (function(i){ _calculateScaleFactor($drawingImage.eq(0), function(data){ //Wrong order console.log(i); }); })(i); } } function _createMarker(x, y, i){ return $(`${i}`); } function _calculateScaleFactor(image, callback){ let newImage = new Image(); newImage.src = image.attr('src'); newImage.onload = function(){ let scaleFactor = newImage.width / image.width(); callback(scaleFactor); } } ``` })<issue_comment>username_1: Here is a way to do it with async/await and promises. Hope it's useful. 
``` $(() => { //your array of elements let arr = ["hello", "hi", "bye"] calculate(arr) }) async function calculate(arr){ for(let i = 0; i < arr.length; i++){ console.log(i, arr[i]) let result = await someAsyncStuff(arr[i]) console.log(i, result) } } function someAsyncStuff(el){ return new Promise((resolve, reject) => { //your async code here setTimeout(resolve, 2000, el) }) } ``` Upvotes: -1 <issue_comment>username_2: After hours of experimenting (with no await/async experience at all) I finally fixed it with the async and await. Special thanks to @username_1! Hope that the solution also wil help others, see code below: ``` $(function(){ 'use strict'; //Cache DOM let $win = $(window), $doc = $(document), $body = $('body'), $layer = $('.layer'), $drawingImage = $('.drawings__image'), $markerContainer = $('.markers'); //Init _addMarkers(); async function _addMarkers(){ for(var i = 0; i < $layer.length; i++){ let _self = $layer.eq(i), layerData = { x: _self.data('x'), y: _self.data('y') } await _calculateScaleFactor($drawingImage.eq(0)).then(function(resolve){ layerData.x = (layerData.x / resolve) + $drawingImage.eq(0).offset().left; layerData.y = (layerData.y / resolve) + $drawingImage.eq(0).offset().top; $markerContainer.append( _createMarker(layerData.x, layerData.y, i) ); console.log(i); }); } } function _createMarker(x, y, i){ return $(`${i}`); } function _calculateScaleFactor(image){ return new Promise(function(resolve, reject){ let newImage = new Image(); newImage.src = image.attr('src'); newImage.onload = function(){ let scaleFactor = newImage.width / image.width(); resolve(scaleFactor); } }); } ``` }) Upvotes: 1 [selected_answer]
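The reason the accepted fix works is that `await` inside a plain `for` loop suspends the loop until each promise settles, so iterations complete in index order even when earlier items take longer. A standalone sketch of that behavior (helper names hypothetical):

```javascript
// Resolves with `value` after `ms` milliseconds -- stands in for the
// image-load promise from the answer above.
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

// Earlier items are deliberately given LONGER delays, yet the results still
// come back in index order, because each `await` blocks the next iteration.
async function processInOrder(items) {
  const results = [];
  for (let i = 0; i < items.length; i++) {
    results.push(await delay((items.length - i) * 10, items[i]));
  }
  return results;
}
```

Without the `await` (e.g. firing all callbacks at once, as in the original code), completion order would follow the delays instead of the indices.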
2018/03/14
<issue_start>username_0: ``` sudo apt-get -y install python-pip ``` I was trying to install a package from a tutorial and came across this line of code. I could not figure out what the function of the '-y' flag is. What does the '-y' flag do in the above line of code?<issue_comment>username_1: Automatic yes to prompts; assume "yes" as the answer to all prompts and run non-interactively. If an undesirable situation occurs, such as changing a held package, trying to install an unauthenticated package, or removing an essential package, then apt-get will abort. Upvotes: 3 [selected_answer]<issue_comment>username_2: By specifying the -y flag while installing packages, the installer won't ask you for a prompt. Try the same command without the -y flag and you will understand better. Upvotes: 2 <issue_comment>username_3: The option -y (yes) automatically confirms the installation of your package. For more information about this option, you can read the manual by using the command "man apt-get" Upvotes: 1
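As the answers note, `-y` only changes whether `apt-get` prompts; a script that shells out to it must pass `-y` (long form `--assume-yes`) or it will block waiting for input. A sketch of building such an unattended invocation (function name hypothetical; the actual spawn is left commented out since it needs root on a Debian-based system):

```javascript
// Builds an unattended apt-get invocation. '-y' answers every confirmation
// prompt with "yes" so the command never blocks waiting for input --
// essential in scripts, Dockerfiles, and CI jobs.
function buildAptInstall(packages, { assumeYes = true } = {}) {
  const args = ['install'];
  if (assumeYes) args.push('-y');
  return ['apt-get', ...args, ...packages];
}

// const cmd = buildAptInstall(['python-pip']);
// child_process.spawnSync(cmd[0], cmd.slice(1)); // requires root + Debian/Ubuntu
```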
2018/03/14
<issue_start>username_0: The command `npm run` will run the command which I set in the package.json, and I think it does not new a `child process` to run the command. What does npm do to run the command without new a `child process`?<issue_comment>username_1: **Npm run** has nothing to do with the node child process if that is what you are asking. Npm run is a command provided by npm CLI which allows to instantiate a shell and execute the command provided in the package.json file of your project. Considering this is your package.json : ``` { "name": "my-awesome-package", "version": "1.0.0", "script" : { "test" : "mocha ./test/unit/mytest.js" } } ``` Now if you execute `npm run test`, npm will simply go and check in package.json script section for 'test' key and execute that command in shell or cmd.exe based on your operating system. If you have not installed mocha globally the command will show error in the console itself, OR if the file `mytest.js` does not exist the CLI will throw an error which is similar to just typing `mocha ./tests/unit/mytest.json` This paragraph from the [npm docs](https://docs.npmjs.com/cli/run-script) which is pretty self-explanatory. > > The actual shell your script is run within is platform dependent. By > default, on Unix-like systems it is the /bin/sh command, on Windows it > is the cmd.exe. The actual shell referred to by /bin/sh also depends > on the system. As of npm@5.1.0, you can customize the shell with the > script-shell configuration. > > > --- Update : As per the response in comment, if you want to execute CLI commands via. node without using `child_process` api you can try `exec` or `execsync(cmd)` as a simple workround.This will simply execute your shell cmd and return to your code if no errors were found. Upvotes: 5 [selected_answer]<issue_comment>username_2: I want to add an answer since the accepted one is outdated for npm v8. `run`, `rum` and `urn` are aliases for `run-script`. 
What `npm run X` does is run the command under the key `X` inside the `scripts` object. If the command isn't installed globally it is still found, because npm prepends `node_modules/.bin` to the OS PATH for the script's shell. So, an example:

```
npm run test
```

If in `package.json` we have:

```
"scripts": {
    "test": "jest --runInBand",
```

npm will try to execute the `jest` command; because `node_modules/.bin` is prepended to the PATH, a locally installed `jest` is found first, and a globally installed one only serves as a fallback. Upvotes: 0
2018/03/14
1,238
5,459
<issue_start>username_0: I'm having a hard time figuring out how to write Unit tests in Spring. This is the method I'm trying to test: ``` @Service public class ActionRequestHandler { @Autowired private Vertx vertx; @Autowired private InventoryService inventoryService; @Autowired private RequestSerializationWrapper requestWrapper; @Autowired private ProducerTemplate producer; @EventListener @Transactional public void registerConsumer(ApplicationReadyEvent event) { EventBus eb = vertx.eventBus(); eb.consumer("bos.admin.wui.action", (Message msg) -> { handleIncomingRequest(msg); }); } // ... } ``` So far I have tried creating a configuration inside my test class, like this: ``` @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(loader = AnnotationConfigContextLoader.class) public class ActionRequestHandlerTest { @Configuration static class ContextConfiguration { @Bean @Primary public Vertx vertx() { return Mockito.mock(Vertx.class); } @Bean @Primary public InventoryService inventoryService() { return Mockito.mock(InventoryService.class); } @Bean @Primary public RequestSerializationWrapper requestWrapper() { return new RequestSerializationWrapper(); } } @Autowired private Vertx vertx; @Autowired private InventoryService inventoryService; @Autowired private RequestSerializationWrapper requestWrapper; @Autowired private ActionRequestHandler systemUnderTest; @Test public void registerConsumer_shouldRegisterVertxEventBusConsumer() { EventBus eventBusMock = Mockito.mock(EventBus.class); Mockito.when(vertx.eventBus()).thenReturn(eventBusMock); systemUnderTest.registerConsumer(null); Mockito.verify(eventBusMock.consumer(Matchers.anyString()), Mockito.times(1)); } } ``` However, this seems to try to resolve every dependency inside InventoryService instead of mocking the entire class. 
The above configuration gives me this error when I run: ``` Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private admin.messaging.converters.XmlToEntityConverter admin.persistence.service.InventoryService.entityConverter; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [admin.messaging.converters.XmlToEntityConverter] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)} ``` I have also tried using a profile as suggested [here](http://www.baeldung.com/injecting-mocks-in-spring). The configuration class looks the same: ``` @Profile("test") @Configuration public class ActionRequestHandlerTestConfiguration { @Bean @Primary public Vertx vertx() { return Mockito.mock(Vertx.class); } @Bean @Primary public InventoryService inventoryService() { return Mockito.mock(InventoryService.class); } @Bean @Primary public RequestSerializationWrapper requestWrapper() { return new RequestSerializationWrapper(); } } ``` The test is set up a bit differently with the following annotations instead: ``` @ActiveProfiles("test") @RunWith(SpringJUnit4ClassRunner.class) @SpringApplicationConfiguration(classes = ActionRequestHandler.class) public class ActionRequestHandlerTest { // ... } ``` But this instead gives me an error that Vertx can't be wired: ``` Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire field: private io.vertx.core.Vertx admin.messaging.request.ActionRequestHandler.vertx; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [io.vertx.core.Vertx] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. 
Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)} ``` How can I get this to work? Where am I going wrong?<issue_comment>username_1: Try

```
@ContextConfiguration("/test-context.xml")
@RunWith(SpringJUnit4ClassRunner.class)
```

This tells JUnit to use the test-context.xml file in the same directory as your test. This file should be similar to the real context.xml you're using for Spring, but pointing to test resources, naturally. Upvotes: 0 <issue_comment>username_2: You don't need the whole Spring context to write a **unit test** for `ActionRequestHandler`. You should use `MockitoJUnitRunner` instead and mock the dependencies.

```
@RunWith(MockitoJUnitRunner.class)
public class ActionRequestHandlerTest {

    @Mock
    private Vertx vertx;

    @Mock
    private InventoryService inventoryService;

    @Mock
    private RequestSerializationWrapper requestWrapper;

    @Mock
    private ProducerTemplate producer;

    @InjectMocks
    private ActionRequestHandler actionRequestHandler;

    @Test
    public void testRegisterConsumer() {
        // ... Your code to test ActionRequestHandler#registerConsumer will go here ...
    }
}
```

You can read more about it [here](https://www.toptal.com/java/a-guide-to-everyday-mockito). Upvotes: 2 [selected_answer]
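The Mockito approach works well. As a further option, here is a hedged sketch under the assumption that you can change `ActionRequestHandler` to take its collaborators through the constructor instead of field `@Autowired`; then a unit test needs neither the Spring context nor Mockito, because a hand-written stub is enough. `EventBusLike` and `Handler` below are hypothetical stand-ins, not the real Vert.x or project classes:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical single-method stand-in for the slice of EventBus we use.
interface EventBusLike {
    void consumer(String address);
}

// Constructor-injected variant of the handler (a sketch, not the real class).
class Handler {
    private final EventBusLike eventBus;

    Handler(EventBusLike eventBus) {
        this.eventBus = eventBus;
    }

    void registerConsumer() {
        eventBus.consumer("bos.admin.wui.action");
    }
}

public class HandlerTest {
    public static void main(String[] args) {
        List<String> registered = new ArrayList<>();
        // The method reference implements EventBusLike and records each address.
        Handler handler = new Handler(registered::add);

        handler.registerConsumer();

        if (!registered.contains("bos.admin.wui.action")) {
            throw new AssertionError("consumer was not registered");
        }
        System.out.println("registered addresses: " + registered);
    }
}
```

Spring supports constructor injection directly (a single constructor needs no annotation at all in recent versions), so this refactoring does not fight the framework.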
2018/03/14
783
2,456
<issue_start>username_0: I'm currently stuck on an issue with powershell regex since I am not able to get the desired output of Servername, instance name, and port if available. Admittedly, I have much to learn since I am new to regex. Here is my setup and my findings so far.

```
Data:
ZP000042
QRMLD1001\TEST
oFJA
UUQE0294\FAR,8594
```

Basically I need to extract the following items:

```
Match 1[QRMLD1001\TEST]    - Servername: QRMLD1001 ; Instancename: TEST
Match 2[UUQE0294\FAR,8594] - Servername: UUQE0294 ; Instancename: FAR ; Port: 8594
```

So far I am only able to extract `Match 2[UUQE0294\FAR,859]` via this regex:

```
(\w+)\\(\w+)\,(\d+)
```

Result:

```
Groups   : {0, 1, 2, 3}
Success  : True
Name     : 0
Captures : {0}
Index    : 0
Length   : 16
Value    : UUQE0294\FAR,859
Success  : True
Name     : 1
Captures : {1}
Index    : 0
Length   : 8
Value    : UUQE0294
Success  : True
Name     : 2
Captures : {2}
Index    : 9
Length   : 3
Value    : FAR
Success  : True
Name     : 3
Captures : {3}
Index    : 13
Length   : 3
Value    : 859
```

I really just want 3 to 4 groups: with or without the port specified, I'd like to dissect the server and instance name, and if the port is included, then the port too.
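For what it's worth, the usual way to make the port optional is to wrap it in an optional non-capturing group: `(\w+)\\(\w+)(?:,(\d+))?`. The sketch below demonstrates the pattern in Python purely because that is easy to run; the same regex should work unchanged with PowerShell's `-match` or `[regex]::Matches`, since the syntax used here is common to both engines:

```python
import re

# (?:,(\d+))? makes the ",port" part optional; group 3 is None when absent.
pattern = re.compile(r"(\w+)\\(\w+)(?:,(\d+))?")

for text in [r"QRMLD1001\TEST", r"UUQE0294\FAR,8594"]:
    server, instance, port = pattern.search(text).groups()
    print(server, instance, port)
# QRMLD1001 TEST None
# UUQE0294 FAR 8594
```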
2018/03/14
657
2,249
<issue_start>username_0: I am trying to get my head around both composed functions and pure functions. I have an object with a mixture of data. On some values I need to: 1. remove the value's units 2. parse the string to an integer 3. convert the value to a decimal I have written three functions, attempting to make them pure in the sense that they only do one thing, but strictly speaking, they are mutating state. I'm not sure how to avoid mutating state, though, and whether this technically makes them not pure functions. My three "pure" functions are:

```
function parseValue(val) {
  return typeof val === 'number' ? val : parseInt(val)
}

function stripUnits(val) {
  return typeof val === 'string' ? val.match(/\d+/)[0] : val
}

function convertToDecimal(val) {
  return val / 100
}
```

I am then trying to compose these functions into one function with the help of [lodash](https://github.com/lodash/lodash/wiki/FP-Guide) `compose()`:

```
function prepValue(val) {
  return compose(stripUnits, parseValue, convertToDecimal)
}
```

When I try to run this with `console.log("prepValue", prepValue(weather.current.temperature))` I get the following in the terminal:

```
prepValue function (){var n=arguments,e=n[0];if(o&&1==n.length&&of(e))return o.plant(e).value();for(var u=0,n=r?t[u].apply(this,n):e;++u
```

So the main things are: 1. How can I make my three functions "pure"? 2. How can I compose these functions into one?<issue_comment>username_1: You need to "create the function composition" before calling it.

```js
function parseValue(val) {
  console.log(val)
  return typeof val === 'number' ? val : parseInt(val)
}

function stripUnits(val) {
  console.log(val)
  return typeof val === 'string' ? val.match(/\d+/)[0] : val
}

function convertToDecimal(val) {
  console.log(val)
  return val / 100
}

function prepValue(val) {
  return _.compose(stripUnits, parseValue, convertToDecimal)(val);
}

console.log("prepValue", prepValue('001232'));
```

Upvotes: 1 <issue_comment>username_2: The only mistake you made is not calling the resulting composed method with `val` as the argument:

```
function prepValue(val) {
  return compose(stripUnits, parseValue, convertToDecimal)(val);
}
```

Upvotes: 3 [selected_answer]
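One more note on ordering, since it trips people up: lodash's `compose` (an alias of `flowRight` in older lodash versions) applies its functions right-to-left, so in `compose(stripUnits, parseValue, convertToDecimal)` the division actually runs *first*. If the intent is strip, then parse, then divide, list the first step last (or use `_.flow`). A dependency-free sketch:

```javascript
// Minimal right-to-left compose, standing in for _.compose / _.flowRight.
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

const stripUnits = v => (typeof v === 'string' ? v.match(/\d+/)[0] : v);
const parseValue = v => (typeof v === 'number' ? v : parseInt(v, 10));
const convertToDecimal = v => v / 100;

// Right-to-left: stripUnits runs first because it is listed last.
const prepValue = compose(convertToDecimal, parseValue, stripUnits);

console.log(prepValue('1232km')); // 12.32
```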
2018/03/14
888
2,673
<issue_start>username_0: I'm working on PHP and I need to convert a PHP array to a JavaScript array. How do I change it? Please help; below is my output value. I have been trying to debug this for a long time but am not getting any leads. My PHP code:

```
<?php
include "db_connection.php";
$locations = array();
$query = $conn->query('SELECT `pg_address` FROM `tbl_master_property` limit 10');
while ($row = $query->fetch_assoc()) {
    $locations[] = $row;
}
$locations = json_encode($locations);
// echo "<pre>"; print_r($locations); die;
?>
```

PHP array value output:

```
Array
(
    [0] => Array ( [pg_address] => # 3/20, 1st Main, 1st Cross, Hosur Main Road, Adugodi, Bangalore )
    [1] => Array ( [pg_address] => 24/3 Bazaar Street, Adugodi, Bangalore - 560030 )
    [2] => Array ( [pg_address] => # 430, Koramangala 7th Block, Beside Sai Baba Temple, Bangalore )
    [3] => Array ( [pg_address] => # 41, 1st Cross, 2nd Main, Behind M R Granite, Adugodi, Bannerghatta Main Road, Bangalore )
    [4] => Array ( [pg_address] => # 27, 2nd Main, B cross, Nanjappa Layout, Adugodi, opp. to Vijaya Bank, Bangalore )
)
```

JavaScript code:

```
var locations = <?= $locations ?>;
```

//---I need a format like this in js ----//

```
var locations = [
  '3/20, 1st Main, 1st Cross, Hosur Main Road, Adugodi, Bangalore',
  '24/3 Bazaar Street, Adugodi, Bangalore - 560030',
  '# 430, Koramangala 7th Block, Beside Sai Baba Temple, Bangalore',
  '# 41, 1st Cross, 2nd Main, Behind M R Granite, Adugodi, Bannerghatta Main Road, Bangalore',
  '# 27, 2nd Main, B cross, Nanjappa Layout, Adugodi, opp. to Vijaya Bank, Bangalore'
];
```
2018/03/14
762
2,665
<issue_start>username_0: I'm having a bit of a comprehension issue with mapping values from an instance of a base class to an extended class. Is there a way to do this without specifying all of the individual properties in a constructor? I don't want to do that in case more optional params are added to the base class by other developers. (Also, there are about 20 of these classes, each with 30+ properties.) Given the following code, what is the best way to set the values of my extended class without 'breaking' TypeScript?

```
export class BaseClass {
  a: string;
  b: string;
  c: string;
}

export class ExtendClass extends BaseClass {
  d: string;
  e: string;
}

const exampleBase = {
  a: 'help',
  b: 'me',
  c: 'please',
};

let exampleExtend: ExtendClass = exampleBase; // Is there a way to do this??
exampleExtend.d = 'hello';
exampleExtend.e = 'world';

let exampleExtend2: any = exampleBase; // Breaking typing
exampleExtend2.d = 'hello';
exampleExtend2.e = 'world';
```

I was wondering if there was perhaps a way to achieve this with constructors, but I can't see a way to assign a class to a parameter being passed in... Probably because this is mental.

```
export class BaseClass {
  constructor(values?: BaseClass) {
    if (values) {
      this = values; // Is there a way to do this? I think not, and for good reason.
    }
  }
  a: string;
  b: string;
  c: string;
}

export class ExtendClass extends BaseClass {
  constructor(values: BaseClass) {
    super(values);
  }
  d: string;
  e: string;
}
```
<issue_comment>username_1: What you are looking for is a clean constructor and `Object.assign`:

```
class Bar {
  a: string;
  b: string;
  c: string;
  constructor(obj) {
    Object.assign(this, obj);
  }
}

let bar = new Bar({a: 'foo', b: 'bar', c: 'baz'});
console.log(bar.a);
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: @username_1's solution will work, but there is one problem: you could pass `{a: 'foo', b: 'bar', c: 'baz', oops: 'oops'}` and you would now have a property called `oops` on your object; basically, you have lost TypeScript's type checking. One way around this is to use an interface, which your constructor can then require, e.g.:

```
interface BarImp {
  a: string;
  b: string;
  c: string;
}

class Bar implements BarImp {
  a: string;
  b: string;
  c: string;
  constructor(obj: BarImp) {
    Object.assign(this, obj);
  }
}

let bar = new Bar({a: 'foo', b: 'bar', c: 'baz'});

// but this will now fail to compile:
//let bar = new Bar({a: 'foo', b: 'bar', c: 'baz', oops: 'oops'});

console.log(bar.a);
```

Upvotes: 1
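Putting the two answers together, here is a sketch that keeps `Object.assign` for brevity while getting excess-property checking from an interface (an object literal with an extra key is rejected at compile time). The `!` definite-assignment markers are only needed under `strictPropertyInitialization`:

```typescript
interface BarFields {
  a: string;
  b: string;
  c: string;
}

class Bar implements BarFields {
  // Assigned via Object.assign in the constructor, which the compiler
  // cannot see; hence the definite-assignment assertions.
  a!: string;
  b!: string;
  c!: string;

  constructor(fields: BarFields) {
    Object.assign(this, fields);
  }
}

const bar = new Bar({ a: 'foo', b: 'bar', c: 'baz' });
console.log(bar.a, bar.b, bar.c); // foo bar baz

// Rejected at compile time (excess property check on the literal):
// new Bar({ a: 'x', b: 'y', c: 'z', oops: '!' });
```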
2018/03/14
943
3,107
<issue_start>username_0: I'm writing a plugin that creates metaboxes on the admin page. I wrote a class thinking that it should work, but I don't see where it fails. The idea is that if a new object is loaded there is a possibility to set a custom name.

```
<?php
class Loader{

    public function __construct() {
        add_action('add_meta_boxes', 'loadMetaBox');
        //add_action('save_post', array($this, 'save'));
        //add_action('the_content', array($this, 'custom_message'));
    }

    protected $_cmbName;

    public function setLoader($cmbName){
        $this->_cmbName = $cmbName;
    }

    public function loadMetaBox(){
        add_meta_box(
            'cmb_meta',
            __( $this->_cmbName, 'cmb-textdomain' ),
            'cmb_meta_callback',
            'page'
        );
    }
};
?>
```

And called the class like this:

```
$cmb = new Loader();
$cmb->setLoader("Custom name");
$cmb->loadMetaBox();
```

This triggers a `Fatal error: Call to undefined function add_meta_box() in .../class.load-cmb.php on line 19`. Line 19: `add_meta_box( 'cmb_meta', __( $this->_cmbName, 'cmb-textdomain' ), 'cmb_meta_callback', 'page' );`<issue_comment>username_1: Try with this. Your code:

```
add_action('add_meta_boxes', 'loadMetaBox');
```

Replace it with this:

```
add_action('add_meta_boxes', array(&$this,'loadMetaBox' ));
```

Upvotes: 0 <issue_comment>username_2: There are some errors in your code:

1. Each hooked callback method needs to be passed as an array with `$this` as the first element and the callback method name as the second.
2. It is better to define your class in `$GLOBALS` at the end.
3. The function names that you are using need to be a bit more custom.
4. The `cmb_meta_callback()` function is not defined.

Try this instead:

```
if ( ! defined( 'ABSPATH' ) ) exit; // Exit if accessed directly

if ( ! class_exists( 'MBLoader' ) ) {

    class MBLoader{

        public function __construct() {
            add_action( 'add_meta_boxes', array( $this, 'load_meta_box') ); // <== HERE
        }

        protected $_cmbName;

        public function set_mbloader($cmbName){
            $this->_cmbName = $cmbName;
        }

        public function load_meta_box(){
            add_meta_box( 'cmb_meta', __( $this->_cmbName, 'cmb-textdomain' ), array( $this, 'cmb_meta_callback'), 'page' ); // <== HERE
        }

        // The metabox content
        public function cmb_meta_callback($post){
            echo '<p>' . __('TEST CONTENT') . '</p>'; // just for testing
            echo '<p>Post ID: ' . $post->ID . '</p>'; // just for testing
        }
    };

    $GLOBALS['mbloader'] = new MBLoader(); // the global variable to call
}
```

Then to call your class and make it work, you will simply use:

```
global $mbloader;
$mbloader->set_mbloader("Custom name");
```

I have used that in the **`init`** hook for testing:

```
add_action( 'init', 'create_a_custom_metabox');
function create_a_custom_metabox(){
    global $mbloader;
    $mbloader->set_mbloader("Custom name");
}
```

You will get something like: [![enter image description here](https://i.stack.imgur.com/zDqOK.png)](https://i.stack.imgur.com/zDqOK.png) Upvotes: 2 [selected_answer]
2018/03/14
619
2,308
<issue_start>username_0: recently I solved the following problem: Given a chronologically ordered list of LocalDateTime, find the average duration between neighbours. I did the following: ``` @Test public void canCalculateAverageDuration() { final LocalDateTime now = LocalDateTime.now(); final List localDateTimes = Arrays.asList(now, now.minusHours(5), now.plusMinutes(2)); final List durations = new ArrayList<>(); localDateTimes.stream() .sorted() .reduce((first, second) -> { durations.add(Duration.between(first, second)); return second; }); final OptionalDouble averageNanos = durations.stream() .mapToDouble(Duration::toNanos) .average(); final Duration average = Duration.ofNanos((long) averageNanos.orElse(0.0)); assertThat(average).isEqualTo(Duration.parse("PT2H31M")); } ``` I wonder if the problem could be solved in a more elegant way, for example: I would like to avoid the List of durations if possible. What do you think?<issue_comment>username_1: You could solve this just using iterations (i.e. not using Streams): ``` @Test public void canCalculateAverageDuration() { final LocalDateTime now = LocalDateTime.now(); final List localDateTimes = Arrays.asList( now, now.minusHours(5), now.plusMinutes(2) ); localDateTimes.sort(Comparator.naturalOrder()); LocalDateTime previous = null; LongSummaryStatistics stats = new LongSummaryStatistics(); for (LocalDateTime dateTime : localDateTimes) { if (previous == null) { previous = dateTime; } else { stats.accept(Duration.between(previous, dateTime).toNanos()); } } final Duration average = Duration.ofNanos((long) Math.ceil(stats.getAverage())); assertThat(average).isEqualTo(Duration.parse("PT2H31M")); } ``` Whether or not this is more elegant is subject to personal preference, but this version uses no intermediate collections at least. 
Upvotes: 1 <issue_comment>username_2: I just found this: ``` Collections.sort(localDateTimes); final double average = IntStream.range(0, localDateTimes.size() - 1) .mapToLong(l -> Duration.between( localDateTimes.get(l), localDateTimes.get(l+1)) .toNanos()) .average().orElse(0.0); assertThat(Duration.ofNanos((long) average)).isEqualTo(Duration.parse("PT2H31M")); ``` Upvotes: 1 [selected_answer]
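A further simplification, based on an observation not made in either answer: for a *sorted* list the pairwise gaps telescope, so their sum is just `last - first` and no per-gap collection or stream is needed at all. A sketch:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class AverageGap {

    // Average gap between consecutive timestamps: the pairwise differences
    // of a sorted list sum to (last - first), so divide that by (n - 1).
    static Duration averageGap(List<LocalDateTime> times) {
        List<LocalDateTime> sorted = new ArrayList<>(times);
        Collections.sort(sorted);
        Duration total = Duration.between(sorted.get(0), sorted.get(sorted.size() - 1));
        return total.dividedBy(sorted.size() - 1);
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.now();
        List<LocalDateTime> times = Arrays.asList(now, now.minusHours(5), now.plusMinutes(2));
        System.out.println(averageGap(times)); // PT2H31M
    }
}
```

For the test data from the question this gives (5h02m) / 2 = 2h31m, matching the expected `Duration.parse("PT2H31M")`.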
2018/03/14
494
1,821
<issue_start>username_0: I have a SQL table named MarketRates. This table has one column named `Rate`. Each month the rate will change. I have solved this issue; my doubt is whether there is any better solution for it. My table schema and data look like [![enter image description here](https://i.stack.imgur.com/60czS.png)](https://i.stack.imgur.com/60czS.png) In the same way I have a lot of records. My problem is that the same name has multiple rows. Is there any better solution for this?
2018/03/14
568
2,068
<issue_start>username_0: I'm using wordpress and I'm attempting to add this <http://davidjbradshaw.github.io/iframe-resizer/> to my wordpress sites. This is the page I'm adding as an iframe: <http://www.abc-legal.reviews/abc-embed/> I've added this JS at the top of that page: ``` var iframes = iFrameResize( [{options}], [css selector] || [iframe] ); ``` Should I be making changes to this at all? I.e should I add my own css selector etc.. what would I put under iframe? Generally I'm just a bit confused, has anyone had success using this in two wordpress sites? Could someone give me a clear step of instructions? Edit: this is the console error I'm getting: Uncaught ReferenceError: iFrameResize is not defined
2018/03/14
736
2,537
<issue_start>username_0: I'm trying to run a jar executable from a shell file. The path of my jar:

```
/home/flussi/xmlEncoder/encoder.jar
```

but I always get this error:

```
Exception in thread "main" java.lang.NoClassDefFoundError: smaf.encoder.Encoder
   at java.lang.Class.initializeClass(libgcj.so.7rh)
Caused by: java.lang.ClassNotFoundException: java.nio.file.LinkOption not found in gnu.gcj.runtime.SystemClassLoader{urls=[file:/home/flussi/xmlEncoder/encoder.jar], parent=gnu.gcj.runtime.ExtensionClassLoader{urls=[], parent=null}}
   at java.net.URLClassLoader.findClass(libgcj.so.7rh)
   at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
   at java.lang.ClassLoader.loadClass(libgcj.so.7rh)
   at java.lang.Class.initializeClass(libgcj.so.7rh)
```

shell command:

```
java -jar /home/flussi/xmlEncoder/encoder.jar
```
<issue_comment>username_1: There is evidence in the stacktrace that you are trying to use the GCJ tool chain to run that JAR file. (And the evidence in your comment below confirms this.) This is the real problem. Unfortunately, development of GCJ stalled before they completed support for Java 1.5. And it looks like you are trying to run a JAR file that depends on a Java 1.7 class (`java.nio.file.LinkOption`). My recommendation:

* uninstall the GCJ java packages that have been installed
* install OpenJDK Java 8 (1.8) packages or later1 from your package manager, or download and install Java 8 or later RPMs from the Oracle site.

If you don't manage the machine, get the managers to do it. Or try to run the JAR file somewhere else. It would most likely require a significant rewrite of the application to make it work on GCJ. And it would be wasted effort, since GCJ is effectively a dead Java platform.

---

1 - Java 7 would work, but it was EOLed a couple of years ago. Upvotes: 2 [selected_answer]<issue_comment>username_2: The best way to run a Java application is to set the CLASSPATH and PATH variables first. If your current jar file depends on external jar files you will face lots of problems. Better to set your path variables like below and run the application:

```
#!/usr/bin/ksh
export PATH=/usr/java/bin:$PATH   # /usr/java/bin is your java bin folder

# set environment variable CP with all the jar libraries
CP=/home/flussi/xmlEncoder/encoder.jar
CP=${CP}:/other/jar/somejar.jar

java -Xmx256M -classpath "$CP" "com.myproj.Example"
```

Here `com.myproj.Example` is the class inside `encoder.jar` where you have declared `public static void main`. Upvotes: 0
2018/03/14
1,192
3,448
<issue_start>username_0: Iterators have a `skip` method that skips the first `n` elements:

```
let list = vec![1, 2, 3];
let iterator = list.iter();
let skip_iter = iterator.skip(2); // skip the first 2 elements
```

I could not find a method to skip only the `n`-th element in the iterator. Do I need to implement something on my own or is there a method somewhere I haven't found?<issue_comment>username_1: That seems to be a very specific operation. There is no adaptor for that in the standard library or the `itertools` crate. It's easy to implement nonetheless. One could enumerate each element and filter on the index:

```
iter.enumerate().filter(|&(i, _)| i != n).map(|(_, v)| v)
```

[Playground](https://play.rust-lang.org/?gist=2cb98ca031dd8e13b7cf84965d558828&version=stable) Upvotes: 6 [selected_answer]<issue_comment>username_2: I am partial to the `filter_map` version

```
fn main() {
    let v = vec![1, 2, 3];
    let n = 1;
    let x: Vec<_> = v.into_iter()
        .enumerate()
        .filter_map(|(i, e)| if i != n { Some(e) } else { None })
        .collect();
    println!("{:?}", x);
}
```

[Playground](https://play.rust-lang.org/?gist=469447ea5572ccfa7d7c9677c8cebefd&version=stable) Upvotes: 4 <issue_comment>username_3: I already wanted to skip some range. The best in my opinion is to create an iterator:

```
mod skip_range {
    use std::ops::Range;
    use std::iter::Skip;

    /// Either the user provided iterator, or a `Skip` one.
    enum Either<I> {
        Iter(I),
        Skip(Skip<I>),
    }

    pub struct SkipRange<I> {
        it: Option<Either<I>>,
        count: usize,
        range: Range<usize>,
    }

    impl<I: Iterator> SkipRange<I> {
        pub fn new(it: I, range: Range<usize>) -> Self {
            SkipRange { it: Some(Either::Iter(it)), count: 0, range }
        }
    }

    impl<I: Iterator> Iterator for SkipRange<I> {
        type Item = I::Item;

        fn next(&mut self) -> Option<Self::Item> {
            // If we are in the part we must skip, change the iterator to `Skip`
            if self.count == self.range.start {
                self.count = self.range.end;
                if let Some(Either::Iter(it)) = self.it.take() {
                    self.it = Some(Either::Skip(it.skip(self.range.end - self.range.start)));
                }
            } else {
                self.count += 1;
            }
            match &mut self.it {
                Some(Either::Iter(it)) => it.next(),
                Some(Either::Skip(it)) => it.next(),
                _ => unreachable!(),
            }
        }
    }
}

use skip_range::SkipRange;

fn main() {
    let v = vec![0, 1, 2, 3, 4, 5];
    let it = SkipRange::new(v.into_iter(), 2..4);
    let res: Vec<_> = it.collect();
    assert_eq!(res, vec![0, 1, 4, 5]);
}
```

The principle is to use 2 different iterators: the first one is given by the user, the second one is a `Skip` iterator, created from the first one. Upvotes: 2 <issue_comment>username_4: If you have access to the original collection, it could be

```
let items = ["a", "b", "c", "d"];
let skipped_2nd = items.iter().take(1).chain(items.iter().skip(2));
```

Upvotes: 2 <issue_comment>username_5: I don't think there is something in the stdlib, but here's yet another pretty simple way to go about it.

```rust
fn main() {
    let (v, idx) = (vec!["foo", "bar", "baz", "qux"], 2_usize);
    let skipped = v[..idx].iter().chain(v[idx + 1..].iter());
    skipped.for_each(|&val| {
        dbg!(val);
    });
}
```

<https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=f47a28fd681fee2fe82b57c073d52648> Upvotes: 0 <issue_comment>username_6: More concise (note that the test must be `i != n` to keep everything *except* the `n`-th element):

```rust
let iter = vs
    .iter()
    .enumerate()
    .filter_map(|(i, el)| (i != n).then(|| el));
```

Upvotes: 0
2018/03/14
1,283
3,722
<issue_start>username_0: SQL function in Oracle Database:

```
FUNCTION init(id in number, code out varchar2) RETURN number;
```

I have SQL (Oracle database) in my java code:

```
private static final String MY_FUNCTION_SQL = "SELECT live.api.init(?,?) FROM DUAL";
```

And my method:

```
void myMethod() throws SQLException {
    try (CallableStatement cs = sdcon.prepareCall(MY_FUNCTION_SQL)) {
        cs.setLong(1, _myID);
        cs.registerOutParameter(2, Types.VARCHAR);
        ResultSet resultSet = cs.executeQuery();
    }
}
```

After `executeQuery()` I got this exception:

> java.sql.SQLException: ORA-06572: Function INIT has out arguments.
2018/03/14
726
2,730
<issue_start>username_0: I have a Grid in a Vaadin application. For one column I want to apply a `DateRenderer`. The following problem occurs: [![enter image description here](https://i.stack.imgur.com/NDHdl.png)](https://i.stack.imgur.com/NDHdl.png) What am I doing wrong? The [example](https://github.com/vaadin/book-examples/blob/master/src/com/vaadin/book/examples/component/grid/RendererExample.java) from the Book of Vaadin does it the same way I do. UPDATE I got the same result as the answers to this question suggest. My working code (with several renderers): ``` final Grid<Signature> grid = new Grid<>(Signature.class); grid.setSelectionMode(Grid.SelectionMode.SINGLE); grid.setSizeFull(); grid.setColumns(); grid.addColumn("type").setCaption(bundle.getString("type")); grid.addColumn("filename").setCaption(bundle.getString("filename")); grid.addColumn("createdTime", new DateRenderer("%1$td.%1$tm.%1$tY %1$tH:%1$tM:%1$tS")) .setCaption(bundle.getString("creationDate")); grid.addColumn(this::createCertificateLabel, new ComponentRenderer()) .setCaption(bundle.getString("certificate")) .setDescriptionGenerator((DescriptionGenerator) signature -> bundle.getString("certificateSerialNumber")); grid.addColumn(this::createLink, new ComponentRenderer()) .setCaption(bundle.getString("action")); ```<issue_comment>username_1: You can do it in the `addColumn()` function, which takes an [AbstractRenderer](https://vaadin.com/api/com/vaadin/ui/renderers/AbstractRenderer.html), while `setRenderer()` expects a [Renderer](https://vaadin.com/api/com/vaadin/ui/renderers/Renderer.html). ``` grid.addColumn( "myColumn", new DateRenderer( ... ) ) ``` I guess you can also try doing it this way, but I haven't tested it (as `DateRenderer` implements `Renderer`): ``` column.setRenderer( (Renderer)new DateRenderer( ... ) ); ``` Upvotes: 2 <issue_comment>username_2: Let's have a look at the signature: `Column<T, ?> getColumn(String columnId)`.
It doesn't really know what the second type parameter of your column is because it could be anything. So applying a renderer via the method `Column<T, V> setRenderer(Renderer<? super V> renderer)` expects an inferred renderer of type `Renderer<? super ?>`, which I think can not be fulfilled. **Solution 1:** Cast the column to an appropriate type like ``` ((Grid.Column<Signature, Date>) grid.getColumn("xyz")).setRenderer(new DateRenderer()) ``` This will give you a compile warning due to the unchecked cast. I think you could also cast to `Column` without type arguments but this will give you warnings, too. **Solution 2:** As avix already pointed out in his answer, passing the renderer in the `addColumn` method is easier. ``` grid.addColumn(item -> someExpressionThatReturnsDate, new DateRenderer()); ``` Upvotes: 2 [selected_answer]
2018/03/14
458
1,688
<issue_start>username_0: I have a dataframe like this: ``` In [23]: df Out[23]: scope 0 {'range': 2, 'category': '234'} 1 {'range': 1, 'category': '222'} ``` I would like to filter rows by condition "range==1". How can I do it?
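A dict-valued column can't be filtered with `df.query`; one approach (a sketch added for illustration, not an answer from the thread) is to build a boolean mask with `.apply`:

```python
import pandas as pd

# rebuild the example frame from the question
df = pd.DataFrame({"scope": [{"range": 2, "category": "234"},
                             {"range": 1, "category": "222"}]})

# look inside each dict to build a boolean mask, then index with it
mask = df["scope"].apply(lambda s: s.get("range") == 1)
filtered = df[mask]
```

If the dicts all share the same keys, flattening first (in recent pandas versions, `pd.json_normalize(df["scope"])`) and filtering the resulting columns is usually more convenient.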
2018/03/14
461
1,305
<issue_start>username_0: I created a map of some large ports. With 'x' and 'y' latitude and longitude and 'text' the port names. ``` x,y = map(lonA, latA) map.scatter(x, y, s=Size, c=color, marker='o', label = 'Ports',alpha=0.65, zorder=2) for i in range (0,n): plt.annotate(text[i],xy=(x[i],y[i]),ha='right') ``` The dots I plotted (bigger dots for bigger ports) overlap with the labels. How do I plot them a little further away to increase readability? [![enter image description here](https://i.stack.imgur.com/NTDqe.png)](https://i.stack.imgur.com/NTDqe.png)<issue_comment>username_1: You can use the `xytext` parameter to adjust the text position: ``` plt.annotate(text[i],xy=(x[i],y[i]),xytext=(x[i]+10,y[i]+10), ha='right') ``` Here I added 10 to your xy position. For more you can look up the suggestions here: <https://matplotlib.org/users/annotations_intro.html> Upvotes: 2 [selected_answer]<issue_comment>username_2: @KiralySandor you were right, but you need to change `textcoords` to `'data'`. ``` for i in range (0,n): plt.annotate(text[i],xy=(x[i],y[i]),textcoords='data',xytext=(x[i]-9000,y[i]),ha='right') ``` [![enter image description here](https://i.stack.imgur.com/S7XGL.png)](https://i.stack.imgur.com/S7XGL.png) Now the names are slightly more to the left. Upvotes: 0
2018/03/14
526
1,426
<issue_start>username_0: I have 3000 raw data with time and the amount of consumed energy. But this energy value is a cumulative sum and I need to get the monthly consumption value for each month. I want to know how I can loop through the data from the same month and subtract the last value of each month from the first value of the same month. The number of data points I have from each month differs from the other months. The first values of this list are as below: ``` Time Energy 2017-01-01 0.0 2017-01-01 456682295.279 2017-01-01 576253341.508 2017-01-01 693234839.384 2017-01-02 810613281.137 2017-01-02 928960004.805 . . . ```
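The last-minus-first idea the question describes maps onto a `groupby` keyed by year-month. A sketch with made-up February numbers (only the January values come from the question):

```python
import pandas as pd

# toy stand-in for the question's cumulative readings
df = pd.DataFrame({
    "Time":   ["2017-01-01", "2017-01-01", "2017-01-02",
               "2017-02-01", "2017-02-03"],
    "Energy": [0.0, 456682295.279, 928960004.805, 1000.0, 1500.0],
})

# key each row by its year-month, then subtract the month's first
# cumulative reading from its last one
month = df["Time"].str[:7]
monthly = df.groupby(month)["Energy"].agg(lambda s: s.iloc[-1] - s.iloc[0])
```

Note this measures the change within each month only; if consumption should also include the gap between a month's last reading and the next month's first, difference the per-month last values instead.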
2018/03/14
943
3,565
<issue_start>username_0: In firestore I want a user to only access a document if the user is in the `teamid` mentioned in the document. Now I have a different collection called `teams` where I have users mapped as `{ user_id = true }`. So I have the following in the Firestore rules ``` return get(/databases/$(database)/documents/teams/$(resource.data.teamid)).data.approvedMembers[request.auth.uid] == true; ``` Now this rule does not work and fails any request made to the database by the frontend. But when I replace `$(resource.data.teamid)` with my actual `teamid` value as follows, ``` return get(/databases/$(database)/documents/teams/234234jk2jk34j23).data.approvedMembers[request.auth.uid] == true; ``` ... it works as expected. Now my question is am I using `resource` in a wrong way or is `resource` not supported in `get()` or `exists()` queries in Firestore rules? **Edit** Complete rules as follows ``` service cloud.firestore { match /databases/{database}/documents { function isTeamMember() { return get(/databases/$(database)/documents/teams/$(resource.data.teamid)).data.approvedMembers[request.auth.uid] == true; // return exists(/databases/$(database)/documents/teams/$(resource.data.teamid)); } match /{document=**} { allow read, write: if isTeamMember(); } } } ``` If you notice the commented out rule, `exists()` does not work in this case either.<issue_comment>username_1: You have this `match`: ``` match /{document=**} { allow read, write: if isTeamMember(); } ``` That will match any document in the database. That means when you call `isTeamMember()` it's not guaranteed that `resource` represents a `team` document. If you have any subcollections on `teams` and write to those then `resource` will be the subcollection document. Upvotes: 1 <issue_comment>username_2: Thanks for reaching out on Twitter. 
Following our chat, here is the conclusion: Querying documents using your security rule ------------------------------------------- You cannot use a security rule as a filter. To get this to work, you must also add `.where('teamid', '==', yourTeamId)` *You have confirmed that this works for you, but you don't always want to restrict on one teamid* Using custom claims ------------------- You can set up custom claims in your authentication tokens. Here is an example of how to set these and then use them in your rules ### Setting custom claim You will need to use the Admin SDK for this. ``` auth = firebase.auth(); const userId = 'exampleUserId'; const customClaims = {teams: {myTeam1: true, myTeam2: true}}; return auth.setCustomUserClaims(userId, customClaims) .then(() => { console.log('Custom claim created'); return auth.getUser(userId); }) .then(userRecord => { console.log(`name: ${userRecord.displayName}`); console.log(`emailVerified: ${userRecord.emailVerified}`); console.log(`email: ${userRecord.email}`); console.log(`emailVerified: ${userRecord.emailVerified}`); console.log(`phoneNumber: ${userRecord.phoneNumber}`); console.log(`photoURL: ${userRecord.photoURL}`); console.log(`disabled: ${userRecord.disabled}`); console.log(`customClaims: ${JSON.stringify(userRecord.customClaims)}`); }) .catch(err => { console.error(err); }); ``` ### Apply security rule ``` allow read, write: if request.auth.token.teams.$(resource.data.teamId) == true; ``` I'm still not 100% certain that this will not also require you to filter by the `teamid`. Please test it and feed back. Upvotes: 2
2018/03/14
634
2,460
<issue_start>username_0: How can I write a JSON configuration file to deploy verticles in Vert.x dynamically?<issue_comment>username_1: Shameless plug, I wrote a library for that: <https://github.com/username_1/vertx-boot> It works on HOCON, which is a superset of JSON. You write a configuration file in HOCON, where values can be overridden with Java properties, environment variables, alternative configuration files, etc, and the library provides a *main verticle* that spins up all declared verticles. Is it adapted to your requirements? Upvotes: 2 <issue_comment>username_2: How I would handle this case would be to leverage [Vertx Config](https://vertx.io/docs/vertx-config/java/). I would have an initial verticle that retrieves the configuration, and then I would pull from the config the class names that you want to deploy. Kotlin Example ``` package com.example import io.vertx.config.ConfigRetriever import io.vertx.config.ConfigRetriever.create import io.vertx.config.ConfigStoreOptions import io.vertx.core.* import io.vertx.core.json.JsonArray import io.vertx.core.json.JsonObject import io.vertx.core.logging.Logger import io.vertx.core.logging.LoggerFactory import io.vertx.kotlin.config.ConfigRetrieverOptions class EntryVerticle : AbstractVerticle() { val log: Logger = LoggerFactory.getLogger(EntryVerticle::class.simpleName) override fun start(startFuture: Future<Void>?) { log.info("Started!!") val retrieverOptions = ConfigRetrieverOptions() //FYI you need to verify that the file is there otherwise this app won't launch.
//Too much for this example
val fileConfig = ConfigStoreOptions() fileConfig.setType("file").setFormat("json").config = JsonObject().put("path", "/app.json") retrieverOptions.addStore(fileConfig) val retriever = create(vertx, retrieverOptions) retriever.getConfig { config -> if(config.succeeded()) { val verticles = if (config.result().containsKey("verticles")) { config.result().getJsonArray("verticles") } else JsonArray()
//you would also need to verify this is a string.
verticles.forEach{className: Any ->
//example value "com.example.HelloWorldVerticle" (the full class name)
vertx.deployVerticle(className as String) } } } super.start(startFuture) } } ``` This was kind of off the cuff, but I know you can create verticles from their full name. There are several other ways you can pull in a configuration besides from the file system. It's in the doc linked above!! Upvotes: 0
2018/03/14
736
2,772
<issue_start>username_0: I'm looking to try and do a querySelector for text that avoids picking up a div within the content. Any thoughts gratefully appreciated. **JS** ``` domdoc.querySelector('li.list_item').textContent ``` **HTML:** ``` <li class="list_item">Hello world, <div>Please ignore this</div> how are you</li> ``` Returns: ``` Hello world, Please ignore this how are you ``` Would like to see: ``` Hello world, how are you ```
2018/03/14
1,238
4,197
<issue_start>username_0: I am facing a MISRA C 2004 violation of rule 1.2, "likely use of null pointer". The code that I am using is as below: ``` tm_uint8* diagBuf = 0u; diagBuf[0] = diagBuf[0] + 0x40u; diagBuf[2] = 0x01u; diagBuf[0] = diagBuf[0] + 0x40u; diagBuf[2] = 0x01u; ``` This is just a part of the code indicated above. Some of the statements have "IF" conditions. Can someone point out why I get the MISRA violation?<issue_comment>username_1: You assign 0u (that is, 0, i.e. NULL) to diagBuf, and then you use it with "diagBuf[0]". Either allocate it (malloc), or correct the declaration to fit your need (tm_uint8 diagBuf[3]; at minimum). Upvotes: 0 <issue_comment>username_2: According to the 1999 C standard, Section 6.3.2 "Pointers", para 3 > > An integer constant expression with the value 0, or such an expression cast to type `void *`, is called a null pointer constant. If a null pointer constant is converted to a pointer type, the resulting pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function. > > > (Note I've removed the cross reference at the end of the first sentence in the above to a footnote which explains that `NULL` is defined in `<stddef.h>` and other headers as a null pointer constant). This means that ``` tm_uint8* diagBuf = 0u; ``` initialises `diagBuf` using a null pointer constant, since `0u` is an integer constant expression with value zero. Accordingly, `diagBuf` is initialised as a null pointer. Furthermore the following statements ``` diagBuf[0] = diagBuf[0] + 0x40u; diagBuf[2] = 0x01u; ``` both dereference a null pointer. That is undefined behaviour according to C standards. The reported Misra violation is therefore completely correct. The circumstances in which such code would be acceptable (e.g. it would be possible to write a justification for an exemption from the Misra rule, and get that approved in context of the system development) are very limited in practice.
Upvotes: 2 [selected_answer]<issue_comment>username_3: C11 6.3.2.3 states: > > An integer constant expression with the value 0, or such an expression cast to type `void *`, is called a *null pointer constant* > > > This is the only case in the C language where you can assign a pointer to the value `0` - to get a null pointer. The line in the question can never be used to set a pointer to point at address zero, since assignment of integer values to pointers is invalid C in all other cases. If we go through the rules of simple assignment C11 6.5.16.1, then there exists no case where the left operand of `=` is a pointer and the right operand is an arithmetic type. This rule is very clear in the standard, code such as `int* ptr = 1234;`is simply invalid C and has always been invalid C (it is a "constraint violation" of the standard). Compilers letting it through without a warning/error are garbage and not standard conforming. Simple assignment lists one valid exception (C11 6.5.16.1): > > * the left operand is an atomic, qualified, or unqualified pointer, and the right is a null pointer constant > > > This is the only reason why the code in the question compiles. Now if you actually wish to point at the hardware address 0, you must write something like this: ``` volatile uint8_t* ptr = (volatile uint8_t*)0u; ``` This forces a conversion from the integer 0 into a pointer pointing at address 0. Since the right operand is neither a `0` nor a zero cast to `void*`, it is *not* a null pointer constant, and thus `ptr` is not a null pointer. The C standard is clear and MISRA-C is perfectly compatible with the standard. --- Bugs and issues unrelated to the question: * Not using `volatile` when pointing at a hardware address is always a bug. * Using `diagBuf[0] + 0x40u` is a bad way of setting a bit to 1. If the bit is already set, then you get a massive bug. Use `|` instead. 
* Assuming `diagBuf` is a pointer to byte, then `diagBuf[0] = diagBuf[0] + 0x40u;` is a MISRA-C:2012 violation of rule 10.3, since you assign a wider type ("essentially unsigned") to a narrower type. MISRA compliant code is: ``` diagBuf[0u] = (tm_uint8)(diagBuf[0u] + 0x40u); ``` Upvotes: 0
2018/03/14
384
1,342
<issue_start>username_0: Let's say that someone executes the result from the select: ``` ALTER LOGIN [myLOGIN] WITH PASSWORD = '<PASSWORD>' MUST_CHANGE, CHECK_POLICY = ON; ``` What is the correct way to execute a new `ALTER LOGIN` statement which removes the `MUST_CHANGE` policy? Is something like this OK, or is there a better practice: ``` ALTER LOGIN [myLOGIN] WITH PASSWORD = '<PASSWORD>' MUST_CHANGE, CHECK_POLICY = OFF; ```<issue_comment>username_1: Following this sentence: > > Set MUST\_CHANGE for new logins. If MUST\_CHANGE is specified, CHECK\_EXPIRATION and CHECK\_POLICY must be set to ON. > > > from [PasswordPolicySQLServerLogin](https://support.microsoft.com/en-us/help/2028712/understanding-password-policy-for-sql-server-logins), the best practice should be like this: ``` ALTER LOGIN [myLOGIN] WITH PASSWORD = '<PASSWORD>', CHECK_EXPIRATION = OFF; ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Combining the answers above, I used: ``` ALTER LOGIN [myLogin] WITH PASSWORD = '******' MUST_CHANGE, CHECK_POLICY = ON, CHECK_EXPIRATION = ON; ``` Upvotes: 0 <issue_comment>username_3: The correct way to disable MUST\_CHANGE and CHECK\_POLICY is with 2 separate statements. ``` ALTER LOGIN [myLOGIN] WITH PASSWORD = '<PASSWORD>'; ALTER LOGIN [myLOGIN] WITH CHECK_POLICY = OFF; ``` Upvotes: 3
2018/03/14
400
1,405
<issue_start>username_0: I am trying to generate all possible keys of a known `key_length` in Python. Keys are defined as a list of integers between 0 and 255. I am having trouble generating all the possible keys... If `key_length` were a constant number such as 2, I would have known how to handle this - it's just 2 for-loops, you know. However, when it's in a variable then I don't know how many for-loops I need to write. I guess I need a different approach. That's what makes it difficult for me.
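The variable number of nested loops is exactly what `itertools.product` with `repeat=key_length` replaces; a minimal sketch (written as a generator, since the key space grows as 256\*\*key_length):

```python
from itertools import product

def all_keys(key_length):
    """Yield every possible key: a list of key_length ints, each 0..255."""
    for combo in product(range(256), repeat=key_length):
        yield list(combo)
```

For `key_length=2` this enumerates, in lexicographic order, the same 65,536 keys the two nested loops would.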
2018/03/14
563
2,095
<issue_start>username_0: I use react native and redux as follows (working): **App.js** ``` export default class App extends React.PureComponent { constructor (props) { super(props); this.state = { ...store.getState()}; store.subscribe(() => { // i can read the state but not write it
const storestate = store.getState(); }); } handleConnectivityChange = isConnected => { this.setState({ isConnected }); }; render() { return ( <Provider store={store}> <PersistGate loading={...} persistor={persistor}> ... </PersistGate> </Provider> ); } } ``` --- ScreenX.js ``` import * as actions from "./redux/actions"; import { connect } from "react-redux"; class ScreenX extends React.Component { ... // i can call this from any function
this.props.action_saveNetInfo(isConnected); ... } function mapToStateProps(data) { return { isConnected: data["isConnected"] }; } export default connect(mapToStateProps, actions)(ScreenX); ``` The redux store works fine when I call actions from components other than App.js **Problem:** I want to call an action from App.js but this doesn't work ``` handleConnectivityChange = isConnected => { this.props.action_saveNetInfo(isConnected); }; ``` error : `TypeError: this.props.action_saveNetInfo is not a function` I cannot do this either ``` class App extends React.PureComponent { ... } export default connect(null, null)(App) ``` because it throws an error: ``` Invariant Violation: Could not find "store" in either the context or props of "Connect(App)". Either wrap the root component in a <Provider>, or explicitly pass "store" as a prop to "Connect(App)". ``` Any idea? Thanks<issue_comment>username_1: Use the `mapDispatchToProps` function to bind actions to `this.props`. Upvotes: 0 <issue_comment>username_2: Within the App component, you can call `dispatch` directly on the `store`. Then, you just need to dispatch your actionCreator. ``` import {action_saveNetInfo} from './redux/actions' // ...
handleConnectivityChange = isConnected => { store.dispatch(action_saveNetInfo(isConnected)); }; ``` Upvotes: 4 [selected_answer]
2018/03/14
937
2,210
<issue_start>username_0: For validating an input of type time in **hh:mm** format, I tried the pattern below ``` pattern="^([01]\d|2[0-3]):([0-5]\d)$" ``` The requirement is to restrict *00:00* and allow *24:00*, so I updated the pattern as ``` pattern="^(24:00)|(([01]\d|2[0-3]):([0-5]\d))$" ``` Now it is allowing 24:00; please help me restrict the *00:00* value. Link (Sample Code): <https://www.w3schools.com/code/tryit.asp?filename=FPCIVB1OCQ04><issue_comment>username_1: I think this may be helpful to you ``` (?!00)[0-2][0-4]:[0-5][0-9] ``` Upvotes: -1 <issue_comment>username_2: Try this one: ``` ^(24:00)|((0[1-9]|1\d|2[0-3]):([0-5]\d))|(00:(0[1-9]|[1-5][0-9]))$ ``` [Demo](https://regex101.com/r/xSqRUr/4/) It has extra handling for the `00` hour, allowing only a non-zero minute part. Upvotes: 1 <issue_comment>username_3: Try this `^(24:00)|((0[1-9]|1\d|2[0-3]):([0-5]\d))|(00:(0[1-5]|[1-9]0|[1-5][1-9]))$` ```html Takes 00:01 to 24:00, we need to restrict 00:00 Input: ``` Upvotes: 1 <issue_comment>username_4: This regex should work: ``` ^(?!00:00)(24:00|([0-1]\d|2[0-3]):[0-5]\d)$ ``` **Explanation**: [![enter image description here](https://i.stack.imgur.com/P2DVD.png)](https://i.stack.imgur.com/P2DVD.png) [Demo](https://regex101.com/r/YJN3Wq/3) - updated^2 You can read more about regex `negative lookahead` in this [link](http://www.rexegg.com/regex-lookarounds.html) Upvotes: 4 [selected_answer]<issue_comment>username_5: Try this please `"^(20|21|22|23|[01]\d|\d)(([:][0-5]\d){1,2})$"` edited as suggested ``` "^(24:00)|((0[1-9]|1\d|2[0-3]):([0-5]\d))|(00:(0[1-9]|[1-5]\d))$" ``` this should work fine Upvotes: 0 <issue_comment>username_6: First off, you don't need `^` and `$` anchors on the `pattern` attribute. They are automatically applied [against the input string](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input#Attributes): > > A regular expression that the control's value is checked against. The > `pattern` must match the entire value, not just some subset.
> > > Secondly, you only need to prepend to your regex a negative lookahead to do an immediate failure on `00:00`: ``` pattern="(?!00:00)(?:(?:[01]\d|2[0-3]):[0-5]\d|24:00)" ``` Upvotes: 0
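The accepted lookahead pattern is easy to sanity-check outside the browser; a quick sketch in Python, with explicit anchors added back since Python's `re` doesn't anchor automatically the way the HTML `pattern` attribute does:

```python
import re

# the accepted answer's pattern, anchored explicitly
time_re = re.compile(r"^(?!00:00)(24:00|([01]\d|2[0-3]):[0-5]\d)$")

def is_valid(value):
    return time_re.match(value) is not None
```

`00:00` is rejected by the lookahead, `24:00` by itself is allowed, and everything else must be `00:01` through `23:59`.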
2018/03/14
2,208
6,091
<issue_start>username_0: I'm having some trouble calculating the RMSE (root-mean-squared-error) in my LSTM model. The model fits fine and I'm getting a good loss reduction, however when trying to inverse\_transform my yhat results I get the following error: ``` non-broadcastable output operand with shape (399,1) doesn't match the broadcast shape (399,4) ``` Here's my code: Preprocessing: ``` btc = pd.read_csv('live_bitcoin.csv') twitter_sent = pd.read_csv('live_tweet.csv') reddit_sent = pd.read_csv('live_reddit.csv') btc.columns = ["price_usd","24h_volume_usd","market_cap_usd","available_supply","total_supply","percent_change_1h","percent_change_24h","percent_change_7d", "Sell", "Buy", "15m", "Stamp"] twitter_sent.columns = ["Sentiment", "Stamp"] reddit_sent.columns = ["Sentiment", "Stamp"] merged = pd.merge(twitter_sent, btc, on='Stamp', how='inner').merge(reddit_sent, on='Stamp', how='inner') data = merged[["Sentiment_x", "Sentiment_y","24h_volume_usd", "market_cap_usd", "available_supply","price_usd"]].groupby(merged['Stamp']).mean() datag = data[["24h_volume_usd", "market_cap_usd", "available_supply","price_usd"]] tw_sentiment = data["Sentiment_x"] rdt_sentiment = data["Sentiment_y"] print "Dataset size: " + str(len(datag)) print "Timespan: " + str(len(datag)/60) + " hours" from sklearn.preprocessing import MinMaxScaler scaler = MinMaxScaler(feature_range=(0, 1)) values = datag.values.reshape(-1, datag.shape[1]) tw_sentiment = tw_sentiment.values.reshape(-1, 1) rdt_sentiment = rdt_sentiment.values.reshape(-1, 1) tw_sentiment = tw_sentiment.astype('float32') rdt_sentiment = rdt_sentiment.astype('float32') values = values.astype('float32') scaled = scaler.fit_transform(values) ``` Training: ``` train_size = int(len(scaled) * 0.7) test_size = len(scaled) - train_size train, test = scaled[0:train_size,:], scaled[train_size:len(scaled),:] split = train_size def create_dataset(dataset, look_back, tw_sentiment, rdt_sentiment, sent=False): dataX, dataY = [], [] for i in range(len(dataset) - look_back): if i >= look_back: a = dataset[i-look_back:i+1, 0] a = a.tolist() if(sent==True): current_tw_sentiment = tw_sentiment[i].tolist()[0] current_rdt_sentiment = rdt_sentiment[i].tolist()[0] a.append(current_tw_sentiment) a.append(current_rdt_sentiment) dataX.append(a) dataY.append(dataset[i + look_back, 0]) print(len(dataY)) return np.array(dataX), np.array(dataY) look_back = 2 trainX, trainY = create_dataset(train, look_back, tw_sentiment[0:train_size], rdt_sentiment[0:train_size], sent=True) testX, testY = create_dataset(test, look_back, tw_sentiment[train_size:len(scaled)], rdt_sentiment[train_size:len(scaled)], sent=True) trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
# Creating new model
model = Sequential() model.add(LSTM(100, input_shape=(trainX.shape[1], trainX.shape[2]), return_sequences=True)) model.add(LSTM(100)) model.add(Dense(1)) model.compile(loss='mae', optimizer='adam') model.save('LSTM_14-03-2018.h5')
# Loading model
# model = load_model('models/LSTM_12-03-2018_GOOD.h5')
history = model.fit(trainX, trainY, epochs=300, batch_size=100, validation_data=(testX, testY), verbose=0, shuffle=False) yhat = model.predict(testX) yhat_inverse = scaler.inverse_transform(yhat.reshape(-1, 1)) testY_inverse = scaler.inverse_transform(testY.reshape(-1, 1)) rmse_sent = sqrt(mean_squared_error(testY_inverse, yhat_inverse)) print "Done" print 'Test RMSE: %.3f' % rmse_sent ``` The main problem lies here: ``` yhat_inverse = scaler.inverse_transform(yhat.reshape(-1,1)) testY_inverse = scaler.inverse_transform(testY.reshape(-1,1)) ``` From what I understand (still a beginner in ML), my yhat variable has a shape of (399, 1) as I'm trying to make a prediction based on several features. I'm only looking to revert my data to its previous transform so the RMSE error returns in an appropriate scale. I'm basically trying to reconvert prices to their normal scale.
I'm also never re\_transforming the data after the MinMaxScaler does it in the preprocessing stage. Any clues on what might be wrong?
1,329
4,852
<issue_start>username_0: So, I have this function which takes a map, where a name is associated with one of a number of different indexes into an array. The index numbers will only ever have one name associated with them, so no duplicates and no nulls, so it's safe to flatten the hierarchy using the following function. ``` public Map<Integer, String> normalize(Map<String, List<Integer>> hierarchalMap) { Map<Integer, String> normalizedMap = new HashMap<>(); for (Map.Entry<String, List<Integer>> entry : hierarchalMap.entrySet()) { for (Integer integer : entry.getValue()) { normalizedMap.put(integer, entry.getKey()); } } return normalizedMap; } ``` I'm trying to change this function into using the streams API and I've gotten this far: ``` Map<Integer, String> noramizedMap = new HashMap<>(); for (Map.Entry<String, List<Integer>> entry : vars.entrySet()) { entry.getValue().forEach(e -> noramizedMap.put(e, entry.getValue())); } ``` If this were some other functional language I'd do a partial bind or whatever, but with Java, when I try to unwrap the outer loop into a stream...collectTo, I just get lost.<issue_comment>username_1: Assuming my comment is correct, you can do it with Streams like this: ``` hierarchalMap.entrySet() .stream() .flatMap(entry -> entry.getValue() .stream() .map(i -> new AbstractMap.SimpleEntry<>(entry.getKey(), i))) .collect(Collectors.toMap(Entry::getValue, Entry::getKey)); ``` This assumes there are no duplicates and no nulls. Upvotes: 2 <issue_comment>username_2: I think this should do what you need: ``` public Map<Integer, String> normalizeJava8(Map<String, List<Integer>> hierarchalMap) { return hierarchalMap .entrySet() .stream() .collect( HashMap::new, (map, entry) -> entry.getValue().forEach(i -> map.put(i, entry.getKey())), HashMap::putAll); } ``` It's often the case when working with Java 8 streams that you have to put more logic into the "collect" part of the operation than in an equivalent construction in another language, due partially to the lack of a convenient tuple type.
Intuitively it might seem more sane to create a list of pairs then collect them into a map, but that ends up being more code and more computationally intensive than putting that logic in the `.collect` Upvotes: 2 <issue_comment>username_3: With streams and Java 9: ``` Map<Integer, String> normalizedMap = hierarchalMap.entrySet().stream() .flatMap(e -> e.getValue().stream().map(i -> Map.entry(i, e.getKey()))) .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)); ``` This is almost identical to [this answer](https://stackoverflow.com/a/49275815/1876620), except I'm using the [`Map.entry()`](https://docs.oracle.com/javase/9/docs/api/java/util/Map.html#entry-K-V-) method to create the pairs and am putting the integers as the keys. --- Here's another, less verbose way to do the same without streams: ``` Map<Integer, String> normalizedMap = new HashMap<>(); hierarchalMap.forEach((k, v) -> v.forEach(i -> normalizedMap.put(i, k))); ``` Upvotes: 2 [selected_answer]<issue_comment>username_4: Here are two convenience collectors you can use in Java 8 that are not just limited to maps. ``` public static <T, K, V> Collector<T, ?, Map<K, V>> flatInverseMapping(Function<? super T, ? extends Stream<? extends K>> keyStreamFunction, Function<? super T, ? extends V> valueFunction) { return Collector.of(HashMap::new, (m, v) -> keyStreamFunction.apply(v).forEach(innerV -> m.put(innerV, valueFunction.apply(v))), (m1, m2) -> { m1.putAll(m2); return m2; }); } public static <T, K, V> Collector<T, ?, Map<K, V>> flatInverseMapping(Function<? super T, ? extends Collection<? extends K>> keyStreamFunction, Function<? super T, ? extends V> valueFunction) { return Collector.of(HashMap::new, (m, v) -> keyStreamFunction.apply(v).forEach(innerV -> m.put(innerV, valueFunction.apply(v))), (m1, m2) -> { m1.putAll(m2); return m2; }); } ``` Since both streams and collections have a forEach, it makes both implementations identical other than input object.
For a brief explanation of how this works, the output of the collector is a Map of the K and V (key and value) params that are defined by the outputs of the two functions. For each of the key values derived from the input object in the stream, the same value function will be applied so the map will be inverted with a consistent value across shared keys. Note that if there are multiple items in the stream that resolve to the same keys, this will not throw the merge exception like the normal toMap implementation. The BiConsumer will need to be changed to this to maintain that behavior: ``` (var1, var2) -> { Iterator<Map.Entry<K, V>> var3 = var2.entrySet().iterator(); while(var3.hasNext()) { Map.Entry<K, V> var4 = var3.next(); var1.merge(var4.getKey(), var4.getValue(), (v0, v1) -> { throw new IllegalStateException(String.format("Duplicate key %s", v0)); }); } return var1; } ``` For reference, this was essentially copied from the Collectors.toMap code. Upvotes: 0
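For comparison, the same inversion reads as a one-line dict comprehension in Python — a sketch of the concept rather than the Java the thread is about, assuming index values are unique across names as the question states:

```python
def invert(hierarchical):
    """Flatten {name: [indexes]} into {index: name}.

    Assumes each index appears under exactly one name, as in the question;
    if an index repeated, the last name seen would silently win.
    """
    return {i: name for name, ids in hierarchical.items() for i in ids}

print(invert({"alice": [0, 2], "bob": [1]}))
# → {0: 'alice', 2: 'alice', 1: 'bob'}
```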
2018/03/14
1,269
4,613
<issue_start>username_0: I am using the PHP Google SDK for my project. I want to create an account only if the authenticated user is an admin of the G Suite account. I have tried to find this, but I am not able to find a single link that helps me identify the account. Can anyone help me check whether the user is an admin of the G Suite account or not? Here is the overview link [Click Here](https://developers.google.com/admin-sdk/directory/v1/reference/users) , you can check in the response , there is one key `isAdmin` available. But when I try out the below link with `get` and `list` it returns me an error message something like this `"Not Authorized to access this resource/api"` Here is the `try it out` link of the google : <https://developers.google.com/admin-sdk/directory/v1/reference/users/list>
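One way to approach the check described above: fetch the authenticated user via the Directory API `users.get` endpoint and inspect the `isAdmin` / `isDelegatedAdmin` fields of the returned Users resource (both fields appear in the reference linked in the question; the `"Not Authorized"` error usually indicates the calling account lacks admin privileges or the Admin SDK API is not enabled). A small sketch of just the decision logic in Python — the HTTP call itself is omitted, and the dict shape mirrors the documented JSON response:

```python
def is_gsuite_admin(user_resource):
    """Given the JSON body of an Admin SDK Directory `users.get` response
    (as a dict), report whether the user has admin rights.

    `isAdmin` marks super admins; `isDelegatedAdmin` marks users who were
    granted delegated admin roles.
    """
    return bool(user_resource.get("isAdmin") or user_resource.get("isDelegatedAdmin"))

# Hypothetical response bodies:
print(is_gsuite_admin({"primaryEmail": "[email protected]", "isAdmin": True}))   # True
print(is_gsuite_admin({"primaryEmail": "[email protected]", "isAdmin": False}))  # False
```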
2018/03/14
575
1,734
<issue_start>username_0: I have an object like ``` obj = {name:"xxx" , des1:"x",des2:"xx",des3:"xxx" , age:"12"}. ``` But the property of des can be increased as `des1,des2,des3,des4 ...` according to the user's inputs. So basically we don't know how many "des" properties there are in the object. I want to do something like this. Grab all the properties of `des` and `put` them in an array. Then update the object as follows ``` obj = {name:"xxx" , description:["x","xx","xxx"] , age:"12"} ``` How can I achieve this using `ES6 syntax`?<issue_comment>username_1: you can transform your data in this way: ``` const transformed = Object.keys(obj).reduce( (acc, key) => { return key === 'name' || key === 'age' ? { ...acc, [key]: obj[key] } : { ...acc, description: [...acc.description, obj[key]] } }, { description: [] } ) ``` Upvotes: 1 <issue_comment>username_2: What about this one? ``` const f = {name:"xxx", des1:"x", des2:"xx", des3:"xxx", age:"12"}; const { name, age, ...rest} = f; const result = { name, age, description: Object.values(rest) }; console.log(result) // { name: 'xxx', age: '12', description: [ 'x', 'xx', 'xxx' ] } ``` Upvotes: 1 <issue_comment>username_3: You can make use of `reduce` and then match the `string` with the regex which checks if the string is `des`, followed by a number ```js var obj = {name:"xxx" , des1:"x",des2:"xx",des3:"xxx" , age:"12"} const res = Object.keys(obj).reduce((acc, key)=> { if(key.match(/^des([0-9]+)$/)) { if(acc.description) { acc.description.push(obj[key]); } else { acc.description = [obj[key]]; } } else { acc[key] = obj[key]; } return acc; }, {}) console.log(res); ``` Upvotes: 0
2018/03/14
332
1,328
<issue_start>username_0: I have a SQL script in SQL Server Management Studio (2012) that consists of my main query and then a local temp table. I'm constantly working on the main script, but the temp table is static. Is there a shortcut for running the script without the temp table? In other words, `F5` runs the whole script including the temp table. With the temp table, the script runs much longer. So, is there a shortcut that allows me to run the script without the temp table and without having to manually select the main script every time? Thanks<issue_comment>username_1: There is no shortcut to achieve what you want. Your options will always include editing your script: * Set `NOEXEC ON/OFF`. Use `SET NOEXEC ON` just before your table variable and `SET NOEXEC OFF` just after. * Comment the part you don't want to execute. * Use `GOTO` with the appropriate label. Upvotes: 3 [selected_answer]<issue_comment>username_2: If you make the temp table a global temp table using two #'s, for example: ``` CREATE TABLE ##mytemp (myval int) ``` **And do not close the window or drop the table**, then the connection stays open. You can then open a new window and query, modify, or manipulate the contents of ##mytemp. The catch is global temp tables are not private and can be queried by any other user. Upvotes: 0
2018/03/14
343
1,345
<issue_start>username_0: I have a JavaScript code snippet which is as following: ```js var obj = { message: "Hello", innerMessage: !(function() { console.log(this.message); })() }; console.log(obj.innerMessage); ``` It outputs: `undefined true` The function which gets executed for evaluating `innerMessage` property prints the `message` property of the object on which the method is called. The value of that property is `Hello`. However what gets printed is `undefined`. It looks like the object is not getting passed to the method. Why is it happening?
2018/03/14
520
1,762
<issue_start>username_0: I have the following code which worked at some point but now throws me an error at ".SourceData = rng.Address(True, True, xlR1C1, True)" ``` Dim rng As Range Set rng = ActiveSheet.Range("A1:F" & LastRow) Set shTotalsPivot = ActiveWorkbook.Sheets("Totals Pivot") With shTotalsPivot.PivotTables(1).PivotCache .SourceData = rng.Address(True, True, xlR1C1, True) .Refresh End With ``` Could you please advise what I am doing incorrectly. I simply want to change a source in the existing pivot table to the new sheet, which will be the active sheet in this case. Thanks<issue_comment>username_1: See if this works. The source must be a string and include the sheet name, and you are referring to a pre-existing PT, hence the use of ChangePivotCache. ``` shTotalsPivot.PivotTables(1).ChangePivotCache _ ThisWorkbook.PivotCaches.Create( _ SourceType:=xlDatabase, _ SourceData:=ActiveSheet.Name & "!" & rng.Address(True, True, xlR1C1, True)) shTotalsPivot.PivotTables(1).refresh ``` Upvotes: 0 <issue_comment>username_2: In order to include the sheet's name in the `Range.Address`, the 4th parameter needs to be `xlExternal`. See modified code below: ``` Dim shTotalsPivot As Worksheet Dim Rng As Range Dim RngString As String Dim PvtTbl As PivotTable Set Rng = ActiveSheet.Range("A1:F" & LastRow) ' put the full range address (including sheet name) in a String variable RngString = Rng.Address(False, False, xlA1, xlExternal) Set shTotalsPivot = ActiveWorkbook.Sheets("Totals Pivot") 'set the Pivot-Table object Set PvtTbl = shTotalsPivot.PivotTables(1) ' === for DEBUG ONLY === Debug.Print RngString ' update the Pivot-Cache With PvtTbl.PivotCache .SourceData = RngString .Refresh End With ``` Upvotes: 1
2018/03/14
1,055
2,889
<issue_start>username_0: I'm not good at HTML/CSS. I want to apply a border to the div, which contains an image on the right ```css .amount-2 { border: 3px solid #4CAF50; padding: 5px; width: 70%; float:left; } .sample { float: right; width: 30%; margin-top: -100px; } ``` ```html Files must be less than 2 MB. Allowed file types: png gif jpg jpeg. Images must be between 200x200 and 800x1400 pixels. Web page addresses and e-mail addresses turn into links automatically. Lines and paragraphs break automatically. ![Pineapple](https://gallery.yopriceville.com/var/albums/Free-Clipart-Pictures/Cartoons-PNG/Cute_Bunny_Cartoon_Transparent_Clip_Art_Image.png?m=1478318101) ``` I don't want to fix the heights.<issue_comment>username_1: ``` .main{ border: 3px solid #4CAF50; width:auto; margin: 0; } .amount-2 { padding: 5px; width: 70%; float:left; } .sample { float: right; width: 30%; margin-top: -100px; } Files must be less than 2 MB. Allowed file types: png gif jpg jpeg. Images must be between 200x200 and 800x1400 pixels. Web page addresses and e-mail addresses turn into links automatically. Lines and paragraphs break automatically. ![Pineapple](https://gallery.yopriceville.com/var/albums/Free-Clipart-Pictures/Cartoons-PNG/Cute_Bunny_Cartoon_Transparent_Clip_Art_Image.png?m=1478318101) ``` try this. Upvotes: 0 <issue_comment>username_2: Text needs to have a width. ```css .amount-2 { border: 3px solid #4CAF50; padding: 5px; width: 70%; float:left; } .amount-2 p { display: inline-block; vertical-align: top; width: 69%; } .sample { display: inline-block; vertical-align: top; width: 30%; } ``` ```html Files must be less than 2 MB. Allowed file types: png gif jpg jpeg. Images must be between 200x200 and 800x1400 pixels. Web page addresses and e-mail addresses turn into links automatically. Lines and paragraphs break automatically.
![Pineapple](https://gallery.yopriceville.com/var/albums/Free-Clipart-Pictures/Cartoons-PNG/Cute_Bunny_Cartoon_Transparent_Clip_Art_Image.png?m=1478318101) ``` Upvotes: -1 <issue_comment>username_3: The image is now not overflowing parent div and not bad scaled: ```css .amount-2 { border: 3px solid #4CAF50; padding: 5px; width: 70%; float:left; overflow: hidden; } .sample { float: right; width: 30%; height: auto; margin-top: -100px; } ``` ```html Files must be less than 2 MB. Allowed file types: png gif jpg jpeg. Images must be between 200x200 and 800x1400 pixels. Web page addresses and e-mail addresses turn into links automatically. Lines and paragraphs break automatically. ![Pineapple](https://gallery.yopriceville.com/var/albums/Free-Clipart-Pictures/Cartoons-PNG/Cute_Bunny_Cartoon_Transparent_Clip_Art_Image.png?m=1478318101) ``` Upvotes: 0
2018/03/14
403
1,599
<issue_start>username_0: I'm not used to C or C++ or AHK. My problem is the following: There exists a tool called "TI Helper", which is composed of 1 EXE and several text files. This EXE enables you to press "CTRL+SPACE" in the TM1 application, which will pop up a (right-click kind of menu) based on the text files... I opened the EXE with notepad and we can see the code... Can I simply re-use or modify this code? What should I keep in mind?<issue_comment>username_1: First of all - will any exe file modifications violate or comply with the software licensing terms? If it is allowed, you should know the format of the exe file, better if assembler language too. Generally, modifying the data segment in an exe file (e.g. 13 characters "File created" to "Result is OK" - watching that the total number of exe file bytes will not change) could eventually result only in changes in displayed text. Modifying binary code (the code segment of an exe file) requires understanding what "mov ax,60" is, what it could cause, and could give the expected result ONLY if the machine (assembler) code is fully understood. Upvotes: 0 <issue_comment>username_2: This has nothing to do with C, C++ or assembly and you have neither decompiled, nor can you recompile the executable. TIHelper is an open source AHK (autohotkey scripting language) file. As a script file, it is not compiled into unreadable machine gibberish, but instead interpreted in its human-readable form. You are free to make changes to that AHK file and run with those changes. [Link to the source code archive of TIHelper](https://archive.codeplex.com/?p=tihelper) Upvotes: 2
2018/03/14
421
1,511
<issue_start>username_0: How to automate the Facebook login without hard-coding my username and password?<issue_comment>username_1: You should use Facebook OAuth API, never hard-code password, some references: [Facebook SDK for Python](https://facebook-sdk.readthedocs.io/en/latest/index.html) [Python Social Auth documentation](https://python-social-auth.readthedocs.io/en/latest/backends/facebook.html) [Facebook OAuth 2 Tutorial](https://requests-oauthlib.readthedocs.io/en/latest/examples/facebook.html) Upvotes: 3 <issue_comment>username_2: To automate the **Facebook Login** without hard-coding the *username* and *password* you can use the `input()` function to take the *user input* from the console as follows : ``` from selenium import webdriver driver = webdriver.Firefox(executable_path=r'C:\Utility\BrowserDrivers\geckodriver.exe') driver.get("https://www.facebook.com/") emailid = input("What is your emailid?(Press enter at the end to continue):") driver.find_element_by_xpath("//input[@id='email']").send_keys(emailid) password = input("What is your password?(Press enter at the end to continue):") driver.find_element_by_xpath("//input[@id='pass']").send_keys("<PASSWORD>") driver.find_element_by_xpath("//input[starts-with(@id, 'u_0_')][@value='Log In']").click() ``` Console Output : ``` What is your emailid?(Press enter at the end to continue):<EMAIL> What is your password?(Press enter at the end to continue):<PASSWORD> ``` Upvotes: 2 [selected_answer]
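Building on the accepted approach above, the password prompt can be kept out of the source entirely; Python's standard `getpass` also avoids echoing it to the console. A small sketch — the prompt functions are injectable here purely so the logic can be exercised without a real console:

```python
from getpass import getpass

def prompt_credentials(input_fn=input, password_fn=getpass):
    """Ask for Facebook credentials at runtime instead of hard-coding them.

    By default `getpass` keeps the typed password from being echoed to
    the screen; the returned pair can then be fed to the Selenium
    send_keys() calls shown in the answer above.
    """
    email = input_fn("Facebook email (press enter to continue): ")
    password = password_fn("Facebook password (not echoed): ")
    return email, password
```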
2018/03/14
1,298
4,274
<issue_start>username_0: I am use `mgp25/Instagram-API` How can l get my instagram posts with the likes of a particular user? My code: ``` set_time_limit(0); date_default_timezone_set('UTC'); require __DIR__.'/vendor/autoload.php'; $username = 'myInstagramUsername'; $password = '<PASSWORD>'; $debug = false; $truncatedDebug = false; $ig = new \InstagramAPI\Instagram($debug, $truncatedDebug); try { $ig->login($username, $password); } catch (\Exception $e) { echo 'Something went wrong: '.$e->getMessage()."\n"; exit(0); } try { $userId = $ig->people->getUserIdForName($username); $act = json_encode($ig->people->getRecentActivityInbox(), true); ??????? } catch (\Exception $e) { echo 'Something went wrong: '.$e->getMessage()."\n"; } ```<issue_comment>username_1: Try looping through each item of your profile then get the likes and find the username. Then if the item has a like by that user put it in an item array like so: ``` // Get the UserPK ID for "natgeo" (National Geographic). $userId = $ig->people->getUserIdForName('natgeo'); // Starting at "null" means starting at the first page. $maxId = null; do { $response = $ig->timeline->getUserFeed($userId, $maxId); // In this example we're simply printing the IDs of this page's items. foreach ($response->getItems() as $item) { //loop through likes as u can see in [source 1][1] there is some method called 'getLikers()' which u can call on a media object. foreach($item->getMedia()->getLikers() as $h){ // here do some if with if response user == username } } ``` source 1:<https://github.com/mgp25/Instagram-API/blob/master/src/Request/Media.php> source 2:<https://github.com/mgp25/Instagram-API/tree/master/examples> source 3:<https://github.com/mgp25/Instagram-API/blob/e66186f14b9124cc82fe309c98f5acf2eba6104d/src/Response/MediaLikersResponse.php> By reading the source files this could work i havent tested it yet. 
Upvotes: 2 <issue_comment>username_2: **Worked** ``` set_time_limit(0); date_default_timezone_set('UTC'); require __DIR__.'/vendor/autoload.php'; $username = 'username'; $password = '<PASSWORD>'; $debug = false; $truncatedDebug = false; $ig = new \InstagramAPI\Instagram($debug, $truncatedDebug); try { $ig->login($username, $password); } catch (\Exception $e) { echo 'Something went wrong: '.$e->getMessage()."\n"; exit(0); } try { $posts = []; $comments = []; $userId = $ig->people->getUserIdForName($username); $maxId = null; $response = $ig->timeline->getUserFeed($userId, $maxId); foreach ($response->getItems() as $item) { foreach($item->getLikers($item->getId()) as $h){ $posts[] = ['id' => $item->getId(), 'username' => $h->username]; } foreach($ig->media->getComments($item->getId()) as $v){ if(count($v->comments) > 0){ foreach($v->comments as $c){ $comments[] = ['id' => $item->getId(), 'username' => $c->user->username, 'text' => $c->text]; } } } } print_r($posts); print_r($comments); } catch (\Exception $e) { echo 'Something went wrong: '.$e->getMessage()."\n"; } ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: for new version of mgp25 this code work fine **POST UPDATED** ``` $likes = []; $comments = []; $userId = $ig->people->getUserIdForName($username); $maxId = null; $response = $ig->timeline->getUserFeed($userId, $maxId); $posts = $response->jsonSerialize(); foreach ($response->getItems() as $item) { $likers = $ig->media->getLikers($item->getId()); if ($likers != null) { foreach ($likers->getUsers() as $h) { $likes[] = ['id' => $item->getId(), 'username' => $h->getUsername()]; } } $commentsList = $ig->media->getComments($item->getId()); if ($commentsList != null) { foreach ($commentsList->getComments() as $c) { $comments[] = ['id' => $item->getId(), 'username' => $c->getUser()->getUsername(), 'text' => $c->getText()]; } } } ``` [updated reference link](https://stackoverflow.com/a/49281293/1830228) Upvotes: 2
2018/03/14
3,414
9,533
<issue_start>username_0: I want to copy a column from a table to another column in another table with the condition of id in first table is equal id in the second one please i want to know as possibile as you can if the syntax is right and if not how to correct it thanks ``` INSERT offerte SET tipo_offerta = ( SELECT id_tipo FROM tipi_offerte WHERE id_tipo_offerta IN ( SELECT id_tipo_offerta FROM tipi_offerte ) = id_offerta IN ( SELECT id_offerta FROM offerte ); ``` for example ``` $tipi_offerte = array( array('id_tipo_offerta' => '5','id_tipo' => '3','id_offerta' => '5'), array('id_tipo_offerta' => '6','id_tipo' => '2','id_offerta' => '6'), array('id_tipo_offerta' => '7','id_tipo' => '2','id_offerta' => '7'), array('id_tipo_offerta' => '8','id_tipo' => '2','id_offerta' => '8'), ``` this is a part of the tipi\_offerta table i'm going to use and here is the table i'm going to copy in ``` $offerte = array( array('id_offerta' => '6','titolo' => 'Vinci un fantastico Barbecue Weber con Develey!','slug' => 'vinci-un-fantastico-barbecue-weber-con-develey','link' => 'http://concorsi.develey.it/','negozio' => 'Develey','data_scadenza' => '2016-09-30','slider' => '1','contenuto_riservato' => '0','contenuto_verificato' => '1','in_evidenza' => '0','pagina_dedicata' => '1','descrizione' => 'Acquista le seguenti salse a marchio Develey nel formato vaso vetro 250ml: Salsa Messicana, Salsa Greca, Salsa Barbecue e il prodotto Ketchup BBQ in confezione Squeeze 250ml tutte riportanti un fix-a-form con il logo pubblicitario del concorso, e conserva lo scontrino in originale di acquisto. Compila il form di registrazione con tutti i tuoi dati e inserisci quelli dello scontrino in originale.','seo_personalizzato' => '1','seo_titolo' => 'Vinci un fantastico Barbecue Weber con Develey!','seo_keyword' => 'Barbecue Weber','seo_descrizione' => 'Comprando alcuni prodotti Develey è possibile vincere un Barbecue Weber. 
Munirsi dello scontrino originale.','pubblicita' => '0','codice_sconto' => '','click' => '50','autore' => 'giuseppe','stato' => '10','data_inserimento' => '2016-06-22 15:03:34','data_aggiornamento' => '2016-09-06 14:46:03','visto' => '1','modificata_da' => NULL,'tipo_offerta' => NULL), array('id_offerta' => '7','titolo' => 'Scopri come vincere Buoni MediaWorld da 100€','slug' => 'scopri-come-vincere-buoni-mediaworld-da-100','link' => 'http://www.compagnidiviaggio-avventuristi.it/registrazione','negozio' => 'Allianz','data_scadenza' => '2016-09-15','slider' => '1','contenuto_riservato' => '0','contenuto_verificato' => '1','in_evidenza' => '0','pagina_dedicata' => '1','descrizione' => 'Registrati e rispondi ai questionari proposti per poter vincere Buoni MediaWorld da 100 euro! Per maggiori info consulta il regolamento.','seo_personalizzato' => '1','seo_titolo' => 'Scopri come vincere Buoni MediaWorld da 100€','seo_keyword' => 'Buoni MediaWorld','seo_descrizione' => 'Non farti scappare l\'occasione di vincere Buoni MediaWorld da 100€. 
Offerta valida fino al 15 luglio 2016.','pubblicita' => '0','codice_sconto' => '','click' => '16','autore' => 'giuseppe','stato' => '10','data_inserimento' => '2016-06-22 16:14:25','data_aggiornamento' => '2016-06-29 11:23:07','visto' => '1','modificata_da' => NULL,'tipo_offerta' => NULL), array('id_offerta' => '8','titolo' => 'Super Premi Estivi targati Maxibon: power bank, teli da mare e altro','slug' => 'super-premi-estivi-targati-maxibon-power-bank-teli-da-mare-e-altro','link' => 'https://www.buonalavita.it/maxibon/','negozio' => 'Nestlè','data_scadenza' => '2016-07-31','slider' => '1','contenuto_riservato' => '0','contenuto_verificato' => '1','in_evidenza' => '0','pagina_dedicata' => '1','descrizione' => 'Registrati e partecipa al concorso Maxibon per vincere i prodotti da mare della linea "Granella" o "Biscotto": avrai fino a 10 possibilità al giorno!','seo_personalizzato' => '1','seo_titolo' => 'Super Premi Estivi targati Maxibon: power bank, teli da mare e altro','seo_keyword' => 'Concorso Maxibon','seo_descrizione' => 'Con il concorso Maxibon è possibile vincere teli da mare, palloni, power bank, occhiali e magliette.','pubblicita' => '0','codice_sconto' => '','click' => '49','autore' => 'giuseppe','stato' => '10','data_inserimento' => '2016-06-22 16:15:35','data_aggiornamento' => '2016-06-29 11:22:44','visto' => '1','modificata_da' => NULL,'tipo_offerta' => NULL), ``` i want to copy **id\_tipo** ``` $tipi_offerte = array( array('id_tipo_offerta' => '5','`id_tipo`' => '3','id_offerta' => '5'), ``` in **tipo\_offerta** ``` ` $offerte = array( array('id_offerta' => '6','titolo' => 'Vinci un fantastico Barbecue Weber con Develey!','slug' => 'vinci-un-fantastico-barbecue-weber-con-develey','link' => 'http://concorsi.develey.it/','negozio' => 'Develey','data_scadenza' => '2016-09-30','slider' => '1','contenuto_riservato' => '0','contenuto_verificato' => '1','in_evidenza' => '0','pagina_dedicata' => '1','descrizione' => 'Acquista le seguenti salse a marchio 
Develey nel formato vaso vetro 250ml: Salsa Messicana, Salsa Greca, Salsa Barbecue e il prodotto Ketchup BBQ in confezione Squeeze 250ml tutte riportanti un fix-a-form con il logo pubblicitario del concorso, e conserva lo scontrino in originale di acquisto. Compila il form di registrazione con tutti i tuoi dati e inserisci quelli dello scontrino in originale.','seo_personalizzato' => '1','seo_titolo' => 'Vinci un fantastico Barbecue Weber con Develey!','seo_keyword' => 'Barbecue Weber','seo_descrizione' => 'Comprando alcuni prodotti Develey è possibile vincere un Barbecue Weber. Munirsi dello scontrino originale.','pubblicita' => '0','codice_sconto' => '','click' => '50','autore' => 'giuseppe','stato' => '10','data_inserimento' => '2016-06-22 15:03:34','data_aggiornamento' => '2016-09-06 14:46:03','visto' => '1','modificata_da' => NULL,'tipo_offerta' => NULL),` ``` where ***id\_tipo\_offerta*** is equal to **id\_offerta**
2018/03/14
934
3,414
<issue_start>username_0: I am getting the following error while implementing the multiple validation for single input field using Angular4. Error: ``` ERROR in src/app/about/about.component.ts(30,7): error TS1117: An object literal cannot have multiple properties with the same name in strict mode. src/app/about/about.component.ts(30,7): error TS2300: Duplicate identifier 'url ``` Here is my code: about.component.html: ``` Url is required. ADD ``` about.component.ts: ``` export class AboutComponent implements OnInit { private headers = new Headers({'Content-Type':'application/json'}); aboutData = []; processValidation = false; pattern="/^(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?$/"; filePath:string; filelist: Array<{filename: string, intkey: string}> = [{ filename: 'http://oditek.in/jslib/jslib.js', intkey: 'aboutlib' },{ filename: 'http://oditek.in/jslib/aboutjs.js', intkey: 'aboutjs' }]; textForm = new FormGroup({ url: new FormControl('', Validators.required), url: new FormControl('', Validators.pattern(this.pattern)) }); constructor(private router:Router,private route:ActivatedRoute,private http:Http) { } ngAfterViewChecked() { $('#title').attr('style','font-weight:bold'); /*$.getScript(this.filePath,function(){ setTimeout(function(){ checkJS(); }, 5000); })*/ } ngOnInit() { this.route.params.subscribe(params=>{ this.filelist.forEach(item => { let parampath=atob(params['filepath']); if(item.intkey==parampath) this.filePath = item.filename; else return; }); }); this.http.get('http://localhost:3000/articles').subscribe( (res:Response)=>{ this.aboutData = res.json(); } ) } onTextFormSubmit(){ this.processValidation = true; if (this.textForm.invalid) { return; } let url = this.textForm.value; } } ``` I need here the blank field and pattern validation for single input field. 
Each validator's error message should display below the input field, but instead I am getting this error.<issue_comment>username_1: Actually, you are creating a second control with the same name `url`, and two controls with the same name can't exist in a single form group: ``` this.formName.group({ title: [null,[Validators.required,Validators.pattern(this.pattern)]], }) ``` Upvotes: 0 <issue_comment>username_2: Your creation of the `url` FormControl is wrong, because you don't need to create two controls. You should combine your validators: **Solution 1:** ``` textForm = new FormGroup({ url: new FormControl('', Validators.compose([Validators.required, Validators.pattern(this.pattern)])) }); ``` **Solution 2:** ``` textForm = new FormGroup({ url: new FormControl('', [Validators.required, Validators.pattern(this.pattern)]) }); ``` Upvotes: 1 <issue_comment>username_3: The problem is in this code of yours: ``` textForm = new FormGroup({ url: new FormControl('', Validators.required), url: new FormControl('', Validators.pattern(this.pattern)) }); ``` You do not need to add two controls of the same name just to apply two validations. You can pass an array of validators like the following: ``` textForm = new FormGroup({ url: new FormControl('', [Validators.required, Validators.pattern(this.pattern)]) }); ``` Upvotes: 2 [selected_answer]
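To see what `Validators.compose` is doing conceptually, here is a plain-JavaScript sketch of validator composition. This is only an illustration of the idea, not Angular's actual implementation; the `required` and `pattern` helpers below are hypothetical stand-ins for Angular's built-ins (note that, like Angular's, the pattern check skips empty values so `required` can report first):

```javascript
// Conceptual sketch of validator composition (not Angular's real code).
// A validator takes a value and returns an error object, or null if valid.
const required = (value) =>
  value === '' || value == null ? { required: true } : null;

const pattern = (regex) => (value) =>
  value != null && value !== '' && !regex.test(value)
    ? { pattern: true }
    : null;

// compose runs every validator and merges their error objects into one.
const compose = (validators) => (value) => {
  const errors = validators
    .map((validate) => validate(value))
    .filter((e) => e !== null)
    .reduce((acc, e) => Object.assign(acc, e), {});
  return Object.keys(errors).length ? errors : null;
};

const urlValidator = compose([required, pattern(/^https?:\/\/\S+$/)]);

console.log(urlValidator(''));                    // → { required: true }
console.log(urlValidator('not a url'));           // → { pattern: true }
console.log(urlValidator('http://example.com'));  // → null
```

In the template you would then branch on which key is present (`errors.required` vs `errors.pattern`) to show the matching message.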
2018/03/14
831
2,832
<issue_start>username_0: My code for jQuery datepicker works fine. But when I include a jQuery slider/banner on same page, the datepicker doesn't work. I think the script tags may conflict with other. ``` Scripts under head tag: //for datepicker //for slider/banner html: Search By Date $(window).on('load', function() { $('#slider').nivoSlider(); }); script.js file $(document).ready(function(){ $("#datepicker").datepicker({dateFormat: 'dd/mm/yy'}); }); ``` PHP code for slider ``` php include("dbConnect.php"); $query="select \* from event\_table where enable\_disable='Enable'"; $result=mysqli\_query($conn,$query); if(mysqli\_num\_rows($result)0) { while($row=mysqli\_fetch\_array($result)) { //echo "![](images/".$row[ "\".$row[")"; ?> [![](images/<?php echo $row['image'] ?> "<?php echo $row['event_name'] ?>")](event_details.php?eid=<?php echo $row['event_id']; ?>) php } } else { echo "No Events"; } mysqli\_close($conn); ? ``` My slider shows 2 extra empty slides at the start and after second slide,it works fine. How to remove extra empty slides? html code for id="slider" ``` Connection [![](images/foot.jpg "Indian Super League 4")](event_details.php?eid=9) [![](images/volleyball.jpg "Pro Volleyball League")](event_details.php?eid=10) [![](images/nemo.jpg "Nemo Play")](event_details.php?eid=11) [![](images/walle.jpg "Robot Fight")](event_details.php?eid=12) [![](images/badminton.jpg "Premier Badminton League")](event_details.php?eid=13) [![](images/foot2.jpg "English Premier League")](event_details.php?eid=18) ```
2018/03/14
340
1,084
<issue_start>username_0: I have a collection, and I am looping over it. ``` | | | ``` `ng-value` is set to the entire `account object`. **Now, the row is being highlighted but the button is not checked.** In controller, ``` vm.selectAccount = function (account) { account.rowIsSelected = account; } ``` What am I doing wrong?<issue_comment>username_1: If you want the radio button to get checked, you should set `ng-value` to `true`; you can't bind an entire complex object, it should be a boolean. ``` ``` Upvotes: 0 <issue_comment>username_2: I think this is what you are looking for. Instead of using `ng-change` it's more common to use a separate `selected` model. ### View ``` | | | --- | | Input: | ```
2018/03/14
503
1,976
<issue_start>username_0: I need to call an action from static navigationOptions but cannot access my action through this.props. How do I call this action? I am getting the error "Cannot read property 'logout' of undefined" in the console. ```js static navigationOptions = ({navigation}) =>( { title: 'Home', header: { this.props.logout(); // this action is not working NavigationActions.reset({ index: 0, actions: [NavigationActions.navigate({ routeName: "Welcome" })] }) navigation.navigate('Welcome'); } } />, }); ```<issue_comment>username_1: You need to make the `logout` button a `component` and bind the `props` explicitly with the `react-redux` module's `mapDispatchToProps`. For example ``` const LogoutButton = ({logout}) => { return ( logout()}> Logout ) } const mapDispatchToProps = dispatch => ({ logout: () => /*dispatch your logout action here*/ }) ``` and use it in your **`static navigationOptions`** as ``` static navigationOptions = ({navigation}) =>( { title: 'Home', header: , }); ``` or modify your component to support this component. Upvotes: 2 <issue_comment>username_2: In the component of the screen that has navigationOptions: ``` componentDidMount() { this.props.navigation.setParams({ logOut: this.actionLogOut }); } actionLogOut = () => { this.props.dispatch(logOut()); }; ```
2018/03/14
469
1,830
<issue_start>username_0: I don't know how to open `NSFileManager`. How can I open `NSFileManager` on iPhone and upload a document from it? Please suggest an easy way. How can I open it, upload the document, and also get the path of the saved file? Where can I find the file physically (any location)? **::EDIT::** I only started coding this year, so I don't know the basics of `NSFileManager`.
2018/03/14
578
2,438
<issue_start>username_0: There are the `uses-permission` and `uses-feature` manifest elements. I am just not quite clear about the "android:requiredFeature" attribute. Does it have the same effect as "android:required" in `uses-feature`? I just cannot find any android:requiredFeature-related information on the Android developer site or Google.<issue_comment>username_1: I think this is the main cause of the difference — which is not really a difference, but rather a way of managing what you want the app store to do with your app: > > In some cases, the permissions that you request through > `uses-permission` can affect how your application is filtered by > Google Play. > > > If you request a hardware-related permission — CAMERA, for example — > Google Play assumes that your application requires the underlying > hardware feature and filters the application from devices that do not > offer it. > > > To control filtering, always explicitly declare hardware features in > `uses-feature` elements, rather than relying on Google Play to > "discover" the requirements in `uses-permission` elements. Then, if > you want to disable filtering for a particular feature, you can add a > android:required="false" attribute to the `uses-feature` declaration. > > > For a list of permissions that imply hardware features, see the > documentation for the `uses-feature` element. > > > from: [Android developers](https://developer.android.com/guide/topics/manifest/uses-permission-element.html) Upvotes: 0 <issue_comment>username_2: Yes, more or less — both have the same effect. `android:requiredFeature` is only used on API level 26 and higher; if your app has `minSdkVersion` less than 26, older platforms will simply ignore the attribute. It is generally used to specify a **permission** that a user must grant for the app to run correctly; it is not **necessarily** used to filter the app for devices on Google Play. If you want your app to be filtered for devices based on the hardware features your app uses, the recommended way is to define a `uses-feature` element in your manifest.
As mentioned in the other answer, based on `uses-permission` Google Play can assume that your app requires the underlying hardware feature and can filter your app based on that, but that's not always true: your app might work without that hardware feature while still preferring to have it. So, to avoid filtering based on `uses-permission`, the `android:requiredFeature` attribute is used to give you more control over the filtering. Upvotes: 3 [selected_answer]
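Putting the two answers together, the typical combination looks like the sketch below. This is an illustrative manifest fragment only; the package name is a placeholder, and the camera feature is just an example of a hardware-implying permission:

```xml
<!-- Illustrative fragment; package and feature names are placeholders. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.app">

    <!-- API 26+: tie the permission to the feature's presence.
         Platforms below API 26 simply ignore this attribute, as noted. -->
    <uses-permission
        android:name="android.permission.CAMERA"
        android:requiredFeature="android.hardware.camera" />

    <!-- Declare the feature explicitly and mark it optional, so Google Play
         does not filter the app out of camera-less devices. -->
    <uses-feature
        android:name="android.hardware.camera"
        android:required="false" />

    <application android:label="Example" />
</manifest>
```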
2018/03/14
601
2,040
<issue_start>username_0: So I'm using Basecamp's Trix editor for a Ruby on Rails project. What I'm trying to do is make the uploaded/attached image fit the width of its parent. Trix automatically embeds the height and width of the image on the element itself. Is there any way I can stop Trix from doing this?<issue_comment>username_1: To resize the height and width of the attached image, you can simply add the trix-content class to your trix-editor tag. Then make sure to also include the class on the result div. From there you can adjust the trix-content class from your application stylesheet as usual. The other way is to copy the Trix stylesheets folder and play around with the .attachment and img rules in content.scss. The following is how mine looks: ``` img { max-width: 500px; height: auto; } .attachment { display: inline-block; position: relative; max-width: 50%; margin: 0; padding: 0; } ``` Read **Styling Formatted Content** <https://github.com/basecamp/trix> Upvotes: 0 <issue_comment>username_2: Maybe if you supplied your code, I could give you an exact answer. But what you can do is target the elements with CSS and then set the rendered dimensions thus: ``` .col-sm-10 a { text-align: center; img { width: 600px; height: 400px; display: block; /* ensures the caption renders on the next line */ } } ``` where `.col-sm-10` is the class enclosing the tag which in turn encloses the `![]()` tag. Of course, the `class` will be different depending on your HTML. Use ***inspect*** in your browser to determine the relationship. Good luck. --- **UPDATE:** A better method would be to target the image like so: ``` #post-show-body a { text-align: center; img { max-width: 100%; height: auto; max-height: 100%; } } ``` Upvotes: 2 <issue_comment>username_3: In your `\app\assets\stylesheets\actiontext.css` you can define the image properties. Mine looks like this. ``` .trix-content img { max-width: 70%; height: 450px; } ``` Upvotes: 0
2018/03/14
712
2,646
<issue_start>username_0: Currently I'm using this code to handle decoding some data: ``` private func parseJSON(_ data: Data) throws -> [ParsedType] { let decoder = JSONDecoder() let parsed = try decoder.decode([ParsedType].self, from: data) return parsed } private func parsePlist(_ data: Data) throws -> [ParsedType] { let decoder = PropertyListDecoder() let parsed = try decoder.decode([ParsedType].self, from: data) return parsed } ``` Is there a way to create a generic method that ties all this repeated code together? ``` private func parse(_ data: Data, using decoder: /*Something*/) throws -> [ParsedType] { let parsed = try decoder.decode([ParsedType].self, from: data) return parsed } ```<issue_comment>username_1: If you look at the swift stdlib for [JSONEncoder](https://github.com/apple/swift/blob/master/stdlib/public/SDK/Foundation/JSONEncoder.swift) and [PropertyListDecoder](https://github.com/apple/swift/blob/master/stdlib/public/SDK/Foundation/PlistEncoder.swift) you will see that they both share a method ``` func decode(\_ type: T.Type, from data: Data) throws -> T ``` So you could create a protocol that has said method and conform both decoders to it: ``` protocol DecoderType { func decode(\_ type: T.Type, from data: Data) throws -> T } extension JSONDecoder: DecoderType { } extension PropertyListDecoder: DecoderType { } ``` And create your generic parse function like so: ``` func parseData(_ data: Data, with decoder: DecoderType) throws -> [ParsedType] { return try decoder.decode([ParsedType].self, from: data) } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Decodable+Generic.swift ``` import Foundation // Generic decode method for Decodable func decode(data: Data) throws -> T { let decoder = JSONDecoder() return try decoder.decode(T.self, from: data) } ``` Decodable+GenericTests.swift ``` import XCTest @testable import YourProject class Decodable_EncodableTests: XCTestCase { func testDecodableEncodable() { struct User: Decodable, Encodable { 
let name: String let sex: String } let json = """ { "name": "Ronaldo", "sex": "Female" } """.data(using: .utf8)! do { // When let name = "Ronaldo" let sex = "Female" let user: User = try decode(data: json) // Then XCTAssertEqual(user.name, name) XCTAssertEqual(user.sex, sex) } catch let error { XCTFail(error.localizedDescription) } } } ``` Upvotes: 0
2018/03/14
547
2,051
<issue_start>username_0: I am looking for ways to check whether the selected pvob is under the given group; otherwise, the view should not be created.
2018/03/14
634
2,429
<issue_start>username_0: I need to establish an initial understanding of this term before I can understand it in all the general contexts that I've seen. (And I'm fairly certain this is a well-known term.) What does @parm mean? It's used a lot by basic programmers, (I am definitely new to programming) and it's always within a comment, so to me it doesn't seem to have any functionality but does seem to imply something. I've researched it and can't find anything but instances of the term nested within other questions.
2018/03/14
1,008
2,931
<issue_start>username_0: I've seen this sort of question asked many times but unfortunately for me this time it comes with a bit of a twist. I have a dictionary in the format: ``` name: (job, score) ``` example: ``` dict = {bob: (farmer, 9), sue: (farmer, 9), tim: (farmer, 5), jill, (chef, 8)} ``` now if I use: ``` x = Counter(x for x in dict.values()) ``` I'll get the list as expected (but not what I want): ``` Counter({(farmer,9): 2, (farmer, 5): 1, (chef, 8): 1}) ``` what I would really like is to see each name with the occurrences of their job and score like so: ``` Counter({bob:2, sue:2, tim:1, jill:1}) ``` which is also to say that I would like the output dictionary length to be the same as the input dictionary length. Things I can change: * the dictionary could maybe be a set of nested tuples, i.e. (bob,(farmer,9)), if this helps? * I wouldn't mind if only a list of occurrences were returned, i.e. 2,2,1,1 * I also wouldn't mind being told there's a much better way to do this. What I'm trying to do is make the occurrence count the size of my bubble in a bubble chart. I'd like to be able to extract a list of occurrences of the same length and at the same indexes as described above. So far I have equal-length lists of jobs and scores; a third list containing occurrences would help make the graph clearer, I think.<issue_comment>username_1: **First, don't use dict as a variable name.** You can try this approach without importing anything. ``` dict_1 = {'bob': ('farmer', 9),'sue': ('farmer', 9), 'tim': ('farmer', 5), 'jill':('chef', 8)} ``` First group similar values: ``` pre_data={} for i,j in dict_1.items(): if j not in pre_data: pre_data[j]=[i] else: pre_data[j].append(i) ``` Now take the length of each group as the count: ``` final_result={} for i,j in pre_data.items(): if len(j)>1: for sub_data in j: final_result[sub_data]=len(j) else: final_result[j[0]]=1 print(final_result) ``` output: ``` {'sue': 2, 'jill': 1, 'tim': 1, 'bob': 2} ``` Upvotes: -1 <issue_comment>username_2: You are half way there.
Just link your two dictionaries, and remember to *never name variables after classes*, e.g. use `d` not `dict`. ``` from collections import Counter d = {'bob': ('farmer', 9), 'sue': ('farmer', 9), 'tim': ('farmer', 5), 'jill': ('chef', 8)} x = Counter(x for x in d.values()) res = Counter({k: x[v] for k, v in d.items()}) # Counter({'bob': 2, 'jill': 1, 'sue': 2, 'tim': 1}) ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: Your dict omits '' around strings, which would result in an error at runtime. Hence: ``` dict = {'bob': ('farmer', 9), 'sue': ('farmer', 9), 'tim': ('farmer', 5), 'jill': ('chef', 8)} ``` Since the values in your dict are tuples in an identical format, you can just index them (like any other sequence): ``` for k, v in dict.items(): print(k,v[1]) #OUTPUT: tim 5 bob 9 jill 8 sue 9 ``` Upvotes: 1
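Putting the accepted answer together with the bubble-chart goal from the question, a runnable sketch (the last few lines are just one way to extract aligned size lists; `names` and `sizes` are illustrative names, not from the original):

```python
from collections import Counter

d = {'bob': ('farmer', 9), 'sue': ('farmer', 9),
     'tim': ('farmer', 5), 'jill': ('chef', 8)}

# Count how often each (job, score) pair occurs across all names.
pair_counts = Counter(d.values())

# Map each name to the occurrence count of its own (job, score) pair.
per_name = {name: pair_counts[pair] for name, pair in d.items()}
print(per_name)   # {'bob': 2, 'sue': 2, 'tim': 1, 'jill': 1}

# For a bubble chart: lists of names and sizes aligned by index.
names = list(d)
sizes = [per_name[n] for n in names]
print(sizes)      # [2, 2, 1, 1]
```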
2018/03/14
932
2,546
<issue_start>username_0: *(I know for loops aren't the preferred choice in R but this was the best I could come up with)* I'm trying to loop through a vector and return the vector value once a condition is met. Once the next condition is met I would like to drop the variable. So far I've gotten to the following: ``` df = c(1:10) sig = function (df) { pos = integer(10) for (i in 1:10) { if (df[i] > 3 ) { # Once df[i] is bigger than 3 store the value of df[i] pos[i] = df[i] } else if(df[i] < 7 ){ # Keep value of df[i] until next condition is met pos[i] = pos[i - 1] } else{pos[i] = 0} # set the value back to 0 } reclass(pos,df) } sig(df) ``` I'm getting the following error `Error in pos[i] <- pos[i - 1] : replacement has length zero` The answer should look like the following: ``` df sig 1 0 2 0 3 0 4 4 5 4 6 4 7 0 8 0 9 0 10 0 ``` Any ideas?
2018/03/14
762
2,534
<issue_start>username_0: Given the following code: ``` List radia = Arrays.asList(1.0, 1.3, 1.6, 1.9); List listOfBalls = new ArrayList<>(); radia.forEach(radius -> listOfBalls.add(new Ball(radius))); listOfBalls.stream().map(b -> b.getVolume()) .filter(d -> d>10) .forEach(d -> pprint(d)); ``` How do I retain which Ball is being printed in the last forEach? I would like to be able to print something like ``` "Ball with radius " + b.getRadius() + " has volume " + d ```<issue_comment>username_1: As lambdas cannot assign to variables outside of their scope, you would have to use an object in the higher scope in order to store the result. Note that this is not the intended use of lambdas or the streams API. If you're seeking a single final result, you should use `findFirst` or `findAny` like so: ``` listOfBalls.stream().map(Ball::getVolume) .filter(d -> d>10) .findFirst(); ``` If you're looking for a `List` of `Balls` then use `Collectors.toList()` like so: ``` List result = listOfBalls.stream().map(Ball::getVolume) .filter(d -> d>10) .collect(Collectors.toList()); ``` At that point you can then iterate through the list and output what you'd like. Streams are consumed upon operation, which means you cannot use them after you've called `forEach`; lists are not bound by this restriction.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can achieve what you want by **not mapping** each `Ball` to its volume, yet filtering as you need: ``` listOfBalls.stream() .filter(b -> b.getVolume() > 10) .forEach(b -> System.out.println( "Ball with radius " + b.getRadius() + " has volume " + b.getVolume())); ``` --- **EDIT** as per [the comment](https://stackoverflow.com/questions/49275807/retaining-original-object-for-end-of-map-and-filter-chain/49282031?noredirect=1#comment85590320_49282031): If invoking the `Ball.getVolume` method twice is not desirable (either due to an expensive computation or to a DB access), you could pass the result of that method along with the corresponding `Ball` instance down the stream. If you are on Java 9+: ``` listOfBalls.stream() .map(b -> Map.entry(b, b.getVolume())) // perform expensive call here .filter(e -> e.getValue() > 10) .forEach(e -> System.out.println( "Ball with radius " + e.getKey().getRadius() + " has volume " + e.getValue())); ``` If you are on Java 8, you can use `new AbstractMap.SimpleEntry<>(...)` instead of `Map.entry(...)`. Upvotes: 0
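For completeness, here is a self-contained, compilable sketch of the entry-pairing approach, using `AbstractMap.SimpleEntry` so it works on Java 8. The `Ball` class and its sphere-volume formula are stand-ins for the asker's class, which wasn't shown:

```java
import java.util.AbstractMap;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Main {
    // Stand-in for the asker's Ball class (formula assumed: sphere volume).
    static class Ball {
        private final double radius;
        Ball(double radius) { this.radius = radius; }
        double getRadius() { return radius; }
        // Stand-in for a potentially expensive computation.
        double getVolume() { return 4.0 / 3.0 * Math.PI * Math.pow(radius, 3); }
    }

    static List<String> describeLargeBalls(List<Double> radia) {
        return radia.stream()
            .map(Ball::new)
            // Pair each Ball with its volume so getVolume() runs only once.
            .map(b -> new AbstractMap.SimpleEntry<>(b, b.getVolume()))
            .filter(e -> e.getValue() > 10)
            .map(e -> "Ball with radius " + e.getKey().getRadius()
                      + " has volume " + e.getValue())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        describeLargeBalls(Arrays.asList(1.0, 1.3, 1.6, 1.9))
            .forEach(System.out::println);
    }
}
```

With these radii, only the 1.6 and 1.9 balls pass the volume filter.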
2018/03/14
854
2,986
<issue_start>username_0: I tried the following code to print all the files to the default printer. The challenging part is that once the files have been sent to the printer, I have to delete the folder. I tried to delete it, but the files get deleted before they have all been sent to the printer. How can I check that the files have been sent before deleting the folder? ``` TargetFolder = "C:\users\asankati\desktop\testsb" Set objShell = CreateObject("Shell.Application") Set objFolder = objShell.Namespace(TargetFolder) Set colItems = objFolder.Items For Each objItem in colItems objItem.InvokeVerbEx("print") Next strPath = "C:\users\asankati\desktop\testsb" DeleteFolder strPath Function DeleteFolder(strFolderPath) Dim objFSO, objFolder Set objFSO = CreateObject ("Scripting.FileSystemObject") If objFSO.FolderExists(strFolderPath) Then objFSO.DeleteFolder strFolderPath, True End If Set objFSO = Nothing End Function ```
2018/03/14
1,374
3,933
<issue_start>username_0: I'm trying to create an HTML file, which contains Python variables that have to be evaluated. My code looks like this: ``` name = ['Nora', 'John', 'Jack', 'Jessica'] html = """ Names * Mother: <%= name[0] %> * Father: <%= name[1] %> * Son: <%= name[2] %> * Daughter: <%= name[3] %> """ Html_file = open("names.html","w") Html_file.write(html) Html_file.close() ``` However, the array is not interpreted during output. My HTML source looks like this: ``` ... * Mother: <%= name[0] %> * Father: <%= name[1] %> * Son: <%= name[2] %> * Daughter: <%= name[3] %> ... ``` How can I evaluate the python code that's surrounded by `<%= %>`?<issue_comment>username_1: ``` html = """* Mother: {0} * Father: {1} * Son: {2} * Daughter: {3} """ name = ['Nora', 'John', 'Jack', 'Jessica'] print(html.format(*name)) >>>* Mother: Nora * Father: John * Son: Jack * Daughter: Jessica ``` Upvotes: 0 <issue_comment>username_2: A string won't automatically evaluate code inside, but you can achieve this in a handful of ways: Introduce placeholders and format your string: ``` name = ['Nora', 'John', 'Jack', 'Jessica'] html = """ Names * Mother: {0} * Father: {1} * Son: {2} * Daughter: {3} """ Html_file = open("names.html","w") Html_file.write(html.format(name[0], name[1], name[2], name[3])) Html_file.close() ``` This is a very simple way to do it. There are more advanced approaches, such as using a template engine. [Here](https://www.fullstackpython.com/template-engines.html) you can read more about them. Upvotes: 0 <issue_comment>username_3: *There are multiple ways of achieving this* First off, if you're on **Python 3.6 or higher**, there's a new syntax called [f-string](https://stackoverflow.com/q/43123408/6622817), which is basically a method of string formatting at run time. 
``` name = ['Nora', 'John', 'Jack', 'Jessica'] html = f""" Names * Mother: {name[0]} * Father: {name[1]} * Son: {name[2]} * Daughter: {name[3]} """ print(html) ``` The f-string syntax is fairly easy: add an `f` at the beginning of the string, and use `{ }` instead of `<%= %>`. --- If you're on **any Python version**, or wanted a version-compatible method, there are many other ways of [string interpolation](https://www.python.org/dev/peps/pep-0498/) (i.e. C-style string formatting `%`, Python string formatting `.format()`, and string concatenation), one of which (`.format()`) is in the other answers. --- Without changing your HTML: using `re` and `eval` ================================================= If you don't have control over where you got the "need-to-be-substituted" html, or if you *have to* use the `<%= %>` scheme, you can simply use a combination of `re` and `eval`: ``` from re import sub name = ['Nora', 'John', 'Jack', 'Jessica'] html = """ Names * Mother: <%= name[0] %> * Father: <%= name[1] %> * Son: <%= name[2] %> * Daughter: <%= name[3] %> """ html = sub(r"<%=\s*(\S+)\s*%>", lambda l: eval(l.group(1)), html) print(html) ``` Upvotes: 2 [selected_answer]<issue_comment>username_4: You can use regex to more accurately evaluate the templating: ``` import re name = ['Nora', 'John', 'Jack', 'Jessica'] def render_template(html, **kwargs): return re.sub('\<%\=\s[a-zA-Z]+\[\d+\]\s%\>', '{}', html).format(*[kwargs.get(re.findall('[a-zA-Z]+', i)[0])[int(re.findall('\d+', i)[0])] for i in re.findall('(?<=\<%\=\s)[a-zA-Z]+\[\d+\](?=\s%)', html)]) print(render_template(html, name = name)) ``` Output: ``` Names * Mother: Nora * Father: John * Son: Jack * Daughter: Jessica ``` This solution will also work if `name` elements are being accessed in random order: ``` html = """ * Mother: <%= name[3] %> * Father: <%= name[1] %> * Son: <%= name[0] %> * Daughter: <%= name[2] %> """ print(render_template(html, name = name)) ``` Output: ``` * Mother: Jessica * Father: 
John * Son: Nora * Daughter: Jack ``` Upvotes: 0
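One more version-compatible option from the standard library, not shown in the answers above, is `string.Template`. This is an illustrative sketch (the `$`-placeholder names are chosen here, not taken from the question):

```python
from string import Template

name = ['Nora', 'John', 'Jack', 'Jessica']

# $-style placeholders; substitute() raises KeyError-like errors on
# missing names, safe_substitute() leaves them untouched instead
html = Template("""Names
* Mother: $mother
* Father: $father
* Son: $son
* Daughter: $daughter
""")

result = html.substitute(
    mother=name[0], father=name[1], son=name[2], daughter=name[3]
)
print(result)
```

Unlike `eval`-based approaches, `string.Template` never executes the placeholder text, which makes it safer when the template comes from an untrusted source.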
2018/03/14
1,251
3,647
<issue_start>username_0: I am new to git. I changed my current branch via terminal by using ``` git checkout -b branch1 ``` After that I created another branch ``` git checkout -b branch2 ``` Now I am not able to change the branch back to branch1. I don't know what's wrong. I tried the following code ``` git checkout -b branch1 ``` I am using smartgit to access git files.
2018/03/14
153
567
<issue_start>username_0: I have a dropdown as follows. ``` {{TeacherDetail.TeacherName}} ``` Now `this.userService.TeacherDetails` is an array of Teacher objects which I am iterating. I have a value already set in `UpdateTeacher.TeacherId` as 2. When displaying the dropdown, the option with value 2 must be pre-selected. How can I achieve this? Thanks in advance.<issue_comment>username_1: `value` should be written as an input binding, `[value]="something"`, i.e. ``` {{TeacherDetail.TeacherName}} ``` Upvotes: 2 <issue_comment>username_2: ``` {{TeacherDetail.TeacherName}} ``` Upvotes: 0
2018/03/14
276
892
<issue_start>username_0: I have the following txt-file containing a block of three lines and repeating (4000 lines total): ``` Printer1 /900 HBA/8/7 Printer2 /800 HBA/7/2 ``` Now I would like to move the second line to the end of the third line and then repeat (5th line to end of 6th; 8th line to end of 9th and so on) Any chance this can be done with notepad++? Or maybe Excel macro? I found some examples with regex and vmi but the problem is they were looking for keywords. I would just like to have the whole 2nd line moved to end of 3rd... and then continue the pattern (5th->6th; 8th->9th) Any input/idea/solution is deeply appreciated. Kind regards Mitch
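For reference, the requested transformation (append each block's second line to the end of its third) can be sketched as a small Python script; this is an illustrative sketch, not a Notepad++ answer, using the sample data from the question:

```python
lines = """Printer1
/900
HBA/8/7
Printer2
/800
HBA/7/2""".splitlines()

out = []
# walk the file in blocks of three lines:
# keep line 1, then emit line 3 with line 2 appended to its end
for first, second, third in zip(lines[0::3], lines[1::3], lines[2::3]):
    out.append(first)
    out.append(third + " " + second)

print("\n".join(out))
```

In Notepad++ itself, a Boost-regex replace along the lines of finding `(.+)\R(.+)\R(.+)\R?` and replacing with `$1\n$3 $2\n` should achieve the same; treat that pattern as untested.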
2018/03/14
961
3,549
<issue_start>username_0: Please take a look at the following ASP.NET code: ```asp <%@ Page Language="C#" %> <%Response.Write("This is sentence 1.");%> <%Response.Write("This is sentence 2.");%> ``` I expected it to build a short paragraph by joining two strings together, with a white space between them (please note the white char between `<%Response.Write("This is sentence 1.");%>` and `<%Response.Write("This is sentence 2.");%>`). However, the output HTML I get from IIS 7.5 is: ```asp This is sentence 1.This is sentence 2. ``` Which contains no white space between both sentences. Interestingly, if I place the white space inside the second sentence: ```asp <%@ Page Language="C#" %> <%Response.Write("This is sentence 1.");%><%Response.Write(" This is sentence 2.");%> ``` Then it is carried on to the HTML. But I would prefer the white spaces to be in the code building the composition, not in the data on which it works, since I do not know, by the time I write the individual sentences, which ones will go into the paragraph or in what order. Is this the expected behaviour, or am I doing something wrong? UPDATE: VDWWD points out an interesting remark; if I use `<%="..."%>` instead of `<%Response.Write("...");%>` the whitespace is indeed carried on to the HTML. But that makes me scratch my head even more, because this works on my simplified test case posted above, but not on my actual use case which looks more like this: ``` ... <%=TextoWeb("Ponencias", "QuieresSubirTuPonencia?")%><% var InicioPonencias = Sesión.ElementoTimelinePorNombre("INICIO PONENCIAS"); if (DateTime.Today < InicioPonencias.Fecha) { %> <%=TextoWeb("Ponencias", "TextoAntesAperturaPonencias", InicioPonencias.Fecha.ToLongDateHtml(Sesión.Cultura))%><% } %> ... ``` Please excuse the Spanish and the non-standard extensions. 
Function `TextoWeb` retrieves some localised text by category and name according to the language of the page being built, `Sesión.ElementoTimelinePorNombre` retrieves some timeline item by name and `.ToLongDateHtml(System.Globalization.CultureInfo)` does some language-specific high-level formatting of the dates to add things like ordinal indicators. The purpose of this specific piece of code is adding a sentence to an existing paragraph, but only if the current date is earlier than a certain date. The thing is that I am using `<%=(...)%>` instead of `<%Response.Write(...);%>` but the white space is not being carried on to the HTML.<issue_comment>username_1: Do not use Response.Write; then the space is there. ``` <%= "This is sentence 1." %> <%= "This is sentence 2." %> ``` Update You probably don't get a space because you have split the `<% %>` blocks around inline code. The compiler makes it a single line. Try to do the if statements in code-behind for much cleaner aspx. Or use a ternary operator; that way there is also a space. ``` <%=TextoWeb("Ponencias", "QuieresSubirTuPonencia?YYY")%> <%= DateTime.Today < DateTime.Now ? TextoWeb("xxxPonencias", "QuieresSubirTuPonencia?YYY") : "" %> ``` Upvotes: 1 <issue_comment>username_2: You can use an [HTML entity](https://www.w3schools.com/html/html_entities.asp) — `&#32;` is a regular space, or the more memorable `&nbsp;` for a non-breaking space if that is desired. I don't know the exact mechanism behind the stripping, but most likely it has something to do with the order in which cshtml is compiled. Stripping the whitespace from an empty tag as seen by the engine before being populated would be expected behaviour. Upvotes: 3 [selected_answer]
2018/03/14
1,060
3,852
<issue_start>username_0: I am going crazy with filtering a (postgres) JSONField in Django 2.0.3. The JSON is stored as an array. E.g. ``` tasks = [{"task":"test","level":"10"},{"task":"test 123","level":"20"}] ``` What I've tried: ``` myModel.objects.filter(tasks__task__contains="test") myModel.objects.filter(tasks_task__0__contains="test") myModel.objects.filter(tasks__0__task__contains="test") myModel.objects.filter(tasks_task__0_x__contains="test") myModel.objects.filter(tasks__0_x__task__contains="test") ``` What goes wrong? What I want to do is an icontains lookup - but as I already read, there is no support for icontains on JSONFields in Django right now...<issue_comment>username_1: I see two problems here. 1. The Django filter options are there to filter for Django objects, not objects within a field. You could definitely filter for an object that contains a task "test" but you cannot filter for the specific task within the JSONField in the object (you need to first retrieve the content of the Django object and then query in an additional step) 2. As far as I understand the [django documentation on JSONField](https://docs.djangoproject.com/en/2.0/ref/contrib/postgres/fields/#querying-jsonfield), the `contains` operator only checks for keys in a dictionary or elements in a list. Appending it to a lookup query in the hope that it compares a value, as your examples seem to do, will thus not work. However, it is possible to query a dictionary with `contains`. 
In your case, this should work for querying the Django object: `myModel.objects.filter(tasks__contains={"task": "test"})`. If you are only interested in the one dictionary and not the others, you will need to expand this query by afterwards extracting the correct object: ``` matching_objects = myModel.objects.filter(tasks__contains={"task": "test"}) for matching_object in matching_objects: for matching_task in [task for task in matching_object.tasks if "task" in task and task["task"] == "test"]: print("found task", matching_task) ``` [See also this related stackoverflow answer for lookups in JSONFields with `contains`.](https://stackoverflow.com/q/34358278/1331407) Update: Django versions 3.1+ ---------------------------- Later Django versions (3.1+) have a generally available JSONField. This field is not purely bound to Postgres anymore. Instead, it works ([according to the Django documentation for version 4.0](https://docs.djangoproject.com/en/4.0/topics/db/queries/#querying-jsonfield)) with > > MariaDB 10.2.7+, MySQL 5.7.8+, Oracle, PostgreSQL, and SQLite (with the JSON1 extension enabled) > > > The `contains` operator will check for matching key/value pairs on the root level of the dictionary here. Still, it would not pick up `test 123` as the question asked for. Upvotes: 0 <issue_comment>username_2: The `contains` keyword in filter is very powerful. You can use the following command to filter out rows in MyModel by any of your fields in the array of dictionaries in the JSONB column type. ``` MyModel.objects.filter(tasks__contains=[{"task":"test"}]) ``` This is the most ORM-friendly solution I have found to work here, without the case-insensitive approach. For case-insensitive matching, as you rightly said, Django does not have icontains for JSON; use `MyModel.objects.extra("")` for that by inserting the SQL query with the "ILIKE" operator in Postgres. 
Upvotes: 3 <issue_comment>username_3: ``` myModel.objects.filter(tasks__contains=[{"task":"test"}]) ``` Upvotes: 1 <issue_comment>username_4: The right answer should be: ``` myModel.objects.filter(tasks__contains=[{"task":"test"}]) ``` You may want to add more filters to narrow down and speed up the query if needed, something like ``` myModel.objects.filter(Q(tasks_level=10, tasks__contains=[{"task":"test"}])) ``` Upvotes: 5 [selected_answer]
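To make the `contains` semantics discussed above concrete, here is a plain-Python sketch (no Django or Postgres involved, and the helper name `jsonb_contains` is made up for illustration) of the list-of-dicts containment check that a filter like `tasks__contains=[{"task": "test"}]` relies on — every dict in the filter value must be a key/value subset of some element of the stored array:

```python
def jsonb_contains(stored, needle):
    # mimics Postgres jsonb containment for a list of objects: each dict
    # in `needle` must have all its key/value pairs present in at least
    # one dict of `stored` (dict_items supports subset comparison)
    return all(
        any(item.items() >= wanted.items() for item in stored)
        for wanted in needle
    )

tasks = [{"task": "test", "level": "10"}, {"task": "test 123", "level": "20"}]

print(jsonb_contains(tasks, [{"task": "test"}]))     # first element matches
print(jsonb_contains(tasks, [{"task": "missing"}]))  # nothing matches
```

Note the comparison is exact-match on whole values, which is also why a substring or case-insensitive (`icontains`-style) check cannot be expressed this way.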
2018/03/14
611
2,089
<issue_start>username_0: I have a class in Java that builds some sophisticated Spark DataFrame. ``` package companyX; class DFBuilder { public DataFrame build() { ... return dataframe; } } ``` I add this class to the pyspark/Jupyter classpath so it's callable by py4j. Now when I call it I get a strange type: ``` b = sc._jvm.companyX.DFBuilder() print(type(b.build())) #prints: py4j.java_gateway.JavaObject ``` VS ``` print(type(sc.parallelize([]).toDF())) #prints: pyspark.sql.dataframe.DataFrame ``` Is there a way to convert this JavaObject into a proper pyspark DataFrame? One of the problems I have is that when I want to call df.show() on a DataFrame built in Java, it gets printed in the Spark logs, and not in the notebook cell.<issue_comment>username_1: You can use the `DataFrame` initializer: ``` from pyspark.sql import DataFrame, SparkSession spark = SparkSession.builder.getOrCreate() DataFrame(b.build(), spark) ``` If you use an outdated Spark version, replace the `SparkSession` instance with `SQLContext`. Reference: [Zeppelin: Scala Dataframe to python](https://stackoverflow.com/q/35719142) Upvotes: 3 [selected_answer]<issue_comment>username_2: As of Spark 2.4 you should still be using `SQLContext` instead of `SparkSession` when wrapping a Scala dataframe in a Python one. Some relevant `pyspark` `session` code: ``` self._wrapped = SQLContext(self._sc, self, self._jwrapped) ... # in methods returning DataFrame return DataFrame(jdf, self._wrapped) ``` If a `SparkSession` gets passed instead, some methods like `toPandas()` won't work with such a `DataFrame`. 
Upvotes: 2 <issue_comment>username_3: For someone with a SparkSession object, even with newer Spark (like 3.2): ``` from pyspark.sql import SparkSession, SQLContext, DataFrame # sparkSession spark = SparkSession.builder.master("local[*]") \ .appName('sample') \ .getOrCreate() # py4j.java_gateway.JavaObject javaObjectDf = spark._jvm.com.your.javaPackage.DfBuilder() sqlContext = SQLContext(sparkContext=spark.sparkContext, sparkSession=spark) df_from_java = DataFrame(javaObjectDf, sqlContext) # python DataFrame print(df_from_java) ``` Upvotes: 0
2018/03/14
833
3,302
<issue_start>username_0: I have reached out in several places and the help has been good but hasn't managed to make it work for me. Hopefully you guys can help me here. I am using JS inside a template page of a Flask application. I am using Parsley validation to verify a web form. I have also created a custom validator which should make an Ajax call with axios to the back end to determine if an email address is already registered. My Flask back end returns a boolean string to my HTML page. All of this looks right. I am getting back the right value. However, when I return out of the ".then" function in my axios call, the Parsley validator doesn't work/respond. If I remove the axios POST call and just simply say "return false" or "return true", the function works. The validation message is returned to the screen. So there appears to be an issue with putting the return statement inside my axios .then function? Could someone see if I am doing anything wrong here? I cannot for the life of me work out why it doesn't work. Thanks. Tim. I have included the FAILING JS code here. ``` window.Parsley.addValidator('existingEmail', { validateString: function(value) { // Performing a POST request var promise = axios.post('/api/v1.0/existingEmailCheck', {email : value}) .then(function(response){ var result = (response.data.toLowerCase() === "true"); console.log(result) return result }); console.log(promise) return promise }, messages: { en: "This email address has already been registered.", } }); ``` I have also included a slightly modified (without axios call) code that works here. ``` window.Parsley.addValidator('existingEmail', { validateString: function(value) { var string = "false" //This Simulates the Incoming Data. 
var result = (string.toLowerCase() === "true"); console.log(result) return result //Always returns false (due to hard coding) //However this means that it triggers the error message and validation fails //When you change the string to "true" it works as expected and validates }, messages: { en: "This email address has already been registered.", } }); ``` The example works as expected, which leads me to believe the error in the first code example is somewhere between the call and the if/else that returns the false or true. Any help people can give me would be awesome!<issue_comment>username_1: I think you need to return a jQuery promise instead of whatever `axios` returns (native promise?). FWIW, it looks like you could use the builtin `remote` validator. Upvotes: 0 <issue_comment>username_2: Just attempted to use Axios myself (I wanted to do away with jQuery and its huge library). Ended up creating an AsyncValidator (uses Ajax behind the scenes). **In your HTML just import jQuery and Parsley** ``` ``` **Define your DOM element to support remote validators** ``` ``` **In your `index.js` file, add your async validator.** ``` Parsley.addAsyncValidator('checkExists', function (xhr) { return false === xhr.responseJSON; }, '/data-management/verify-data?filter=signup'); ``` The resulting Ajax request being made is `/data-management/verify-data?filter=signup&id=value`, value being the value of the input field. That ended up working for me. Let me know if you need help. Upvotes: 1
2018/03/14
3,287
12,417
<issue_start>username_0: I'm trying to use Python's asyncio to run multiple servers together, passing data between them. For my specific case I need a web server with websockets, a UDP connection to an external device, as well as database and other interactions. I can find examples of pretty much any of these individually but I'm struggling to work out the correct way to have them run concurrently with data being pushed between them. The closest I have found is here: [Communicate between asyncio protocol/servers](https://stackoverflow.com/q/25625153/868546) (although I've been unable to make it run on Python 3.6) For a more concrete example: How would I take the following aiohttp example code from <https://github.com/aio-libs/aiohttp>: ``` from aiohttp import web async def handle(request): name = request.match_info.get('name', "Anonymous") text = "Hello, " + name return web.Response(text=text) async def wshandler(request): ws = web.WebSocketResponse() await ws.prepare(request) async for msg in ws: if msg.type == web.MsgType.text: await ws.send_str("Hello, {}".format(msg.data)) elif msg.type == web.MsgType.binary: await ws.send_bytes(msg.data) elif msg.type == web.MsgType.close: break return ws app = web.Application() app.router.add_get('/echo', wshandler) app.router.add_get('/', handle) app.router.add_get('/{name}', handle) web.run_app(app) ``` and the following TCP echo server sample (<http://asyncio.readthedocs.io/en/latest/tcp_echo.html>): ``` import asyncio async def handle_echo(reader, writer): data = await reader.read(100) message = data.decode() addr = writer.get_extra_info('peername') print("Received %r from %r" % (message, addr)) print("Send: %r" % message) writer.write(data) await writer.drain() print("Close the client socket") writer.close() loop = asyncio.get_event_loop() coro = asyncio.start_server(handle_echo, '127.0.0.1', 8888, loop=loop) server = loop.run_until_complete(coro) # Serve requests until Ctrl+C is pressed print('Serving on 
{}'.format(server.sockets[0].getsockname())) try: loop.run_forever() except KeyboardInterrupt: pass # Close the server server.close() loop.run_until_complete(server.wait_closed()) loop.close() ``` and combine them into a single script where any messages received via either websockets or the TCP echo server were sent out to all clients of either? And how would I add a piece of code that (say) every second sent a message to all clients (for the sake of argument the current timestamp)?<issue_comment>username_1: First you need to get all of your coroutines into a single event loop. You can start by avoiding convenience APIs that start the event loop for you such as `run_app`. Instead of `web.run_app(app)`, write something like: ``` runner = aiohttp.web.AppRunner(app) loop.run_until_complete(runner.setup()) # here you can specify the listen address and port site = aiohttp.web.TCPSite(runner) loop.run_until_complete(site.start()) ``` Then run the echo server setup, and both are ready to share the asyncio event loop. At the end of the script, start the event loop using `loop.run_forever()` (or in any other way that makes sense in your application). To broadcast information to clients, create a broadcast coroutine and add it to the event loop: ``` # Broadcast data is transmitted through a global Future. It can be awaited # by multiple clients, all of which will receive the broadcast. At each new # iteration, a new future is created, to be picked up by new awaiters. 
broadcast_data = loop.create_future() async def broadcast(): global broadcast_data while True: broadcast_data.set_result(datetime.datetime.now()) broadcast_data = loop.create_future() await asyncio.sleep(1) loop.create_task(broadcast()) ``` Finally, await the broadcast in each coroutine created for a client, such as `handle_echo`: ``` async def handle_echo(r, w): while True: data = await broadcast_data # data contains the broadcast datetime - send it to the client w.write(str(data)) ``` It should be straightforward to modify the websockets handler coroutine to await and relay the broadcast data in the same manner. Upvotes: 2 <issue_comment>username_2: Based on the advice of @username_1 this is my "working" code. I'm posting it as an answer because it is a complete working script which achieves all the requirements of my original question, but it isn't perfect as it doesn't currently exit cleanly. When run, it will accept web connections on port 8080 and TCP (e.g. telnet) connections on 8081. Any messages received via its web form or telnet will be broadcast to all connections. Additionally, every 5s the time will be broadcast. Advice on how to exit cleanly (`ctrl`+`C` with web connections established generates multiple "Task was destroyed but it is pending!" errors) would be appreciated so I can update this answer. (The code is quite long as it contains embedded HTML and JS for the websockets component.) ``` import asyncio from aiohttp import web import aiohttp import datetime import re queues = [] loop = asyncio.get_event_loop() # Broadcast data is transmitted through a global Future. It can be awaited # by multiple clients, all of which will receive the broadcast. At each new # iteration, a new future is created, to be picked up by new awaiters. 
broadcast_data = loop.create_future() def broadcast(msg): global broadcast_data msg = str(msg) print(">> ", msg) if not broadcast_data.done(): broadcast_data.set_result(msg) broadcast_data = loop.create_future() # Dummy loop to broadcast the time every 5 seconds async def broadcastLoop(): while True: broadcast(datetime.datetime.now()) # print('#',end='',flush=True) await asyncio.sleep(5) # Handler for www requests async def wwwhandler(r): host = re.search('https?://([^/]+)/', str(r.url)).group(1) name = r.match_info.get('name', "Anonymous") text = """ WebSocket PHP Open Group Chat App var output; var websocket; function WebSocketSupport() { if (browserSupportsWebSockets() === false) { document.getElementById("ws\_support").innerHTML = "<h2>Sorry! Your web browser does not supports web sockets</h2>"; var element = document.getElementById("wrapper"); element.parentNode.removeChild(element); return; } output = document.getElementById("chatbox"); websocket = new WebSocket('ws:{{HOST}}/ws'); websocket.onopen = function(e) { writeToScreen("You have have successfully connected to the server"); }; websocket.onmessage = function(e) { onMessage(e) }; websocket.onerror = function(e) { onError(e) }; } function onMessage(e) { writeToScreen('<span style="color: blue;"> ' + e.data + '</span>'); } function onError(e) { writeToScreen('<span style="color: red;">ERROR:</span> ' + e.data); } function doSend(message) { var validationMsg = userInputSupplied(); if (validationMsg !== '') { alert(validationMsg); return; } var chatname = document.getElementById('chatname').value; // document.getElementById('msg').value = ""; // document.getElementById('msg').focus(); var msg = chatname + ' says: ' + message; websocket.send(msg); writeToScreen(msg); } function writeToScreen(message) { var pre = document.createElement("p"); pre.style.wordWrap = "break-word"; pre.innerHTML = message; output.appendChild(pre); } function userInputSupplied() { var chatname = 
document.getElementById('chatname').value; var msg = document.getElementById('msg').value; if (chatname === '') { return 'Please enter your username'; } if (msg === '') { return 'Please the message to send'; } return ''; } function browserSupportsWebSockets() { if ("WebSocket" in window) { return true; } else { return false; } } ### Welcome to WebSocket PHP Open Group Chat App v1 **Name** """ text = text.replace('{{HOST}}', host) return web.Response(text=text, headers={'content-type':'text/html'}) # Handler for websocket connections async def wshandler(r): # Get the websocket connection ws = web.WebSocketResponse() await ws.prepare(r) # Append it to list so we can manage it later if needed r.app['websockets'].append(ws) try: # Create the broadcast task, and add it to list for later management echo_task = asyncio.Task(echo_loop(ws)) r.app['tasks'].append(echo_task) # Tell the world we've connected # Note: Connecting client won't get this message, not really sure why broadcast('Hello {}'.format(r.remote)) # await ws.send_str('Hello {}'.format(r.remote)) # Loop through any messages we get from the client async for msg in ws: # .. 
and broadcast them if msg.type == web.WSMsgType.TEXT: print('<< ', msg.data) broadcast(msg.data) # await ws.send_str("Hello, {}".format(msg.data)) # elif msg.type == web.WSMsgType.BINARY: # await ws.send_bytes(msg.data) elif msg.type == web.WSMsgType.CLOSE: print('WS Connection closed') break elif msg.type == web.WSMsgType.ERROR: print('WS Connection closed with exception %s' % ws.exception()) break else: print('WS Connection received unknown message type %2' % msg.type) # ws has stopped sending us data so broadcast goodbye broadcast('Goodbye {}'.format(r.remote)) except GeneratorExit: pass finally: # Close the ws and remove it from the list await ws.close() r.app['websockets'].remove(ws) # Cancel the task and remove it from the list # Note: cancel() only requests cancellation, it doesn't wait for it echo_task.cancel() r.app['tasks'].remove(echo_task) return ws # ws broadcast loop: Each WS connection gets one of these which waits for broadcast data then sends it async def echo_loop(ws): while True: msg = await broadcast_data await ws.send_str(str(msg)) # web app shutdown code: cancels any open tasks and closes any open websockets # Only partially working async def on_shutdown(app): print('Shutting down:', end='') for t in app['tasks']: print('#', end='') if not t.cancelled(): t.cancel() for ws in app['websockets']: print('.', end='') await ws.close(code=aiohttp.WSCloseCode.GOING_AWAY, message='Server Shutdown') print(' Done!') # Code to handle TCP connections async def echo_loop_tcp(writer): while True: msg = await broadcast_data writer.write( (msg + "\r\n").encode() ) await writer.drain() async def handle_echo(reader, writer): echo_task = asyncio.Task(echo_loop_tcp(writer)) while True: data = await reader.readline() if not data: break message = data.decode().strip() # addr = writer.get_extra_info('peername') broadcast(message) print("Connection dropped") echo_task.cancel() tcpServer = loop.run_until_complete(asyncio.start_server(handle_echo, '0.0.0.0', 8081, 
loop=loop)) print('Serving on {}'.format(tcpServer.sockets[0].getsockname())) # The application code: app = web.Application() app['websockets'] = [] app['tasks'] = [] app.router.add_get('/ws', wshandler) app.router.add_get('/', wwwhandler) app.router.add_get('/{name}', wwwhandler) app.on_shutdown.append(on_shutdown) def main(): # Kick off the 5s loop tLoop=loop.create_task(broadcastLoop()) # Kick off the web/ws server async def start(): global runner, site runner = web.AppRunner(app) await runner.setup() site = web.TCPSite(runner, '0.0.0.0', 8080) await site.start() async def end(): await app.shutdown() loop.run_until_complete(start()) # Main program "loop" try: loop.run_forever() except KeyboardInterrupt: pass finally: # On exit, kill the 5s loop tLoop.cancel() # .. and kill the web/ws server loop.run_until_complete( end() ) # Stop the main event loop loop.close() if __name__ == '__main__': main() ``` Upvotes: 2
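The shared-`Future` broadcast pattern used in both answers — many consumers awaiting one Future, which is replaced after each `set_result` — can be reduced to a minimal standalone sketch. This is a simplified illustration (client names are made up; it uses `asyncio.run`, so Python 3.7+, whereas the question targets 3.6 where `loop.run_until_complete` would be used instead):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    broadcast = loop.create_future()
    received = []

    async def client(name):
        # every client awaits the same Future; one set_result wakes them all
        data = await broadcast
        received.append((name, data))

    tasks = [asyncio.ensure_future(client(n)) for n in ("ws", "tcp", "udp")]
    await asyncio.sleep(0)        # let the clients reach their await
    broadcast.set_result("tick")  # one result fans out to every awaiter
    await asyncio.gather(*tasks)
    return received

received = asyncio.run(main())
print(received)
```

In the full servers above, the same idea repeats in a loop: after each broadcast the global Future is replaced with a fresh one, so clients that loop back to `await` pick up the next message.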
2018/03/14
256
871
<issue_start>username_0: I am new to Java 8. I have a list of objects of class A, where the structure of A is as follows:

```
class A {
    int name;
    boolean isActive;
}
```

Now I have a list of elements L of class A. In that list I want to update the element having name "test", setting isActive to false. I can do this very easily by writing a for loop and creating a new list. But how would I do that using the Java 8 stream API?<issue_comment>username_1: You can do it like this.

```
L.stream()
 .filter(item -> item.getName().equals("test"))
 .forEachOrdered(a -> a.setActive(false));
```

I believe the data type of name should be `String`, not `int`, in your question. Upvotes: 3 [selected_answer]<issue_comment>username_2:

```
yourList.replaceAll(x -> {
    if (x.getName().equals("SomeName")) {
        x.setIsActive(false);
        return x;
    }
    return x;
});
```

Upvotes: 0
2018/03/14
330
1,182
<issue_start>username_0: I'm facing a problem with my Wordpress Dashboard. It's displaying a warning on the main page and when adding a menu page; it states:

```
Warning: htmlspecialchars(): charset `Windows-1256' not supported, assuming utf-8 in /home1/khaledal/public_html/site/wp-admin/includes/template.php on line 1021
```

I have tried to check the file mentioned in the warning, but with no luck, as I can't find this specific charset, which is `Windows-1256`.<issue_comment>username_1: As per the docs, the [htmlspecialchars](https://secure.php.net/manual/en/function.htmlspecialchars.php) function uses `ini_get("default_charset")` when no charset is passed. I suggest you set your [default\_charset](https://secure.php.net/manual/en/ini.core.php#ini.default-charset) to `"UTF-8"`. If (as it seems is the case) your setting is explicitly `"Windows-1256"`, you might want to watch out for other problems in case that was deliberate for some reason. Upvotes: 1 <issue_comment>username_2: I fixed the problem by installing a plugin called [PHP Settings](https://wordpress.org/plugins/php-settings/)! Just enter the line `default_charset = "utf-8"` and you're done!! Upvotes: 0
2018/03/14
676
1,828
<issue_start>username_0:

```
df

     A    B        C
0  500  515     Jack
1  510  515    Helen
2  520  515  Mathiew
3  530  515   Jordan
```

I want to get a new `df1` with the following conditions:

* Select the rows where *A = B*.
* If no row exists where *A = B*, select the first existing row where *A > B*.

In this case, `df1` should be:

```
     A    B        C
2  520  515  Mathiew
```

I've tried:

```
df1 = df[df["A"] == df["B"]]
```
<issue_comment>username_1: I'm not sure of the structure of `df`, so I will assume it is a list of dicts:

```
df1 = [row for row in df if row["A"] == row["B"]]
if len(df1) == 0:
    for row in df:
        if row["A"] > row["B"]:
            df1.append(row)
            break
```

Upvotes: 0 <issue_comment>username_2: First check if there is at least one equal row with [`any`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.any.html) and then filter with your solution; if not, get the index of the first matching (max) value with [`idxmax`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.idxmax.html) (unique index values are necessary):

```
if (df["A"] == df["B"]).any():
    df1 = df[df["A"] == df["B"]]
else:
    df1 = df.loc[[(df["A"] > df["B"]).idxmax()]]
```

An alternative is to select the first row with `iloc`:

```
if (df["A"] == df["B"]).any():
    df1 = df[df["A"] == df["B"]]
else:
    df1 = df.loc[(df["A"] > df["B"])].iloc[0]
```

Upvotes: 2 [selected_answer]<issue_comment>username_3:

```
if (df["A"] == df["B"]).sum() == 0:
    first_bigger = (df["A"] > df["B"]).idxmax()
    new_df = df.iloc[first_bigger : first_bigger+1]
else:
    new_df = df[df["A"] == df["B"]]
new_df
```

If `(df["A"] == df["B"]).sum()` is equal to zero, that means no two items are equal; `(df["A"] > df["B"]).idxmax()` returns the first occurrence where A is bigger than B. Upvotes: 0
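As a runnable illustration of the accepted answer's `any()`/`idxmax()` logic, here is a small sketch of my own (not code from the thread); it assumes the question's column names and a unique default index, which `idxmax` requires:

```python
import pandas as pd

def select_rows(df):
    """Rows where A == B; otherwise the first row where A > B."""
    equal = df["A"] == df["B"]
    if equal.any():
        return df[equal]
    # idxmax on a boolean Series returns the label of the first True value
    return df.loc[[(df["A"] > df["B"]).idxmax()]]

df = pd.DataFrame({"A": [500, 510, 520, 530],
                   "B": [515, 515, 515, 515],
                   "C": ["Jack", "Helen", "Mathiew", "Jordan"]})
result = select_rows(df)
print(result)  # the single row 520 / 515 / Mathiew
```

Since no row has A equal to B here, the helper falls through to the first row where A exceeds B, matching the expected `df1` above.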
2018/03/14
1,876
6,277
<issue_start>username_0: Recently I got a question about which is the fastest among an `iterator`, a `list comprehension`, `iter(list comprehension)` and a `generator`, so I made the simple code below.

```
n = 1000000
iter_a = iter(range(n))
list_comp_a = [i for i in range(n)]
iter_list_comp_a = iter([i for i in range(n)])
gene_a = (i for i in range(n))

import time
import numpy as np

for xs in [iter_a, list_comp_a, iter_list_comp_a, gene_a]:
    start = time.time()
    np.sum(xs)
    end = time.time()
    print((end-start)*100)
```

The result is below.

```
0.04439353942871094   # iterator
9.257078170776367     # list_comprehension
0.006318092346191406  # iterator of list_comprehension
7.491207122802734     # generator
```

The generator is so much slower than the other things, and I don't know when it is useful.<issue_comment>username_1: `generators` do not store all elements in memory in one go. They `yield` one at a time, and this behavior makes them memory efficient. Thus you can use them when memory is a constraint. Upvotes: 6 [selected_answer]<issue_comment>username_2: I think I asked the wrong question, maybe. The original code was not correct because `np.sum` doesn't work well here: `np.sum(iterator)` doesn't return the correct answer. So I changed my code as below.

```
n = 10000
iter_a = iter(range(n))
list_comp_a = [i for i in range(n)]
iter_list_comp_a = iter([i for i in range(n)])
gene_a = (i for i in range(n))

import time
import numpy as np
import timeit

for xs in [iter_a, list_comp_a, iter_list_comp_a, gene_a]:
    start = time.time()
    sum(xs)
    end = time.time()
    print("type: {}, performance: {}".format(type(xs), (end-start)*100))
```

And then the performance is as below; the performance of `list` is best and the iterator's is not good.
```
type: , performance: 0.021791458129882812
type: , performance: 0.013279914855957031
type: , performance: 0.02429485321044922
type: , performance: 0.13570785522460938
```

And, as @username_1 already mentioned, the list is better for performance. But when memory is not enough, summing a `list` with a very high `n` makes the whole computer slower, whereas summing an `iterator` with a very high `n` may take a long time to compute but doesn't slow the computer down. Thanks for all the answers; when I have to compute a lot of data, a generator is better. Upvotes: 0 <issue_comment>username_3: As a preamble: your whole benchmark is just plain wrong - the "list\_comp\_a" test doesn't test the construction time of a list using a list comprehension (nor does "iter\_list\_comp\_a" fwiw), and the tests using `iter()` are mostly irrelevant - `iter(iterable)` is just a shortcut for `iterable.__iter__()` and is only of any use if you want to manipulate the iterator itself, which is practically quite rare.

If you hope to get some meaningful results, what you want to benchmark are the *execution* of a list comprehension, a generator expression and a generator function. To test their execution, the simplest way is to wrap all three cases in functions, one executing a list comprehension and the other two building lists from resp. a generator expression and a generator built from a generator function. In all cases I used `xrange` as the real source so we only benchmark the effective differences. Also we use `timeit.timeit` to do the benchmark, as it's more reliable than manually messing with `time.time()`, and is actually the pythonic standard canonical way to benchmark small code snippets.
```
import timeit

# py2 / py3 compat
try:
    xrange
except NameError:
    xrange = range

n = 1000

def test_list_comp():
    return [x for x in xrange(n)]

def test_genexp():
    return list(x for x in xrange(n))

def mygen(n):
    for x in xrange(n):
        yield x

def test_genfunc():
    return list(mygen(n))

for fname in "test_list_comp", "test_genexp", "test_genfunc":
    result = timeit.timeit("fun()", "from __main__ import {} as fun".format(fname), number=10000)
    print("{} : {}".format(fname, result))
```

Here (py 2.7.x on a 5+ years old standard desktop) I get the following results:

```
test_list_comp : 0.254354953766
test_genexp : 0.401108026505
test_genfunc : 0.403750896454
```

As you can see, list comprehensions are faster, and generator expressions and generator functions are mostly equivalent, with a very slight (but constant if you repeat the test) advantage to generator expressions.

**Now to answer your main question** "why and when would you use generators", the answer is threefold: 1/ memory use, 2/ infinite iterations and 3/ coroutines.

First point: memory use. Actually, you don't need generators here, only lazy iteration, which can be obtained by [writing your own iterable / iterator](https://docs.python.org/2/library/stdtypes.html#iterator-types) - like, for example, the builtin `file` type does - in a way that avoids loading everything in memory and only generates values on the fly. Here generator expressions and functions (and the underlying `generator` class) are a generic way to implement lazy iteration without writing your own iterable / iterator (just like the builtin `property` class is a generic way to use custom `descriptors` without writing your own descriptor class).

Second point: infinite iteration. Here we have something that you can't get from sequence types (lists, tuples, sets, dicts, strings etc), which are, by definition, finite.
An example is [the `itertools.cycle` iterator](https://docs.python.org/2/library/itertools.html#itertools.cycle):

> Return elements from the iterable until it is exhausted.
> Then repeat the sequence indefinitely.

Note that here again this ability comes not from generator functions or expressions but from the iterable/iterator protocol. There are obviously fewer use cases for infinite iteration than for memory use optimisations, but it's still a handy feature when you need it.

And finally the third point: coroutines. Well, this is a rather complex concept, especially the first time you read about it, so I'll let someone else do the introduction: <https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/>

Here you have something that only generators can offer, not a handy shortcut for iterables/iterators. Upvotes: 1
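The first two points (memory use and infinite iteration) can be made concrete with a small standard-library sketch of my own (not code from the answer):

```python
import sys
from itertools import cycle, islice

n = 100000
as_list = [i * i for i in range(n)]  # all n results are stored at once
as_gen = (i * i for i in range(n))   # values are produced on demand

# The generator object is a small fixed-size handle, regardless of n
print(sys.getsizeof(as_list), sys.getsizeof(as_gen))
assert sys.getsizeof(as_gen) < sys.getsizeof(as_list)

# Both produce the same values once consumed
assert sum(as_gen) == sum(as_list)

# Infinite iteration: cycle() repeats forever, islice() takes a finite prefix
first_seven = list(islice(cycle([1, 2, 3]), 7))
print(first_seven)  # [1, 2, 3, 1, 2, 3, 1]
```

The list's size grows with `n`, while the generator object stays the same size; `islice` is what makes an infinite iterator like `cycle` safe to consume.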
2018/03/14
584
2,613
<issue_start>username_0: I am exploring nodejs and js frameworks. I noticed that when I create a project, for example with `vue`

```
vue init webpack my-project
```

I get a HUGE directory named `node_modules` containing a lot of things not related to my project. Newbie in this field, my only wish is to gitignore this folder or, better, put it somewhere else. Is it common to have modules local to a project? Is there a way to install all these dependencies globally or in a dedicated environment (e.g. a Python virtualenv)?<issue_comment>username_1: The directory does contain libraries that are required by your project - and their dependencies. From my experience, the dependencies of the libraries I'm using are about 3/4 of the folder size. You can install a library globally using the `-g` switch of `npm`; I'm not sure if `vue` has a similar option. But this is **not recommended** - the point of installing libraries with your project is that the project remembers which libraries belong to it; those are saved in `package.json`. You could copy the `node_modules` directory to the root of your hard drive and merge it with other `node_modules` directories, but you'd risk mixing different library versions that way, so this is not recommended. Unless you're running low on free space, just leave it be. Remember to add `node_modules` to `.gitignore` if you're using git. Upvotes: 3 [selected_answer]<issue_comment>username_2: In short, **node\_modules is the place where all your project dependencies are stored**. It allows you to use those dependencies in your code, and it allows the modules themselves to have their own dependencies, if any. A local node\_modules folder is always created for a project. You can install dependencies globally with the `npm install -g module_name` command via your CLI. But this may cause issues if the global paths are not configured properly. Also, it is not advisable to keep all the dependencies required by an application in the global context. If you do not want some dependencies to be part of your production environment, you can install them as dev dependencies via the `npm install --save-dev module_name` command. These (normal & dev dependencies) will be installed when a developer clones your project and runs `npm install` locally to run the project and its tests. To avoid installing these on production, you can execute the `npm install --production` command; this will make sure that only the dependencies required for your code to run are installed in the node\_modules folder. Upvotes: 0
2018/03/14
827
2,510
<issue_start>username_0: I have a class ToDo and it has a dead line property.

```
class ToDo(models.Model):
    ...
    dead_line = models.DateTimeField()
    user = models.ForeignKey(User, on_delete=models.CASCADE)
```

I would like to get all the to-do's except those whose dead line has crossed the current date and time. I tried this way:

```
to_do_list = user.todo_set.all().exclude(dead_line__lte=datetime.now(pytz.timezone('Asia/Kolkata')))
```

But this gives me all the to-do's. Again, this works just fine and excludes the to-do's which are of the current day:

```
to_do_list = user.todo_set.all().exclude(dead_line__day=datetime.now(pytz.timezone('Asia/Kolkata')).day)
```

What am I doing wrong? How can I get all the to-do's whose dead line is greater than the current date and time?

**Update**

I have set up `TIME_ZONE = 'Asia/Kolkata'` and `USE_TZ = True`.

Sample data:

```
>>> datetime.now(pytz.timezone('Asia/Kolkata')).strftime("%Y-%m-%d %I:%M %p")
'2018-03-14 05:04 PM'
>>> user.todo_set.all()
, , , , , ]>
```

As you can see, the ToDo object should be excluded from the list when excluding, but it isn't.

**Update**

Looping with a for loop returns the expected results.

```
>>> to_do_list = []
>>> for todo in user.todo_set.all():
...     if todo.dead_line > datetime.now(pytz.timezone('Asia/Kolkata')):
...         to_do_list.append(todo)
...
>>> to_do_list
[, , , , ]
```
<issue_comment>username_1: > How can I get all the to-do's whose dead line is greater than the current date and time

I guess you are using `lte` instead of `gt` (greater than)? Get only the to-do's whose deadline is later than now:

> to\_do\_list = user.todo\_set.all().filter(dead\_line\_\_gt=datetime.now(pytz.timezone('Asia/Kolkata')))

Unless you stored everything with the timezone Kolkata, this can be an empty queryset; you should always use UTC. Notice that you are looking up the ToDos of a certain user, so make sure the user has data. The second approach you use seems wrong.
You're comparing the day, which is a number between 1 and 31. From <https://docs.python.org/2/library/datetime.html>:

> date.day
> Between 1 and the number of days in the given month of the given year

Upvotes: 1 <issue_comment>username_2: Try using `filter()`:

```
from django.utils import timezone

user.todo_set.filter(dead_line__gt=timezone.now())
```

And update your settings to use timezone-aware datetimes:

```
USE_TZ = True
```

Upvotes: 0
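The core of what the answers rely on (the `>` comparison between timezone-aware datetimes) can be sketched outside the ORM with the standard library alone; the data below is invented for illustration, and the list comprehension stands in for `.filter(dead_line__gt=now)`:

```python
from datetime import datetime, timedelta, timezone

IST = timezone(timedelta(hours=5, minutes=30))  # Asia/Kolkata UTC offset

now = datetime(2018, 3, 14, 17, 4, tzinfo=IST)
todos = [
    {"title": "expired", "dead_line": now - timedelta(hours=1)},
    {"title": "due tonight", "dead_line": now + timedelta(hours=5)},
    {"title": "due tomorrow", "dead_line": now + timedelta(days=1)},
]

# Keep only the entries whose deadline is still in the future
pending = [t for t in todos if t["dead_line"] > now]
print([t["title"] for t in pending])  # ['due tonight', 'due tomorrow']
```

Because both sides of the comparison are timezone-aware, the ordering is correct regardless of the offsets involved.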
2018/03/14
808
3,079
<issue_start>username_0: I have a parent jsx component that has 2 different jsx components within it. The one component is a button and the other is a div that opens and closes itself when you click on it (it has a click handler and a state of open or closed). I now want to add the ability for the button to open and close the div as well. Is the only way to accomplish this is to pass a handler function down to the button from the parent, moving the div’s open and closed state to the parent component, and pass the state down to the div as props? The reason I ask is that this particular div component is used in a number of different components and removing the open and closed state would affect a lot of different parent components.<issue_comment>username_1: Here's a code example of allowing external state manipulation where you can mix the usage of the button or the div to toggle the state. You extend your Collapsible component to use passed props to update the state. ``` class Collapsible extends React.Component { constructor(props){ super(props); this.state = { isOpen: this.props.isOpen !== false }; this.toggleOpen = this.toggleOpen.bind(this); } componentWillReceiveProps({ isOpen }) { this.setState({ isOpen }); } toggleOpen(){ this.setState((prevState) => ({ isOpen: !prevState.isOpen })) } render() { let display = this.state.isOpen ? null : "none"; return ( header {this.props.children} ); } } class Parent extends React.Component { constructor(props){ super(props); this.state = { isOpen: true }; this.toggleOpen = this.toggleOpen.bind(this); } toggleOpen(){ this.setState((prevState) => ({ isOpen: !prevState.isOpen })) } render() { return ( content toggle ); } } ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Here is another code example, hope it helps: [![Edit 3xrw9vny8m](https://codesandbox.io/static/img/play-codesandbox.svg)](https://codesandbox.io/s/3xrw9vny8m) I use the local state of the `Container` and pass this down to the child components. 
In a bigger app I'd advise you to use something like [Redux](https://redux.js.org/) to manage your state. The central idea is that the parent component passes a function which can "change its state" to the button child. It also passes the current `isOpen` state to the panel. Clicking the button will change the state of the parent, thus triggering a re-render, thus updating the collapsable. For future reference:

```
import React from "react";
import { render } from "react-dom";

const Collapsable = ({ isOpen }) =>
  isOpen ? ( {" "} Hey, I'm open{" "} ) : ( Oh no...closed :( );

const Button = ({ openPanel }) => ( openPanel()}>Open Panel );

class Container extends React.PureComponent {
  state = { open: false };
  openPanel = () => {
    this.setState({ ...this.state, open: this.state.open ? false : true });
  };
  render() {
    return ( );
  }
}

const App = () => ( );

render(, document.getElementById("root"));
```

Upvotes: 0
2018/03/14
985
2,999
<issue_start>username_0: I have 2 div elements on a HTML page. The first div is supposed to be the top header section (fixed), with the one below it a scrollable content div. The header section is using position fixed and does not scroll. However, when I scroll the second div, it ends up moving up behind the div above it. Is there a way to prevent this so that the second div does not scroll up above its initial top location? Also, the scrollbar itself has the height of the entire page (including the top section). Is there a way to limit the scroll bar to just the second div element. I have tried several permutations, including answers referenced on this page: [Scrolling only content div, others should be fixed](https://stackoverflow.com/questions/17954181/scrolling-only-content-div-others-should-be-fixed) HTML Code (Snippet): ```css .Toolbar { height: 40px; width: 100%; position: fixed; top: 0; left: 0; background-color: #40A4C8; padding: 0 20px; z-index: 90; } .Layout { top: 42px; position: absolute; width: 100%; overflow: auto; background-color: orange; } .Items { } .Items li { width: 80%; border: 1px solid #eee; box-shadow: 0 2px 3px #ccc; padding: 10px; margin: 10px auto; box-sizing: border-box; list-style-type: none; } ``` ```html * Item A * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B ```<issue_comment>username_1: Rather than fixing your header, why not just add scroll to your body - below I have made it so that the page is always the size of the viewport and then the body overflows ```css * { box-sizing: border-box; } body { margin: 0; } .container { height: 100vh; display: flex; flex-direction: column; } .Toolbar { min-height: 40px; width: 100%; background-color: #40A4C8; padding: 0 20px; } .Layout { top: 42px; width: 100%; overflow: auto; background-color: orange; flex-grow: 1; } .Items {} .Items li { width: 
80%; border: 1px solid #eee; box-shadow: 0 2px 3px #ccc; padding: 10px; margin: 10px auto; box-sizing: border-box; list-style-type: none; } ``` ```html * Item A * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B * Item B ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: The `position` css property is working the way it should by fixing the position of the first div element. Even if you prevent the second div element from going behind the first div element with some workaround, it'd still look kind of very similar effect. At least you can add `height` css property to the `Items` class to limit the size of second div element. Upvotes: 0
2018/03/14
355
1,324
<issue_start>username_0: I want to get a string of numbers and add commas to create a more readable format for a long number. Normally I'd use `toLocaleString()` but it's not working as expected with a controlled input. In my code I'm doing:

```
handleChange(event) {
    const parseNumber = parseInt(event.target.value);
    const toLocale = parseNumber.toLocaleString();
    this.setState({ value: toLocale });
}
```

It's resetting the field after 3 numbers are entered - any ideas? <https://codesandbox.io/s/91q75k22mo><issue_comment>username_1: You can use `FormattedNumber` from `react-intl`. The documentation is available here: <https://github.com/yahoo/react-intl/wiki/Components#number-formatting-components>

A sample:

```
```

Upvotes: 0 <issue_comment>username_2: Working solution - in your handleChange function, change this:

```
const toNumber = Number(event.target.value);
```

to this:

```
const toNumber = Number(event.target.value.replace(/\D/g, ''));
```

The reason it wasn't working was that it was creating a `Number` based on the input value, which isn't a plain number but a formatted string, and hence contains non-digit characters. The above just removes the non-digit characters (though now that you know the issue, there are other ways you could solve it). Upvotes: 4 [selected_answer]
2018/03/14
1,343
3,258
<issue_start>username_0: I'm working with a huge dataframe in python and sometimes I need to add an empty row or several rows in a definite position to the dataframe. For this question I create a small dataframe df in order to show what I want to achieve.

```
> df = pd.DataFrame(np.random.randint(10, size = (3,3)), columns = ['A','B','C'])
>    A  B  C
> 0  4  5  2
> 1  6  7  0
> 2  8  1  9
```

Let's say I need to add an empty row if I have a zero-value in the column 'C'. Here the empty row should be added after the second row. So at the end I want to have a new dataframe like:

```
> new_df
>    A    B    C
> 0  4    5    2
> 1  6    7    0
> 2  nan  nan  nan
> 3  8    1    9
```

I tried with concat and append, but I didn't get what I want. Could you help me please?<issue_comment>username_1: something like this should work for you:

```
for key, row in df.iterrows():
    if row['C'] == 0:
        df.loc[key+1] = pd.Series([np.nan])
```

Upvotes: 0 <issue_comment>username_2: In case you know the index where you want to insert a new row, `concat` can be a solution.

Example dataframe:

```
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
#    A  B  C
# 0  1  4  7
# 1  2  5  8
# 2  3  6  9
```

Your new row as a dataframe with index 1:

```
new_row = pd.DataFrame({'A': np.nan, 'B': np.nan, 'C': np.nan}, index=[1])
```

Inserting your new row after the second row:

```
new_df = pd.concat([df.loc[:1], new_row, df.loc[2:]]).reset_index(drop=True)
#      A    B    C
# 0  1.0  4.0  7.0
# 1  2.0  5.0  8.0
# 2  NaN  NaN  NaN
# 3  3.0  6.0  9.0
```

Upvotes: 1 <issue_comment>username_3: You can try in this way:

```
l = df[df['C']==0].index.tolist()
for c, i in enumerate(l):
    dfs = np.split(df, [i+1+c])
    df = pd.concat([dfs[0], pd.DataFrame([[np.NaN, np.NaN, np.NaN]], columns=df.columns), dfs[1]], ignore_index=True)
print df
```

Input:

```
   A  B  C
0  4  3  0
1  4  0  4
2  4  4  2
3  3  2  1
4  3  1  2
5  4  1  4
6  1  0  4
7  0  2  0
8  2  0  3
9  4  1  3
```

Output:

```
      A    B    C
0   4.0  3.0  0.0
1   NaN  NaN  NaN
2   4.0  0.0  4.0
3   4.0  4.0  2.0
4   3.0  2.0  1.0
5   3.0  1.0  2.0
6   4.0  1.0  4.0
7   1.0  0.0  4.0
8   0.0  2.0  0.0
9   NaN  NaN  NaN
10  2.0  0.0  3.0
11  4.0  1.0  3.0
```

Last thing: it can happen that the last row has 0 in 'C', so you can add:

```
if df["C"].iloc[-1] == 0:
    df.loc[len(df)] = [np.NaN, np.NaN, np.NaN]
```

Upvotes: 3 [selected_answer]<issue_comment>username_4: Try using slice. First, you need to find the rows where C == 0. So let's create a bool df for this. I'll just name it 'a':

```
a = (df['C'] == 0)
```

So, whenever C == 0, a == True. Now we need to find the index of each row where C == 0, create an empty row and add it to the df:

```
df2 = df.copy()  # make a copy because we want to be safe here
for i in df.loc[a].index:
    empty_row = pd.DataFrame([], index=[i])  # creating the empty data
    j = i + 1  # just to get things easier to read
    df2 = pd.concat([df2.ix[:i], empty_row, df2.ix[j:]])  # slicing the df
df2 = df2.reset_index(drop=True)  # reset the index
```

I must say... I don't know the size of your df and if this is fast enough, but give it a try Upvotes: 2
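The split-and-concat idea from the accepted answer generalises to a small helper; this is my own sketch (it assumes a default `RangeIndex`), not code from the thread:

```python
import numpy as np
import pandas as pd

def insert_blank_after_zero(df):
    """Insert an all-NaN row after every row whose 'C' value is 0."""
    blank = pd.DataFrame([[np.nan] * len(df.columns)], columns=df.columns)
    pieces = []
    prev = 0
    for i in df[df["C"] == 0].index:
        pieces.append(df.iloc[prev:i + 1])  # rows up to and including the zero
        pieces.append(blank)                # the inserted empty row
        prev = i + 1
    pieces.append(df.iloc[prev:])           # the remaining rows
    return pd.concat(pieces, ignore_index=True)

df = pd.DataFrame({"A": [4, 6, 8], "B": [5, 7, 1], "C": [2, 0, 9]})
out = insert_blank_after_zero(df)
print(out)  # row 2 is all NaN, inserted after the row where C == 0
```

Because the final slice `df.iloc[prev:]` may be empty, the last-row-has-zero case handled separately in the accepted answer falls out naturally here.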
2018/03/14
1,243
3,032
<issue_start>username_0: I have strings such as 'pXoX prawa', they might contain a random number of X's. I want to replace these X with polish special characters

```
['ą', 'ć', 'ę', 'ł', 'ń', 'ó', 'ś', 'ź', 'ż']
```

and generate strings with all possible variants. In the case of "pXoX prawa" there are two X's, so all the possible combinations are 9^2=81, where 9 is the number of Polish special characters. I could brute force program it, but I wonder if anybody can come up with a 1-2 lines solution. Maybe some recursive coding. Any idea? If you want to use external libraries no problem.
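The question above has a compact standard-library solution via `itertools.product`; this sketch is my own suggestion, not an answer from the original thread:

```python
from itertools import product

POLISH = ['ą', 'ć', 'ę', 'ł', 'ń', 'ó', 'ś', 'ź', 'ż']

def variants(template, letters=POLISH):
    """Yield every string obtained by substituting each 'X' in template
    with one of the given letters (len(letters) ** X-count results)."""
    n = template.count('X')
    for combo in product(letters, repeat=n):
        letter = iter(combo)
        yield ''.join(next(letter) if ch == 'X' else ch for ch in template)

results = list(variants('pXoX prawa'))
print(len(results))  # 9 ** 2 == 81
print(results[0])    # 'pąoą prawa'
```

`product(letters, repeat=n)` enumerates all letter tuples for the n placeholders, so no recursion is needed and the 9^2 = 81 count from the question falls out directly.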
2018/03/14
720
2,458
<issue_start>username_0: **Can an XML start with anything other than a `<` character?** It was a random thought I just had, when I was trying to define how to differentiate a string containing an XML document from one containing a path to an XML file. I believe the answer is no, but I'm looking to be certain.<issue_comment>username_1: Only a `<` or a whitespace character can begin a [***well-formed***](https://stackoverflow.com/a/25830482/290085) XML document. The [**W3C XML Recommendation**](https://www.w3.org/TR/xml/) includes an EBNF which definitively defines an [**XML document**](https://www.w3.org/TR/xml/#sec-documents):

> ```
> [1]  document ::= prolog element Misc*
> [22] prolog   ::= XMLDecl? Misc* (doctypedecl Misc*)?
> [23] XMLDecl  ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>'
> [27] Misc     ::= Comment | PI | S
> [3]  S        ::= (#x20 | #x9 | #xD | #xA)+
> ```

From these rules it follows that an XML document may start with a whitespace character or a `<` character from any one of the following constructs:

* XML Declaration
* Comment
* PI
* Doctype Declaration
* Element

An XML document may start with no other character.

**Notes:**

1. An implication of these rules is that if an XML document contains an XML declaration, it must appear at the top (or you could receive a [*somewhat cryptic error message*](https://stackoverflow.com/q/19889132/290085)). So, for XML documents with an XML declaration, the first character will have to be a `<` and cannot be whitespace.
2. A [BOM](http://unicode.org/faq/utf_bom.html#BOM) may appear at the beginning of an XML document entity to indicate the byte order of the character encoding being used. These two bytes are typically not considered to be part of the XML document itself but rather the *storage unit* of the [physical structure](https://www.w3.org/TR/REC-xml/#sec-physical-struct) supporting the XML document. A BOM, along with an XML declaration, assists XML processors in [character encoding detection](https://www.w3.org/TR/REC-xml/#sec-guessing).

[Suggestion for BOM mention thanks to Jon Hanna](https://stackoverflow.com/users/400547/jon-hanna)

Upvotes: 4 [selected_answer]<issue_comment>username_2: A well-formed XML document entity always has "<" as its first non-whitespace character. A well-formed external general parsed entity need not start with "<". So if by "a XML" you mean "a well-formed XML document entity", then the answer is "no". Upvotes: 2
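The accepted answer's rule (only `<` or whitespace may begin a well-formed document) can be checked against a real parser; here is a small sketch of mine using the standard library's ElementTree:

```python
import xml.etree.ElementTree as ET

# Leading whitespace before the root element is fine (no XML declaration)
root = ET.fromstring("\n  <doc><child/></doc>")
assert root.tag == "doc"

# Any other leading character makes the input ill-formed
for bad in ("x<doc/>", "?<doc/>", "-<doc/>"):
    try:
        ET.fromstring(bad)
        raise AssertionError("parser unexpectedly accepted %r" % bad)
    except ET.ParseError:
        pass

# Note 1 in action: whitespace *before* an XML declaration is rejected,
# because the declaration, when present, must be the very first thing
try:
    ET.fromstring(" <?xml version='1.0'?><doc/>")
    raise AssertionError("parser unexpectedly accepted a leading space")
except ET.ParseError:
    print("declaration must come first")
```

The parser's behaviour matches the EBNF quoted above: whitespace is allowed as part of `Misc`, but nothing other than whitespace may precede the first `<`.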
2018/03/14
295
857
<issue_start>username_0: How to extract make and model from this json output.

```
[
  "{\"_id\":{\"$oid\":\"5a81d2136da3cd4c41b21509\"},\"make\":\"maruti\",\"model\":\"astar\",\"year\":1998}"
]
```
<issue_comment>username_1: To extract the JSON data in jQuery you can do the following. Note that the response is an array whose first element is itself a JSON string, so parse that element and read the properties from the parsed object:

```
var d = $.parseJSON(data[0]);
var make = d.make;
var model = d.model;
```

In make and model you will get your desired output. Hope it helps. Upvotes: 0 <issue_comment>username_2: Use JSON.parse to convert the string to a JS object and then access the properties using dot notation. You don't even need jQuery for this. Following is an example:

```js
let data = [
  "{\"_id\":{\"$oid\":\"5a81d2136da3cd4c41b21509\"},\"make\":\"maruti\",\"model\":\"astar\",\"year\":1998}"
];
data[0] = JSON.parse(data[0]);
console.log(data[0].make, data[0].model);
```

Upvotes: 1
2018/03/14
322
1,003
<issue_start>username_0: I use YQL to get weather data, for which a woeid is needed, but when I call the following url it returns a null result:

```
https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20geo.places%20where%20text%3D%27delhi%27&format=json
```

Result I received:

```
{"query":{"count":0,"created":"2018-03-14T10:51:42Z","lang":"en-US","results":null}}
```

Even on <https://developer.yahoo.com/weather/> it's showing null. Is there any other way to get a woeid for Yahoo weather?

Best Regards, Ashish<issue_comment>username_1: I have the same problem!! I have a scheduled script that cycles the temperatures of some cities and that had received data until this morning at 07:00 GMT+1. I think it depends on the Yahoo service! Maybe they have modified some parameters. Upvotes: 1 <issue_comment>username_2: It should be working. I emailed <EMAIL> yesterday about this service being down; I did not get a response, but the issue is resolved on my end. Upvotes: 0
2018/03/14
551
2,343
<issue_start>username_0: Does anyone know if there exists any official or widely accepted reference for React naming conventions to use when we build our applications? React has a lot of different types of components such as React.Component, directives, services and so on. Wouldn't you agree that having a reference naming convention when we implement them in our applications would make sense? For example: If we need to create a new component, how should we name it, like [Something]Component or component[Something] or something else? And the same applies for other classes. Another thing I wonder about is whether variables/functions that belong to the scope should have a special prefix or suffix. In some situations it may be useful to have a way to differentiate them from functions and other (non-React code).<issue_comment>username_1: I'm a big fan of the airbnb React style guide. <https://github.com/airbnb/javascript/tree/master/react> They also have an overall JS style guide. <https://github.com/airbnb/javascript> Upvotes: 3 <issue_comment>username_2: My understanding is that the React team is un-opinionated when it comes to naming conventions. With that said, it is also my understanding that components that return objects or classes traditionally start with capital letters and it's how we differentiate from a component or other file that is not a class. So if you see `src/components/Header.js`, you immediately know it's a class-based component and if you see `src/utils/validateEmails.js` you know it's going to be a function and not a class in there. I would also warn about the airbnb style guide because I just took a look at it and they encourage the use of `.jsx` extensions, yet if you look at the Reactjs documentation: <https://reactjs.org/docs/react-without-jsx.html> they say that jsx is not a requirement when building with React, you can just use javascript all day long, so really one can infer that creating components with just a `.js` extension is satisfactory.
What also backs up that inference is that the engineers at Facebook, the creators of React, do not recommend the utilization of `.jsx`, and <NAME> says that using `.jsx` made a difference in the pre-Babel days, but now it's not necessary, we can stick with `.js` extensions. source: <https://github.com/facebook/create-react-app/issues/87> Upvotes: 0
2018/03/14
652
2,402
<issue_start>username_0: My spigot plugin doesn't work. On the console, it says the plugin is enabled but I can't run the command in the plugin. Please help. This is the main code Plugin.java ``` package lol.quacnooblol.mypvpplugin; import org.bukkit.Bukkit; import org.bukkit.command.Command; import org.bukkit.command.CommandSender; import org.bukkit.entity.Player; import org.bukkit.plugin.java.JavaPlugin; public class Plugin extends JavaPlugin{ @Override public void onEnable() { Bukkit.getServer().getLogger().info("Plugin Enabled"); } @Override public void onDisable() { Bukkit.getServer().getLogger().info("Plugin Disabled"); } public boolean onCommand(CommandSender sender, Command cmd, String commandLabel, String[] args) { if(!(sender instanceof Player)) { sender.sendMessage("You ran this command on the console"); } Player player = (Player) sender; if(cmd.getName().equalsIgnoreCase("test")) { player.sendMessage("You ran the test command in game."); return true; } return true; } } ``` This is the plugin.yml ``` name: Plugin version: 0.1 main: lol.quacnooblol.mypvpplugin.Plugin author: QuacNoobLoL description: A pvp plugin command: test: usage: / description: A test command ```<issue_comment>username_1: Change the plugin.yml `command` to `commands`. In the future please refer to the [plugin.yml documentation](https://bukkit.gamepedia.com/Plugin_YAML) and remember even a single letter can break your code!
Upvotes: 4 [selected_answer]<issue_comment>username_2: In your Plugin.yml you need to use three spaces and not tabs. Here is the fixed file: ```yaml name: Plugin version: 0.1 main: lol.quacnooblol.mypvpplugin.Plugin author: QuacNoobLoL description: A pvp plugin commands: test: usage: / description: A test command ``` And your Plugin.java onCommand boolean needs the @Override annotation (and the `CommandSender` parameter needs a name), like this: ```java @Override public boolean onCommand(CommandSender sender, Command cmd, String commandLabel, String[] args) { if(cmd.getName().equalsIgnoreCase("test")){ if(!(sender instanceof Player)){ sender.sendMessage("You ran this command on the console"); return true; } Player player = (Player)sender; player.sendMessage("You ran the test command in game."); return true; } return false; } ``` this should work Upvotes: 0
2018/03/14
1,480
5,145
<issue_start>username_0: So I got this trouble in an Angular-CLI app. Here is what's happening: `ng serve`, `ng serve --aot` produces no exceptions, everything works fine. Only running `ng serve --prod` breaks while surfing the app. [ERROR TypeError: (void 0) is not a function](https://i.stack.imgur.com/NYAd7.png) I was searching for answers, and found [these dont's](https://github.com/qdouble/angular-webpack2-starter#aot--donts), so I have written 'public' before every property in the app and checked if there were function calls in providers, but nothing changed. Next I tried running it with the `--aot` flag and it was working just fine, but it was still crashing with `--prod` with the same errors. It is weird that I have similar processes everywhere, but it crashes in this exact place. Can someone please provide me a little info about where I should dig next? Cheers! --- UPDATE: I was cutting out of the code every single part of the login process, and then building it with `--prod`, and it turns out that MatDialog from @angular/material was producing the error. Login was triggered after the MatDialog login component resolves with the afterClosed() hook to its parent, and it produces exceptions somehow. So the parent, which triggered the pop-up, was HeaderComponent, which contains this method: ``` openLoginDialog(): void { this.authService.login(); const dialogRef = this.dialog.open(LoginDialogComponent); dialogRef.afterClosed().subscribe(result => { if (result) { this.authService.login(); } }); } ``` And inside LoginDialogComponent was a method which simply resolves: ``` login(login, password) { this.authService.tryLogin(login, password).subscribe(data => { this.dialogRef.close(true); }); } ``` After I removed MatDialog, in other words got rid of the only popup in the project, the error disappeared.
I have made another component with its own route for the Login form, but still consider that removing a whole working module because of build errors is not a solution. my package.json: ``` "dependencies": { "@angular/animations": "^5.0.2", "@angular/cdk": "^5.0.0-rc.1", "@angular/common": "^5.0.0", "@angular/compiler": "^5.0.0", "@angular/core": "^5.0.0", "@angular/flex-layout": "^2.0.0-beta.10-4905443", "@angular/forms": "^5.0.0", "@angular/http": "^5.0.0", "@angular/material": "^5.0.0-rc.1", "@angular/platform-browser": "^5.0.0", "@angular/platform-browser-dynamic": "^5.0.0", "@angular/platform-server": "^5.0.0", "@angular/router": "^5.0.0", "@ngrx/store": "^5.0.0", "@nguniversal/common": "^5.0.0-beta.5", "@nguniversal/express-engine": "^5.0.0-beta.5", "@nguniversal/module-map-ngfactory-loader": "^5.0.0-beta.5", "core-js": "^2.4.1", "hammerjs": "^2.0.8", "ngrx-store-freeze": "^0.2.0", "primeng": "^5.0.2", "rxjs": "^5.5.2", "tassign": "^1.0.0", "zone.js": "^0.8.14" }, "devDependencies": { "@angular/cli": "^1.6.2", "@angular/compiler-cli": "^5.0.0", "@angular/language-service": "^5.0.0", "@types/jasmine": "~2.5.53", "@types/jasminewd2": "~2.0.2", "@types/node": "~6.0.60", "codelyzer": "~3.2.0", "jasmine-core": "~2.6.2", "jasmine-spec-reporter": "~4.1.0", "karma": "~1.7.0", "karma-chrome-launcher": "~2.1.1", "karma-cli": "~1.0.1", "karma-coverage-istanbul-reporter": "^1.2.1", "karma-jasmine": "~1.1.0", "karma-jasmine-html-reporter": "^0.2.2", "protractor": "~5.1.2", "ts-loader": "^2.3.7", "ts-node": "~3.2.0", "tslint": "~5.7.0", "typescript": "~2.4.2" } ```<issue_comment>username_1: You can check where the exact issue is by running the following command. ``` ng build --prod --source-map=true ``` There is a possibility that this will not work out for you, but I hope it can. The reason behind this issue is bundling a js file which is already a minified js file. If you find any package is creating the issue, try removing it to fix the problem.
Upvotes: 1 <issue_comment>username_2: The --prod option will activate the following configuration in the angular.json file within your project folder ``` "configurations": { "production": { "fileReplacements": [ { "replace": "src/environments/environment.ts", "with": "src/environments/environment.prod.ts" } ``` You need to make sure your code works when it is served from localhost:4200 or whatever dev server you use, especially with security, as they are normally different from the target 'prod' environment. Upvotes: 1 <issue_comment>username_3: You need to use this syntax: ``` ng serve --configuration production ``` Upvotes: 0
2018/03/14
593
1,897
<issue_start>username_0: I have table: ``` CREATE TABLE MyTable ( RootId int, Direction bit, .... ); ``` Now, I must write a select from this table and join some tables to it. Which tables are joined depends on the Direction parameter. How to join MyTable3 like here: ``` select Root, Direction, Type from MyTable join MyTable1 on MyTable1.Id = RootId join MyTable2 on MyTable2.Id = RootId join MyTable3 on ... case select when Direction = 1 MyTable3.TypeId = MyTable1.TypeId else MyTable3.TypeId = MyTable2.TypeId ```<issue_comment>username_1: The predicate of a `CASE` expression (i.e. what the `CASE` expression generates) cannot be an equality condition, but rather it has to be a value. You may write the final join condition as follows: ``` INNER JOIN MyTable3 t3 ON (Direction = 1 AND t3.TypeId = t1.TypeId) OR (Direction <> 1 AND t3.TypeId = t2.TypeId) ``` Here is the full query: ``` SELECT Root, Direction, Type FROM MyTable t INNER JOIN MyTable1 t1 ON t1.Id = t.RootId INNER JOIN MyTable2 t2 ON t2.Id = t.RootId INNER JOIN MyTable3 t3 ON (Direction = 1 AND t3.TypeId = t1.TypeId) OR (Direction <> 1 AND t3.TypeId = t2.TypeId); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: For performance reasons, you may want to be using two `left join`s, like this: ``` select Root, Direction, coalesce(m3_1.Type, m3_2.Type) as type from MyTable join MyTable1 on MyTable1.Id = MyTable.RootId join MyTable2 on MyTable2.Id = MyTable.RootId left join MyTable3 m3_1 on MyTable.Direction = 1 and m3_1.TypeId = MyTable1.TypeId left join MyTable3 m3_2 on MyTable.Direction <> 1 and m3_2.TypeId = MyTable2.TypeId; ``` The use of `or` or `case` (or really anything other than `and` in an `on` clause) can have a big impact on performance. Upvotes: 1
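The OR-form join from the accepted answer is easy to sanity-check with an in-memory SQLite fixture (the table contents below are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE MyTable  (RootId INTEGER, Direction INTEGER);
CREATE TABLE MyTable1 (Id INTEGER, TypeId INTEGER);
CREATE TABLE MyTable2 (Id INTEGER, TypeId INTEGER);
CREATE TABLE MyTable3 (TypeId INTEGER, Type TEXT);
INSERT INTO MyTable  VALUES (1, 1), (2, 0);
INSERT INTO MyTable1 VALUES (1, 10), (2, 10);
INSERT INTO MyTable2 VALUES (1, 20), (2, 20);
INSERT INTO MyTable3 VALUES (10, 'from-t1'), (20, 'from-t2');
""")

# Direction = 1 rows pick up MyTable1's TypeId, all others pick up MyTable2's.
rows = cur.execute("""
SELECT t.RootId, t.Direction, t3.Type
FROM MyTable t
JOIN MyTable1 t1 ON t1.Id = t.RootId
JOIN MyTable2 t2 ON t2.Id = t.RootId
JOIN MyTable3 t3 ON (t.Direction = 1 AND t3.TypeId = t1.TypeId)
                 OR (t.Direction <> 1 AND t3.TypeId = t2.TypeId)
ORDER BY t.RootId
""").fetchall()
print(rows)  # [(1, 1, 'from-t1'), (2, 0, 'from-t2')]
```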
2018/03/14
2,559
7,106
<issue_start>username_0: I am playing with the 7.2 version of solr. I've uploaded a nice collection of texts in the German language and am trying to query and highlight a few queries. If I fire this query with highlight: <http://localhost:8983/solr/trans/select?q=trans:Zeit&hl=true&hl.fl=trans&hl.q=Kundigung&hl.snippets=3&wt=xml&rows=1> I get a nice text back: ``` true 0 10 3 trans:Zeit true Kundigung trans 1 xml x ... Zeit ... 2018-03-01T14:32:29.400Z 2305 1594374122229465088 ... *Kündigung* ... ... *Kündigung* ... ``` However, if I supply `Kündigung` as the highlight text, I get no answers, as the text/query parser replaced all the `ü` characters with `u`. I have a feeling that I need to supply the correct qparser. How should I specify it? It seems to me that the collection was built with and queried with the default `LuceneQParser` parser. How can I supply this parser in the url above? **UPDATE:** `http://localhost:8983/solr/trans/schema/fields/trans` returns ``` { "responseHeader":{ "status":0, "QTime":0}, "field":{ "name":"trans", "type":"text_de", "indexed":true, "stored":true}} ``` **Update 2**: So I've looked at the managed-schema of my solr installation/collection schema configuration and found the following: ``` ``` The way I interpret the information is that since query and index parts are omitted, the above code is meant to be the same for both query and index. Which... does not show any misconfiguration issues similar to [the answer 2 below](https://stackoverflow.com/questions/49276093/solr-highlighting-terms-with-umlaut-not-found-not-highlighted/49314056#49314056)...
I remembered though, adding the field `trans` with type `text_de`: ``` curl -X POST -H 'Content-type:application/json' --data-binary '{ "add-field":{ "name":"trans", "type":"text_de", "stored":true, "indexed":true} }' http://localhost:8983/solr/trans/schema ``` I've deleted all the documents using ``` curl http://localhost:8983/solr/trans/update?commit=true -d "<delete><query>*:*</query></delete>" ``` and then reinserted them again: ``` curl -X POST http://localhost:8983/solr/trans/update?commit=true -H "Content-Type: application/json" -d @all.json ``` Is this the correct way to "rebuild" the indexes in solr? **UPDATE 3:** The Charset settings of the standard Java installation were not set to UTF-8: ``` C:\tmp>java -classpath . Hello Cp1252 Cp1252 windows-1252 C:\tmp>cat Hello.java public class Hello { public static void main(String args[]) throws Exception{ // not cross-platform safe System.out.println(System.getProperty("file.encoding")); // jdk1.4 System.out.println( new java.io.OutputStreamWriter( new java.io.ByteArrayOutputStream()).getEncoding() ); // jdk1.5 System.out.println(java.nio.charset.Charset.defaultCharset().name()); } } ``` **UPDATE 4**: Restarted the solr with UTF8 settings: ``` bin\solr.cmd start -Dfile.encoding=UTF8 -c -p 8983 -s example/cloud/node1/solr bin\solr.cmd start -Dfile.encoding=UTF8 -c -p 7574 -s example/cloud/node2/solr -z localhost:9983 ``` Checked the JVM settings: ``` http://localhost:8983/solr/#/~java-properties file.encoding UTF8 file.encoding.pkg sun.io ``` reinserted the docs. No change: `http://localhost:8983/solr/trans/select?q=trans:Zeit&hl=true&hl.fl=trans&hl.q=Kundigung&hl.qparser=lucene&hl.snippets=3&rows=1&wt=xml` gives: ``` ... *Kündigung* ... ... *Kündigung* ... 
``` `http://localhost:8983/solr/trans/select?q=trans:Zeit&hl=true&hl.fl=trans&hl.q=K%C3%BCndigung&hl.qparser=lucene&hl.snippets=3&rows=1&wt=xml` gives: ``` ``` `uchardet all.json` (`file -bi all.json`) reports `UTF-8` Running from the ubuntu subsystem under windows: ``` $ export LC_ALL='en_US.UTF-8' $ export LC_CTYPE='en_US.UTF-8' $ curl -H "Content-Type: application/json" http://localhost:8983/solr/trans/query?hl=true\&hl.fl=trans\&fl=id -d ' { "query" : "trans:Kündigung", "limit" : "1", params: {"hl.q":"Kündigung"} }' { "responseHeader":{ "zkConnected":true, "status":0, "QTime":21, "params":{ "hl":"true", "fl":"id", "json":"\n{\n \"query\" : \"trans:Kündigung\",\n \"limit\" : \"1\", params: {\"hl.q\":\"Kündigung\"}\n}", "hl.fl":"trans"}}, "response":{"numFound":124,"start":0,"maxScore":4.3724422,"docs":[ { "id":"b952b811-3711-4bb1-ae3d-e8c8725dcfe7"}] }, "highlighting":{ "b952b811-3711-4bb1-ae3d-e8c8725dcfe7":{}}} $ curl -H "Content-Type: application/json" http://localhost:8983/solr/trans/query?hl=true\&hl.fl=trans\&fl=id -d ' { "query" : "trans:Kündigung", "limit" : "1", params: {"hl.q":"Kundigung"} }' { "responseHeader":{ "zkConnected":true, "status":0, "QTime":18, "params":{ "hl":"true", "fl":"id", "json":"\n{\n \"query\" : \"trans:Kündigung\",\n \"limit\" : \"1\", params: {\"hl.q\":\"Kundigung\"}\n}", "hl.fl":"trans"}}, "response":{"numFound":124,"start":0,"maxScore":4.3724422,"docs":[ { "id":"b952b811-3711-4bb1-ae3d-e8c8725dcfe7"}] }, "highlighting":{ "b952b811-3711-4bb1-ae3d-e8c8725dcfe7":{ "trans":[" ... *Kündigung* ..."]}}} ``` **UPDATE 5** Without supplying `hl.q` (`http://localhost:8983/solr/trans/select?q=trans:Kundigung&hl=true&hl.fl=trans&hl.qparser=lucene&hl.snippets=3&rows=1&wt=xml` or `http://localhost:8983/solr/trans/select?q=trans:K%C3%BCndigung&hl=true&hl.fl=trans&hl.qparser=lucene&hl.snippets=3&rows=1&wt=xml`): ``` ... *Kündigung* ... ... *Kündigung* ... ... *Kündigung* ... 
``` in this case, the `hl.q` took the highlighting terms from the query itself, and did a superb job..<issue_comment>username_1: Could be a problem with your JVM's encoding. What about -Dfile.encoding=UTF8? Check LC\_ALL and LC\_CTYPE too. Should be UTF-8. What field type is the trans field? I even indexed german text with text\_en and do not have any problems with Umlauts in highlighting or search and I use the LuceneQParser too. How looks the response when you query via Solr Admin UI (<http://localhost:8983/solr/#/trans/query>) and hl checkbox activated? Upvotes: 2 <issue_comment>username_1: Check your analyzer chain too. I get the same behaviour as you described, when I misconfigure the chain this way: ``` ``` The `GermanNormalizationFilterFactory` and `GermanLightStemFilterFactory` both replaces umlauts. Upvotes: 2 [selected_answer]<issue_comment>username_2: What you need to specify is the attribute, for which the highlighting is done. Similar to `q=trans:Zeit`, where you specified `trans` as an attribute, you need to specify `hl.q` to be `hl.q=trans:Kündigung`. Your request then becomes: [http://localhost:8983/solr/trans/select?q=trans:Zeit&hl=true&hl.fl=trans&hl.q=trans:Kündigung&hl.snippets=3&wt=xml&rows=1](http://localhost:8983/solr/trans/select?q=trans:Zeit&hl=true&hl.fl=trans&hl.q=trans:K%C3%BCndigung&hl.snippets=3&wt=xml&rows=1) This answer was humbly presented by <NAME>, <NAME>, and <NAME>, solr community and support. This is the post on their behalf. Upvotes: 1
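One detail worth pinning down from the URL experiments above: in UTF-8, the ü of Kündigung is the byte pair C3 BC, so a correctly encoded request parameter must read K%C3%BCndigung. Python's standard library shows the encoding (a sketch; the parameter names are the ones used in the question's URLs):

```python
from urllib.parse import quote, urlencode

term = "Kündigung"
print(quote(term))  # K%C3%BCndigung

# Building the full /select query string with a properly encoded hl.q:
params = {"q": "trans:Zeit", "hl": "true", "hl.fl": "trans", "hl.q": term}
query_string = urlencode(params)
print(query_string)
```

Sending the raw, unencoded ü from a terminal whose locale is not UTF-8 (as in the Windows Cp1252 case above) produces different bytes, which is consistent with the behaviour reported in the updates.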
2018/03/14
1,069
3,977
<issue_start>username_0: I'm building a very simple application using Flask as server and C# as client. The server receive an image via HTTP POST request and process it. My server seems to work fine because I tested it both with Postman and a python client. However the POST request with image attached from my C# client cannot be passed to server. I've tested with HttpClient and Restsharp but none worked, the server complains there's no image attached. Here are my server code: ``` from flask import Flask, jsonify from flask import abort from flask import make_response from flask import request, Response from flask import url_for from werkzeug.utils import secure_filename import jsonpickle import numpy as np import cv2 import os import json import io app = Flask(__name__) UPLOAD_FOLDER = os.path.basename('uploads') ALLOWED_EXTENSIONS = set(['png', 'jpg', 'jpeg', 'gif', 'mp4', '3gp', 'mov']) app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER app.config['MAX_CONTENT_LENGTH'] = 16 * 1024 * 1024 # 16 MB @app.route('/upload', methods=['POST']) def upload(): file = request.files['file'] filename = secure_filename(file.filename) in_memory_file = io.BytesIO() file.save(in_memory_file) data = np.fromstring(in_memory_file.getvalue(), dtype=np.uint8) color_image_flag = 1 img = cv2.imdecode(data, color_image_flag) cv2.imwrite("uploads\\" + file.filename, img) ``` and here are my client code using Restsharp ``` public void Test() { var client = new RestClient("http://127.0.0.1:5000/upload"); var request = new RestRequest(Method.POST); request.AddHeader("content-type", "multipart/form-data; boundary=----Boundary"); request.AddParameter("multipart/form-data; boundary=----Boundary", "------Boundary\r\nContent-Disposition: form-data; name=\"file\"; filename=\"path\"\r\nContent-Type: image/jpeg\r\n\r\n\r\n------Boundary--", ParameterType.RequestBody); IRestResponse response = client.Execute(request); } ``` here is client code using HttpClient ``` public void Upload() { string path = "path"; 
FileInfo fi = new FileInfo(path); string fileName = fi.Name; byte[] fileContents = File.ReadAllBytes(fi.FullName); Uri webService = new Uri(@"http://127.0.0.1:5000/upload"); HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Post, webService); requestMessage.Headers.ExpectContinue = false; MultipartFormDataContent multiPartContent = new MultipartFormDataContent("----MyGreatBoundary"); ByteArrayContent byteArrayContent = new ByteArrayContent(fileContents); byteArrayContent.Headers.Add("Content-Type", "application/octet-stream"); multiPartContent.Add(byteArrayContent, "this is the name of the content", fileName); requestMessage.Content = multiPartContent; HttpClient httpClient = new HttpClient(); Task httpRequest = httpClient.SendAsync(requestMessage, HttpCompletionOption.ResponseContentRead, CancellationToken.None); HttpResponseMessage httpResponse = httpRequest.Result; } ```<issue_comment>username_1: You need to set the `name` argument to `"file"` instead of `"this is the name of the content"` in your C# code. ``` multiPartContent.Add(byteArrayContent, "file", "image.jpg"); ``` Here is a stripped down method that does the trick: ``` public Task UploadAsFormDataContent(string url, byte[] image) { MultipartFormDataContent form = new MultipartFormDataContent { { new ByteArrayContent(image, 0, image.Length), "file", "pic.jpeg" } }; HttpClient client = new HttpClient(); return client.PostAsync(url, form); } ``` Upvotes: 2 <issue_comment>username_2: For those who made the same mistakes, I had to change `name` argument to `file` (as username_1 suggests) and change `byteArrayContent.Headers.Add("Content-Type", "application/octet-stream");` to `byteArrayContent.Headers.Add("Content-Type", "multipart/form-data");` Upvotes: 1 [selected_answer]
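As the accepted answer notes, the part's `name` must be `file` so Flask's `request.files['file']` can find it. What a correct multipart body looks like can be reproduced with nothing but the Python standard library (a hand-rolled sketch for illustration; real clients should let their HTTP library assemble this):

```python
import io
import uuid

def build_multipart(field_name: str, filename: str, data: bytes):
    """Assemble a minimal multipart/form-data body by hand (sketch only)."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(
        f'Content-Disposition: form-data; name="{field_name}"; '
        f'filename="{filename}"\r\n'.encode()
    )
    body.write(b"Content-Type: image/jpeg\r\n\r\n")
    body.write(data)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return f"multipart/form-data; boundary={boundary}", body.getvalue()

content_type, body = build_multipart("file", "pic.jpeg", b"\xff\xd8fake-jpeg")
print(content_type)
print(b'name="file"' in body)  # True
```

The C# fix in this thread amounts to exactly this: the `Content-Disposition` header of the file part must carry `name="file"`.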
2018/03/14
1,567
5,426
<issue_start>username_0: Ok, so I have some data that I want to convert from multiple rows to multiple columns. My input data looks loosely like this - ``` +----------+----------------+-----------------+ | SKU | Attribute Name | Attribute Value | +----------+----------------+-----------------+ | Product1 | Colour | Black | | Product1 | Size | Large | | Product1 | Height | 20cm | | Product1 | Width | 40cm | | Product2 | Colour | Red | | Product2 | Width | 30cm | | Product2 | Size | Large | | Product3 | Height | 25cm | | Product3 | Width | 30cm | | Product3 | Length | 90cm | | Product3 | Weight | 5kg | | Product3 | Size | Large | | Product3 | Colour | Blue | +----------+----------------+-----------------+ ``` What I want to achieve is an output like this - ``` +----------+--------+--------+--------+-------+--------+-------+ | SKU | Colour | Height | Length | Size | Weight | Width | +----------+--------+--------+--------+-------+--------+-------+ | Product1 | Black | 20cm | | Large | | 40cm | | Product2 | Red | | | Large | | 30cm | | Product3 | Blue | 25cm | 90cm | Large | 5kg | 30cm | +----------+--------+--------+--------+-------+--------+-------+ ``` I've tried Pivot tables, but you can only return numeric values, rather than the text values I'm looking for. I know I could probably achieve it using a number of step looking up values and filling them, but I feel like there should be a more simplistic way to achieve this. Maybe it's something better achieved in database rather than a spreadsheet. Any help would be very much appreciated.<issue_comment>username_1: You could do this using a helper column and then match it using index + match. Not as simple as you thought, but does work. 1) Add helper column to your data (call it 'Helper'). `=concat(SKU,'Attribute Name')` 2) Use a pivot to get a unique list of SKUs in the rows so that it's easy to update once the data changes. (I'm assuming this is in column A and values start at row 4). 
3) Use another pivot to get a unique list of Attributes in the columns next to the other pivot. Then you have the structure of your results. (I'm assuming the first value is in B3). 4) Index match the values of the table `=index('Attribute Value', match(concat($A4,B$3),'Helper',0))` Note though that this only works when each combination of SKU and Attribute is unique. Upvotes: 0 <issue_comment>username_2: You can do this in ̶5̶ ̶s̶t̶e̶p̶s̶ 4 steps with Powerquery. This is in-built for 2016 and a free add-in from Microsoft from 2013 on wards ( or 2010 Professional Plus with Software Assurance). See info <https://www.microsoft.com/en-gb/download/details.aspx?id=39379> The advantage is you can easily add rows to the source and simply refresh the query. 1) You select any cell in the range, then in 2016 Get & Transform tab, earlier version use the Powerquery tab, select data from table. A window will pop up with your range of data in: [![Step 1](https://i.stack.imgur.com/9HuEJ.png)](https://i.stack.imgur.com/9HuEJ.png) 2) Transform > Pivot column > Attribute Name column for Attribute Value in Values Column (used advanced options to select "Don't aggregate") [![Pivot](https://i.stack.imgur.com/Ao7yv.png)](https://i.stack.imgur.com/Ao7yv.png) 3) Drag columns around to desired arrangement [![Column order](https://i.stack.imgur.com/GxxYm.png)](https://i.stack.imgur.com/GxxYm.png) 4) Home > Close and load to sheet Here is a version without the column re-ordering [![Image](https://i.stack.imgur.com/y2vUh.gif)](https://i.stack.imgur.com/y2vUh.gif) Edit: Thanks to @Ron Rosenfeld for reminding me that truly *null* values don't need replacing with blanks as they will appear as blanks when written to the sheet. 
So this step was removed: 4) Highlight columns to replace nulls in and go to transform > replace values > and Value to Find: null Replace With: [![Find and replace nulls](https://i.stack.imgur.com/czNFt.png)](https://i.stack.imgur.com/czNFt.png) Upvotes: 3 [selected_answer]<issue_comment>username_3: This assumes that the data is in columns **A** through **C**: ``` Sub croupier() Dim i As Long, N As Long, vA As String, vB As String, vC As String Dim rw As Long, cl As Long ' setup column headers Columns(2).SpecialCells(2).Offset(1).Copy Range("D1") Columns(4).RemoveDuplicates Columns:=1, Header:=xlNo Columns(4).SpecialCells(2).Copy Range("E1").PasteSpecial Transpose:=True Columns(4).SpecialCells(2).Clear ' setup row headers Columns(1).SpecialCells(2).Copy Range("D1") Columns(4).RemoveDuplicates Columns:=1, Header:=xlYes ' deal the data N = Cells(Rows.Count, "A").End(xlUp).Row For i = 2 To N vA = Cells(i, 1) vB = Cells(i, 2) vC = Cells(i, 3) cl = Rows(1).Find(what:=vB, after:=Range("A1")).Column rw = Columns(4).Find(what:=vA, after:=Range("D1")).Row Cells(rw, cl) = vC Next i End Sub ``` [![enter image description here](https://i.stack.imgur.com/09C67.png)](https://i.stack.imgur.com/09C67.png) Upvotes: 0
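For anyone reaching this thread from outside Excel: the same rows-to-columns reshape is a handful of lines in plain Python, which makes the Power Query behaviour easy to reason about (the sample rows mirror the question's table):

```python
rows = [
    ("Product1", "Colour", "Black"), ("Product1", "Size", "Large"),
    ("Product2", "Colour", "Red"),   ("Product2", "Width", "30cm"),
    ("Product3", "Height", "25cm"),  ("Product3", "Colour", "Blue"),
]

# Pivot: one dict of {attribute: value} per SKU; attributes a SKU lacks
# simply stay absent, which renders as a blank cell.
pivot = {}
for sku, attr, value in rows:
    pivot.setdefault(sku, {})[attr] = value

columns = sorted({attr for _, attr, _ in rows})
for sku in sorted(pivot):
    print(sku, [pivot[sku].get(col, "") for col in columns])
```

Note this assumes each (SKU, attribute) pair is unique, the same caveat the helper-column answer states.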
2018/03/14
1,450
5,015
<issue_start>username_0: I am a newbie on AWS & Python and am trying to implement a simple ML recommendation system using an AWS Lambda function for self-learning. I am stuck on packaging the combination of sklearn, numpy and pandas. Any two libs combined, meaning (Pandas and Numpy) or (Numpy and Scipy), work fine and deploy perfectly. But because I am building an ML system I need sklearn (scipy and pandas and numpy), which does not work, and I get this error on the aws lambda test. What I have done so far: I built my deployment package from within a python3.6 virtualenv, rather than directly from the host machine. (I have python3.6, virtualenv and awscli already installed/configured, and my lambda function code is in the ~/lambda\_code directory): 1. `cd ~` (We'll build the virtualenv in the home directory) 2. `virtualenv venv --python=python3.6` (Create the virtual environment) 3. `source venv/bin/activate` (Activate the virtual environment) 4. `pip install sklearn, pandas, numpy` 5. `cp -r ~/venv/lib/python3.6/site-packages/* ~/lambda_code` (Copy all installed packages into root level of lambda\_code directory. This will include a few unnecessary files, but you can remove those yourself if needed) 6. `cd ~/lambda_code` 7. `zip -r9 ~/package.zip .` (Zip up the lambda package) 8. `aws lambda update-function-code --function-name my_lambda_function --zip-file fileb://~/package.zip` (Upload to AWS) After that I get this error: ``` **"errorMessage": "Unable to import module 'index'"** ``` and ``` START RequestId: 0e9be841-2816-11e8-a8ab-636c0eb502bf Version: $LATEST Unable to import module 'index': **Missing required dependencies ['numpy']** END RequestId: 0e9be841-2816-11e8-a8ab-636c0eb502bf REPORT RequestId: 0e9be841-2816-11e8-a8ab-636c0eb502bf Duration: 0.90 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 33 MB ``` I have tried this on an EC2 instance as well but did not succeed. I googled and read multiple blogs and solutions, but nothing worked.
Please help me out on this.<issue_comment>username_1: You need to make sure all the dependent libraries AND the Python file containing your function are all in one zip file in order for it to detect the correct dependencies. So essentially, you will need to have Numpy, Pandas and your own files all in one zip file before you upload it. Also make sure that your code is referring to the local files (in the same unzipped directory) as dependencies. If you have done that already, the issue is probably how your included libraries get referenced. Make sure you are able to use the included libraries as a dependency by getting the correct relative path on AWS once it's deployed to Lambda. Upvotes: 0 <issue_comment>username_2: You are using python 3.6, so pip3 install numpy should be used; give it a try. Upvotes: 1 <issue_comment>username_3: So like <NAME> said, you need to use pip3 to install the libraries. So to figure out which python version is default you can type: ``` which python ``` or ``` python -V ``` So in order to install with python3 you need to type: ``` python3 -m pip install sklearn, pandas, numpy --user ``` Once that is done, you can make sure that the packages are installed with: ``` python3 -m pip freeze ``` This will show all the python libraries installed for your python version. Once you have the libraries you would want to continue with your regular steps. Of course you would first want to delete everything that you have placed in ~/venv/lib/python3.6/site-packages/\*. ``` cd ~/lambda_code zip -r9 ~/package.zip ``` Upvotes: 0 <issue_comment>username_4: If you're running this on Windows (like I was), you'll run into an issue with the libraries being compiled on an incompatible OS. You can use an Amazon Linux EC2 instance, or a Cloud9 development instance to build your virtualenv as detailed above.
Or, you could just download the pre-compiled wheel files as discussed on this post: <https://aws.amazon.com/premiumsupport/knowledge-center/lambda-python-package-compatible/> Essentially, you need to go to the project page on <https://pypi.org> and download the files named like the following: * For Python 2.7: module-name-version-cp27-cp27mu-manylinux1\_x86\_64.whl * For Python 3.6: module-name-version-cp36-cp36m-manylinux1\_x86\_64.whl Then unzip the .whl files to your project directory and re-zip the contents together with your lambda code. Upvotes: 0 <issue_comment>username_5: Was having a similar problem on Ubuntu 18.04. Solved the issue by using `python3.7` and `pip3.7`. It's important to use `pip3.7` when installing the packages, like `pip3.7 install numpy` or `pip3.7 install numpy --user` To install `python3.7` and `pip3.7` on Ubuntu you can use `deadsnakes/ppa` ``` sudo add-apt-repository ppa:deadsnakes/ppa sudo apt-get update sudo apt-get install python3.7 curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py python3.7 /tmp/get-pip.py ``` This solution should also work on Ubuntu 16.04. Upvotes: 0
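A debugging trick that complements the answers above: "Unable to import module 'index'" fires before the handler runs, because the failing import sits at module top level. Moving the import inside the handler (a sketch; the guard is my own, not from the question's code) turns the opaque crash into an explicit error payload you can see in the test console:

```python
import json

def handler(event, context):
    """Lambda entry point that reports a missing dependency explicitly
    instead of failing at module-import time."""
    try:
        import numpy as np  # fails here if numpy was not bundled correctly
    except ImportError as exc:
        return {"statusCode": 500, "body": json.dumps({"import_error": str(exc)})}
    return {"statusCode": 200, "body": json.dumps({"numpy": np.__version__})}

print(handler({}, None))
```

Once the packaging is fixed, the import can be moved back to module level; this is only a diagnostic aid.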
2018/03/14
1,029
3,686
<issue_start>username_0: I want to open Google Street View Android directly from my app. Can anyone help me in doing this? I have successfully opened the Maps app with Streetview thanks to SO but that's not what I am looking for. I actually want to open the Streetview camera directly from my app so I can take a panoramic photo. My actual task is to develop a camera app that can take panoramic images but I couldn't find anything for that, so I am working on things that can be done instead of a camera app, like cardboard. Here is the link to the question that I had asked earlier- [App to capture 360 View android](https://stackoverflow.com/questions/49249886/app-to-capture-360-view-android) Please help in this!
Once you have the libraries, you would want to continue with your regular steps. Of course, you would first want to delete everything that you have placed in ~/venv/lib/python3.6/site-packages/\*. ```
cd ~/lambda_code
zip -r9 ~/package.zip .
``` Upvotes: 0 <issue_comment>username_4: If you're running this on Windows (like I was), you'll run into an issue with the libraries being compiled for an incompatible OS. You can use an Amazon Linux EC2 instance, or a Cloud9 development instance, to build your virtualenv as detailed above. Or, you could just download the pre-compiled wheel files as discussed in this post: <https://aws.amazon.com/premiumsupport/knowledge-center/lambda-python-package-compatible/> Essentially, you need to go to the project page on <https://pypi.org> and download the files named like the following: * For Python 2.7: module-name-version-cp27-cp27mu-manylinux1\_x86\_64.whl * For Python 3.6: module-name-version-cp36-cp36m-manylinux1\_x86\_64.whl Then unzip the .whl files to your project directory and re-zip the contents together with your lambda code. Upvotes: 0 <issue_comment>username_5: I was having a similar problem on Ubuntu 18.04. I solved the issue by using `python3.7` and `pip3.7`. It's important to use `pip3.7` when installing the packages, like `pip3.7 install numpy` or `pip3.7 install numpy --user` To install `python3.7` and `pip3.7` on Ubuntu you can use `deadsnakes/ppa` ```
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.7
curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py
python3.7 /tmp/get-pip.py
``` This solution should also work on Ubuntu 16.04. Upvotes: 0
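An addendum to the packaging steps in this thread: the same archive layout (handler file and vendored packages at the zip root) can be produced with Python's standard library instead of the `zip` binary. This is only a sketch; `build_lambda_package` and the directory names are made up for illustration and are not part of any AWS tooling:

```python
import os
import zipfile

def build_lambda_package(code_dir, out_path):
    """Zip the contents of code_dir (handler plus vendored deps) into out_path.

    Archive names are made relative to code_dir, so the handler file ends up
    at the root of the archive, which is the layout Lambda expects.
    """
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(code_dir):
            for name in files:
                full_path = os.path.join(root, name)
                zf.write(full_path, os.path.relpath(full_path, code_dir))
```

A wrapper folder at the top of the archive (instead of the handler itself) is a frequent cause of import errors after upload, which is what the relative-path arcname avoids.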
2018/03/14
1,061
3,705
<issue_start>username_0: ```
{
  "code": 200,
  "status": "OK",
  "developerMessage": "OK",
  "userMessage": "Operation Successful",
  "data": {
    "settings": {
      "countries": {
        "1": "Afghanistan",
        "2": "Albania",
        "3": "Algeria",
        "4": "Andorra",
        "5": "Angola",
      },
      "mobile-code": {
        "+93": "Afghanistan +93",
        "+355": "Albania +355",
        "+213": "Algeria +213",
        "+376": "Andorra +376",
        "+244": "Angola +244",
      }
    },
    "status_code": 200,
    "success": true,
  },
  "dataType": "map"
}
```
2018/03/14
172
768
<issue_start>username_0: I have created a project on Google Actions. I would like to share the action with my friend. I tried giving them Viewer and Role Viewer access, but they are not able to access the action on their mobile's Google Assistant app. Could you tell me which role I should give them so that they can access my action.<issue_comment>username_1: I think you should choose "Project / Viewer" as the role. Upvotes: 1 <issue_comment>username_2: Once you have given them viewer access from your owner account, the Google project will now be visible on their Actions console. Going in, they will have to toggle the test status to ON so that they can invoke the action from their mobile devices. I don't think there is an easier way around this. Upvotes: -1
2018/03/14
550
1,978
<issue_start>username_0: I am getting data from the database which I need to group, so I am converting the database result set into an array and then passing it to the Laravel `collect` helper, but it gives me the error ``` Call to undefined function collect() ``` Code ``` $user_profile=collect(UserProfileItem::where('type', "age_group")->get()->toArray())->groupBy("age_group"); ``` Please help me with what I am doing wrong. I want to use the Laravel collection method `groupBy` to group my database result array by "age\_group", like the data below grouped by **account\_id** ``` [ 'account-x10' => [ ['account_id' => 'account-x10', 'product' => 'Chair'], ['account_id' => 'account-x10', 'product' => 'Bookcase'], ], 'account-x11' => [ ['account_id' => 'account-x11', 'product' => 'Desk'], ], ] ```<issue_comment>username_1: You need to first get the groups, loop through them, and add the data for each group to the collection ```
$groups = UserProfileItem::groupBy("age_group")->get();
$collection = collect();
foreach($groups as $group){
    $data = UserProfileItem::where('type', $group->type)->get();
    $collection->put($group->type, $data);
}
return $collection;
``` Upvotes: 1 <issue_comment>username_2: You don't need to add the collect function, as you are already getting a collection. So you need to do it as: ``` $user_profile = UserProfileItem::where('type', "age_group")->get()->groupBy("age_group"); ``` Upvotes: 2 <issue_comment>username_3: I think for previous versions of Laravel, creating your own grouping is the only solution ```
public function getGroupedUser($group = "age_group") {
    $user_profile = UserProfileItem::where('type', "age_group")->get();
    $grouped = [];
    foreach ($user_profile as $row) {
        $grouped[$row['age_group']][] = $row;
    }
    print_r($grouped);
    die;
}
``` Upvotes: 0
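As a language-neutral aside to this thread: the grouped shape the question is after (rows bucketed under the value of one key) is easy to express in any language. Here is a minimal Python sketch using the sample rows from the question; `group_by` is a made-up helper name, not a Laravel or Python built-in:

```python
from collections import defaultdict

def group_by(rows, key):
    """Bucket a list of dicts under each distinct value of `key`,
    preserving the order in which rows and key values first appear."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row[key]].append(row)
    return dict(grouped)

rows = [
    {'account_id': 'account-x10', 'product': 'Chair'},
    {'account_id': 'account-x10', 'product': 'Bookcase'},
    {'account_id': 'account-x11', 'product': 'Desk'},
]
# group_by(rows, 'account_id') produces the two-bucket structure
# shown in the question
```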
2018/03/14
1,223
4,231
<issue_start>username_0: For what reason(s) could this code fail (*no element found*)... ```
element(by.id('loginButton')).click(); // triggers route change
browser.wait(element(by.tagName('myComponent')).isPresent(), 10000, 'timeout');
element(by.tagName('myComponent')).click();
``` ...while this code works? ```
element(by.id('loginButton')).click(); // triggers route change
const eC = protractor.ExpectedConditions;
browser.wait(eC.visibilityOf(element(by.tagName('myComponent'))), 10000, 'timeout');
element(by.tagName('myComponent')).click();
``` I'm working with Angular 5.2.5, Protractor 5.3.0 and Jasmine 2.8.0. *May be related*: I could also have asked why I need to add a `browser.wait()` while `element(by())` is supposed to be automatically added to the ControlFlow by Protractor, but there are already lots of related questions ([here](https://stackoverflow.com/questions/21748442/protractor-how-to-wait-for-page-complete-after-click-a-button "lots"), [here](https://github.com/angular/protractor/issues/909 "of"), [there](https://github.com/angular/protractor/issues/2358 "related"), [there](https://github.com/angular/protractor/issues/2887 "questions"),...), with no clear answer unfortunately.<issue_comment>username_1: The two statements are not equivalent as such. I created a simple page like below ```
Tarun
<script>
var div = document.createElement('div');
div.innerText = 'lalwani';
div.id = 'last_name';
setTimeout(() => document.body.appendChild(div), 3000);
</script>
``` And a simple test like below ```
describe('angularjs homepage todo list', function() {
  it('should add a todo', async function() {
    browser.waitForAngularEnabled(false);
    browser.get('http://0.0.0.0:8000');

    const eC = protractor.ExpectedConditions;
    browser.wait(element(by.id('last_name')).isPresent(), 10000, 'timeout');
  });
});
``` When you run it, you will find the output is ```
Started
...
1 spec, 0 failures
Finished in 0.617 seconds
``` Now if you change the code to ```
describe('angularjs homepage todo list', function() {
  it('should add a todo', async function() {
    browser.waitForAngularEnabled(false);
    browser.get('http://0.0.0.0:8000');

    const eC = protractor.ExpectedConditions;
    browser.wait(eC.visibilityOf(element(by.id('last_name'))), 10000, 'timeout');
  });
});
``` The output of the same is below ```
Started
...
1 spec, 0 failures
Finished in 3.398 seconds
``` As you can see, `visibilityOf` actually waited for the object to appear while the previous one didn't. This is because the control flow executes `isPresent` right away and hands `wait` a promise resolving to a true/false value, while `visibilityOf` returns a function that `wait` can check by calling it again and again. You can verify this by adding the below in the test ```
console.log(typeof eC.visibilityOf(element(by.id('last_name'))))
console.log(typeof element(by.id('last_name')))
``` The output of the same is ```
function
object
``` So the assumption that your two statements below are the same is wrong, and that is why you don't get the correct results with the first one ```
browser.wait(element(by.tagName('myComponent')).isPresent(), 10000, 'timeout');
browser.wait(eC.visibilityOf(element(by.tagName('myComponent'))), 10000, 'timeout');
``` Upvotes: 2 <issue_comment>username_2: There is a not-so-obvious difference between the two. But the webdriver [docs](http://nr-synthetics-sw-jsdoc.s3-website-us-east-1.amazonaws.com/module_selenium-webdriver_phantomjs_class_Driver.html) are clear about this. `eC.visibilityOf(...)` - **Returns a function**. browser.wait() repeatedly evaluates functions until they return true. `isPresent()` - **Returns a promise**. browser.wait() does not / cannot repeatedly evaluate promises(!) browser.wait() will continue immediately when the promise resolves, regardless of whether it resolves to true or false. 
If you want to use isPresent() you can wrap it in a **function**. This allows webdriver to call it over and over. ``` browser.wait(() => element(by.tagName('myComponent')).isPresent(), 10000, 'timeout'); ``` Works exactly as you expect. Upvotes: 4 [selected_answer]
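The function-versus-promise distinction drawn in the answers above is not specific to Protractor; any polling wait has the same contract. A minimal Python sketch of that contract follows; `wait_for` is a made-up helper written for illustration, not a Protractor or Selenium API:

```python
import time

def wait_for(condition, timeout=10.0, interval=0.05):
    """Re-evaluate a zero-argument callable until it returns truthy.

    Passing a precomputed value instead of a callable would defeat the loop:
    the check could never be repeated, which mirrors passing isPresent()'s
    promise straight into browser.wait().
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f s" % timeout)
```

Handing `wait_for` the result of `condition()` instead of `condition` itself would mirror the `isPresent()` mistake: the value is computed once and can never flip to true later.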
2018/03/14
681
1,886
<issue_start>username_0: I have a list like this one: ``` l1 = [{'id':'78798798','gender':'Male'}, {'id':'78722228','gender':'Female'}, {'id':'33338','gender':'Male'}] ``` I need to check the length of a list obtained using a list comprehension and filtered by 'gender'. I try ``` len([x for x in l1 if x['gender'] == "Male"]) ``` but it returns an error. Then I try: ``` [[(k,v) for k,v in d.items()] for d in l1 if l1['gender'] == 'Male'] ``` but it returns the same error, and also ``` [[(k,v) for k,v in d.items() if k['gender'] == 'Male'] for d in l1] ``` How can I achieve my goal? Thanks in advance<issue_comment>username_1: Quick solution: `print(len([x for x in l1 if x['gender'] == "Male"]))` It looks for elements in the list `l1` and checks if the value of `gender` equals "Male". Upvotes: 0 <issue_comment>username_2: Probably one of the entries is missing the gender field. Try ``` print([x for x in l1 if 'gender' not in x]) ``` to find it. Upvotes: 1 <issue_comment>username_3: Clearly one of the dictionaries in the list doesn't have a `gender` key. You can get rid of this by adding an extra part to the list comprehension ``` len([x for x in l1 if "gender" in x and x['gender'] == "Male"]) ``` Upvotes: 2 <issue_comment>username_4: Try this (in Python 3 `filter` returns an iterator, so it has to be wrapped in `list()` before taking `len()`): ``` l = [{'id':'78798798','gender':'Male'}, {'id':'78722228','gender':'Female'}, {'id':'33338','gender':'Male'}] len(list(filter(lambda x: x['gender']=='Male', l))) ``` Upvotes: 0 <issue_comment>username_5: You can try it in one line like this: ``` print(len(list(filter(lambda x:x['gender']=='Male',l1)))) ``` output: ``` 2 ``` Upvotes: 0 <issue_comment>username_6: You can use `sum`: ``` l1 = [{'id':'78798798','gender':'Male'}, {'id':'78722228','gender':'Female'}, {'id':'33338','gender':'Male'}] result = sum(i['gender'] == 'Male' for i in l1) print(result) ``` Output: ``` 2 ``` Upvotes: 0
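Pulling the answers above together: the KeyError comes from an entry without the key, and `dict.get` sidesteps it while still counting matches. A small sketch; `count_matching` is a made-up name, and the fourth record is added here only to demonstrate the missing-key case:

```python
def count_matching(records, key, value):
    # .get() avoids the KeyError the question ran into when a record
    # lacks the key entirely
    return sum(1 for r in records if r.get(key) == value)

l1 = [
    {'id': '78798798', 'gender': 'Male'},
    {'id': '78722228', 'gender': 'Female'},
    {'id': '33338', 'gender': 'Male'},
    {'id': '99999'},  # no 'gender' key: skipped, not a crash
]
# count_matching(l1, 'gender', 'Male') == 2
```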
2018/03/14
1,125
4,147
<issue_start>username_0: I have a problem when I try to use the similarity function offered by the Academic Knowledge API. I tested the following command to compute the similarity between two strings: ``` curl -v -X GET "https://api.labs.cognitive.microsoft.com/academic/v1.0/similarity?s1={string}&s2={string}" -H "Ocp-Apim-Subscription-Key: {subscription key}" ``` The error that I get is: > > {"error":{"code":"Unspecified","message":"Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."}} > > > * Curl\_http\_done: called premature == 0 > * Connection #0 to host (nil) left intact > > > Can you tell me how to generate the `Ocp-Apim-Subscription-Key`? At the moment I use the key generated automatically when I visit the following URL: <https://labs.cognitive.microsoft.com/en-us/subscriptions?productId=/products/5636d970e597ed0690ac1b3f&source=labs> Thank you for your help<issue_comment>username_1: I think you need to sign up for a free account; there is a link you can follow from here: <https://westus.dev.cognitive.microsoft.com/docs/services/56332331778daf02acc0a50b/operations/58076bdadcf4c40708f83791> Except for the invalid key, your curl call looks right. Upvotes: 0 <issue_comment>username_2: You need a valid subscription key to be able to make API calls. Production key ============== Have a look at this [page](https://learn.microsoft.com/en-us/azure/cognitive-services/cognitive-services-apis-create-account) on how to create the needed services in the Azure portal and how to find the endpoint, as well as the key, from there. Trial key ========= However, if you just want to try out the service, you can create a temporary key [here](https://azure.microsoft.com/en-us/try/cognitive-services/#know). This key is very limited in use but it should get you up and running. Limitations are: * 50,000 transactions per month, up to 20 per second. * Trial keys expire after a 90 day period. 
Upvotes: 0 <issue_comment>username_3: Unfortunately, this is primarily not an answer to your question, but rather a warning for all with the "same" problem who may come across the original question like me, as the question helped me to solve a very, very similar problem: check whether you are using `api.labs.cognitive.microsoft.com` instead of `westus.api.cognitive.microsoft.com`. But maybe you need the opposite. It seems the whole project has been moved inside Microsoft (see <https://www.microsoft.com/en-us/research/project/academic/articles/sign-academic-knowledge-api/>; I would bet that this blog post was at the top of some "entrypoint" blog even yesterday morning, but now I am not able to find it, so perhaps things are changing right now) and maybe the project is somewhere in the middle of the transition process, with not all documentation corresponding to the new state yet. E.g. <https://learn.microsoft.com/en-us/azure/cognitive-services/academic-knowledge/home>, in the submenu Reference, links to two "versions" of the API which seem to be almost the same except for the URLs `westus.api...` and `api.labs...`, respectively. But there seems to be no info on what the difference is or which one should be preferred. My original keys expired yesterday, so I generated new ones and was not able to use them until I changed the URL to `api.labs...`, thanks to your question. Maybe you have the opposite problem, that you still have the "old" keys, so you need to use the "old" URL `westus.api...`, but I am not able to test it, as my original keys which worked with `westus.api...` have expired. Both your query and the link where you get keys are OK and work for me. Just one additional detail: did you try the circle arrow next to the key value, which generates a new key? Maybe your key is somehow broken or expired, and regenerating it could solve your problem. You can also try to create a completely new account on the MS site. 
PS: I have added the `microsoft-cognitive` tag, as MS refers to <https://stackoverflow.com/questions/tagged/microsoft-cognitive> from many pages related to Cognitive Services Upvotes: 2
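For completeness, the header placement the error message complains about can be sketched with Python's standard library. The endpoint is the one from the question; `build_similarity_request` is a made-up helper name, and the request is only constructed here, not sent:

```python
import urllib.parse
import urllib.request

API_BASE = "https://api.labs.cognitive.microsoft.com/academic/v1.0/similarity"

def build_similarity_request(s1, s2, key):
    """Return an unsent urllib Request carrying the subscription key header."""
    qs = urllib.parse.urlencode({"s1": s1, "s2": s2})
    req = urllib.request.Request(API_BASE + "?" + qs)
    req.add_header("Ocp-Apim-Subscription-Key", key)
    return req
```

The point is simply that the key travels in the `Ocp-Apim-Subscription-Key` request header rather than in the query string.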
2018/03/14
700
2,597
<issue_start>username_0: I have a user register APIView. view: ```
class UserCreateAPIView(CreateAPIView):
    serializer_class = UserCreateSerializer
    permission_classes = [AllowAny]
    queryset = User.objects.all()
``` serializer: ```
class UserCreateSerializer(ModelSerializer):
    """
    User register
    """
    class Meta:
        model = User
        fields = [
            'username',
            'wechat_num',
            'password',
        ]
        extra_kwargs = {
            "password": {"write_only": True}
        }

    def create(self, validated_data):
        username = validated_data.pop('username')
        wechat_num = validated_data.pop('wechat_num')
        password = validated_data.pop('password')
        user_obj = User(
            username=username,
            wechat_num=wechat_num,
        )
        user_obj.set_password(password)
        user_obj.save()
        group = getOrCreateGroupByName(USER_GROUP_CHOICES.User)
        user_obj.groups.add(group)
        return validated_data
``` When I access this APIView, I get the error: > > KeyError at /api/users/register/ > "Got KeyError when attempting to get a value for field `username` on serializer `UserCreateSerializer`.\nThe serializer field might be named incorrectly and not match any attribute or key on the `dict` instance.\nOriginal exception text was: 'username'." > > > but in the database the user is created successfully. All tests succeed: [![enter image description here](https://i.stack.imgur.com/sje9o.jpg)](https://i.stack.imgur.com/sje9o.jpg)<issue_comment>username_1: You are popping all of the fields from `validated_data`, so they won't be in the dictionary when you finally return it. ```
username=validated_data.pop('username')
wechat_num = validated_data.pop('wechat_num')
password=validated_data.pop('password')
...
return validated_data
``` Perhaps you want to change it to: ```
username=validated_data['username']
wechat_num = validated_data['wechat_num']
password=validated_data.pop('password')
...
return validated_data
``` Upvotes: 4 [selected_answer]<issue_comment>username_2: After defining your class you have to define those fields explicitly; that is what I did and it worked fine. 
`class UserCreateSerializer(ModelSerializer): username = serializers.CharField(max_length=...)` is the way you have to do it. In mine I had the same error, and I had forgotten to define password. Upvotes: 0 <issue_comment>username_3: From the method ``` def create(self, validated_data): ``` you should return the created instance. In your case, it should be ``` return user_obj ``` Upvotes: 0
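To make the accepted diagnosis concrete: a serializer renders its output by reading each declared field off whatever `create()` returned, so returning a dict whose keys were popped raises exactly the KeyError from the traceback. The classes below are a simplified stand-in written for illustration, not DRF code:

```python
class MiniSerializer:
    """Toy stand-in for a serializer with two declared fields."""
    fields = ("username", "wechat_num")

    def to_representation(self, instance):
        out = {}
        for name in self.fields:
            if isinstance(instance, dict):
                # create() returned validated_data: popped keys are gone,
                # so this lookup raises KeyError, as in the traceback
                out[name] = instance[name]
            else:
                # create() returned the model instance: attributes exist
                out[name] = getattr(instance, name)
        return out

class User:
    def __init__(self, username, wechat_num):
        self.username = username
        self.wechat_num = wechat_num
```

Returning `user_obj` (the model instance) gives the field lookup real attributes to read, which is why the one-line fix in the last answer works.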