2018/03/14
<issue_start>username_0: I have the following code that uses `sql`: ``` val yesterdayDate = "2018-03-13" df.createOrReplaceTempView("myTable") val result = spark.sql("select * from myTable where date(field_date) = '" + yesterdayDate + "'") ``` The values of `field_date` have the following format `2018-02-13 23:55:11.382928`. I want to get the same result using `filter` or `where` of Spark.<issue_comment>username_1: Do you mean using the Spark SQL API? If so, you can do ``` import spark.implicits._ val yesterdayDate = "2018-03-13" val df = spark.read // read the source val result = df.select("*").where($"field_date" === s"$yesterdayDate") ``` I had to edit here instead of in the comments. ``` import org.apache.spark.sql.types.DateType df.select("*").where(functions.date_format($"field_date".cast(DateType), "yyyy-MM-dd") === s"$yesterdayDate") ``` Upvotes: 0 <issue_comment>username_2: The following *SQL query* should work for you ``` val yesterdayDate = "2018-03-13" df.createOrReplaceTempView("myTable") val result = spark.sql(s"select * from myTable where date(field_date) == '${yesterdayDate}'") result.show(false) ``` **If you don't want to use a SQL query** then you can *use the DataFrame API* as below ``` val yesterdayDate = "2018-03-13" import org.apache.spark.sql.functions._ df.select("*").where(date_format(col("field_date"), "yyyy-MM-dd") === yesterdayDate).show(false) ``` Upvotes: 2 [selected_answer]
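A plain-Python illustration (not Spark; the sample rows are hypothetical) of why comparing `date_format(field_date, "yyyy-MM-dd")` against a date string works: timestamps in `yyyy-MM-dd HH:mm:ss` form compare lexicographically, so keeping only the first ten characters yields the date.

```python
# Hypothetical field_date values in the format from the question.
rows = [
    "2018-03-13 23:55:11.382928",
    "2018-03-14 00:01:02.000000",
]
yesterday = "2018-03-13"

# Equivalent of date_format(field_date, "yyyy-MM-dd") == yesterday:
# keep rows whose ten-character date prefix matches.
matching = [r for r in rows if r[:10] == yesterday]
```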
2018/03/14
<issue_start>username_0: I have created two scripts in a Unity game I am developing. The first is a script called Changetext (see below), and this seems to work fine. However, I would like it to run only when a player enters a set area in the game world, so I created a second script called TRigger (see below), which is meant to enable Changetext whenever the player enters a trigger collider. But whenever I try this, Unity just errors with "NullReferenceException: Object reference not set to an instance of an object" ``` using System.Collections; using System.Collections.Generic; using UnityEngine; using UnityEngine.UI; public class Changetext : MonoBehaviour { public float timeLeft = 5; public Text countdownText; void Update() { timeLeft -= Time.deltaTime; countdownText.text = ("Time Left = " + timeLeft); if (timeLeft <= 0) { countdownText.text = "You got the cash"; } } } using System.Collections; using System.Collections.Generic; using UnityEngine; using UnityEngine.UI; public class TRigger : MonoBehaviour { void Start() { GetComponent<Changetext>().enabled = false; } void OnTriggerEnter() { GetComponent<Changetext>().enabled = true; } void OnTriggerLeave() { GetComponent<Changetext>().enabled = false; } } ```<issue_comment>username_1: First of all: you don't have to include `UnityEngine.UI` in your second script. It seems like your scripts are not attached to the same gameobject, which is essential for `GetComponent` in this case. If your scripts shouldn't be on the same gameobject, you can just make a reference to your Changetext, like ``` private GameObject changetext; void Start() { changetext = FindObjectOfType<Changetext>().gameObject; } ``` And then call it with ``` changetext.GetComponent<Changetext>().enabled = true; ``` Note: this only works if you only have one `Changetext` in your scene. 
The version without declaring a gameobject should be the following: ``` private Changetext changetext; void Start() { changetext = FindObjectOfType<Changetext>().gameObject.GetComponent<Changetext>(); } ``` And then call it with ``` changetext.enabled = true; ``` Upvotes: 2 <issue_comment>username_2: As username_1 said in his answer, your code only works if both scripts are attached to the same object. His answer suggests one method of acquiring that reference, but it is performance intensive and only works if there's exactly 1 copy of that script in the entire scene (two copies will have deterministic, but possibly undesired, behavior). You probably want something like this: ``` public class TRigger : MonoBehaviour { void OnTriggerEnter(Collider other) { other.GetComponent<Changetext>().enabled = true; } void OnTriggerExit(Collider other) { other.GetComponent<Changetext>().enabled = false; } } ``` Note the added parameter to `OnTriggerEnter` and `OnTriggerExit` (renamed from `OnTriggerLeave`): these are necessary, otherwise you have functions which will never be called because they are not in the [MonoBehaviour script reference](https://docs.unity3d.com/ScriptReference/MonoBehaviour.html). The above code will find the `Changetext` *on the object that touched the trigger volume.* If this is not where the script is located, you will need other methods to get a reference, such as manually assigning it to a field or using [`GameObject.Find()`](https://docs.unity3d.com/ScriptReference/GameObject.Find.html), which should only be done in `Start()` or `Awake()` or similar and the result cached in a class property. Upvotes: 2 <issue_comment>username_3: Since you said you need these scripts to be on separate objects, you should also do proper separation of concerns here. Key aspects: Your Text object should have a ChangeTextScript on it. Its only concern is to start updating text when told to from another script. 
That's why it needs a public method called **StartChangingText** that can be called from outside, and possibly a **ResetText** method to be called once the player exits the trigger area, in case you wish to display a "Player left the trigger area" message or something similar. Your player object should have a TriggerScript on it. Once the player enters the trigger area, this script should just call **StartChangingText** mentioned above. When the player leaves the trigger area, it could call **ResetText**. To implement this, first rename your Text object in the hierarchy to "CountdownText" in order for TriggerScript on the player object to find it by name, which is a safer method of referencing objects than drag and dropping objects to inspector fields. Then add a new component to the CountdownText object and name it ChangeTextScript. Paste this updated version of your code into it: ```cs using System.Collections; using System.Collections.Generic; using UnityEngine; using UnityEngine.UI; public class ChangeTextScript : MonoBehaviour { public float timeLeft = 5f; private Text countdownText; //you can also set Text to public and // drag the Text object into the inspector field, // but it's prone to initialization errors when // switching scenes or not saving your project bool shouldBeChangingText = false; void Start() { countdownText = GetComponent<Text>(); } public void StartChangingText() { shouldBeChangingText = true; } public void ResetText() { shouldBeChangingText = false; countdownText.text = "Player left the area"; timeLeft = 5f; } void Update() { if (shouldBeChangingText) { timeLeft -= Time.deltaTime; countdownText.text = ("Time Left = " + timeLeft); if (timeLeft <= 0) { shouldBeChangingText = false; countdownText.text = "You got the cash"; timeLeft = 5f; } } } } ``` Then, add a new component to the player object, name it TriggerScript and paste this updated code into it: ``` using System.Collections; using System.Collections.Generic; using UnityEngine; public class TriggerScript : 
MonoBehaviour { private ChangeTextScript changeTextScript; void Start() { changeTextScript = GameObject.Find("CountdownText").GetComponent<ChangeTextScript>(); } void OnTriggerEnter() { changeTextScript.StartChangingText(); } void OnTriggerExit() { changeTextScript.ResetText(); } } ``` Note: when the timer runs out, we tell our script it should stop updating text, but we reset the timer so it can be started again; there is no need to enable and disable the script when you wish to run it again. Also, in your original script, setting **gameObject.enabled** to true/false is deprecated in favor of **gameObject.SetActive(bool value)**. Upvotes: 0
2018/03/14
<issue_start>username_0: I have an entity which has multiple fields, some of which are associations. Assume the entity has a structure similar to the following: ``` @Entity @Table(name="foos") public class Foo { public Foo() {} @Id private Long id; @Column private String name; @OneToOne(fetch = FetchType.EAGER) @JoinColumn(name = "bar_id") private Bar bar; @ManyToOne @JoinColumn(name = "bar1_id") private Bar1 bar1; @ManyToOne @JoinColumn(name = "bar2_id") private Bar2 bar2; @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER, orphanRemoval = true) @JoinColumn(name = "foo_id") private List<Bar3> bar3; @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER, orphanRemoval = true) @JoinColumn(name = "foo_id") private List<Bar4> bar4; //getters and setters } ``` My question is: how can I load only some fields and not the whole entity using a Hibernate query? I want to fetch only id, bar, bar1 and bar3, and don't want to fetch the remaining fields (name, bar2, bar4) in the created Foo object. 
I'm pretty new to Hibernate so any advice will be appreciated.<issue_comment>username_1: The easiest way would be creating a repository that will get the objects by chosen criteria, then extract only the fields that you need from the object list if you are using Spring Data: <https://docs.spring.io/spring-data/data-commons/docs/1.6.1.RELEASE/reference/html/repositories.html> For Hibernate only it should go like this: ``` EntityManager entityManager = entityManagerFactory.createEntityManager(); entityManager.getTransaction().begin(); Foo foo = entityManager.find(Foo.class, id); entityManager.getTransaction().commit(); ``` Or with a custom JPQL query: ``` EntityManager entityManager = entityManagerFactory.createEntityManager(); entityManager.getTransaction().begin(); Query query = entityManager.createQuery("SELECT e FROM Foo e"); List<Foo> workerList = query.getResultList(); ``` Upvotes: 0 <issue_comment>username_2: I was able to solve the problem with the help of the following [link](https://www.thoughts-on-java.org/hibernate-tips-map-multiple-entities-same-table/). I've created a new entity FooShort, with the same table name "foos", containing only the fields required from Foo. ``` @Entity @Table(name="foos") public class FooShort { public FooShort() {} @Id private Long id; @OneToOne(fetch = FetchType.EAGER) @JoinColumn(name = "bar_id") private Bar bar; @ManyToOne @JoinColumn(name = "bar1_id") private Bar1 bar1; @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER, orphanRemoval = true) @JoinColumn(name = "foo_id") private List<Bar3> bar3; //getters and setters } ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: Hibernate 5, Spring 5. You have several options, but the simplest is to create a simple @Query with the columns/fields you want; see example code from one of my projects. First and foremost, you are not required to create a DTO with the subset fields; I will show options for both and a hybrid version. 
Option One: DTO object with column/field subset. DTO repository code: ``` @Query( "SELECT new com.your.package.name.customers.CompanyDTOIdName(c.id, c.name) " + "FROM Company c " + "WHERE c.enabled = 1 " + "ORDER BY c.name ASC" ) List<CompanyDTOIdName> compDtoNameWhereEnabledTrue(); ``` DTO subset object: ``` public class CompanyDTOIdName { private long id; private String name; public CompanyDTOIdName(long id, String name) { this.id = id; this.name = name; } // accessor/mutator methods } ``` Hibernate SQL execution: ``` SELECT new com.your.package.name.customers.CompanyDTOIdName(c.id, c.name) FROM Company c WHERE c.enabled = 1 ORDER BY c.name ASC select company0_.companyId as col_0_0_, company0_.name as col_1_0_ from ome_company company0_ where company0_.enabled=1 order by company0_.name ASC ``` The major disadvantage is that you will require a separate DTO object for every different combination of columns. --- Option Two: Using the original @Entity. Your second option is much simpler and will work with any column subset. Repository code: ``` @Query( "SELECT new com.your.package.name.customers.Company(c.id, c.name) " + "FROM Company c " + "WHERE c.enabled = 1 " + "ORDER BY c.name ASC" ) List<Company> compNameWhereEnabledTrue(); ``` Where Company is your original @Entity object. Your entity must contain an empty constructor as well as a constructor that matches your @Query statement, in this case: ``` public Company() {} public Company(Long id, String name) { /* code omitted */ } ``` Hibernate SQL execution: ``` SELECT new com.your.package.name.customers.Company(c.id, c.name) FROM Company c WHERE c.enabled = 1 ORDER BY c.name ASC select company0_.companyId as col_0_0_, company0_.name as col_1_0_ from ome_company company0_ where company0_.enabled=1 order by company0_.name ASC ``` Caution: I'm sure you can see the problem with this approach. Only the indicated columns/fields are loaded. You will get exceptions if you attempt to access non-initialized fields. 
--- Hybrid version using interfaces (best of both worlds). Create an interface with the required accessor fields. ``` public interface CompanyDTOIdNameInterface { public Long getId(); public String getName(); } ``` Implement the interface on the @Entity object. ``` public class Company implements Serializable, CompanyDTOIdNameInterface { /* omitted code */ } ``` Repository code: ``` @Query( "SELECT new com.your.package.name.customers.Company(c.id, c.name) " + "FROM Company c " + "WHERE c.enabled = 1 " + "ORDER BY c.name ASC" ) List<CompanyDTOIdNameInterface> compNameWhereEnabledTrue(); ``` It's as simple as that. Good luck :) Upvotes: 1
2018/03/14
<issue_start>username_0: ``` create table project_supervisor ( supervisor_ID VARCHAR2 (5) Primary key, last_name varchar2 (250), other_names varchar2 (250) ); CREATE TABLE Project_description ( project_id VARCHAR2 NOT NULL, project_title varchar, project_summary varchar, PRIMARY KEY (project_id), FOREIGN KEY (supervisor_ID) REFERENCES "project_supervisor" (supervisor_ID) ); ```
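The question's DDL has two visible problems: `Project_description` never declares the `supervisor_ID` column that its foreign key uses, and `project_id VARCHAR2` is missing a length. A minimal corrected sketch follows, run here with Python's `sqlite3` (which accepts the `VARCHAR2` type names loosely); the chosen lengths are assumptions, and in Oracle the quoted lowercase `"project_supervisor"` reference would also need to match the unquoted table name:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
create table project_supervisor (
    supervisor_ID VARCHAR2(5) primary key,
    last_name varchar2(250),
    other_names varchar2(250)
);
create table Project_description (
    project_id VARCHAR2(10) not null,
    project_title varchar2(250),
    project_summary varchar2(4000),
    supervisor_ID VARCHAR2(5),  -- this column was missing in the question
    primary key (project_id),
    foreign key (supervisor_ID) references project_supervisor (supervisor_ID)
);
""")
conn.execute("insert into project_supervisor values ('S1', 'Doe', 'Jane')")
conn.execute("insert into Project_description values ('P1', 'Title', 'Summary', 'S1')")
rows = conn.execute("select project_title from Project_description").fetchall()
```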
2018/03/14
<issue_start>username_0: Given a MapSet, how can one detect if it's empty using pattern matching? ``` # What goes in the question marks? def my_func(????), do: IO.puts("This mapset is empty") def my_func(%MapSet{}), do: IO.puts("This mapset is not empty") my_func(MapSet.new()) ``` If this were a list, I would have just matched it on `([])`, but that doesn't work for MapSets (because the type is different). Here are some of the other things I've tried, unsuccessfully. ``` def myfunc([]), do: IO.puts("This only works for lists") # This is a syntax error # def myfunc(MapSet.new()) def myfunc(%MapSet{}), do: IO.puts("This matches every mapset") def myfunc(a) when map_size(a), do: IO.puts("the map size is always 3") ```<issue_comment>username_1: A MapSet stores its entries in a field called `map`. I'm not 100% sure whether this is an implementation detail or whether it is guaranteed to remain the same, but for now you can check whether the `map` field is empty using `map_size/1`: ``` defmodule A do def empty?(%MapSet{map: map}) when map_size(map) == 0, do: true def empty?(%MapSet{}), do: false end IO.inspect A.empty?(MapSet.new) IO.inspect A.empty?(MapSet.new([1, 2])) ``` Output: ``` true false ``` Upvotes: 3 <issue_comment>username_2: You may also consider [the solution I provided for matching against an empty map](https://stackoverflow.com/questions/33248816/pattern-match-function-against-empty-map) as follows: ``` defmodule A do def empty?(some_map_set = %MapSet{}) do an_empty_map_set = MapSet.new some_map_set |> case do ^an_empty_map_set -> true # Application of pin operator _ -> false end end end ``` You can test as follows: `A.empty?(MapSet.new)` and `A.empty?(MapSet.new([1]))` In that link you can see other solutions that you can leverage accordingly. One is already provided by @dogbert. 
The other solution would work as follows: ``` defmodule A do @empty MapSet.new def empty?(some_map_set) when some_map_set == @empty, do: true def empty?(%MapSet{}), do: false end ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: There is a hacky but still legit way to achieve the goal: ```elixir defmodule MapSetTest do def my_func(map_set, empty_map_set \\ MapSet.new) def my_func(empty_map_set, empty_map_set), do: IO.puts("This mapset is empty") def my_func(%MapSet{}, _), do: IO.puts("This mapset is not empty") end MapSetTest.my_func(MapSet.new) #⇒ This mapset is empty MapSetTest.my_func(MapSet.new([])) #⇒ This mapset is empty MapSetTest.my_func(MapSet.new([1])) #⇒ This mapset is not empty ``` The trick here is that while we cannot call `MapSet.new` directly in a match, we can assign its value to be the default for the hidden second argument. Upvotes: 2 <issue_comment>username_4: This approach does not use pattern matching, but if your goal is to tell whether a `MapSet` is empty, you can use [`Enum.empty?()`](https://hexdocs.pm/elixir/1.14.5/Enum.html#empty?/1). ``` iex(1)> [] |> MapSet.new() |> Enum.empty?() true ``` ``` iex(2)> [1] |> MapSet.new() |> Enum.empty?() false ``` Upvotes: -1
2018/03/14
<issue_start>username_0: Hello, I have the following dictionary: ``` dictionary = {'key1': ['color1','color2'],'key2':['car1','car2'],'key3':['frut1']} ``` From it I would like to get the following lists: ``` key1 = ['color1','color2'] key2 = ['car1','car2'] key3 = ['frut1'] ``` I need to create these lists on the fly from the dictionary. I tried: ``` list_keys = dictionary.keys() list_values = dictionary.values() ``` and then: ``` keys, values = zip(*dictionary.items()) ``` However, I don't get the result that I need. I think that maybe this task could be completed using exec, but I don't know how to proceed, so I would appreciate support to overcome this task.<issue_comment>username_1: You can unpack the values after sorting the contents of the dictionary: ``` import re dictionary = {'key1': ['color1','color2'],'key2':['car1','car2'],'key3':['frut1']} key1, key2, key3 = (v for _, v in sorted(dictionary.items(), key=lambda x: int(re.findall(r'\d+$', x[0])[0]))) ``` Upvotes: 0 <issue_comment>username_2: A dictionary is the only good way to store a variable number of variables. Your dictionary looks fine as it is. If you want integer identifiers, e.g. if the keys in your use case aren't actually numbered, below is an example of what you can do: ``` d = {'key1': ['color1', 'color2'], 'key2': ['car1', 'car2'], 'key3': ['frut1']} values = {i: list(v) for i, (k, v) in enumerate(sorted(d.items()), 1)} {1: ['color1', 'color2'], 2: ['car1', 'car2'], 3: ['frut1']} ``` Upvotes: 2 [selected_answer]
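A short runnable sketch of the idea, using the dictionary from the question: unpack the sorted values directly rather than reaching for `exec`.

```python
dictionary = {'key1': ['color1', 'color2'],
              'key2': ['car1', 'car2'],
              'key3': ['frut1']}

# Unpack the values in sorted key order; this only works when the number
# of assignment targets matches the number of keys.
key1, key2, key3 = (dictionary[k] for k in sorted(dictionary))
```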
2018/03/14
<issue_start>username_0: I probably did not word the title correctly, but could someone please explain why I can't create a prototype for the person object? It only works when I put `hit` onto the Object.prototype chain. ``` const person = { isHuman: false, printIntroduction: function () { console.log(`My name is ${this.name}. Am I human? ${this.isHuman}`); } }; //person.prototype.hit = function(){ // console.log("hitting me"); //} Object.prototype.hit = function(){ console.log("hitting me"); } const me = Object.create(person); me.name = "Matthew"; // "name" is a property set on "me", but not on "person" me.isHuman = true; // inherited properties can be overwritten me.printIntroduction(); me.hit(); ``` (UPDATE) Why does THIS work?? I'm not sure what the differences are from this example, but this code works. ``` function person { isHuman: false, printIntroduction: function () { console.log(`My name is ${this.name}. Am I human? ${this.isHuman}`); } }(); person.prototype.hit = function(){ console.log("hitting me"); } /* person.prototype.hit = function(){ console.log("hitting me"); } */ Object.prototype.hit = function(){ console.log("hitting me"); } const me = Object.create(person); me.name = "Matthew"; // "name" is a property set on "me", but not on "person" me.isHuman = true; // inherited properties can be overwritten me.printIntroduction(); me.hit(); // expected output: "My name is Matthew. Am I human? true" ``` update 2 -------- OK, so I made it work like below, but clearly prototype doesn't work the way I expected, so I am clearly confused about prototype. ``` function person(){ console.log(`My name is ${this.name}. Am I human? ${this.isHuman}`); } person.prototype.hit = function(){ console.log("hitting me1"); } Object.prototype.hit = function(){ console.log("hitting me2"); } const me = Object.create(person); me.hit(); ``` UPDATE 3. Thank you, this is the explanation that I got from below, and it is clear now. 
``` function person(){ console.log(`My name is ${this.name}. Am I human? ${this.isHuman}`); } person.prototype.hit = function(){ console.log("hitting me1"); } Object.prototype.hit = function(){ console.log("hitting me2"); } //const me = Object.create(person); const me = new person; me.hit(); ```<issue_comment>username_1: If you would do this (quite similar to what you are doing): ``` const person = { isHuman: false, printIntroduction: function () { console.log(`My name is ${this.name}. Am I human? ${this.isHuman}`); }, prototype: { hit(){ console.log("hitting me"); } } }; ``` Then when you instantiate your object, it gets a `prototype` property: ``` const me = Object.create(person); me.prototype.hit(); ``` The inheritance chain is: ``` me -> person -> Object.prototype ``` And NOT: ``` me -> person -> person.prototype -> Object.prototype ``` The `prototype` property actually doesn't have much to do with inheritance; just imagine that it does not exist. --- However, it is important when talking about *constructors*. When you call a function with `new` in front, it is treated as a constructor, e.g.: ``` var me = new Person() ``` And that is just syntactic sugar for: ``` var me = Object.create(Person.prototype /*!!!*/); Person.call(me); ``` So when you set up a *function*, set its *prototype* property and call it with `new`, only then does the prototype property of the constructor become part of the instance's prototype chain: ``` function Person(){} Person.prototype.hit = () => console.log("works"); const me = new Person(); me.hit(); ``` Now the chain is: ``` me -> Person.prototype -> Object.prototype ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Because JS is trying to access a property called `prototype` rather than a "prototype" of that object. To define a prototype, you need to use the function `setPrototypeOf()` ```js const person = { isHuman: false, printIntroduction: function() { console.log(`My name is ${this.name}. Am I human? 
${this.isHuman}`); } }; Object.setPrototypeOf(person, { 'hit': function() { console.log("hitting me"); } }); const me = Object.create(person); me.name = "Matthew"; // "name" is a property set on "me", but not on "person" me.isHuman = true; // inherited properties can be overwritten me.printIntroduction(); me.hit(); ``` Or, you can declare that function directly into the object `person` ```js const person = { isHuman: false, printIntroduction: function() { console.log(`My name is ${this.name}. Am I human? ${this.isHuman}`); }, 'hit': function() { console.log("hitting me"); } }; const me = Object.create(person); me.name = "Matthew"; // "name" is a property set on "me", but not on "person" me.isHuman = true; // inherited properties can be overwritten me.printIntroduction(); me.hit(); ``` Upvotes: 0 <issue_comment>username_3: > > Why can't I create prototype for person object? > > > Because `person` does not have a `.prototype` property, and doesn't need one. The `person` object already has a prototype from which it inherits: `Object.prototype` (the default for all object literals). You shouldn't change that though. Also, `person` does act as a prototype for the `me` object (i.e. `me` inherits from `person`). So if you want to give it another method, you should just write ``` person.hit = function(){ console.log("hitting me"); }; ``` that puts the `hit` function as a property of `person` in exactly the same way that `printIntroduction` is. Upvotes: 2
2018/03/14
<issue_start>username_0: I learned that Python lists can also be traversed using negative indices, so I tried to slice/sublist a list using a negative index, but I cannot slice it till the end. My list is: ``` areas = ["hallway", 11.25, "kitchen", 18.0, "living room", 20.0, "bedroom", 10.75, "bathroom", 9.50] ``` Knowing that the slicing syntax is `[start:end]` and the element at the end index is not included, I did `upstairs = areas[-4:0]`, but this doesn't give me the last elements of the list.<issue_comment>username_1: `areas[-4:0]` translates to `areas[len(areas) - 4: 0]`, which is effectively slicing from a higher index to a lower one. Semantically, this doesn't make much sense, and the result is an empty list. You're instead looking for: ``` >>> areas[-4:] ['bedroom', 10.75, 'bathroom', 9.5] ``` When the last index is not specified, it is assumed you slice till the very end. --- As an aside, specifying `0` would make sense when you slice in reverse. For example, ``` >>> areas[-4:0:-1] ['bedroom', 20.0, 'living room', 18.0, 'kitchen', 11.25] ``` happens to be perfectly valid. Here, you slice from `len(areas) - 4` down to (but not including) index `0`, in reverse. Upvotes: 3 <issue_comment>username_2: Firstly, I wonder how you could access a slice by using `areas[-4:0]`; it should return an empty list. <https://repl.it/repls/CarefreeOilyLine> You can use `areas[-4:]`, which should return ``` > areas[-4:] => ['bedroom', 10.75, 'bathroom', 9.5] ``` Actually, you can think of `-4` as `len(areas) - 4`, like ``` > areas[len(areas)-4:] => ['bedroom', 10.75, 'bathroom', 9.5] > areas[0:-4] => ['hallway', 11.25, 'kitchen', 18.0, 'living room', 20.0] > areas[0:len(areas)-4] => ['hallway', 11.25, 'kitchen', 18.0, 'living room', 20.0] > areas[0:6] => ['hallway', 11.25, 'kitchen', 18.0, 'living room', 20.0] ``` and 0 cannot represent both 0 and `len(areas) - 0` at the same time. Of course, it works with `areas[-4:len(areas)]`. 
Upvotes: 0 <issue_comment>username_3: `0` is not a negative number which is why it will always refer to the left-most element. If you are hardcoding a single slice that is no problem, because you can just leave out the right boundary as in `areas[-4:]` But what to do if your boundaries are computed at runtime? ``` >>> for left in range(-8, -3, 2): ... right = left + 4 ... print(areas[left:right]) ... ['kitchen', 18.0, 'living room', 20.0] ['living room', 20.0, 'bedroom', 10.75] [] ``` As you found out this doesn't work. You'll often hear to just add the length of the list: ``` >>> for left in range(-8, -3, 2): ... right = left + 4 ... print(areas[len(areas)+left:len(areas)+right]) ... ['kitchen', 18.0, 'living room', 20.0] ['living room', 20.0, 'bedroom', 10.75] ['bedroom', 10.75, 'bathroom', 9.5] ``` But that doesn't always work either: ``` >>> for left in range(-12, -3, 2): ... right = left + 4 ... print(areas[len(areas)+left:len(areas)+right]) ... [] ['hallway', 11.25, 'kitchen', 18.0] ['kitchen', 18.0, 'living room', 20.0] ['living room', 20.0, 'bedroom', 10.75] ['bedroom', 10.75, 'bathroom', 9.5] ``` So here is an idiom that works in a few more cases: ``` >>> for left in range(-12, -3, 2): ... right = left + 4 ... print(areas[left or None:right or None]) ... ['hallway', 11.25] ['hallway', 11.25, 'kitchen', 18.0] ['kitchen', 18.0, 'living room', 20.0] ['living room', 20.0, 'bedroom', 10.75] ['bedroom', 10.75, 'bathroom', 9.5] ``` But you can break this as well: ``` >>> for left in range(-12, -1, 2): ... right = left + 4 ... print(areas[left or None:right or None]) ... ['hallway', 11.25] ['hallway', 11.25, 'kitchen', 18.0] ['kitchen', 18.0, 'living room', 20.0] ['living room', 20.0, 'bedroom', 10.75] ['bedroom', 10.75, 'bathroom', 9.5] [] ``` What do we learn from this? Negative indices are ok for hard coding but require some care when used dynamically. In a program, it may be safest to avoid negative semantics and consistently use `max(0, index)`. Upvotes: 1
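The answers above can be condensed into a short runnable sketch using the list from the question; the `max(0, ...)` clamp at the end is the safeguard username_3 recommends for boundaries computed at runtime.

```python
areas = ["hallway", 11.25, "kitchen", 18.0, "living room", 20.0,
         "bedroom", 10.75, "bathroom", 9.50]

# Slicing from a higher index down to a lower one yields an empty list:
empty = areas[-4:0]   # start = len(areas) - 4 = 6, end = 0

# Omitting the end bound slices through to the last element:
upstairs = areas[-4:]

# For dynamic boundaries, clamping avoids the 0-versus-end ambiguity:
left = -4
safe = areas[max(0, len(areas) + left):]
```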
2018/03/14
212
671
<issue_start>username_0: How can I get the value of a field **folder.x_code** inside a div's data-configid attribute? ``` --- " style="width:100%; height:371px;" class="issuuembed"> ```<issue_comment>username_1: You can use the t-attf-$name syntax in QWeb like this: ``` ``` I have not used this with data attributes but I see no reason for it not to work with those too. Please give feedback if it works! You can find more information on QWeb in the Odoo reference at <https://www.odoo.com/documentation/11.0/reference/qweb.html#attributes>. Br, username_1 Upvotes: 0 <issue_comment>username_2: The solution, thanks to username_1: ``` --- ``` Upvotes: 2 [selected_answer]
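A hedged sketch of the `t-attf-` attribute syntax the accepted answer points at (everything except `folder.x_code` and the div attributes shown in the question is assumed):

```xml
<div t-attf-data-configid="{{ folder.x_code }}"
     style="width:100%; height:371px;"
     class="issuuembed">
</div>
```

QWeb interpolates the `{{ ... }}` part of a `t-attf-*` attribute at render time, so the div ends up with a plain `data-configid` attribute holding the field value.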
2018/03/14
706
2,750
<issue_start>username_0: I'm using the core [React Native Modal component](https://facebook.github.io/react-native/docs/modal.html). Within the modal content I have a `Done` button. Pressing `Done` is the only way we want users to close the modal. But the Modal component allows swiping down from the top of the screen to close. How do you turn off "swipe to close"?<issue_comment>username_1: To answer @Nikolai in the comments, I am using React Navigation. I didn't realize the gesture settings from the navigator also control the gestures of the React Native modal. Turning off gestures solved my problem. ``` const HomeScreenContainer = StackNavigator( { HomeScreen: { screen: Screens.HomeScreen }, PostScreen: { screen: Screens.PostScreen }, CameraScreen: { screen: Screens.CameraScreen }, CameraRollScreen: { screen: Screens.CameraRollScreen }, }, { navigationOptions: { gesturesEnabled: false, }, }, ); ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: In addition to @username_1's answer, if you want to disable the swipe gesture for a single Modal you can also do this: ``` const AppNavigator = StackNavigator({ ModalScreen: { screen: ModalScreen, navigationOptions: { gesturesEnabled: false }, } }) ``` Upvotes: 3 <issue_comment>username_3: Struggled with it a bit too.
Here is what worked for me: If you have root navigator as modal and inside it another stacked navigator for which you want to disable gestures, then put this inside root navigator for the stacked navigator, worked for me in v2.12 iOS `navigationOptions: { gesturesEnabled: false, },` here's full code: ``` const RootStack = createStackNavigator( { LoginNavigator: { screen: LoginNavigator, navigationOptions: { gesturesEnabled: false, }, }, ModerationNavigator: { screen: ModerationNavigator, }, WalletNavigator: { screen: WalletNavigator, }, FloatingNavigator: { screen: FloatingNavigator, }, UIKitNavigator: { screen: UIKitNavigator, }, MainMapViewScreen: { screen: MainMapViewScreen, }, FullscreenPhotoScreen: { screen: FullscreenPhotoScreen, }, }, { mode: 'modal', initialRouteName: 'MainMapViewScreen', headerMode: 'none', header: null, }, ); ``` Upvotes: 4 <issue_comment>username_4: Since React Navigation Version 5.x, they have changed it to `gestureEnabled` instead of `gesturesEnabled` *(without the 's)* for both [StackNavigator](https://reactnavigation.org/docs/stack-navigator/#gestureenabled) and [DrawerNavigator](https://reactnavigation.org/docs/drawer-navigator/#gestureenabled) Sample usage: ``` ``` Upvotes: 4
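For completeness, a hedged v5-style sketch of username_4's per-screen option (screen and component names are invented):

```jsx
// React Navigation 5.x: note `gestureEnabled`, without the trailing "s"
<Stack.Navigator screenOptions={{ gestureEnabled: false }}>
  <Stack.Screen
    name="MyModal"
    component={MyModalScreen}
    options={{ gestureEnabled: false }} // or disable it per screen
  />
</Stack.Navigator>
```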
2018/03/14
389
1,420
<issue_start>username_0: When trying to use mapped types with an interface, I get a weird error - which makes me think it's not possible to use them together at all.. See identical type and interface declarations: ``` type AllWorks<T> = { [K in keyof T]: T[K]; } interface DoesNotWork<T> { [K in keyof T]: T[K]; } ``` While the first one works as expected, the second one gives the TS error: ``` [ts] A computed property name must be of type 'string', 'number', 'symbol', or 'any'. [ts] Member '[K in keyof' implicitly has an 'any' type. [ts] Cannot find name 'keyof'. ``` So my question is - **is it even possible to map over interfaces ? if yes - then how ?**<issue_comment>username_1: So far in TypeScript (as of version 2.7.2), mapped type signatures in interfaces are not allowed; mapped types must be declared as type aliases instead (as you correctly have). [Docs](https://www.typescriptlang.org/docs/handbook/advanced-types.html). Upvotes: 4 [selected_answer]<issue_comment>username_2: Interfaces are not built for such a purpose. You should use a type alias there instead. The TypeScript compiler is telling you that you are using the wrong tool for the job :) The most complex thing an interface can do is `extend`; otherwise, use mapped types as a general rule. I've built an article [here](https://medium.com/better-programming/mastering-typescripts-mapped-types-5fa5700385eb) about mapped types if you want to dig deeper. Upvotes: -1
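A hedged sketch of the workable pattern (type and interface names are my own): the generic mapped type lives in a type alias, and an interface may still `extend` an *instantiated* mapped type, because that resolves to an object type with statically known members:

```typescript
// generic mapped type: must be a type alias
type Mapped<T> = { [K in keyof T]: T[K] };

interface Point { x: number; y: number; }

// OK: Mapped<Point> is fully resolved, so an interface can extend it
interface PointLike extends Mapped<Point> {}

const p: PointLike = { x: 1, y: 2 };
const q: Mapped<Point> = { x: 3, y: 4 };
```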
2018/03/14
271
953
<issue_start>username_0: ``` print W.shape ``` outputs (7,12288), as it is supposed to. However, ``` print W[0].shape ``` outputs 12288, when it should be 7. What am I doing wrong?
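For reference, `W[0]` indexes the first *axis*, so it returns the first **row** of a `(7, 12288)` matrix - shape `(12288,)` - not a length-7 column; a minimal NumPy sketch:

```python
import numpy as np

W = np.zeros((7, 12288))

print(W.shape)        # (7, 12288)
print(W[0].shape)     # (12288,) -- first row
print(W[:, 0].shape)  # (7,)     -- first column, via the second axis
```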
2018/03/14
415
1,768
<issue_start>username_0: I have access to a Java library that lets me pass it an OutputStream object, and it writes a report to it. The report consists of data where each column is delimited by tabs and each record is on a new row, i.e. separated by a newline. Currently I am passing it a FileOutputStream, as below. I need to ultimately insert the data in a database. Instead of writing to a file, I wish to directly insert the data in a database. What is the best way to achieve this? To rephrase: I want to directly insert the data in the OutputStream to a database without having to first put it in a file. I am aiming for this with the assumption that it will improve performance. ``` OutputStream report = null; try { report = new FileOutputStream( "report-" + sellerID + ".xml" ); } catch (FileNotFoundException e) { // TODO Auto-generated catch block e.printStackTrace(); } request.setReportOutputStream( report ); ```
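One hedged approach to "no intermediate file": hand the library an in-memory `ByteArrayOutputStream` (or a `PipedOutputStream` for very large reports), then parse the tab/newline-delimited bytes and batch-insert them with JDBC. A sketch of the capture-and-parse half (class and method names are my own; the JDBC insert itself is only indicated by a comment):

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ReportParser {

    /** Split a tab-delimited, newline-separated report into records. */
    public static List<String[]> parse(byte[] report) {
        List<String[]> rows = new ArrayList<>();
        for (String line : new String(report, StandardCharsets.UTF_8).split("\n")) {
            if (!line.isEmpty()) {
                rows.add(line.split("\t", -1)); // -1 keeps trailing empty columns
            }
        }
        return rows;
    }

    public static void main(String[] args) throws Exception {
        // the library writes into this stream instead of a FileOutputStream
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        OutputStream report = buffer;
        report.write("col1\tcol2\nval1\tval2\n".getBytes(StandardCharsets.UTF_8));

        for (String[] row : parse(buffer.toByteArray())) {
            // here each row would feed a JDBC PreparedStatement batch insert
            System.out.println(String.join(" | ", row));
        }
    }
}
```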
2018/03/14
1,102
3,898
<issue_start>username_0: Why does the stored output of SQLCMD have only a `Length` property instead of `column names`? Is it not possible to store `sqlcmd` output with its properties? `Invoke-Sqlcmd` stores it correctly but takes a bit longer to process, so I'm trying to make it work with SQLCMD, as this method will be part of different scripts that are scheduled to run every minute, once every hour, etc. Any idea if this is possible or what the issue is? Store output and echo $var: ``` PS C:> $var=(SQLCMD -S 'x.x.x.x' -U 'user' -P 'password' -i "C:\query.sql" -W -m 1) PS C:> $var job_id name ------ ---- 12345-aaaa-1234-5678-000000000000000 Clear DB entries 12345-bbbb-1234-5678-000000000000000 TempLog DB ``` Echo $var[0,1,2], which doesn't show property names. ``` PS C:> $var[0] job_id name PS C:> $var[1] ------ ---- PS C:> $var[2] 12345-aaaa-1234-5678-000000000000000 Clear DB entries ``` Show $var properties ``` PS C:> $var | select * Length ------ 11 53 ``` Show $var type ``` PS C:> $var.GetType() IsPublic IsSerial Name BaseType -------- -------- ---- -------- True True Object[] System.Array ```<issue_comment>username_1: ``` $var=(SQLCMD -S 'x.x.x.x' -U 'user' -P '<PASSWORD>' -i "C:\query.sql" -W -m 1) ``` You're calling `sqlcmd.exe`, which has no concept of what .Net objects are, let alone how to pass them to PowerShell. As far as PowerShell is concerned, that command outputs strings. You will need to convert the strings to objects yourself. If you have to use `sqlcmd.exe`, I would suggest something like this: ``` $Delimiter = "`t" $var = SQLCMD -S 'x.x.x.x' -U 'user' -P '<PASSWORD>' -i "C:\query.sql" -W -m 1 -s $Delimiter | ConvertFrom-Csv -Delimiter $Delimiter | Select-Object -Skip 1 ``` I'm using tab as the field separator. If your data contains tabs, you'll need a different separator. You could also run into problems if your data contains double quotes.
The `Select-Object -Skip 1` is to skip the underline row that `sqlcmd` always creates below the header. Also be aware that you should use the `-w` parameter on `sqlcmd` to prevent any incorrect wrapping. Also beware that null values are always output as the literal string `NULL`. That said, I would still probably stick with `Invoke-Sqlcmd`. It's much less error prone and much more predictable. If I really needed performance, I'd probably use direct .Net methods or SSIS. Upvotes: 3 [selected_answer]<issue_comment>username_2: I have written a function for that purpose... it's not fully fleshed out... hope it helps ``` function Invoke-MSSqlCommand { [CmdletBinding()] param ( [Parameter(Position=0, Mandatory=$true)] [ValidateNotNullOrEmpty()] [string] $Query, [Parameter(Position=1, Mandatory=$true)] [ValidateNotNullOrEmpty()] [string] $ConnectionString, [Switch] $NoOutput ) try { $connection = New-Object -TypeName System.Data.SqlClient.SqlConnection $connection.ConnectionString = $ConnectionString $null = $connection.Open() } catch { Throw "$connectionstring could not be contacted" } $command = New-Object -TypeName System.Data.SqlClient.SqlCommand $command.CommandText = $query $command.Connection = $connection if ($NoOutput) { $null = $command.ExecuteNonQuery() } else { # fill a DataSet so the rows can be inspected and returned $adapter = New-Object -TypeName System.Data.SqlClient.SqlDataAdapter -ArgumentList $command $dataset = New-Object -TypeName System.Data.DataSet $null = $adapter.Fill($dataset) if ($dataset.Tables[0].Rows[0] -eq $null) { write-verbose -Message 'no record' $connection.Close() return $null } $dataset.Tables[0].Rows $connection.close() } } ```
2018/03/14
276
1,029
<issue_start>username_0: I am trying to adopt a git branching strategy in our office, but it requires that all merges are made without using fast forward. Since we all use different tools to work with git, I would like to know if it's possible to configure Gitolite to prohibit pushes with fast forward merges? Is it even possible to detect that kind of merge?<issue_comment>username_1: ``` git config merge.ff false ``` See <https://git-scm.com/docs/git-config#git-config-mergeff> No, it's not possible to detect because it's not a merge at all. Upvotes: 3 [selected_answer]<issue_comment>username_2: You could detect that the leading commit pushed to gitolite is a merge commit (e.g : a commit which has at least 2 parents). You could probably do this in an [`update` hook](https://git-scm.com/book/gr/v2/Customizing-Git-Git-Hooks#_code_update_code) on the server side : * if the branch name (first argument) matches one of your protected branches, * check if the pushed commit (third argument) has two parents Upvotes: 1
2018/03/14
2,900
11,082
<issue_start>username_0: I have what may be a rather complicated issue. I have an extended gridview control that I swear used to work all the time, but I went away for a while, came back, and it doesn't work anymore (I'm the sole programmer). The extended gridview is designed so that it always shows a footer row (for inserting new rows). It loads and displays existing data correctly. If there are no rows, then adding the data works fine. But if I'm adding a new row to a gridview that already has existing rows, I get an issue where gvPhones.FooterRow is null, so it can't find the control I'm referencing. Here's the extended gridview class (gotten from a stackoverflow page): ``` using System.Linq; using System.Web.UI; using System.Web.UI.WebControls; using System.ComponentModel; //https://stackoverflow.com/questions/994895/always-show-footertemplate-even-no-data/10891744#10891744 namespace WebForms.LocalCodeLibrary.Controls { //modified from https://stackoverflow.com/questions/3437581/show-gridview-footer-on-empty-grid public class GridViewExtended : GridView { private GridViewRow _footerRow; [DefaultValue(false), Category("Appearance"), Description("Include the footer when the table is empty")] public bool ShowFooterWhenEmpty { get; set; } [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden), Browsable(false)] public override GridViewRow FooterRow { get { if ((this._footerRow == null)) { this.EnsureChildControls(); } return this._footerRow; } } protected override int CreateChildControls(System.Collections.IEnumerable dataSource, bool dataBinding) { //creates all the rows that would normally be created when instantiating the grid int returnVal = base.CreateChildControls(dataSource, dataBinding); //if no rows were created (i.e. returnVal == 0), and we need to show the footer row, then we need to create and bind the footer row. 
if (returnVal == 0 && this.ShowFooterWhenEmpty) { Table table = this.Controls.OfType<Table>().First(); DataControlField[] dcf = new DataControlField[this.Columns.Count]; this.Columns.CopyTo(dcf, 0); //creates the footer row this._footerRow = this.CreateRow(-1, -1, DataControlRowType.Footer, DataControlRowState.Normal, dataBinding, null, dcf, table.Rows, null); if (!this.ShowFooter) { _footerRow.Visible = false; } } return returnVal; } private GridViewRow CreateRow(int rowIndex, int dataSourceIndex, DataControlRowType rowType, DataControlRowState rowState, bool dataBind, object dataItem, DataControlField[] fields, TableRowCollection rows, PagedDataSource pagedDataSource) { GridViewRow row = this.CreateRow(rowIndex, dataSourceIndex, rowType, rowState); GridViewRowEventArgs e = new GridViewRowEventArgs(row); if ((rowType != DataControlRowType.Pager)) { this.InitializeRow(row, fields); } else { this.InitializePager(row, fields.Length, pagedDataSource); } //if the row has data, sets the data item if (dataBind) { row.DataItem = dataItem; } //Raises the RowCreated event this.OnRowCreated(e); //adds the row to the gridview's row collection rows.Add(row); //explicitly binds the data item to the row, including the footer row and raises the RowDataBound event. if (dataBind) { row.DataBind(); this.OnRowDataBound(e); row.DataItem = null; } return row; } } } ``` Here's the relevant stuff in the ASPX page: ``` <%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" AutoEventWireup="true" CodeBehind="ContactEdit.aspx.cs" Inherits="WebForms.Directory.ContactEdit" %> <%@ Register TagPrefix="gcctl" Namespace="WebForms.LocalCodeLibrary.Controls" Assembly="WebForms" %> ### All Phones NUMBERS ONLY - NO LETTER CODES IN THE PHONE FIELD! Be sure to always enter the area code, especially if you're also adding an extension. Note that only numbers will stay in the "Phone" field. Anything else you enter will disappear once it goes behind the scenes.
The first 10 digits will become the phone number, and any remaining digits will become the extension. ``` Here's the code where I get the error: ``` protected void gvPhones_RowCommand(object sender, GridViewCommandEventArgs e) { // Insert data if the CommandName == "Insert" // and the validation controls indicate valid data... if (e.CommandName == "FooterInsert" && Page.IsValid) { //ERROR HAPPENS ON THE FOLLOWING LINE: DropDownList PhoneTypeID = (DropDownList)gvPhones.FooterRow.FindControl("cboPhoneTypeID"); TextBox FormattedPhone = (TextBox)gvPhones.FooterRow.FindControl("txtPhone"); gvPhonesDataSource.InsertParameters["PhoneTypeID"].DefaultValue = PhoneTypeID.SelectedValue.ToString(); string sFormattedPhone = null; if (!string.IsNullOrEmpty(FormattedPhone.Text)) sFormattedPhone = FormattedPhone.Text; gvPhonesDataSource.InsertParameters["FormattedPhone"].DefaultValue = sFormattedPhone; gvPhonesDataSource.InsertParameters["CustomerID"].DefaultValue = customerid.Text.ToString(); gvPhonesDataSource.InsertParameters["CustomerContactID"].DefaultValue = contactid.Text.ToString(); gvPhonesDataSource.InsertParameters["StaffID"].DefaultValue = System.Web.HttpContext.Current.Session["StaffID"].ToString(); // Insert new record gvPhonesDataSource.Insert(); } } ``` The full error I get is: --- ``` Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Source Error: Line 276: if (e.CommandName == "FooterInsert" && Page.IsValid) Line 277: { Line 278: DropDownList PhoneTypeID = (DropDownList)gvPhones.FooterRow.FindControl("cboPhoneTypeID"); Line 279: TextBox FormattedPhone = (TextBox)gvPhones.FooterRow.FindControl("txtPhone"); Line 280: Source File: Line: 278 Stack Trace: [NullReferenceException: Object reference not set to an instance of an object.] 
GCWebForms.Directory.ContactEdit.gvPhones_RowCommand(Object sender, GridViewCommandEventArgs e) in ContactEdit.aspx.cs:278 System.Web.UI.WebControls.GridView.OnRowCommand(GridViewCommandEventArgs e) +137 System.Web.UI.WebControls.GridView.HandleEvent(EventArgs e, Boolean causesValidation, String validationGroup) +95 System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +49 System.Web.UI.WebControls.GridViewRow.OnBubbleEvent(Object source, EventArgs e) +146 System.Web.UI.Control.RaiseBubbleEvent(Object source, EventArgs args) +49 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +5450 ``` --- When stepping through (when trying to add a new row to a gridview that already has data in it), I found that gvPhones.FooterRow says that it's null. Again, this only happens if there is data in gvPhones. If the datatable is empty, then the footerrow insert code works without a hitch. Any help would be greatly appreciated! :-) EDIT: adding the relevant code-behind Page_Load. I just added the DataBind() statement, but it didn't make a difference.
``` protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { bool bolNewRec = (this.iContactID == null); phonesformcontent.Visible = (!bolNewRec); if (bolNewRec) { //snipping unrelated code } else { //snipping code that loads the data into the page gvPhones.Sort("SortOrder, PhoneType", SortDirection.Ascending); } } if (phonesformcontent.Visible) gvPhones.DataBind(); } ``` ...and, just in case, here's RowDataBound: ``` protected void gvPhones_RowDataBound(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { DataRowView rowView = (DataRowView)e.Row.DataItem; bool bolShowInactive = chkPhoneShowInactive.Checked; if (!bolShowInactive && (Convert.ToBoolean(rowView["IsActive"]) == false)) e.Row.Visible = false; else e.Row.Visible = true; rowView = null; } if (e.Row.RowType == DataControlRowType.Footer) { CheckBox chkIsActive = (CheckBox)e.Row.FindControl("chkPhoneIsActive"); chkIsActive.Checked = true; chkIsActive = null; } } ```<issue_comment>username_1: Try using the sender in your code as below: Replace this line: ``` DropDownList PhoneTypeID = (DropDownList)gvPhones.FooterRow.FindControl("cboPhoneTypeID"); ``` For this: ``` DropDownList PhoneTypeID = (DropDownList)((GridView)sender).FooterRow.FindControl("cboPhoneTypeID"); ``` Also, check the page load if the problem is not with the postback. 
My answer is based on this question: [Unable to get gridview footer values in RowCommand](https://stackoverflow.com/questions/33029737/unable-to-get-gridview-footer-values-in-rowcommand) **UPDATE:** Change your GridViewExtended class, **ShowFooterWhenEmpty property:** ``` [Category("Behavior")] [Themeable(true)] [Bindable(BindableSupport.No)] public bool ShowFooterWhenEmpty { get { if (this.ViewState["ShowFooterWhenEmpty"] == null) { this.ViewState["ShowFooterWhenEmpty"] = false; } return (bool)this.ViewState["ShowFooterWhenEmpty"]; } set { this.ViewState["ShowFooterWhenEmpty"] = value; } } ``` **GridViewRow**: ``` private GridViewRow _footerRow; public override GridViewRow FooterRow { get { GridViewRow f = base.FooterRow; if (f != null) return f; else return _footerRow; } } ``` I based my changes on this link: [Always show FooterTemplate, even no data](https://stackoverflow.com/questions/994895/always-show-footertemplate-even-no-data/10891744#10891744) Upvotes: 1 <issue_comment>username_2: I wound up scrapping this entire class. Instead, I made regular asp:gridviews that are based on datasources that have union selects with one row with -1 in the key column (since all of my tables have single autoincrement PKs, no row will legitimately have -1 in the key column), and then put the following in RowDataBound: ``` if (e.Row.RowType == DataControlRowType.DataRow) { DataRowView rowView = (DataRowView)e.Row.DataItem; string sKeyName = gvPhones.DataKeyNames[0].ToString(); if ((rowView[sKeyName].ToString() == "-1")) e.Row.Visible = false; else e.Row.Visible = true; rowView = null; } ``` This hides any row with -1 in the key column. So there's always at least one row in the gridview (even if that one row is hidden), and the footer row always shows. Upvotes: 1 [selected_answer]
2018/03/14
519
2,342
<issue_start>username_0: Since I am trying hard to understand the microservice architecture pattern for some work, I came across the following question: It's always said that a microservice usually has its own database. But does this mean that it always has to be on the same server or container (for example, having **one** docker container that runs a MongoDB and my JAR)? Or can this also mean that on one server my JAR is running while my MongoDB is located somewhere else (so **two** containers, for example)? If the first one is correct (JAR **and** database within **one** container), how can I prevent that, after some changes regarding my application and a new deployment of my JAR, the data in my MongoDB gets reset (since a whole new container is now running)? Thanks a lot already :-)<issue_comment>username_1: Yes, each Microservice should have its own database, and if any other Microservice needs data owned by another microservice, it gets it using an API exposed by that Microservice. **No, it's not at all necessary** to have the Microservice and its database hosted on the same server. For example - a Microservice can be hosted on-premise and its database can live in the cloud, like AWS DynamoDB or RDS. Upvotes: 2 [selected_answer]<issue_comment>username_2: Alternative opinion: * In 99% of real life cases you mustn't have a single container that runs the database and the application; those should be separated, since one (db) is keeping state, while the other (app) should be stateless. * You don't need a separate database per microservice; very often a separate schema is more than enough (e.g. you don't want to deploy a separate Exadata for each microservice :)). What is important is that only this microservice can read, write and make modifications to the given tables; others can operate on those tables only through interfaces exposed by the microservice. Upvotes: 2 <issue_comment>username_3: First of all, each Microservice should have its own database.
Secondly, it's not necessary, and also not recommended, to have the Microservice and its database in the same container. Generally a single Microservice will have multiple deployments for scaling, and they all connect to a single database instance, which should be a different container - and if using something like a NoSQL DB, often a database cluster. Upvotes: 2
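On the data-reset worry from the question: keep the database in its own container and put its data directory on a named volume; redeploying the application container then never touches the data. A hedged docker-compose sketch (image and volume names are invented):

```yaml
version: "3"
services:
  app:
    image: myorg/my-service:latest   # rebuilt/replaced on every deployment
    depends_on:
      - mongo
  mongo:
    image: mongo:4.0
    volumes:
      - mongo-data:/data/db          # named volume survives container replacement

volumes:
  mongo-data:
```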
2018/03/14
2,483
8,840
<issue_start>username_0: So here is my problem: I collect data via the movie DB API in a provider called "TheMoviedbServicesProvider", and it is stored in a **model**. I manage to display the data in the HTML part, but when I try to retrieve the information in the .ts file, the variable is **undefined**. So what I did was create a button with a **console.log** hooked to it, and I realized that it worked once the page was loaded completely. I would like to find a way to solve this problem. **Here is the code in the models** ``` import {TheMovieDbApiConfigImage} from './themoviedbapi-config-image-model'; import {TheMovieDbApiChangeKeys} from './themoviedbapi-config-change-key-model'; export class TheMovieDbApiConfig { change_keys: TheMovieDbApiChangeKeys[]; images: TheMovieDbApiConfigImage[]; } ``` and ``` import {BackdropSizes} from './backdropsizes'; import {LogoSizes} from './logosizes'; import {PosteSizes} from './postesizes'; import {ProfileSizes} from './profilesizes'; import {StillSizes} from './stillsizes'; export class TheMovieDbApiConfigImage { backdrop_sizes: BackdropSizes[]; base_url: string; logo_sizes: LogoSizes[]; poster_sizes: PosteSizes[]; profile_sizes: ProfileSizes[]; secure_base_url: string; still_sizes: StillSizes[]; } ``` **Provider** ``` // Core components import { Http } from '@angular/http'; import { Injectable } from '@angular/core'; // RxJS import 'rxjs/add/operator/toPromise'; import 'rxjs/add/operator/map'; // Models // import {TheMovieDbApiSearch} from '../models/search/themoviedbapi-search-model'; import {TheMovieDbApiConfig} from '../models/config/themoviedbapi-config-model'; import {TheMovieDbApiDescription} from '../models/description/themoviedbapi-desciption-model'; /* Generated class for the ThemoviesdbServicesProvider provider. See https://angular.io/guide/dependency-injection for more info on providers and Angular DI.
*/ @Injectable() export class TheMoviedbServicesProvider { private baseUrl: string = 'https://api.themoviedb.org/3/'; private apiConf : string = 'configuration?api_key='; private apiSearch : string = 'search/movie?api_key='; private apiKey: string = '078016f3132847b07af647afd854c75e'; configMovie : TheMovieDbApiConfig = new TheMovieDbApiConfig(); constructor(private http: Http) { this.resultConfig(); } public getConfig(): Promise<TheMovieDbApiConfig> { const url = `${this.baseUrl}${this.apiConf}${this.apiKey}`; return this.http.get(url) .toPromise() .then(response => response.json() as TheMovieDbApiConfig) .catch(error => console.log('Une erreur est survenue : ') + error) } public resultConfig() { this.getConfig() .then(configMovieFetched => { this.configMovie = configMovieFetched; }) } public getUpcoming() { const url = `https://api.themoviedb.org/3/movie/upcoming?api_key=${this.apiKey}&language=fr-FR&page=1`; return this.http.get(url) .toPromise() .then(response => response.json()) .catch(error => console.log('Une erreur est survenue : ') + error) } public getMovies(arg, arg1, arg2, arg3): Promise<TheMovieDbApiSearch> { const url = `${this.baseUrl}${this.apiSearch}${this.apiKey}&language=${arg}&query=${arg1}&page=${arg2}&include_adult=${arg3}`; return this.http.get(url) .toPromise() .then(response => response.json() as TheMovieDbApiSearch) .catch(error => console.log('Une erreur est survenue : ') + error) } public getChoice(arg, arg1): Promise<TheMovieDbApiDescription> { const url = `${this.baseUrl}movie/${arg}?api_key=${this.apiKey}&language=${arg1}`; return this.http.get(url) .toPromise() .then(response => response.json() as TheMovieDbApiDescription) .catch(error => console.log('Une erreur est survenue : ') + error) } } ``` **Prez.ts** ``` import { Component } from '@angular/core'; import { NavController, NavParams, IonicPage } from 'ionic-angular'; import { Observable } from 'rxjs'; // Providers // import {TheMoviedbServicesProvider} from '../../providers/themoviedb-services'; // Models // import {TheMovieDbApiConfig} from
'../../models/config/themoviedbapi-config-model'; import {TheMovieDbApiDescription} from '../../models/description/themoviedbapi-desciption-model'; /** * Generated class for the PrezPage page. * * See https://ionicframework.com/docs/components/#navigation for more info on * Ionic pages and navigation. */ @IonicPage({ defaultHistory: ['FilmPage'] }) @Component({ selector: 'page-prez', templateUrl: 'prez.html', }) export class PrezPage { choiceMovie : TheMovieDbApiDescription = new TheMovieDbApiDescription(); id: number; langue: string; urlJaquette: string; manuJaquette: string; jaquettePerso: string; format: string; qualite: string; lang: string; sousTitre: string; release: string; constructor(public navCtrl: NavController, public navParams: NavParams, private themoviedbServicesProvider: TheMoviedbServicesProvider) { this.id = this.navParams.get('id'); this.langue = this.navParams.get('langue'); console.log(this.id); this.resultGetChoice(this.id, this.langue); this.testConsole(); if(this.choiceMovie['poster_path'] == null || '') { this.urlJaquette = '../../assets/imgs/no-image.PNG'; } else { this.urlJaquette = this.themoviedbServicesProvider.configMovie['images']['base_url'] + this.themoviedbServicesProvider['images']['backdrop_sizes'][0] + this.choiceMovie['poster_path']; } } resultGetChoice(arg, arg1) { this.themoviedbServicesProvider.getChoice(arg, arg1) .then(choiceMovieFetched => { this.choiceMovie = choiceMovieFetched; console.log(this.choiceMovie); }) } testConsole() { setTimeout(() => { console.log(this.themoviedbServicesProvider.configMovie['images']['secure_base_url'] + this.themoviedbServicesProvider.configMovie['images']['backdrop_sizes'][0] + this.choiceMovie['poster_path']); }, 60000) } button () { console.log(this.themoviedbServicesProvider.configMovie['images']['secure_base_url'] + this.themoviedbServicesProvider.configMovie['images']['backdrop_sizes'][0] + this.choiceMovie['poster_path']); } } ``` To make it simple in the provider I created the 
function **testConsole()**, which delays the execution of the **console.log** with my **concatenation**; once the **60000 ms** have passed, it is displayed correctly. Conversely, if it is **not delayed**, I get an error that the variable is **undefined** in the **concatenation**. So I have to find a way to load the data before the page is fully displayed. Can you help me? Thank you.<issue_comment>username_1: Yes, http is asynchronous. You send a request and at some later point in time you get a response. One way to ensure that all of the data for a route is loaded prior to displaying the page for that route is to use a route resolver. For example, here is one of my resolvers: ``` import { Injectable } from '@angular/core'; import { Resolve, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router'; import { Observable } from 'rxjs/Observable'; import { IMovie } from './movie'; import { MovieService } from './movie.service'; @Injectable() export class MovieResolver implements Resolve<IMovie> { constructor(private movieService: MovieService) { } resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<IMovie> { const id = route.paramMap.get('id'); return this.movieService.getMovie(+id); } } ``` This code retrieves an Id from the route, then gets the movie with the defined Id. All of this will occur *before* the movie detail page is displayed. Upvotes: 2 <issue_comment>username_1: Here is an alternative answer, though I don't use Ionic and I'm not sure how to translate this into something you can use from Ionic. **Service method to retrieve data** ``` import { HttpClient } from '@angular/common/http'; constructor(private http: HttpClient) { } getMovies(): Observable<IMovie[]> { return this.http.get<IMovie[]>(this.moviesUrl); } ``` Note that it returns an Observable and does not work with promises.
**Component calling this method** ``` import { MovieService } from './movie.service'; constructor(private movieService: MovieService) { } ngOnInit(): void { this.movieService.getMovies() .subscribe( (movies: IMovie[]) => { this.movies = movies; // Any other code here }, (error: any) => this.errorMessage = error); } ``` Any code you add to the first function passed to the `subscribe` method will be run *after* the data is retrieved. The above is the *standard* pattern used in Angular for http (see the docs) and helps ensure that any operations on the data are performed after the data is retrieved. Upvotes: 3 [selected_answer]
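The crash in the question comes from building `urlJaquette` in the constructor, before either async call has resolved; both answers boil down to doing the dependent work inside the callback. A framework-free sketch of that idea (all names are invented):

```typescript
interface ImageConfig { base_url: string; backdrop_sizes: string[]; }

// pure helper: trivial to call once the async values actually exist
function buildPosterUrl(cfg: ImageConfig, posterPath: string | null): string {
  if (!posterPath) {
    return "../../assets/imgs/no-image.PNG";
  }
  return cfg.base_url + cfg.backdrop_sizes[0] + posterPath;
}

function getConfig(): Promise<ImageConfig> {
  // stands in for the HTTP call to /configuration
  return Promise.resolve({
    base_url: "https://image.tmdb.org/t/p/",
    backdrop_sizes: ["w300"],
  });
}

let urlJaquette = "";
getConfig().then(cfg => {
  // safe: cfg is guaranteed to be loaded inside the callback
  urlJaquette = buildPosterUrl(cfg, "/poster.jpg");
});
// reading urlJaquette synchronously here would still give "" -- the bug
// from the question's constructor
```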
2018/03/14
1,077
2,608
<issue_start>username_0: Let's assume I have one table in postgres with just 2 columns: * `ID` which is PK for the table (`bigint`) * `time` which is type of `timestamp` Is there any way how to get IDs grouped by time BY YEAR- when the time is date 18 February 2005 it would fit in 2005 group (so result would be) ``` year number of rows 1998 2 2005 5 ``` **AND** if the number of result rows is smaller than some number (for example 3) SQL will return the result **by month** Something like ``` month number of rows (February 2018) 5 (March 2018) 2 ``` Is that possible some nice way in postgres SQL?<issue_comment>username_1: You can do it using window functions (as always). I use this table: ``` TABLE times; id | t ----+------------------------------- 1 | 2018-03-14 20:04:39.81298+01 2 | 2018-03-14 20:04:42.92462+01 3 | 2018-03-14 20:04:45.774615+01 4 | 2018-03-14 20:04:48.877038+01 5 | 2017-03-14 20:05:08.94096+01 6 | 2017-03-14 20:05:16.123736+01 7 | 2017-03-14 20:05:19.91982+01 8 | 2017-01-14 20:05:32.249175+01 9 | 2017-01-14 20:05:35.793645+01 10 | 2017-01-14 20:05:39.991486+01 11 | 2016-11-14 20:05:47.951472+01 12 | 2016-11-14 20:05:52.941504+01 13 | 2016-10-14 21:05:52.941504+02 (13 rows) ``` First, group by month (subquery `per_month`). Then add the sum per year with a window function (subquery `with_year`). Finally, use `CASE` to decide which one you will output and remove duplicates with `DISTINCT`. ``` SELECT DISTINCT CASE WHEN yc > 5 THEN mc ELSE yc END AS count, CASE WHEN yc > 5 THEN to_char(t, 'YYYY-MM') ELSE to_char(t, 'YYYY') END AS period FROM (SELECT mc, sum(mc) OVER (PARTITION BY date_trunc('year', t)) AS yc, t FROM (SELECT count(*) AS mc, date_trunc('month', t) AS t FROM times GROUP BY date_trunc('month', t) ) per_month ) with_year ORDER BY 2; count | period -------+--------- 3 | 2016 3 | 2017-01 3 | 2017-03 4 | 2018 (4 rows) ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: Just count years. 
If it's at least 3, then you group by years, else by months:

```
select case when (select count(distinct extract(year from time)) from mytable) >= 3
            then to_char(time, 'yyyy')
            else to_char(time, 'yyyy-mm')
       end as season,
       count(*)
from mytable
group by season
order by season;
```

(Unlike many other DBMS, PostgreSQL allows using alias names in the `GROUP BY` clause.)

Upvotes: 0
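For readers who want the decision rule spelled out step by step, here is the same year-versus-month logic sketched in JavaScript on made-up date strings (an illustration only, not part of the original answers):

```javascript
// Sketch of the adaptive grouping logic from the SQL answers, on an
// in-memory array of ISO date strings (hypothetical data).
const rows = [
  '2017-01-14', '2017-01-20', '2017-03-05',
  '2018-03-14', '2018-03-20',
];

function groupCounts(dates, minYears = 3) {
  const years = new Set(dates.map(d => d.slice(0, 4)));
  // Fewer distinct years than the threshold: fall back to per-month groups.
  const keyLen = years.size >= minYears ? 4 : 7; // 'yyyy' vs 'yyyy-mm'
  const counts = {};
  for (const d of dates) {
    const key = d.slice(0, keyLen);
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}

console.log(groupCounts(rows));
// { '2017-01': 2, '2017-03': 1, '2018-03': 2 }  (only 2 distinct years here)
```

With a third distinct year in the data, the same call would return per-year counts instead.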
2018/03/14
780
2,916
<issue_start>username_0: In my code below I have created an array of items in my .JS file. I was then able to pass this array to the .Jade and use each value in the array as an item in a dropdown list. I now want to pass the user input of which item they will click in the dropdown back to the server side (.js) so that I can use the user input to find more data. My problem is that I don't know how to send the .jade variables to the server side. I want to send the "this.selectedIndex"/selected "val" so I can use it as a variable in the javascript file. **.JS** ``` router.get('/', function(req, res) { var projectPathArray = []; async function main() { var projects = await _db.listProjects(); projects.forEach(async (item) => { var pathy = item.path; projectPathArray.push(pathy) }) res.render('index', { title: 'Projects', projectPathArray:projectPathArray}); } main(); ``` **.jade** ``` extends layout script(src="libs/jquery-1.11.3.min.js") link(rel='stylesheet', href='/stylesheets/style.css') block content h1= title p To start, please select a project html body form#test-form(action='', method='get') select#menu1(name='menu1', size=projectPathArray.length) each val in projectPathArray option=val ```<issue_comment>username_1: You will need to use some mechanism for communicating from the frontend back to the server. This includes, but is not limited to, websockets and/or AJAX. Upvotes: 0 <issue_comment>username_2: Without understanding exactly what you want this should at least get you closer to what you are asking for. 1) Add the route to handle the `post` where you can retrieve the values posted back in the form using `req.body`. 2) In your `Pug/Jade` template I indented the form elements so they are under the form, added a submit button, and changed the method of the form to `post`. 
**.JS**

```
router.post('/', function(req, res) {
  console.log(req.body);
  res.redirect('/');
});

router.get('/', function(req, res) {
  var projectPathArray = [];
  async function main() {
    var projects = await _db.listProjects();
    projects.forEach(async (item) => {
      var pathy = item.path;
      projectPathArray.push(pathy);
    });
    res.render('index', { title: 'Projects', projectPathArray: projectPathArray });
  }
  main();
});
```

**.jade**

```
extends layout
script(src="libs/jquery-1.11.3.min.js")
link(rel='stylesheet', href='/stylesheets/style.css')
block content
  h1= title
  p To start, please select a project
  html
    body
      form#test-form(action='', method='post')
        select#menu1(name='menu1', size=projectPathArray.length)
          each val in projectPathArray
            option=val
        button(type='submit') Submit
```

Upvotes: 3 [selected_answer]
2018/03/14
373
1,311
<issue_start>username_0: for the past couple of hours I was trying to change an image in Xcode with a delay. I have tried achieving this using the following code: ``` UIImageView.animate(withDuration: 1, delay: 2, options: [], animations: { self.TapTap_intro.image = UIImage(named: "Second TapTap") }, completion: nil) ``` The problem while using this code is that it appears to not respect the delay and changes the image immediately. Could anyone please explain what I am doing wrong and how I could possibly fix this issue?<issue_comment>username_1: Use a timer instead of animate: ``` let timer = Timer.scheduledTimer(timeInterval: 1.0, target: self, selector: #selector(changeImage), userInfo: nil, repeats: false) ``` And create a function to change the image (it needs `@objc` so the selector can find it): ``` @objc func changeImage() { self.TapTap_intro.image = UIImage(named: "Second TapTap") } ``` Upvotes: 0 <issue_comment>username_2: To make a change without animation, there is no need to use `UIView.animate`. You can dispatch it after some delay: ``` DispatchQueue.main.asyncAfter(deadline: .now() + 2) { self.TapTap_intro.image = UIImage(named: "Second TapTap") } ``` Upvotes: 1 <issue_comment>username_3: ``` DispatchQueue.main.asyncAfter(deadline: .now() + delay) { // Do your thing } ``` Upvotes: 1
2018/03/14
3,432
10,847
<issue_start>username_0: i have problem in constructing a `object` in desired format. i want to get the object in this manner ``` {"pendrive":['went to each object and picked "TEST_HOME" value'],"minichip":['went to each object and picked "TEST_PROXY" value']} ``` the value of the above `array` must be collected from each `object` by going to `mount_status` then `key` , for **example** `TEST_PROXY`,`TEST_HOME`,`TEST_ARCHIVE`,`TEST_TARGET` **Desired Output:** `{"pendrive":["error","na","error","ok","na","na"]}` for key `"TEST_HOME"` **AND same for others** **here is what i have tried:** ```js var nodeSum = {}; var nodeStatus = { "node_stats": { "pendrive": { "mount_status": { "TEST_PROXY": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_HOME": "error" } }, "minichip": { "mount_status": { "TEST_PROXY": "na", "TEST_TARGET": "na", "TEST_ARCHIVE": "na", "TEST_HOME": "na" } }, "simcard": { "mount_status": { "TEST_HOME": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_PROXY": "error" } }, "hostname": [ "nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda" ], "airtel": { "mount_status": { "TEST_PROXY": "ok", "TEST_TARGET": "ok", "TEST_ARCHIVE": "ok", "TEST_HOME": "ok" } }, "voda": { "mount_status": { "TEST_HOME": "na", "TEST_TARGET": "na", "TEST_ARCHIVE": "na", "TEST_PROXY": "na" } }, "nikola": { "mount_status": { "TEST_HOME": "na", "TEST_ARCHIVE": "na", "TEST_TARGET": "na", "TEST_PROXY": "na" } } } } console.log(nodeStatus); var hostNames = nodeStatus["node_stats"]["hostname"]; for(var i = 0; i ```<issue_comment>username_1: If possible i would change the following ``` const nodeStatus = { "hostname": [ "nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda" ], "node_stats": { "pendrive": { "mount_status": { "TEST_PROXY": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_HOME": "error" } }, "minichip": { "mount_status": { "TEST_PROXY": "na", "TEST_TARGET": "na", "TEST_ARCHIVE": "na", 
"TEST_HOME": "na" } }, "simcard": { "mount_status": { "TEST_HOME": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_PROXY": "error" } }, "airtel": { "mount_status": { "TEST_PROXY": "ok", "TEST_TARGET": "ok", "TEST_ARCHIVE": "ok", "TEST_HOME": "ok" } }, "voda": { "mount_status": { "TEST_HOME": "na", "TEST_TARGET": "na", "TEST_ARCHIVE": "na", "TEST_PROXY": "na" } }, "nikola": { "mount_status": { "TEST_HOME": "na", "TEST_ARCHIVE": "na", "TEST_TARGET": "na", "TEST_PROXY": "na" } } } } const testHome = nodeStatus.hostname.map(name => nodeStatus["node_stats"][name] ? nodeStatus["node_stats"][name]["mount_status"]["TEST_HOME"] : undefined); // ["nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda"] console.log(testHome) //["na", undefined, "error", "error", "ok", "na", "na"] ``` hostname differs from all others, so you want to get it out of `node_stats`. The order of the result will be the same as the order of `hostname` If you don't want the undefined, then you can use `Array.reduce`, but the side effect would be that the order of the `Array` would be unpredictable. Upvotes: 0 <issue_comment>username_2: You can use an object lookup for each type of item and property name. Then Using `Object.keys()` iterate through each key of lookup and `array#reduce` each key with all the values corresponding to lookup property and create an array using `array#reduce` of all the values of each object. 
```js var nodeStatus = { "node_stats": { "pendrive": { "mount_status": { "TEST_PROXY": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_HOME": "error" } }, "minichip": { "mount_status": { "TEST_PROXY": "na", "TEST_TARGET": "na", "TEST_ARCHIVE":"na", "TEST_HOME": "na" } }, "simcard": { "mount_status":{ "TEST_HOME": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_PROXY": "error" } }, "hostname": [ "nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda" ], "airtel":{ "mount_status": { "TEST_PROXY": "ok", "TEST_TARGET": "ok", "TEST_ARCHIVE": "ok", "TEST_HOME": "ok" } }, "voda": { "mount_status": { "TEST_HOME": "na", "TEST_TARGET": "na", "TEST_ARCHIVE": "na", "TEST_PROXY": "na" } }, "nikola": { "mount_status": { "TEST_HOME":"na", "TEST_ARCHIVE": "na", "TEST_TARGET": "na", "TEST_PROXY": "na" } } } }, lookup = {pendrive: 'TEST_HOME', minichip: 'TEST_PROXY'} result = Object.keys(lookup).reduce((o,k) => { o[k] = Object.keys(nodeStatus.node_stats).reduce((r,key) => { if('mount_status' in nodeStatus.node_stats[key]) r.push(nodeStatus.node_stats[key].mount_status[lookup[k]]); return r; },[]); return o; },{}); console.log(result); ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: If you want output like this: ----------------------------- for key of `TEST_ARCHIVE` ``` {error: ["pendrive", "simcard"], na: ["minichip", "voda", "nikola"], ok: ["airtel"]} ``` Use this: --------- ```js function getNestedValuesForKey(key_name){ var hosts = nodeStatus.node_stats; var return_values = {}; for( var _hostname in hosts ){ var _host = hosts[_hostname]; if( _host.mount_status ){ var _status = _host.mount_status[key_name]; if( !return_values[_status] ){ return_values[_status] = new Array; }//if return_values[_status].push(_hostname); }//if }//for return return_values }//function var nodeStatus = 
{"node_stats":{"pendrive":{"mount_status":{"TEST_PROXY":"error","TEST_TARGET":"error","TEST_ARCHIVE":"error","TEST_HOME":"error"}},"minichip":{"mount_status":{"TEST_PROXY":"na","TEST_TARGET":"na","TEST_ARCHIVE":"na","TEST_HOME":"na"}},"simcard":{"mount_status":{"TEST_HOME":"error","TEST_TARGET":"error","TEST_ARCHIVE":"error","TEST_PROXY":"error"}},"hostname":["nikola","goldplus","pendrive","simcard","airtel","minichip","voda"],"airtel":{"mount_status":{"TEST_PROXY":"ok","TEST_TARGET":"ok","TEST_ARCHIVE":"ok","TEST_HOME":"ok"}},"voda":{"mount_status":{"TEST_HOME":"na","TEST_TARGET":"na","TEST_ARCHIVE":"na","TEST_PROXY":"na"}},"nikola":{"mount_status":{"TEST_HOME":"na","TEST_ARCHIVE":"na","TEST_TARGET":"na","TEST_PROXY":"na"}}}}; console.log( getNestedValuesForKey('TEST_ARCHIVE') ); ``` Upvotes: 0 <issue_comment>username_4: It's fairly unclear what you want. One possibility is that when you're looking for `TEST_PROXY`, you should get ``` ["na", undefined, "error", "error", "ok", "na", "na"] ``` (The `undefined` is because `hostname` includes "goldplus", which is not included in `node_stats`.) 
If that's what you want, then this should work: ```js const nodeStatus = {"node_stats": {"airtel": {"mount_status": {"TEST_ARCHIVE": "ok", "TEST_HOME": "ok", "TEST_PROXY": "ok", "TEST_TARGET": "ok"}}, "hostname": ["nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda"], "minichip": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}, "nikola": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}, "pendrive": {"mount_status": {"TEST_ARCHIVE": "error", "TEST_HOME": "error", "TEST_PROXY": "error", "TEST_TARGET": "error"}}, "simcard": {"mount_status": {"TEST_ARCHIVE": "error", "TEST_HOME": "error", "TEST_PROXY": "error", "TEST_TARGET": "error"}}, "voda": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}}} const nodeSum = (key, nodes) => nodes.node_stats.hostname .map(hostname => (nodes.node_stats[hostname] || {mount_status: {}}).mount_status[key]) console.log(nodeSum("TEST_PROXY", nodeStatus)) ``` But that's format is fairly unclear. 
If instead you would like something like this: ``` { "nikola":"na", "pendrive":"error", "simcard":"error", "airtel":"ok", "minichip":"na", "voda":"na" } ``` Then you could modify the function a bit like this: ```js const nodeStatus = {"node_stats": {"airtel": {"mount_status": {"TEST_ARCHIVE": "ok", "TEST_HOME": "ok", "TEST_PROXY": "ok", "TEST_TARGET": "ok"}}, "hostname": ["nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda"], "minichip": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}, "nikola": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}, "pendrive": {"mount_status": {"TEST_ARCHIVE": "error", "TEST_HOME": "error", "TEST_PROXY": "error", "TEST_TARGET": "error"}}, "simcard": {"mount_status": {"TEST_ARCHIVE": "error", "TEST_HOME": "error", "TEST_PROXY": "error", "TEST_TARGET": "error"}}, "voda": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}}} const nodeSum = (key, nodes) => nodes.node_stats.hostname .map(hostname => ({[hostname]: (nodes.node_stats[hostname] || {mount_status: {}}) .mount_status[key]})) .reduce((a, b) => Object.assign(a, b), {}) console.log(nodeSum("TEST_HOME", nodeStatus)) ``` If you could guarantee that there were no scenarios like "goldplus" in your example where a hostname is included in the lists of hostnames but not in `node_stats`, then you could simplify these functions to ``` const nodeSum = (key, nodes) => nodes.node_stats.hostname .map(hostname => nodes.node_stats[hostname].mount_status[key]) ``` and ``` const nodeSum = (key, nodes) => nodes.node_stats.hostname .map(hostname => ({[hostname]: nodes.node_stats[hostname].mount_status[key]})) .reduce((a, b) => Object.assign(a, b), {}) ``` And finally, it might be cleaner to do the latter in a single iteration, like this: ``` const nodeSum = (key, nodes) => nodes.node_stats.hostname .reduce((acc, hostname) => { 
acc[hostname] = nodes.node_stats[hostname].mount_status[key] return acc }, {}) ``` --- `nodeSum` is a *terrible* name for this function, but it seemed that you were updating a variable with this name to hold your output. I would change it to something more explicit for real code. Upvotes: 0
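For completeness, the lookup idea from the accepted answer can be written more compactly with `Object.values`; this is a sketch on trimmed sample data, not the poster's full object:

```javascript
// Compact sketch of the lookup/reduce idea from the answers above:
// for a given status key, collect that key's value from every entry
// that actually carries a mount_status. (Trimmed hypothetical data.)
const nodeStats = {
  pendrive: { mount_status: { TEST_HOME: 'error', TEST_PROXY: 'error' } },
  airtel:   { mount_status: { TEST_HOME: 'ok',    TEST_PROXY: 'ok' } },
  hostname: ['pendrive', 'airtel'], // not a host entry, so it gets skipped
};

function collect(stats, key) {
  return Object.values(stats)
    .filter(entry => entry && entry.mount_status)
    .map(entry => entry.mount_status[key]);
}

console.log(collect(nodeStats, 'TEST_HOME')); // ['error', 'ok']
```

The `filter` step is what handles entries like the `hostname` array that have no `mount_status`.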
2018/03/14
4,001
12,505
<issue_start>username_0: I have a grid built using flexbox to not only improve the flexibility (wordplay!) but to also more easily make the element heights the same within the grid. I'm running into issues with controlling the vertical alignment of content within the boxes. Each parent container has two nested containers: one for the date and another for the rest of the content. The content in the second container includes a link that should align at the bottom of the parent container. See the first image for an example of what it should look like. 01 - [Correct vertical alignment - full height.](https://i.stack.imgur.com/7BLTj.png) The issue comes up when the content (city, company in the example) does not fill the full height of the container — it ends up adding the extra space to the content. When the button does not align with the bottom of the container, the space is at the bottom. (Image 2) When I have gotten the button to align with the bottom (using `margin-top:auto`), the space then appears between the city and the date. (Image 3) 02 - [Extra space at the bottom of the container.](https://i.stack.imgur.com/5BoGs.png) 03 - [Extra space at the top of the container.](https://i.stack.imgur.com/IwdM7.png) Apologies up front if this has been addressed elsewhere. I have worked through these answers to get to this current iteration: * [Flexbox vertical align specific content within a wrapper?](https://stackoverflow.com/questions/40478368/flexbox-vertical-align-specific-content-within-a-wrapper) * [Remove space (gaps) between multiple lines of flex items when they wrap](https://stackoverflow.com/questions/40890613/remove-space-gaps-between-multiple-lines-of-flex-items-when-they-wrap/40890703) Thank you in advance for any direction or assistance with what I have missed! 
```css
/*---- FLEXBOX STYLING ----*/
#events-list { display: -webkit-flex; display: -ms-flexbox; display: flex; overflow: auto; flex-flow: row wrap; align-content: flex-end; }
#events-list .event { flex: 0 0 32%; display: flex; flex-flow: row wrap; align-content: flex-start; background-color: #e6e6e6; margin: 0 2% 1rem 0; border: solid 2px #666; }
#events-list .event:nth-child(3n) { margin-right: 0; }
#events-list .event .event-date { align-self: flex-start; }
#events-list .event .event-details { display: flex; flex-direction: column; width: 100%; }
#events-list .event .event-details .register { margin-top: auto; }

/* ---- GENERIC VISUAL STYLING ----*/
#events-list .event { background-color: #e6e6e6; margin: 0 auto 1rem; border: solid 2px #666; }
#events-list .event .event-date { width: 100%; text-align: center; background-color: #aaa; margin-bottom: 1rem; }
#events-list .event .event-date p { display: inline-block; font: normal 2rem/1 "agenda-bold", Arial; }
#events-list .event .event-date p:first-of-type { margin-right: 0.25rem; }
#events-list .event .event-details .event-city { font: normal 1rem/1.5 "agenda-bold", Arial; padding: 0 1rem; }
#events-list .event .event-details .event-time { padding: 0 1rem; }
#events-list .event .event-details .event-bus { margin-bottom: 1rem; padding: 0 1rem; }
#events-list .event .event-details .register { display: block; background-color: #666; text-align: center; line-height: 2.5rem; text-transform: uppercase; color: #fff; }
```

```html
MAR 30 City, US Company HQ - Company Double Line [RSVP](#)
MAR 30 City, US Company HQ [RSVP on 4/13](#)
MAR 30 City, US Company HQ - Company Double Line [RSVP](#)
```
2018/03/14
3,036
9,650
<issue_start>username_0: I'm trying to parse a `.dita` file, but there is a node inside another node, and while that isn't weird, there is actually text surrounding the inner node, it looks a bit like this: ``` Hello this is a LINK that you may click ``` I can get the text from `node` and i can get all instances of `xlink`, yet the text from the `node` will look like this: ``` Hello this is a that you may click ``` As you can see, the word `LINK` is missing, and even though i can call the `xlink` node and get an array containing the word `LINK`, it hasn't thus far been possible to place the words back, as their position is unknown. I'll have to add that checking for 2 spaces wouldn't work, as there can also be 2 spaces in the original text, and thus the position of the words won't be correct.<issue_comment>username_1: If possible i would change the following ``` const nodeStatus = { "hostname": [ "nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda" ], "node_stats": { "pendrive": { "mount_status": { "TEST_PROXY": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_HOME": "error" } }, "minichip": { "mount_status": { "TEST_PROXY": "na", "TEST_TARGET": "na", "TEST_ARCHIVE": "na", "TEST_HOME": "na" } }, "simcard": { "mount_status": { "TEST_HOME": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_PROXY": "error" } }, "airtel": { "mount_status": { "TEST_PROXY": "ok", "TEST_TARGET": "ok", "TEST_ARCHIVE": "ok", "TEST_HOME": "ok" } }, "voda": { "mount_status": { "TEST_HOME": "na", "TEST_TARGET": "na", "TEST_ARCHIVE": "na", "TEST_PROXY": "na" } }, "nikola": { "mount_status": { "TEST_HOME": "na", "TEST_ARCHIVE": "na", "TEST_TARGET": "na", "TEST_PROXY": "na" } } } } const testHome = nodeStatus.hostname.map(name => nodeStatus["node_stats"][name] ? 
nodeStatus["node_stats"][name]["mount_status"]["TEST_HOME"] : undefined); // ["nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda"] console.log(testHome) //["na", undefined, "error", "error", "ok", "na", "na"] ``` hostname differs from all others, so you want to get it out of `node_stats`. The order of the result will be the same as the order of `hostname` If you don't want the undefined, then you can use `Array.reduce`, but the side effect would be that the order of the `Array` would be unpredictable. Upvotes: 0 <issue_comment>username_2: You can use an object lookup for each type of item and property name. Then Using `Object.keys()` iterate through each key of lookup and `array#reduce` each key with all the values corresponding to lookup property and create an array using `array#reduce` of all the values of each object. ```js var nodeStatus = { "node_stats": { "pendrive": { "mount_status": { "TEST_PROXY": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_HOME": "error" } }, "minichip": { "mount_status": { "TEST_PROXY": "na", "TEST_TARGET": "na", "TEST_ARCHIVE":"na", "TEST_HOME": "na" } }, "simcard": { "mount_status":{ "TEST_HOME": "error", "TEST_TARGET": "error", "TEST_ARCHIVE": "error", "TEST_PROXY": "error" } }, "hostname": [ "nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda" ], "airtel":{ "mount_status": { "TEST_PROXY": "ok", "TEST_TARGET": "ok", "TEST_ARCHIVE": "ok", "TEST_HOME": "ok" } }, "voda": { "mount_status": { "TEST_HOME": "na", "TEST_TARGET": "na", "TEST_ARCHIVE": "na", "TEST_PROXY": "na" } }, "nikola": { "mount_status": { "TEST_HOME":"na", "TEST_ARCHIVE": "na", "TEST_TARGET": "na", "TEST_PROXY": "na" } } } }, lookup = {pendrive: 'TEST_HOME', minichip: 'TEST_PROXY'} result = Object.keys(lookup).reduce((o,k) => { o[k] = Object.keys(nodeStatus.node_stats).reduce((r,key) => { if('mount_status' in nodeStatus.node_stats[key]) r.push(nodeStatus.node_stats[key].mount_status[lookup[k]]); return r; 
},[]); return o; },{}); console.log(result); ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: If you want output like this: ----------------------------- for key of `TEST_ARCHIVE` ``` {error: ["pendrive", "simcard"], na: ["minichip", "voda", "nikola"], ok: ["airtel"]} ``` Use this: --------- ```js function getNestedValuesForKey(key_name){ var hosts = nodeStatus.node_stats; var return_values = {}; for( var _hostname in hosts ){ var _host = hosts[_hostname]; if( _host.mount_status ){ var _status = _host.mount_status[key_name]; if( !return_values[_status] ){ return_values[_status] = new Array; }//if return_values[_status].push(_hostname); }//if }//for return return_values }//function var nodeStatus = {"node_stats":{"pendrive":{"mount_status":{"TEST_PROXY":"error","TEST_TARGET":"error","TEST_ARCHIVE":"error","TEST_HOME":"error"}},"minichip":{"mount_status":{"TEST_PROXY":"na","TEST_TARGET":"na","TEST_ARCHIVE":"na","TEST_HOME":"na"}},"simcard":{"mount_status":{"TEST_HOME":"error","TEST_TARGET":"error","TEST_ARCHIVE":"error","TEST_PROXY":"error"}},"hostname":["nikola","goldplus","pendrive","simcard","airtel","minichip","voda"],"airtel":{"mount_status":{"TEST_PROXY":"ok","TEST_TARGET":"ok","TEST_ARCHIVE":"ok","TEST_HOME":"ok"}},"voda":{"mount_status":{"TEST_HOME":"na","TEST_TARGET":"na","TEST_ARCHIVE":"na","TEST_PROXY":"na"}},"nikola":{"mount_status":{"TEST_HOME":"na","TEST_ARCHIVE":"na","TEST_TARGET":"na","TEST_PROXY":"na"}}}}; console.log( getNestedValuesForKey('TEST_ARCHIVE') ); ``` Upvotes: 0 <issue_comment>username_4: It's fairly unclear what you want. One possibility is that when you're looking for `TEST_PROXY`, you should get ``` ["na", undefined, "error", "error", "ok", "na", "na"] ``` (The `undefined` is because `hostname` includes "goldplus", which is not included in `node_stats`.) 
If that's what you want, then this should work: ```js const nodeStatus = {"node_stats": {"airtel": {"mount_status": {"TEST_ARCHIVE": "ok", "TEST_HOME": "ok", "TEST_PROXY": "ok", "TEST_TARGET": "ok"}}, "hostname": ["nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda"], "minichip": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}, "nikola": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}, "pendrive": {"mount_status": {"TEST_ARCHIVE": "error", "TEST_HOME": "error", "TEST_PROXY": "error", "TEST_TARGET": "error"}}, "simcard": {"mount_status": {"TEST_ARCHIVE": "error", "TEST_HOME": "error", "TEST_PROXY": "error", "TEST_TARGET": "error"}}, "voda": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}}} const nodeSum = (key, nodes) => nodes.node_stats.hostname .map(hostname => (nodes.node_stats[hostname] || {mount_status: {}}).mount_status[key]) console.log(nodeSum("TEST_PROXY", nodeStatus)) ``` But that format is fairly unclear. 
If instead you would like something like this: ``` { "nikola":"na", "pendrive":"error", "simcard":"error", "airtel":"ok", "minichip":"na", "voda":"na" } ``` Then you could modify the function a bit like this: ```js const nodeStatus = {"node_stats": {"airtel": {"mount_status": {"TEST_ARCHIVE": "ok", "TEST_HOME": "ok", "TEST_PROXY": "ok", "TEST_TARGET": "ok"}}, "hostname": ["nikola", "goldplus", "pendrive", "simcard", "airtel", "minichip", "voda"], "minichip": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}, "nikola": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}, "pendrive": {"mount_status": {"TEST_ARCHIVE": "error", "TEST_HOME": "error", "TEST_PROXY": "error", "TEST_TARGET": "error"}}, "simcard": {"mount_status": {"TEST_ARCHIVE": "error", "TEST_HOME": "error", "TEST_PROXY": "error", "TEST_TARGET": "error"}}, "voda": {"mount_status": {"TEST_ARCHIVE": "na", "TEST_HOME": "na", "TEST_PROXY": "na", "TEST_TARGET": "na"}}}} const nodeSum = (key, nodes) => nodes.node_stats.hostname .map(hostname => ({[hostname]: (nodes.node_stats[hostname] || {mount_status: {}}) .mount_status[key]})) .reduce((a, b) => Object.assign(a, b), {}) console.log(nodeSum("TEST_HOME", nodeStatus)) ``` If you could guarantee that there were no scenarios like "goldplus" in your example where a hostname is included in the lists of hostnames but not in `node_stats`, then you could simplify these functions to ``` const nodeSum = (key, nodes) => nodes.node_stats.hostname .map(hostname => nodes.node_stats[hostname].mount_status[key]) ``` and ``` const nodeSum = (key, nodes) => nodes.node_stats.hostname .map(hostname => ({[hostname]: nodes.node_stats[hostname].mount_status[key]})) .reduce((a, b) => Object.assign(a, b), {}) ``` And finally, it might be cleaner to do the latter in a single iteration, like this: ``` const nodeSum = (key, nodes) => nodes.node_stats.hostname .reduce((acc, hostname) => { 
acc[hostname] = nodes.node_stats[hostname].mount_status[key] return acc }, {}) ``` --- `nodeSum` is a *terrible* name for this function, but it seemed that you were updating a variable with this name to hold your output. I would change it to something more explicit for real code. Upvotes: 0
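The guarded single-pass variant can be sketched as follows; the sample data here is a trimmed-down stand-in for the original object, so a host missing from `node_stats` (like `goldplus`) is simply skipped rather than mapped to `undefined`:

```javascript
// Sketch: single-pass reduce that skips hostnames absent from node_stats.
// The data below is a reduced stand-in for the full object in the question.
const nodeStatus = {
  node_stats: {
    hostname: ["nikola", "goldplus", "airtel"],
    nikola: { mount_status: { TEST_HOME: "na" } },
    airtel: { mount_status: { TEST_HOME: "ok" } }
  }
};

const nodeSum = (key, nodes) =>
  nodes.node_stats.hostname.reduce((acc, hostname) => {
    const entry = nodes.node_stats[hostname];
    if (entry && entry.mount_status) acc[hostname] = entry.mount_status[key];
    return acc; // "goldplus" is skipped instead of mapping to undefined
  }, {});

console.log(nodeSum("TEST_HOME", nodeStatus)); // { nikola: 'na', airtel: 'ok' }
```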
2018/03/14
1,142
3,744
<issue_start>username_0: I wanted to implement an overload for `operator<<` that allowed me to call a given function and output the result. I therefore wrote an overload, but the conversion to bool is selected, and when writing a function myself, it would not compile. EDIT: Know that I do not want to call the lambda, but instead pass it to the function where it should be called with a default constructed parameter list. I have appended my code: ``` #include <iostream> template <typename T> void test(T *) { std::cout << "ptr" << std::endl; } template <typename T> void test(bool) { std::cout << "bool" << std::endl; } template <typename Ret, typename... Args> void test(Ret(*el)(Args...)) { std::cout << "function ptr\n" << el(Args()...) << std::endl; } template <typename CharT, typename Traits, typename Ret, typename... Args> std::basic_ostream<CharT, Traits>& operator<<( std::basic_ostream<CharT, Traits> &str, Ret(*el)(Args...)) { return str << el(Args()...); } int main() { std::boolalpha(std::cout); std::cout << []{return 5;} << std::endl; // true is outputted test([]{return 5;}); // will not compile } ``` I use gcc 7.3.1 with the version flag `-std=c++14`. 
EDIT: Error message: ``` main.cc: In function ‘int main()’: main.cc:25:23: error: no matching function for call to ‘test(main()::<lambda()>)’ test([]{return 5;}); ^ main.cc:5:6: note: candidate: template<class T> void test(T*) void test(T *) { ^~~~ main.cc:5:6: note: template argument deduction/substitution failed: main.cc:25:23: note: mismatched types ‘T*’ and ‘main()::<lambda()>’ test([]{return 5;}); ^ main.cc:9:6: note: candidate: template<class T> void test(bool) void test(bool) { ^~~~ main.cc:9:6: note: template argument deduction/substitution failed: main.cc:25:23: note: couldn't deduce template parameter ‘T’ test([]{return 5;}); ^ main.cc:13:6: note: candidate: template<class Ret, class ... Args> void test(Ret (*)(Args ...)) void test(Ret(*el)(Args...)) { ^~~~ main.cc:13:6: note: template argument deduction/substitution failed: main.cc:25:23: note: mismatched types ‘Ret (*)(Args ...)’ and ‘main()::<lambda()>’ test([]{return 5;}); ```<issue_comment>username_1: Your problem here is that Template Argument Deduction is **only** done on the actual argument passed to `test`. It's not done on all possible types that the argument could possibly be converted to. That might be an infinite set, so that's clearly a no-go. So, Template Argument Deduction is done on the actual lambda object, which has an unspeakable class type. So the deduction for `test(T*)` fails as the lambda object is not a pointer. `T` can't be deduced from `test(bool)`, obviously. Finally, the deduction fails for `test(Ret(*el)(Args...))` as the lambda object is not a pointer-to-function either. There are a few options. You might not even need a template, you could accept a `std::function` and rely on the fact that it has a templated constructor. Or you could just take a `test(T t)` argument and call it as `t()`. `T` will now deduce to the actual lambda type. The most fancy solution is probably using [`std::invoke`](https://stackoverflow.com/a/43680610/15416), and accepting a template vararg list. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` template void test(bool) { std::cout << "bool" << std::endl; } ``` Template is not needed. In fact you overload functions, not templates. Replace it with ``` void test(bool) { std::cout << "bool" << std::endl; } ``` Now your sample will compile. Upvotes: 0 <issue_comment>username_3: Even though non-capturing lambdas have an implicit conversion to function pointers, function templates must match exactly for deduction to succeed, no conversions will be performed. Therefore the easiest fix is to force the conversion with a `+` ``` int main() { std::boolalpha(std::cout); std::cout << []{return 5;} << std::endl; // true is outputted test(+[]{return 5;}); // ^ } ``` Upvotes: 0
2018/03/14
332
976
<issue_start>username_0: I'm trying to build a lookup table in jq. Input: ``` { "Object1": { "id": 1, "name": "object1name" }, "Object2": { "id": 24, "name": "object2name" } } ``` Expected output: ``` { "object1name":1, "object2name":2 } ``` I tried the following jq code but it gives me an error: ``` {.[] | ((.name):.id)} ```<issue_comment>username_1: Collect the objects in an array and reduce it with `add`: ``` $ jq '[.[] | {(.name): .id}] | add' tmp.json { "object1name": 1, "object2name": 24 } ``` You can write the filter a little more tersely as `map({(.name): .id}) | add`. Upvotes: 3 [selected_answer]<issue_comment>username_2: > > Why doesn't my code [work] though? > > > You were close! Although @username_1's answers are the most straightforward, an answer along the lines you seem to have been thinking is certainly possible: ``` [ {(.[] | .name): .id}] | add ``` Upvotes: 1
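For comparison, the same name-to-id lookup table can be built in JavaScript; a sketch using `Object.fromEntries` (available since ES2019):

```javascript
// Sketch: build a { name: id } lookup table from the question's input shape.
const input = {
  Object1: { id: 1, name: "object1name" },
  Object2: { id: 24, name: "object2name" }
};

// Map each value to a [name, id] pair, then assemble the pairs into one object.
const lookup = Object.fromEntries(
  Object.values(input).map(o => [o.name, o.id])
);

console.log(lookup); // { object1name: 1, object2name: 24 }
```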
2018/03/14
622
2,334
<issue_start>username_0: I want have a small table with two columns `[Id] [bigint]` and `[Name] [nvarchar](63)`. The column is used for tags and it will contain all tags that exist. I want to force an alphabetical sorting by the Name column so that a given tag is found more quickly. Necessary points are: 1. The Id is my primary key, I use it e.g. for foreign keys. 2. The Name is unique as well. 3. I want to sort by Name alphabetically. 4. I need the SQL command for creating the constraints since I use scripts to create the table. I know you can sort the table by using a clustered index, but I know that the table is not necessarily in that order. My query looks like this but I don't understand how to create the clustered index on Name but still keep the Id as Primary Key: ``` IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Tags]') AND type in (N'U')) BEGIN CREATE TABLE [dbo].[Tags] ( [Id] [bigint] IDENTITY(1,1) PRIMARY KEY NOT NULL, [Name] [nvarchar](63) NOT NULL, CONSTRAINT AK_TagName UNIQUE(Name) ) END ``` **Edit:** I decided to follow paparazzo's advice. So if you have the same problem make sure you read his answer as well.<issue_comment>username_1: You can specify that the Primary Key is `NONCLUSTERED` when declaring it as a constraint, you can then declare the Unique Key as being the `CLUSTERED` index. ``` CREATE TABLE [dbo].[Tags] ( [Id] [bigint] IDENTITY(1,1) NOT NULL, [Name] [nvarchar](63) NOT NULL, CONSTRAINT PK_Tag PRIMARY KEY NONCLUSTERED (Id ASC), CONSTRAINT AK_TagName UNIQUE CLUSTERED (Name ASC) ); ``` Also specifying `ASC` or `DESC` after the Column name (within the key/index declaration) sets the index sort order. The default is usually ascending. Upvotes: 2 [selected_answer]<issue_comment>username_2: You should **NOT** do what you want to do. Let the Id identity be the clustered PK. It (under normal use) will not fragment. A table has no natural order. You have to `sort by` to get an order. 
Yes, data is typically presented in PK order, but that is just a convenience the query optimizer may or may not use. Just put a non-clustered unique index on `Name` and sort by it in the `select`. Do you really need `bigint`? That is a massive table. Upvotes: 2
2018/03/14
725
2,785
<issue_start>username_0: I've a serial queue and I use that queue to call a `performSelectorWithDelay` like below `dispatch_async(serialQueue, ^(void) { [self performSelector:@selector(fetchConfigFromNetwork) withObject:nil afterDelay:rootConfig.waitTime]; });` However, the method `fetchConfigFromNetwork` never gets called. However, if instead of `serialQueue`, I use `mainQueue` - it starts working. Cannot understand what's happening here and how to fix it?<issue_comment>username_1: The explanation why your code doesn't work is in the documentation: <https://developer.apple.com/documentation/objectivec/nsobject/1416176-performselector?language=occ> > > This method registers with the runloop of its current context, and > depends on that runloop being run on a regular basis to perform > correctly. One common context where you might call this method and end > up registering with a runloop that is not automatically run on a > regular basis is when being invoked by a dispatch queue. If you need > this type of functionality when running on a dispatch queue, you > should use dispatch\_after and related methods to get the behavior you > want. > > > I'm assuming you want that method to be called on the serial queue with a delay. The most straight forward (and recommended way) is to use `dispatch_after`: ``` __weak typeof(self) wself = self; dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(rootConfig.waitTime * NSEC_PER_SEC)), serialQueue, ^{ [wself fetchConfigFromNetwork]; }); ``` Upvotes: 2 <issue_comment>username_2: > > > > > > This method sets up a timer to perform the aSelector message on the current thread’s run loop. The timer is configured to run in the default mode (NSDefaultRunLoopMode). When the timer fires, the thread attempts to dequeue the message from the run loop and perform the selector. It succeeds if the run loop is running and in the default mode; otherwise, the timer waits until the run loop is in the default mode. 
That is the documentation's discussion of `performSelector:withObject:afterDelay:`. The block passed to `dispatch_async` executes on some worker thread (not the main thread), and you would not know which thread it is, so you cannot create a run loop and start it for that thread. Because the run loop of every thread except the main thread is not running by default, the timer will wait forever. In my opinion, you should use `NSThread` instead of `dispatch_async`, create a run loop for the thread you use, and run it in `NSDefaultRunLoopMode`, if you actually need `cancelPreviousPerformRequestsWithTarget:`; otherwise, use `dispatch_after` instead of `performSelector`. That's my understanding; I can't promise it is right. Upvotes: 0
2018/03/14
1,036
4,196
<issue_start>username_0: I am trying to execute a loop while ignoring exceptions. I think `pass` or `continue` will allow me to ignore exceptions in a loop. Where should I put the `pass` or `continue`? ``` class KucoinAPIException(Exception): """Exception class to handle general API Exceptions `code` values `message` format """ def __init__(self, response): self.code = '' self.message = 'Unknown Error' try: json_res = response.json() except ValueError: self.message = response.content pass else: if 'error' in json_res: self.message = json_res['error'] if 'msg' in json_res: self.message = json_res['msg'] if 'message' in json_res and json_res['message'] != 'No message available': self.message += ' - {}'.format(json_res['message']) if 'code' in json_res: self.code = json_res['code'] if 'data' in json_res: try: self.message += " " + json.dumps(json_res['data']) except ValueError: pass self.status_code = response.status_code self.response = response self.request = getattr(response, 'request', None) def __str__(self): return 'KucoinAPIException {}: {}'.format(self.code, self.message) ``` And this doesn't work: ``` from kucoin.exceptions import KucoinAPIException, KucoinRequestException, KucoinResolutionException for i in range(10): # Do kucoin stuff, which might raise an exception. continue ```<issue_comment>username_1: **Quick solution:** Catching the exceptions *inside* your loop. ``` for i in range(10): try: # Do kucoin stuff, which might raise an exception. except Exception as e: print(e) pass ``` **Adopting best practices:** Note that it is generally considered bad practice to catch all exceptions that inherit from `Exception`. Instead, determine which exceptions might be raised and handle those. In this case, you probably want to handle your `Kucoin` exceptions. (`KucoinAPIException`, `KucoinResolutionException`, and `KucoinRequestException`. 
In which case your loop should look like this: ``` for i in range(10): try: # Do kucoin stuff, which might raise an exception. except (KucoinAPIException, KucoinRequestException, KucoinResolutionException) as e: print(e) pass ``` We can make the except clause less verbose by refactoring your custom exception hierarchy to inherit from a custom exception class. Say, `KucoinException`. ``` class KucoinException(Exception): pass class KucoinAPIException(KucoinException): # ... class KucoinRequestException(KucoinException): # ... class KucoinResolutionException(KucoinException): # ... ``` And then your loop would look like this: ``` for i in range(10): try: # Do kucoin stuff, which might raise an exception. except KucoinException as e: print(e) pass ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: use try except in main block where KucoinAPIException is thrown ``` for i in range(10): try: # do kucoin stuff # . # . # . except: pass ``` Since you mentioned *ignoring exceptions* I am assuming you would pass all exceptions. So no need to mention individual exceptions at `except:` line. Upvotes: -1 <issue_comment>username_3: You can use finally block to execute the block no matter what. ``` for i in range(10): try: #do something except: #catch exceptions finally: #do something no matter what ``` Is that is what you were looking for? Upvotes: 0 <issue_comment>username_4: `Exception` classes aren't designed to *handle* exceptions. They shouldn't actually have any logic in them. Exception classes essentially function like `enums` to allow us to quickly and easily differentiate between different *types* of exceptions. The logic you have to either raise or ignore an exception should be in your main code flow, not in the exception itself. Upvotes: 1
2018/03/14
911
3,387
<issue_start>username_0: I have this in a module: ``` export const Category = require('./category'); export const Roles = require('./roles'); export const FunctionalTeams = require('./functional-team'); export const WorkSteams = require('./workstream'); ``` I tried changing it to TS imports: ``` export * as Category from './category'; export * as Roles from './roles'; export * as FunctionalTeams from './functional-team'; export * as WorkSteams from'./workstream'; ``` but that doesn't work, tsc doesn't even recognize that syntax, I see these errors: ``` models/enums/index.ts(17,22): error TS1005: ';' expected. models/enums/index.ts(17,27): error TS1005: ';' expected. models/enums/index.ts(18,10): error TS1005: 'from' expected. models/enums/index.ts(18,13): error TS1005: ';' expected. models/enums/index.ts(18,19): error TS1005: ';' expected. ```
2018/03/14
300
1,170
<issue_start>username_0: I have a question to all the experienced Teamcity users out there. I would like to exit out of a job based on a particular condition, but I do not want the status of the job as a failure. Is it possible to mark a job as successful even when you exit out of the job with an "exit code 1" or any pointers to achieve the same (exit out of a Teamcity job but mark the job as successful) through an alternative way is greatly appreciated! Thanks!<issue_comment>username_1: If you have Command Line Build Step and you are using TeamCity 2017.2 then you can format stderr output as warning. Here is a documentation: <https://confluence.jetbrains.com/display/TCD10/Command+Line> Upvotes: 1 <issue_comment>username_2: You can use TeamCity [service messages](https://confluence.jetbrains.com/display/TCD10/Build+Script+Interaction+with+TeamCity#BuildScriptInteractionwithTeamCity-ReportingBuildStatus) to update build status, e.g. write to the output ``` ##teamcity[buildStatus status='SUCCESS' text='{build.status.text} and then made green'] ``` to get build status text concatenated with the `and then made green` string. Upvotes: 3 [selected_answer]
2018/03/14
706
2,294
<issue_start>username_0: for my Unity game I created a JSON file that holds the speedrun times. The saved times are Timespan strings. When loading the data I parse the strings to Timespan values. When saving the time I save the Timespan as a string to the file. Example level in the JSON file: ``` { "id": 1, "personalBest": "00:00:00.0001336", "perfectTime": "00:00:00.0001335", } ``` If a level is not passed yet I want the personalBest property having the value `null` instead of something like this **00:00:00.0000000** or **99:99:99.9999999** In my serializable class I currently have this code ``` [Serializable] public class Level { public int id; public string? personalBest; // this value could be null public string perfectTime; } ``` But I get this error > > **CS0453** The type 'string' must be a non-nullable type in order to use > it as parameter T in the generic type or method 'System.Nullable' > > > Is a workaround possible?<issue_comment>username_1: So in C# `string` is already a nullable type. You should be fine just using a normal `string` in place of `string?`. This means you can just set it to null by doing the following: ``` string myString = null; ``` If I'm completely misunderstanding your question, please let me know. For saving null in JSON, [check here](https://stackoverflow.com/a/21121267/2748412) Upvotes: 3 [selected_answer]<issue_comment>username_2: An important thing to understand regarding the `string` type is this: > > `string` is a reference type, thus nullable, and when a `string` field > is declared but not initialized it will have its value set to `""` and > not `null`. > > > I.e.: ```cs public string myString; if (myString != null) { Debug.Log("String is not null"); } ``` will print `String is not null` in the console. This is what throws some people off, since usually reference types when declared but not yet initialized have their value set to `null` by default. 
However, if you declare the variable with an autoproperty instead of a field, then it will behave as any other reference type, i.e. `null` by default. ```cs public string myString {get;set;} if (myString == null) { Debug.Log("String is null"); } ``` will print `String is null` in the console. Upvotes: 1
2018/03/14
600
2,181
<issue_start>username_0: I would like to make a **`PLAY/PAUSE`** button to **`responsive-voice.js`** lib. It will start reading after clicking PLAY and change button label on PAUSE When again I will click this button it will of course pause and change the label on RESUME. After click RESUME it will continue reading the text from **PHP**. Right now my script not working. ``` ");' type='button' value='PLAY' /> function changeState(tekst) { var buttonvalue = document.getElementById('playsound').value; switch (buttonvalue) { case "PLAY": document.getElementById('playsound').value = "PAUSE"; document.getElementById('playsound').onclick = responsiveVoice.speak(tekst, "Polish Female"); break; case "PAUSE": document.getElementById('playsound').value = "PLAY"; document.getElementById('playsound').onclick = responsiveVoice.pause(); break; } } ```
2018/03/14
1,036
2,973
<issue_start>username_0: I have this string: ``` var s = '/channels/mtb/videos?page=2&per_page=100&fields=uri%2Cname%2Cdescription%2Cduration%2Cwidth%2Cheight%2Cprivacy%2Cpictures.sizes&sort=date&direction=asc&filter=embeddable&filter_embeddable=true' ``` I want to repace per\_page number (in this case 100, but it can be any number from 1-100, maybe more?) I can select first part of the string with: ``` var s1 = s.substr(0, s.lastIndexOf('per_page=')+9) ``` which give me: ``` /channels/mtb/videos?page=2&per_page= ``` but how would I select next '&' after that so I can replace number occurrence? dont assume same order of parameters!<issue_comment>username_1: Use `replace` with a regular expression to find the numbers after the text `per_page=`. Like this: ``` s.replace(/per_page=\d+/,"per_page=" + 33) ``` Replace the `33` with the number you want. Result: ``` "/channels/mtb/videos?page=2&per_page=33&fields=uri%2Cname%2Cdescription%2Cduration%2Cwidth%2Cheight%2Cprivacy%2Cpictures.sizes&sort=date&direction=asc&filter=embeddable&filter_embeddable=true" ``` Upvotes: 2 <issue_comment>username_2: You can use following regex to replace the content you want. 
regex:- `/per_page=[\d]*/g`(this is only for your requirement) ```js var new_no=12; //change 100 to 12 var x='/channels/mtb/videos?page=2&per_page=100&fields=uri%2Cname%2Cdescription%2Cduration%2Cwidth%2Cheight%2Cprivacy%2Cpictures.sizes&sort=date&direction=asc&filter=embeddable&filter_embeddable=true'; var y=x.replace(/per_page=[\d]*/g,'per_page='+new_no); console.log(y); ``` Explanation:- ``` /per_page=[\d]*/g / ----> is for regex pattern(it inform that from next character onward whatever it encounter will be regex pattern) per_page= ----> try to find 'per_page=' in string [\d]* ----> match 0 or more digit (it match until non digit encounter) /g ---->/ to indicate end of regex pattern and 'g' is for global means find in all string(not only first occurrence) ``` Upvotes: 2 <issue_comment>username_3: ``` var matches = s.match(/(.*\bper_page=)(\d+)(.*)/); if (matches) { s = matches[1] + newValue + matches[3]; } ``` Upvotes: 0 <issue_comment>username_4: With `Array.filter` you can do this, where one splits the text into key/value pairs, and filters out the one that starts with `per_page=`. Stack snippet ```js var s = '/channels/mtb/videos?page=2&per_page=100&fields=uri%2Cname%2Cdescription%2Cduration%2Cwidth%2Cheight%2Cprivacy%2Cpictures.sizes&sort=date&direction=asc&filter=embeddable&filter_embeddable=true' var kv_pairs = s.split('&'); var s2 = s.replace((kv_pairs.filter(w => w.startsWith('per_page=')))[0],'per_page=' + 123); //console.log(s2); ``` Upvotes: 1 <issue_comment>username_5: Start with the index from the lastIndexOf-per_page instead of 0. Get the index of the first & and create a substr s2 to the end. Then concat s1 + nr + s2. I would not use regex, because it is much slower for this simple stuff. Upvotes: 1
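Another approach, as a sketch: if the runtime provides the WHATWG `URL`/`URLSearchParams` API (modern browsers, Node 10+), the query string can be parsed instead of pattern-matched. Note that `URLSearchParams` re-encodes the query, so percent-escapes such as `%2C` in the original string may come back in a different (equivalent) encoding; the query below is trimmed to sidestep that. The base `http://example.invalid` is a dummy, needed only so the relative path parses.

```javascript
// Sketch: replace the per_page value via URLSearchParams instead of a regex.
const s = '/channels/mtb/videos?page=2&per_page=100&sort=date&direction=asc';

function setPerPage(path, n) {
  // The base URL is a placeholder so that a relative path can be parsed.
  const url = new URL(path, 'http://example.invalid');
  url.searchParams.set('per_page', String(n)); // replaces the value in place
  return url.pathname + url.search;
}

console.log(setPerPage(s, 33));
// /channels/mtb/videos?page=2&per_page=33&sort=date&direction=asc
```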
2018/03/14
820
2,973
<issue_start>username_0: I have this: ``` validateForm = () => { for (let i = 0; i < formInputs.length; i++) { const inputName = formInputs[i]; if (!this.state.form[inputName].length) { return false; } } } ``` which im refactoring in to this: ``` validateForm2 = () => { Object.keys(this.state.form).map(input => { if(!this.state.form[input].length) { return false } return true; }) } ``` the first one works, when i fill in my form and the function returns true, if one is empty it returns false. however i cant seem to quite understand the `return` keyword to get the same result. Object.keys says it returns an array but even if I say `return Object.keys...` or `else {return true}` I don't seem to get the same result. what am I misunderstanding about return?<issue_comment>username_1: You could modify your function to do what you want it to do. ``` validateForm2 = () => { return Object.keys(this.state.form).every(input => { return this.state.form[input].length; }) } ``` You are checking that every property has a length (true). If one of them doesn't, your function returns false. Upvotes: 1 [selected_answer]<issue_comment>username_2: In the first example you have only one (arrow) function which returns either `false` or `undefined`. In the second example you have outer (arrow) function that never returns anything - `undefined` to the calling code, and the second function that you pass as a parameter to `Array.map` method. `return` statements inside the parameter function are not returning anything from the outer function. ``` validateForm2 = () => { var emptyItems = Object.keys(this.state.form).filter(input => { return !this.state.form[input].length; }); return emptyItems.length == 0; } ``` Upvotes: 1 <issue_comment>username_3: You could use [`Array#every`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/every), which uses the return value for a short circuit and for returning the check of all truthy items. 
``` validateForm2 = () => Object.keys(this.state.form).every(input => this.state.form[input].length); ``` [`Array#map`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) utilizes the return value as new item for each item of the array for a new array, which is dicarded in the given example. Upvotes: 1 <issue_comment>username_4: I think you can avoid using `.map` in favor of `.every()` which iterates over every single element and checks whether it has a length greater than zero. ```js const validateForm = (form) => Object.values(form).every((field) => field.length); let semiEmptyForm = { firstField : "", secondfield : "notEmpty" }; let nonEmptyForm = { firstField : "notEmpty", secondfield : "notEmpty" }; console.log(validateForm(semiEmptyForm)) console.log(validateForm(nonEmptyForm)) ``` Upvotes: 0
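The confusion about `return` can be boiled down to a minimal sketch: a `return` inside a callback passed to `map` (or `every`) exits only that callback, never the enclosing function:

```javascript
// Sketch: `return` in an array callback exits the callback, not the outer function.
function outer() {
  [1, 2, 3].map(n => {
    return n * 2; // produces a value for THIS array element only
  });
  // No return statement here, so outer() itself returns undefined.
}

function outerFixed() {
  return [1, 2, 3].every(n => n > 0); // propagate the result to the caller
}

console.log(outer());      // undefined
console.log(outerFixed()); // true
```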
2018/03/14
668
2,372
<issue_start>username_0: So I just installed the latest version of rabbitmq and I've been trying to get it to work. The server is running and I've restarted it once just to be sure it's a consistent problem. If I `telnet localhost 5672`, I get ``` Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. Connection closed by foreign host. ``` As you can see, the connection is accepted but rabbitmq does not accept any input. The connection is closed immediately. No further information shows up in logs. `rabbitmqctl` works without any problems. This is running on Windows Subsystem for Linux / Ubuntu. I don't have any other options for a local dev environment because I'm on a work computer which is locked down pretty tightly.<issue_comment>username_1: Telnet lets you confirm the system is listening and allows incoming connections. But even an "out of the box" install of RabbitMQ expects credentials for connections. Run `rabbitmqctl list_users` to see which users are configured. If guest is present, the typical creds are guest / guest. Either install the management plugin (or confirm it is installed), or script your test; most languages have a package available for connecting to RabbitMQ. Upvotes: 1 <issue_comment>username_2: I ran into the same issue, using Ubuntu(16.04) as a subsystem on Windows and rabbitmq 3.7.8. I noticed that when running `sudo rabbitmqctl status` the listeners showed the following: ``` {listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]} ``` I fixed this issue by creating a rabbitmq config file and specifying the localhost and port 5672. Here is what I did step by step. 1. Using sudo && vim, I created a 'rabbitmq.conf' file, located in /etc/rabbitmq/ ``` sudo vim /etc/rabbitmq/rabbitmq.conf ``` 2. I specified the localhost(127.0.0.1) and port(5672) for the default tcp listener in the rabbitmq.conf file ``` listeners.tcp.default = 127.0.0.1:5672 ``` 3. Restart rabbitmq `sudo service rabbitmq-server stop` then `sudo service rabbitmq-server start` 4. 
Check `sudo rabbitmqctl status` and look at the listeners; you should see your new tcp listener with the localhost ip specified: {listeners,[{clustering,25672,"::"},{amqp,5672,"127.0.0.1"}]} Here are the [config docs](https://www.rabbitmq.com/configure.html#erlang-term-config-file) from rabbitmq that may help clarify some of these steps. Upvotes: 4
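A step beyond telnet: the broker closes the socket as soon as it sees non-AMQP input, so a quick sanity check is to send the real AMQP 0-9-1 protocol header and see whether anything comes back. A minimal Python sketch (standard library only; the host and port are assumptions for a default local broker):

```python
import socket

def amqp_probe(host="127.0.0.1", port=5672, timeout=3.0):
    """Send the AMQP 0-9-1 protocol header and return the broker's first reply bytes.

    A healthy RabbitMQ node answers with a Connection.Start frame (or echoes a
    protocol header back if it rejects the offered version); an empty reply or
    a reset means the listener is not really speaking AMQP.
    """
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"AMQP\x00\x00\x09\x01")  # protocol header for AMQP 0-9-1
        return sock.recv(64)
```

Run `amqp_probe()` against the listener reported by `rabbitmqctl status`; note this only exercises the protocol handshake, not the guest/guest credentials mentioned above.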
2018/03/14
1,220
3,989
<issue_start>username_0: I am trying to send an email to multiple contacts in my database list from an input form. It works when I hard-code a specific email address, but when I try to refer to my database, I get the error: **Message: mail(): SMTP server response: 503 5.0.0 Need RCPT (recipient) Filename: libraries/Email.php** I also get another error, which I think may be just my email configuration in general and I am unsure of it too: **Unable to send email using PHP mail(). Your server might not be configured to send mail using this method.** My Controller: ``` public function sendmail() { $this->load->library('email'); $this->load->model('Email_model'); $this->load->library('session'); $this->email->from($this->input->post('email'), $this->input->post('name')); $sendTo['email'] = $this->Email_model->emailsend(); $this->email->to($sendTo); $this->email->subject('Hello This is an email'); $this->email->message($this->input->post('message')); if ($this->email->send()){ $this->session->set_flashdata('success','Email has been sent'); redirect('dashboard'); }else{ echo $this->email->print_debugger(); } ``` My Model ``` class Email_model extends CI_Model { public function emailsend() { $query = $this->db->query("SELECT email from contacts"); $sendTo=array(); foreach ($query->result() as $row) { $sendTo['email']=$row->email; } ``` My View ``` php echo form\_open('/Dashboard/sendmail');? Your Name Your Email Your Message php echo form\_close(); ? php echo $flash;? ``` Any help for either error will be massively appreciated<issue_comment>username_1: CodeIgniter's email module expects [an array of email addresses](https://www.codeigniter.com/user_guide/libraries/email.html#CI_Email::to) in the `to()` function. Your model's `emailsend()` function adds an `email` index to that array that isn't expected. Also, you're overwriting that `email` index's value on every iteration of the `foreach` loop, meaning your array will only ever contain the last address from your database.
It also looks like your function doesn't actually return the array after filling it, but it could be that this part is simply truncated in your question. ``` public function emailsend() { $query = $this->db->query("SELECT email from contacts"); $sendTo=array(); foreach ($query->result() as $row) { $sendTo[] = $row->email; // ^^ remove the 'email' index } return $sendTo; // <-- add this line if you don't have it } ``` Then, from your controller, you don't need to store the result in a separate variable unless you want to perform some additional logic on it. You can simply do this: ``` $this->email->from($this->input->post('email'), $this->input->post('name')); $this->email->to($this->Email_model->emailsend()); $this->email->subject('Hello This is an email'); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` public function emailsend() { $query = $this->db->query("SELECT email from contacts"); $sendTo=array(); foreach ($query->result() as $row) { // you are not changing the key $sendTo['email']=$row->email; } // I guess you are returning return $sendTo; } ``` A var_dump on that array would always show one value, because you keep reassigning it, so only one email would be returned. I suggest building the array like this: ``` foreach ($query->result() as $row) { $sendTo[]=$row->email; } ``` Then the var_dump will look like this: ``` array(11) { [0]=> string(15) "<EMAIL>" [1]=> string(15) "<EMAIL>" [2]=> string(15) "<EMAIL>" [3]=> string(15) "<EMAIL>" [4]=> string(15) "<EMAIL>" [5]=> string(15) "<EMAIL>" [6]=> string(15) "<EMAIL>" [7]=> string(15) "<EMAIL>" [8]=> string(15) "<EMAIL>" [9]=> string(15) "<EMAIL>" [10]=> string(16) "<EMAIL>" } ``` Upvotes: 2
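The core mistake both answers point at — writing to the same `email` key on every pass instead of appending — is easy to see in miniature. A language-neutral sketch in Python (made-up addresses):

```python
rows = ["a@example.com", "b@example.com", "c@example.com"]

# Assigning to one fixed key each iteration keeps only the LAST address:
send_to = {}
for email in rows:
    send_to["email"] = email          # same key overwritten every time
assert send_to == {"email": "c@example.com"}

# Appending builds the full recipient list the mailer actually expects:
recipients = []
for email in rows:
    recipients.append(email)          # like $sendTo[] = $row->email in PHP
assert recipients == ["a@example.com", "b@example.com", "c@example.com"]
```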
2018/03/14
1,237
3,775
<issue_start>username_0: I have created a variable scope in one part of my graph, and later in another part of the graph I want to add OPs to an existing scope. That equates to this distilled example: ``` import tensorflow as tf with tf.variable_scope('myscope'): tf.Variable(1.0, name='var1') with tf.variable_scope('myscope', reuse=True): tf.Variable(2.0, name='var2') print([n.name for n in tf.get_default_graph().as_graph_def().node]) ``` Which yields: ``` ['myscope/var1/initial_value', 'myscope/var1', 'myscope/var1/Assign', 'myscope/var1/read', 'myscope_1/var2/initial_value', 'myscope_1/var2', 'myscope_1/var2/Assign', 'myscope_1/var2/read'] ``` My desired result is: ``` ['myscope/var1/initial_value', 'myscope/var1', 'myscope/var1/Assign', 'myscope/var1/read', 'myscope/var2/initial_value', 'myscope/var2', 'myscope/var2/Assign', 'myscope/var2/read'] ``` I saw this question which didn't seem to have an answer that addressed the question directly: [TensorFlow, how to reuse a variable scope name](https://stackoverflow.com/questions/45614045/tensorflow-how-to-reuse-a-variable-scope-name)<issue_comment>username_1: Here is one straightforward way to do this using `as` with `somename` in a context manager. Using this `somename.original_name_scope` property, you can retrieve that scope and then add more variables to it. Below is an illustration: ``` In [6]: with tf.variable_scope('myscope') as ms1: ...: tf.Variable(1.0, name='var1') ...: ...: with tf.variable_scope(ms1.original_name_scope) as ms2: ...: tf.Variable(2.0, name='var2') ...: ...: print([n.name for n in tf.get_default_graph().as_graph_def().node]) ...: ['myscope/var1/initial_value', 'myscope/var1', 'myscope/var1/Assign', 'myscope/var1/read', 'myscope/var2/initial_value', 'myscope/var2', 'myscope/var2/Assign', 'myscope/var2/read'] ``` **Remark** Please also note that setting `reuse=True` is optional; That is, even if you pass `reuse=True`, you'd still get the same result. --- Another way (thanks to OP himself!) 
is to just add `/` at the end of the variable scope when *reusing* it as in the following example: ``` In [13]: with tf.variable_scope('myscope'): ...: tf.Variable(1.0, name='var1') ...: ...: # reuse variable scope by appending `/` to the target variable scope ...: with tf.variable_scope('myscope/', reuse=True): ...: tf.Variable(2.0, name='var2') ...: ...: print([n.name for n in tf.get_default_graph().as_graph_def().node]) ...: ['myscope/var1/initial_value', 'myscope/var1', 'myscope/var1/Assign', 'myscope/var1/read', 'myscope/var2/initial_value', 'myscope/var2', 'myscope/var2/Assign', 'myscope/var2/read'] ``` **Remark**: Please note that setting `reuse=True` is again optional; That is, even if you pass `reuse=True`, you'd still get the same result. Upvotes: 3 [selected_answer]<issue_comment>username_2: Answer mentioned by username_1 is correct but there is a tricky case with variables created by `tf.get_variable`: ``` with tf.variable_scope('myscope'): print(tf.get_variable('var1', shape=[3])) with tf.variable_scope('myscope/'): print(tf.get_variable('var2', shape=[3])) ``` This snippet will output: ``` ``` It seems that `tensorflow` has not provided a formal way to handle this circumstance yet. The only possible method I found is to manually assign the correct name (**Warning: The correctness is not guaranteed**): ``` with tf.variable_scope('myscope'): print(tf.get_variable('var1', shape=[3])) with tf.variable_scope('myscope/') as scope: scope._name = 'myscope' print(tf.get_variable('var2', shape=[3])) ``` And then we can get the correct names: ``` ``` Upvotes: 2
2018/03/14
991
3,090
<issue_start>username_0: I have difficulties with understanding: ``` key = lambda x: x[1] list = [(i,sum(1 for y in lower_letters if y==i)) l.sort(cmp=lambda x,y: cmp(x[1],y[1]))) #only 2.x??? cmp(a,b) ``` Does it return anything, unlike `.sort`? Can someone explain what it is supposed to do? Why don't we just use something like `list.sort()`? I don't really know what x, y mean; that's what my teacher wrote and I'm trying to guess what she meant
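For reference, a sketch of what the teacher's snippet is driving at, written in Python 3 terms — `key=` replaces the Python-2-only `cmp=` argument, and `lower_letters` here is invented sample data:

```python
lower_letters = "banana"

# Pair each distinct letter with how often it occurs:
counts = [(ch, sum(1 for y in lower_letters if y == ch))
          for ch in sorted(set(lower_letters))]
assert counts == [("a", 3), ("b", 1), ("n", 2)]

# key=lambda x: x[1] sorts the pairs by their second element (the count);
# in Python 2 the same ordering could be spelled cmp=lambda x, y: cmp(x[1], y[1]).
counts.sort(key=lambda x: x[1])
assert counts == [("b", 1), ("n", 2), ("a", 3)]

# Unlike sorted(), list.sort() sorts in place and returns None:
assert sorted([3, 1, 2]) == [1, 2, 3]
assert [3, 1, 2].sort() is None
```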
2018/03/14
1,121
4,040
<issue_start>username_0: I am trying to add data from a split range (X5:X?,AX5:AX?) into a VBA dictionary. ? Is determined as the last row of data within the sheet. I am new to VBA and trying to force my way through this. ``` Public Sub Test 'Creates a dictionary object Dim orderstatus As Object, path As String Set orderstatus = CreateObject("Scripting.Dictionary") Dim order, status 'key and object names order = "Order #": status = "Order Status" path = ThisWorkbook.path 'Central District--A Head Water Order Summary Dim app As New Excel.Application, book As Excel.Workbook app.Visible = False Set book = app.Workbooks.Add(path & "\CENTRAL DIST\A HEAD - WATER ORDER SUMMARY.xls") 'A Head #1 Dim A1Head As Integer, last As Integer, l as Integer l = 4 book.Worksheets("A HEAD #1").Activate last = Range("X" & Rows.Count).End(xlUp).Row Set lastCol = Range("X5:X" & last, "AX5:AX" & last) For Each l In lastCol.Cells orderstatus.Add lastCol.Value Next End Sub ``` Any help is greatly appreciated!<issue_comment>username_1: Change this ``` orderstatus.Add lastCol.Value ``` to this ``` orderstatus.Add l.Value, 1 ``` This assumes you will have no duplicates because you aren't checking for that and will get an error if you do have duplicates. 
Upvotes: 0 <issue_comment>username_2: I think something like this is what you're looking for: ``` Sub tgr() Dim OrderStatus As Object Dim i As Long Dim Key As Variant Set OrderStatus = CreateObject("Scripting.Dictionary") With Application .ScreenUpdating = False .EnableEvents = False End With With Workbooks.Open(ThisWorkbook.Path & "\CENTRAL DIST\A HEAD - WATER ORDER SUMMARY.xls").Sheets("A HEAD #1") For i = 5 To .Cells(.Rows.Count, "X").End(xlUp).Row If Not OrderStatus.Exists(.Cells(i, "X").Value) Then OrderStatus(.Cells(i, "X").Value) = .Cells(i, "AX").Value Next i .Parent.Close False End With 'Print dictionary to text file Close #1 Open ThisWorkbook.Path & "\OrderStatus Output.txt" For Output As #1 Print #1, "Key" & vbTab & "Value" For Each Key In OrderStatus.Keys Print #1, Key & vbTab & OrderStatus(Key) Next Key Close #1 With Application .ScreenUpdating = True .EnableEvents = True End With End Sub ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: you're messing up with `Range` object and `Row` index and you'd better abandon the `Activate`/`ActiveXXX` pattern and use fully qualified range references give this code a try ``` Option Explicit Public Sub Test() 'Creates a dictionary object Dim orderstatus As Object Set orderstatus = CreateObject("Scripting.Dictionary") 'Central District--A Head Water Order Summary Dim app As New Excel.Application, book As Excel.Workbook app.Visible = False Set book = app.Workbooks.Add(ThisWorkbook.path & "\CENTRAL DIST\A HEAD - WATER ORDER SUMMARY.xls") 'A Head #1 Dim dataRng As Range, r As Range Dim last As Integer With book.Worksheets("A HEAD #1") For Each r In .Range("X5", .Cells(.Rows.Count, "X").End(xlUp)) orderstatus(r.value) = r.Offset(, 26).value Next End With End Sub ``` Moreover if you're running this macro from within an Excel session already, you don't need to get another instance of it nor explicitly reference it: ``` Option Explicit Public Sub Test() 'Creates a dictionary object Dim orderstatus As Object Set 
orderstatus = CreateObject("Scripting.Dictionary") 'Central District--A Head Water Order Summary Dim book As Workbook Set book = Workbooks.Add(ThisWorkbook.path & "\CENTRAL DIST\A HEAD - WATER ORDER SUMMARY.xls") 'A Head #1 Dim dataRng As Range, r As Range Dim last As Integer With book.Worksheets("A HEAD #1") For Each r In .Range("X5", .Cells(.Rows.Count, "X").End(xlUp)) orderstatus(r.value) = r.Offset(, 26).value Next End With End Sub ``` Upvotes: 0
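The dictionary pattern the answers converge on — walk the rows once and let the key decide which value survives — is the same in any language; a small Python sketch with invented order/status pairs:

```python
rows = [("1001", "open"), ("1002", "shipped"), ("1001", "closed")]

# Last write wins, like `orderstatus(r.value) = r.Offset(, 26).value`:
order_status = {}
for order, status in rows:
    order_status[order] = status
assert order_status == {"1001": "closed", "1002": "shipped"}

# First write wins, like the `Not OrderStatus.Exists(...)` guard in tgr():
first_seen = {}
for order, status in rows:
    if order not in first_seen:
        first_seen[order] = status
assert first_seen == {"1001": "open", "1002": "shipped"}
```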
2018/03/14
701
2,467
<issue_start>username_0: ``` Scanner s = new Scanner(System.in); System.out.println("Type a number"); int n = s.nextInt(); int start = 0; int largest = 0; int occurrence = 0; while(n > start){ int number = (int)(Math.random()*100); if(number > largest){ largest = number; } n--; System.out.print(number+" "); } System.out.println("max is "+largest); System.out.println("Occurrence is "+occurrence); ``` I would like to be able to find the occurrence of the largest int, and I am not sure how to go about doing so, in a rather simple way. I tried adding `occurrence++;` under `largest = number;`, but that did not work. For example, I would type `6` as input, and I would get random numbers `54, 74, 61, 89, 13, 89`. The desired output would be `max is 89. Occurrence is 2.` Also, I am trying to get the code to print only 10 numbers per line; then it would skip to the next line and continue.<issue_comment>username_1: 1. In case of reoccurrence, increase occurrence variable; 2. set occurrence to `1` in case of new larger number You can use ``` if(number > largest){ largest = number; // found larger value occurrence = 1; // reset occurrence back to initial }else if(number == largest){ occurrence++; // keep track of same large value } ``` Upvotes: 1 <issue_comment>username_2: You need to use an `else if` condition to check equality like this ``` Scanner s = new Scanner(System.in); System.out.println("Type a number"); int n = s.nextInt(); int start = 0; int largest = 0; int occurrence = 1; while(n > start){ int number = (int)(Math.random()*100); if(number > largest){ largest = number; occurrence = 1; }else if(number == largest){ occurrence++; } n--; System.out.print(number+" "); } System.out.println("max is "+largest); System.out.println("Occurrence is "+occurrence); ``` **P.S** - Initialize `occurrence = 1` as there will be at least one time the largest number will be present Upvotes: 0 <issue_comment>username_3: I think you are almost there.
Basically, you are missing a check on whether the new random number is equal to the current largest number. Something like: ``` if (number == largest) { occurrences++; } ``` Also, remember to reset occurrences when a new largest number has been found: ``` if (number > largest){ largest = number; occurrences = 1; } ``` Upvotes: 0
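Condensed, the logic all three answers describe — reset the counter on a new maximum, increment it on a tie — fits in a single pass; sketched here in Python rather than Java:

```python
def max_with_count(numbers):
    """Return (largest value, number of times it occurs) in a single pass."""
    largest, occurrence = None, 0
    for n in numbers:
        if largest is None or n > largest:
            largest, occurrence = n, 1   # new maximum: restart the count
        elif n == largest:
            occurrence += 1              # the current maximum appeared again
    return largest, occurrence

assert max_with_count([54, 74, 61, 89, 13, 89]) == (89, 2)
```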
2018/03/14
6,736
19,244
<issue_start>username_0: I pulled a table of Tour de France winners from wikipedia using BeautifulSoup, but its returning the table in what appears to be a dataset, but the rows are separable. First, here is what I did to grab the page and table: ``` import requests response = requests.get("Https://en.wikipedia.org/wiki/List_of_Tour_de_France_general_classification_winners") content = response.content from bs4 import BeatifulSoup parser = BeautifulSoup(content, 'html.parser') # I know its the second table on the page, so grab it as such winners_table = parser.find_all('table')[1] import pandas as pd data = pd.read_html(str(winners_table), flavor = 'html5lib') ``` Note that I used html5lib here because pycharm was telling me that there is no lxml, despite it certainly being there. When I print out the table, it appears as a table with 116 rows and 9 columns, but it isn't appearing to split on rows. It looks like this: ``` [ 0 1 \ 0 Year Country 1 1903 France 2 1904 France 3 1905 France 4 1906 France 5 1907 France 6 1908 France 7 1909 Luxembourg 8 1910 France 9 1911 France 10 1912 Belgium 11 1913 Belgium 12 1914 Belgium 13 1915 World War I 14 1916 NaN 15 1917 NaN 16 1918 NaN 17 1919 Belgium 18 1920 Belgium 19 1921 Belgium 20 1922 Belgium 21 1923 France 22 1924 Italy 23 1925 Italy 24 1926 Belgium 25 1927 Luxembourg 26 1928 Luxembourg 27 1929 Belgium 28 1930 France 29 1931 France .. ... ... 
86 1988 Spain 87 1989 United States 88 1990 United States 89 1991 Spain 90 1992 Spain 91 1993 Spain 92 1994 Spain 93 1995 Spain 94 1996 Denmark 95 1997 Germany 96 1998 Italy 97 1999[B] United States 98 2000[B] United States 99 2001[B] United States 100 2002[B] United States 101 2003[B] United States 102 2004[B] United States 103 2005[B] United States 104 2006 Spain 105 2007 Spain 106 2008 Spain 107 2009 Spain 108 2010 Luxembourg 109 2011 Australia 110 2012 Great Britain 111 2013 Great Britain 112 2014 Italy 113 2015 Great Britain 114 2016 Great Britain 115 2017 Great Britain 2 \ 0 Cyclist 1 Garin, MauriceMaurice Garin 2 Garin, MauriceMaurice Garin Cornet, HenriHenri... 3 Trousselier, <NAME> 4 Pottier, <NAME> 5 Petit-Breton, <NAME> 6 Petit-Breton, <NAME> 7 Faber, FrançoisFrançois Faber 8 Lapize, OctaveOctave Lapize 9 Garrigou, GustaveGustave Garrigou 10 Defraye, OdileOdile Defraye 11 Thys, <NAME> 12 Thys, <NAME> 13 NaN 14 NaN 15 NaN 16 NaN 17 Lambot, <NAME> 18 Thys, <NAME> 19 Scieur, LéonLéon Scieur 20 Lambot, <NAME> 21 Pélissier, <NAME> 22 Bottecchia, OttavioOttavio Bottecchia 23 Bottecchia, OttavioOttavio Bottecchia 24 Buysse, <NAME> 25 Frantz, NicolasNicolas Frantz 26 Frantz, NicolasNicolas Frantz 27 De Waele, MauriceMaurice De Waele 28 Leducq, <NAME> 29 Magne, AntoninAntonin Magne .. ... 86 Delgado, <NAME> 87 LeMond, GregGreg LeMond 88 LeMond, GregGreg LeMond 89 Indurain, MiguelMiguel Indurain 90 Indurain, MiguelMiguel Indurain 91 Indurain, MiguelMiguel Indurain 92 Indurain, MiguelMiguel Indurain 93 Indurain, <NAME> 94 Riis, BjarneBjarne Riis[A] 95 Ullrich, JanJan Ullrich# 96 Pantani, <NAME> 97 Armstrong, LanceLance Armstrong 98 Armstrong, LanceLance Armstrong 99 Armstrong, LanceLance Armstrong 100 Armstrong, LanceLance Armstrong 101 Armstrong, LanceLance Armstrong 102 Armstrong, LanceLance Armstrong 103 Armstrong, LanceLance Armstrong 104 Landis, FloydFloyd Land<NAME>, ÓscarÓscar ... 
105 Contador, AlbertoAlberto Contador# 106 Sastre, <NAME>* 107 Contador, AlbertoAlberto Contador 108 Contador, AlbertoAlberto Contador Schleck, And... 109 Evans, <NAME> 110 Wiggins, <NAME> 111 Froome, <NAME> 112 Nibali, VincenzoVincenzo Nibali 113 Froome, <NAME>* 114 Froome, <NAME> 115 Froome, <NAME> 3 4 \ 0 Sponsor/Team Distance 1 La Française 2,428 km (1,509 mi) 2 Conte 2,428 km (1,509 mi) 3 Peugeot–Wolber 2,994 km (1,860 mi) 4 Peugeot–Wolber 4,637 km (2,881 mi) 5 Peugeot–Wolber 4,488 km (2,789 mi) 6 Peugeot–Wolber 4,497 km (2,794 mi) 7 Alcyon–Dunlop 4,498 km (2,795 mi) 8 Alcyon–Dunlop 4,734 km (2,942 mi) 9 Alcyon–Dunlop 5,343 km (3,320 mi) 10 Alcyon–Dunlop 5,289 km (3,286 mi) 11 Peugeot–Wolber 5,287 km (3,285 mi) 12 Peugeot–Wolber 5,380 km (3,340 mi) 13 NaN NaN 14 NaN NaN 15 NaN NaN 16 NaN NaN 17 La Sportive 5,560 km (3,450 mi) 18 La Sportive 5,503 km (3,419 mi) 19 La Sportive 5,485 km (3,408 mi) 20 Peugeot–Wolber 5,375 km (3,340 mi) 21 Automoto–Hutchinson 5,386 km (3,347 mi) 22 Automoto 5,425 km (3,371 mi) 23 Automoto–Hutchinson 5,440 km (3,380 mi) 24 Automoto–Hutchinson 5,745 km (3,570 mi) 25 Alcyon–Dunlop 5,398 km (3,354 mi) 26 Alcyon–Dunlop 5,476 km (3,403 mi) 27 Alcyon–Dunlop 5,286 km (3,285 mi) 28 Alcyon–Dunlop 4,822 km (2,996 mi) 29 France 5,091 km (3,163 mi) .. ... ... 86 Reynolds 3,286 km (2,042 mi) 87 AD Renting–W-Cup–Bottecchia 3,285 km (2,041 mi) 88 Z–Tomasso 3,504 km (2,177 mi) 89 Banesto 3,914 km (2,432 mi) 90 Banesto 3,983 km (2,475 mi) 91 Banesto 3,714 km (2,308 mi) 92 Banesto 3,978 km (2,472 mi) 93 Banesto 3,635 km (2,259 mi) 94 Team Telekom 3,765 km (2,339 mi) 95 Team Telekom 3,950 km (2,450 mi) 96 Mercatone Uno–Bianchi 3,875 km (2,408 mi) 97 U.S. Postal Service 3,687 km (2,291 mi) 98 U.S. Postal Service 3,662 km (2,275 mi) 99 U.S. Postal Service 3,458 km (2,149 mi) 100 U.S. Postal Service 3,272 km (2,033 mi) 101 U.S. Postal Service 3,427 km (2,129 mi) 102 U.S. 
Postal Service 3,391 km (2,107 mi) 103 Discovery Channel 3,593 km (2,233 mi) 104 Caisse d'Epargne–Illes Balears 3,657 km (2,272 mi) 105 Discovery Channel 3,570 km (2,220 mi) 106 Team CSC 3,559 km (2,211 mi) 107 Astana 3,459 km (2,149 mi) 108 Team Saxo Bank 3,642 km (2,263 mi) 109 BMC Racing Team 3,430 km (2,130 mi) 110 Team Sky 3,496 km (2,172 mi) 111 Team Sky 3,404 km (2,115 mi) 112 Astana 3,660.5 km (2,274.5 mi) 113 Team Sky 3,360.3 km (2,088.0 mi) 114 Team Sky 3,529 km (2,193 mi) 115 Team Sky 3,540 km (2,200 mi) 5 6 7 8 0 Time/Points Margin Stage wins Stages in lead 1 094 !94h 33' 14" 24921 !+ 2h 59' 21" 3 6 2 096 !96h 05' 55" 21614 !+ 2h 16' 14" 1 3 3 35 26 5 10 4 31 8 5 12 5 47 19 2 5 6 36 32 5 13 7 37 20 6 13 8 63 4 4 3 9 43 18 2 13 10 49 59 3 13 11 197 !197h 54' 00" 00837 !+ 8' 37" 1 8 12 200 !200h 28' 48" 00150 !+ 1' 50" 1 15 13 NaN NaN NaN NaN 14 NaN NaN NaN NaN 15 NaN NaN NaN NaN 16 NaN NaN NaN NaN 17 231 !231h 07' 15" 14254 !+ 1h 42' 54" 1 2 18 228 !228h 36' 13" 05721 !+ 57' 21" 4 14 19 221 !221h 50' 26" 01836 !+ 18' 36" 2 14 20 222 !222h 08' 06" 04115 !+ 41' 15" 0 3 21 222 !222h 15' 30" 03041 !+ 30 '41" 3 6 22 226 !226h 18' 21" 03536 !+ 35' 36" 4 15 23 219 !219h 10' 18" 05420 !+ 54' 20" 4 13 24 238 !238h 44' 25" 12225 !+ 1h 22' 25" 2 8 25 198 !198h 16' 42" 14841 !+ 1h 48' 41" 3 14 26 192 !192h 48' 58" 05007 !+ 50' 07" 5 22 27 186 !186h 39' 15" 04423 !+44' 23" 1 16 28 172 !172h 12' 16" 01413 !+ 14' 13" 2 13 29 177 !177h 10' 03" 01256 !+ 12' 56" 1 16 .. ... ... ... ... 
86 084 !84h 27' 53" 00713 !+ 7' 13" 1 11 87 087 !87h 38' 35" 00008 !+ 8" 3 8 88 090 !90h 43' 20" 00216 !+ 2' 16" 0 2 89 101 !101h 01' 20" 00336 !+ 3' 36" 2 10 90 100 !100h 49' 30" 00435 !+ 4' 35" 3 10 91 095 !95h 57' 09" 00459 !+ 4' 59" 2 14 92 103 !103h 38' 38" 00539 !+ 5' 39" 1 13 93 092 !92h 44' 59" 00435 !+ 4' 35" 2 13 94 095 !95h 57' 16" 00141 !+ 1' 41" 2 13 95 100 !100h 30' 35" 00909 !+ 9' 09" 2 12 96 092 !92h 49' 46" 00321 !+ 3' 21" 2 7 97 091 !91h 32' 16" 00737 !+ 7' 37" 4 15 98 092 !92h 33' 08" 00602 !+ 6' 02" 1 12 99 086 !86h 17' 28" 00644 !+ 6' 44" 4 8 100 082 !82h 05' 12" 00717 !+ 7' 17" 4 11 101 083 !83h 41' 12" 00101 !+ 1' 01" 1 13 102 083 !83h 36' 02" 00619 !+ 6' 19" 5 7 103 086 !86h 15' 02" 00440 !+ 4' 40" 1 17 104 089 !89h 40' 27" 00032 !+ 32" 0 8 105 091 !91h 00' 26" 00023 !+ 23" 1 4 106 087 !87h 52' 52" 00058 !+ 58" 1 5 107 085 !85h 48' 35" 00411 !+ 4' 11" 2 7 108 091 !91h 59' 27" 00122 !+ 1' 22" 2 12 109 086 !86h 12' 22" 00134 !+ 1' 34" 1 2 110 087 !87h 34' 47" 00321 !+ 3' 21" 2 14 111 083 !83h 56' 20" 00420 !+ 4' 20" 3 14 112 089 !89h 59' 06" 00737 !+ 7' 37" 4 19 113 084 !84h 46' 14" 00112 !+ 1' 12" 1 16 114 089 !89h 04' 48" 00405 !+ 4' 05" 2 14 115 086 !86h 20' 55" 00054 !+ 54" 0 15 [116 rows x 9 columns]] ``` This is all well and good, but the problem is it doesn't seem to be differentiating by rows. For instance, when I try to print just the first row, it reprints the whole dataset. Here's an example of trying to just print the first row and second column (so should just be one value): ``` print(data[0][2]) 0 Country 1 France 2 France 3 France 4 France 5 France 6 France 7 Luxembourg 8 France 9 France 10 Belgium 11 Belgium 12 Belgium 13 World War I 14 NaN 15 NaN 16 NaN 17 Belgium 18 Belgium 19 Belgium 20 Belgium 21 France 22 Italy 23 Italy 24 Belgium 25 Luxembourg 26 Luxembourg 27 Belgium 28 France 29 France ... 
86 Spain 87 United States 88 United States 89 Spain 90 Spain 91 Spain 92 Spain 93 Spain 94 Denmark 95 Germany 96 Italy 97 United States 98 United States 99 United States 100 United States 101 United States 102 United States 103 United States 104 Spain 105 Spain 106 Spain 107 Spain 108 Luxembourg 109 Australia 110 Great Britain 111 Great Britain 112 Italy 113 Great Britain 114 Great Britain 115 Great Britain Name: 1, Length: 116, dtype: object ``` **All I want is for this to behave as a data frame, with 116 rows and 9 columns. Any idea how to fix this?**<issue_comment>username_1: The pandas function `read_html` returns a list of dataframes. So in your case I believe you need to choose the first index of the returned list as done in the 8th line in the code below. Also note the you have a typo in the import line of BeautifulSoup, please update your code accordingly in the question. I hope my output is what you're looking for. Code: ``` import requests import pandas as pd from bs4 import BeautifulSoup response = requests.get("Https://en.wikipedia.org/wiki/List_of_Tour_de_France_general_classification_winners") parser = BeautifulSoup(response.content, 'html.parser') winners_table = parser.find_all('table')[1] data = pd.read_html(str(winners_table), flavor = 'lxml')[0] print("type of variable data: " + str(type(data))) print(data[0][2]) ``` Output: `type of variable data:` `1904` Note I used `lxml` instead of `html5lib` Upvotes: 3 [selected_answer]<issue_comment>username_2: If we take a look at the documentation [here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html) we can see that read\_html actually outputs a *list* of DataFrames and not a single DataFrame. We can confirm this when we run: ``` >> print(type(data)) ``` The format of the list is such that the first element of the list is the actual DataFrame containing your values. ``` >> print(type(data[0])) ``` The simple solution to this is to reassign `data` to `data[0]`. 
From this you can then call individual rows. Indexing of rows for DataFrames doesn't behave like normal lists so I would recommend looking into `.iloc` and `.loc`. [This](https://www.shanelynn.ie/select-pandas-dataframe-rows-and-columns-using-iloc-loc-and-ix/) is a nice article I found on indexing of DataFrames. An example of this solution: ``` >> data = data[0] >> print(data.iloc[1]) 0 1903 1 France 2 Garin, MauriceM<NAME> 3 La Française 4 2,428 km (1,509 mi) 5 094 !94h 33' 14" 6 24921 !+ 2h 59' 21" 7 3 8 6 Name: 1, dtype: object ``` Upvotes: 2 <issue_comment>username_3: You could try this: ``` df = data[0] # iterate through the data frame using iterrows() for index, row in df.iterrows(): print ("Col1:", row[0], " Col2: ", row[1], "Col3:", row[2], "Col4:", row[3]) #etc for all cols ``` I hope this helps! Upvotes: 1
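The underlying indexing rule is worth spelling out: on a DataFrame, plain `df[n]` selects a *column*, while `.iloc` selects by position — which is exactly why `data[0][2]` printed a whole column. A toy demo (invented rows, not the scraped table):

```python
import pandas as pd

# pd.read_html returns a list of DataFrames; mimic that shape with toy data:
dfs = [pd.DataFrame([["Year", "Country", "Cyclist"],
                     ["1903", "France", "Maurice Garin"],
                     ["1904", "France", "Henri Cornet"]])]

df = dfs[0]                       # step 1: take the first table out of the list
assert df[1].tolist() == ["Country", "France", "France"]            # column 1
assert df.iloc[1].tolist() == ["1903", "France", "Maurice Garin"]   # row 1
assert df.iloc[1, 1] == "France"                                    # one cell
```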
2018/03/14
596
1,921
<issue_start>username_0: I want to update multiple records from table "a" depending on each other. The values of the table "a" look like: ``` +------------+---------------+-------+ | date | transfervalue | value | +------------+---------------+-------+ | 01.03.2018 | 0 | 10 | | 02.03.2018 | 0 | 6 | | 03.03.2018 | 0 | 13 | +------------+---------------+-------+ ``` After the update the values of the table "a" should look like: ``` +------------+---------------+-------+ | date | transfervalue | value | +------------+---------------+-------+ | 01.03.2018 | 0 | 10 | | 02.03.2018 | 10 | 6 | | 03.03.2018 | 16 | 13 | +------------+---------------+-------+ ``` What is the most efficient way to do this? I've tried three different solutions, but the last solution doesn't work. * Solution 1: do a loop and iterate over each day to do the update statement * Solution 2: do an update statement for each day * Solution 3: do the update for the whole timespan in one statement The output of solution 3 was: ``` +------------+---------------+-------+ | date | transfervalue | value | +------------+---------------+-------+ | 01.03.2018 | 0 | 10 | | 02.03.2018 | 10 | 6 | | 03.03.2018 | 6 | 13 | +------------+---------------+-------+ ```<issue_comment>username_1: You seem to want a cumulative sum: ``` with toupdate as ( select t.*, sum(value) over (order by date rows between unbounded preceding and 1 preceding) as running_value from t ) update toupdate set transfervalue = coalesce(running_value, 0); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This should work: ``` select t1.*, coalesce((select sum(value) from table1 t2 where t2.date < t1.date), 0) MyNewValue from table1 t1 ``` Upvotes: 0
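The correlated-subquery variant from the second answer can be checked end to end with SQLite. A sketch (shortened names; `val` instead of `value` to sidestep keyword quoting, and ISO dates so string comparison orders correctly):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (d TEXT, transfervalue INT, val INT)")
con.executemany("INSERT INTO a VALUES (?, ?, ?)",
                [("2018-03-01", 0, 10), ("2018-03-02", 0, 6), ("2018-03-03", 0, 13)])

# One statement for the whole timespan: each row receives the sum of all EARLIER values.
con.execute("""
    UPDATE a
    SET transfervalue = COALESCE(
        (SELECT SUM(b.val) FROM a AS b WHERE b.d < a.d), 0)
""")

rows = con.execute("SELECT d, transfervalue, val FROM a ORDER BY d").fetchall()
assert rows == [("2018-03-01", 0, 10), ("2018-03-02", 10, 6), ("2018-03-03", 16, 13)]
```

Because the subquery only reads `val`, which the UPDATE never modifies, the result does not depend on the order in which rows are updated.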
2018/03/14
836
3,366
<issue_start>username_0: Most of [Jest](https://facebook.github.io/jest/docs/en/expect.html)'s expect(arg1).xxxx() methods will throw an exception if the comparison fails to match expectations. One exception to this pattern seems to be the toMatchSnapshot() method. It seems to never throw an exception and instead stores the failure information for later Jest code to process. How can we cause toMatchSnapshot() to throw an exception? If that's not possible, is there another way that our tests can detect when the snapshot comparison failed?<issue_comment>username_1: If you run a test (e.g. /Foobar.test.js) which contains a `toMatchSnapshot` matcher jest by default will create a snapshot file on the first run (e.g. /\_\_snapshots\_\_/Foobar.test.js.snap). This first run that creates the snapshot will pass. If you want the test to fail you need to commit the snapshot alongside with your test. The next test builds will compare the changes you make to the committed snapshot and if they differ the test will fail. Here is the official [link](https://facebook.github.io/jest/docs/en/snapshot-testing.html) to the Documentation on 'Snapshot Testing' with Jest. Upvotes: 0 <issue_comment>username_2: One, less than ideal, way to cause toMatchSnapshot to throw an exception when there is a snapshot mismatch is to edit the implementation of toMatchSnapshot. Experienced Node developers will consider this to be bad practice, but if you are very strongly motivated to have that method throw an exception, this approach is actually easy and depending on how you periodically update your tooling, only somewhat error-prone. The file of interest will be named something like "node\_modules/jest-snapshot/build/index.js". The line of interest is the first line in the method: ``` const toMatchSnapshot = function (received, testName) { this.dontThrow && this.dontThrow(); const currentTestName = .... ``` You'll want to split that first line and omit the calling of *this.dontThrow()*. 
The resulting code should look similar to this: ``` const toMatchSnapshot = function (received, testName) { //this.dontThrow && this.dontThrow(); const currentTestName = .... ``` A final step you might want to take is to send a feature request to the Jest team or support an existing feature request that is of your liking like the following: [link](https://github.com/facebook/jest/issues/5802) Upvotes: 0 <issue_comment>username_3: This will work! After running your `toMatchSnapshot` assertion, check the global state: `expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1);` Just spent the last hour trying to figure it out for our own tests. This doesn't feel hacky to me either, though a maintainer of Jest may be able to tell me whether accessing `Symbol.for('$$jest-matchers-object')` is a good idea or not. Here's a full code snippet for context: ```js const GLOBAL_STATE = Symbol.for('$$jest-matchers-object'); describe('Describe test', () => { it('should test something', () => { try { expect({}).toMatchSnapshot(); // replace with whatever you're trying to test expect(global[GLOBAL_STATE].state.snapshotState.matched).toEqual(1); } catch (e) { console.log(`\x1b[31mWARNING!!! Catch snapshot failure here and print some message about it...`); throw e; } }); }); ``` Upvotes: 3
2018/03/14
729
2,073
<issue_start>username_0: I created a Dataframe with two columns and would like to append them based on the counting of values from other array. ``` cols = ['count_pos','count_neg'] df_count = pd.DataFrame(columns=cols) ``` I have the array y with values like y = [1,-1,-1,1,1,1,1,-1,-1] Now i want to update for every change in value in y, count those occurrences and append to respective columns. ``` for i in range(1,10): if y[i] == -1: print(y[i]) if count_pos > 0: df_count.loc['count_pos'].append = count_pos count_pos = 0 count_neg = count_neg - 1 else: if count_neg< 0: print(count_neg) df_count.loc['count_neg'].append = count_neg count_neg = 0 count_pos = count_pos + 1 ``` But I am not getting the result.Please let me know how can I append values to dataframe column. My desired output is df\_count ``` count_pos count_neg 1 -2 4 -2 ```<issue_comment>username_1: Count consecutive groups of positive/negative values using `groupby`: ``` s = pd.Series(y) v = s.gt(0).ne(s.gt(0).shift()).cumsum() pd.DataFrame( v.groupby(v).count().values.reshape(-1, 2), columns=['pos', 'neg'] ) pos neg 0 1 2 1 4 2 ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Adapted from @username_1's answer: ``` a = pd.Series([-1, 2, 15, 3, 45, 5, 23, 0, 6, -4, -8, -5, 3, -9, -7, -36, -71, -2, 25, 47, -8]) def pos_neg_count(a): v = a.ge(0).ne(a.ge(0).shift()).cumsum() vals = v.groupby(v).count().values cols = ['pos', 'neg'] if a[0] >= 0 else ['neg', 'pos'] try: result = pd.DataFrame(vals.reshape(-1, 2), columns=cols) except ValueError: vals = np.insert(vals, len(vals), 0) result = pd.DataFrame(vals.reshape(-1, 2), columns=cols) return result pos_neg_count(a) # neg pos # 0 1 8 # 1 3 1 # 2 5 2 # 3 1 0 ``` I think, this would take care of cases where the array being reshaped has odd no. of elements. Upvotes: 0
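The pandas answers above count runs by building a group id from sign changes. As a language-agnostic cross-check of that same logic, here is a plain-Python sketch using `itertools.groupby` (the function name `pos_neg_runs` is made up here for illustration; it reports run lengths as positive counts, unlike the signed `count_neg` values in the question):

```python
from itertools import groupby

def pos_neg_runs(values):
    # Group consecutive elements by sign, then count the length of each run.
    runs = [(is_pos, sum(1 for _ in group))
            for is_pos, group in groupby(values, key=lambda x: x > 0)]
    pos = [n for is_pos, n in runs if is_pos]
    neg = [n for is_pos, n in runs if not is_pos]
    return pos, neg

# y = [1, -1, -1, 1, 1, 1, 1, -1, -1] yields positive runs [1, 4]
# and negative runs [2, 2], matching the count_pos/count_neg columns
# in the question up to the sign of the negative counts.
pos, neg = pos_neg_runs([1, -1, -1, 1, 1, 1, 1, -1, -1])
```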
2018/03/14
291
1,023
<issue_start>username_0: `Color.GREEN` looks like an attribute and not an object if so then how can I assign class member(`Color.GREEN`) to an object reference of type Color? ``` import java.awt.*; public class StopLight { public static final Color GREEN = Color.GREEN; public static final Color YELLOW = Color.YELLOW; public static final Color RED = Color.RED; public StopLight() { state = GREEN; } private Color state; } ```<issue_comment>username_1: It is a public static object defined in java awt's [`Color`](https://docs.oracle.com/javase/7/docs/api/java/awt/Color.html) : ``` /** * The color green. In the default sRGB space. */ public final static Color green = new Color(0, 255, 0); /** * The color green. In the default sRGB space. * @since 1.4 */ public final static Color GREEN = green; ``` So you can access it as `Color.GREEN`. Upvotes: 3 [selected_answer]<issue_comment>username_2: It's a *constant*, and of course an object of class `Color`. Upvotes: 1
2018/03/14
589
2,165
<issue_start>username_0: In sql-server, There are three column in same table,i want to make Column\_C from Column\_A and Column\_B i want to get in same table in sql ``` Column_A Column_B Column_C 1,2,3,4 2,3 1,2,3 1 Column_A Column_B Column_C 1,2,3,4 2,3 1,4 1,2,3 1 2,3 ```<issue_comment>username_1: I'm very unsure what the actual table looks like or the columns for that matter... But I'll give you a general response for now and you can let me know if you need something else: ``` SELECT (ColumnA - ColumnB) AS Column C FROM tablename ``` But since your table isn't normalized, this is very confusing. As well, you should never store multiple values separated by a comma in your table, because you can do it, but it'll make SQL development SOOOO much harder. Upvotes: 0 <issue_comment>username_2: This should do it. It replaces Column\_B's string present in Column\_A with an empty string. It also deals with potential double commas and leading/trailing commas ``` SELECT Column_A, Column_B, LTRIM( RTRIM( REPLACE( REPLACE(Column_A, Column_B, ''), ',,', ','), ','), ',') AS Column_C FROM Table1 ``` This is correct for Oracle SQL. If you are using a different DBMS, I think you just need to use the corresponding TRIM syntax. Upvotes: 0 <issue_comment>username_3: If you use Postgres, you can install the [intarray](https://www.postgresql.org/docs/current/static/intarray.html) extension, then the solution is as simple as: ``` select column_a, column_b, array_to_string( string_to_array(column_a,',')::int[] - string_to_array(column_b,',')::int[] , ',') as column_c from badly_designed_table ``` despite the horrible design. `string_to_array(column_a,',')::int[]` converts the string to an array. The intarray extensions provides the `-` operator that removes elements from the first array that are contained in the second. `array_to_string()` then converts the array back to a string. Upvotes: 2 [selected_answer]
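If the subtraction has to happen outside the database, the set difference that the intarray `-` operator performs on the split values can be sketched in a few lines of Python (illustrative only; `subtract_csv` is an invented helper name, and this keeps the original order of Column_A's values):

```python
def subtract_csv(column_a, column_b):
    # Treat both columns as comma-separated lists and remove every
    # value of column_b from column_a, preserving column_a's order.
    to_remove = set(column_b.split(","))
    kept = [v for v in column_a.split(",") if v not in to_remove]
    return ",".join(kept)

print(subtract_csv("1,2,3,4", "2,3"))  # 1,4
print(subtract_csv("1,2,3", "1"))      # 2,3
```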
2018/03/14
2,028
5,924
<issue_start>username_0: With `java.time` , I'm trying to format the time as the following "2018-03-15T23:47:15+01:00" . With this formatter I'm close to the result in Scala. ``` val formatter: DateTimeFormatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ssZ") ZonedDateTime.now() // 2018-03-14T19:25:23.397+01:00 ZonedDateTime.now().format(formatter) // => 2018-03-14 19:25:23+0100 ``` But I cannot insert the extra character "T" between the day and hour. What does this "T" mean BTW ? **How to format as "2018-03-15T23:47:15+01:00"** ? *Notes:* In case you wonder why `LocalDateTime` cannot be formatted [Format LocalDateTime with Timezone in Java8](https://stackoverflow.com/questions/25561377/format-localdatetime-with-timezone-in-java8)<issue_comment>username_1: Try this ``` val ZONED_DATE_TIME_ISO8601_FORMATTER3 = DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSxxx") ZonedDateTime.now().format(ZONED_DATE_TIME_ISO8601_FORMATTER3) // 2018-03-14T19:35:54.321+01:00 ``` See [here](https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html) > > Offset X and x: This formats the offset based on the number of pattern letters. One letter outputs just the hour, such as '+01', unless the minute is non-zero in which case the minute is also output, such as '+0130'. Two letters outputs the hour and minute, without a colon, such as '+0130'. Three letters outputs the hour and minute, with a colon, such as '+01:30'. Four letters outputs the hour and minute and optional second, without a colon, such as '+013015'. Five letters outputs the hour and minute and optional second, with a colon, such as '+01:30:15'. Six or more letters throws IllegalArgumentException. Pattern letter 'X' (upper case) will output 'Z' when the offset to be output would be zero, whereas pattern letter 'x' (lower case) will output '+00', '+0000', or '+00:00'. 
> > > Upvotes: 2 <issue_comment>username_2: As your question already shows, you may just rely on `ZonedDateTime.toString()` for getting a string like `2018-03-14T19:25:23.397+01:00`. BTW, that string is in ISO 8601 format, the international standard. Only two minor modifications may be needed: * If you don’t want the fraction of second — well, I don’t see what harm it does, it agrees with ISO 8601, so whoever receives your ISO 8601 string should be happy to have it. But if you don’t want it, you may apply `myZonedDateTime.truncatedTo(ChronoUnit.SECONDS)` to get rid of it. * `ZonedDateTime.toString()` often appends a zone name, for example `2018-03-14T19:25:23+01:00[Europe/Paris]`, which is not part of the ISO 8601 standard. To avoid that, convert to `OffsetDateTime` before using *its* `toString` method: `myZonedDateTime.toOffsetDateTime().toString()` (or `myZonedDateTime.truncatedTo(ChronoUnit.SECONDS).toOffsetDateTime().toString()`). Building your own formatter through a format pattern string is very flexible when this is what you need. However, very often we can get through with less (and then should do for the easier maintainability of our code): `toString` methods or built-in formatters including both the ISO ones and the localized ones that we can get from `DateTimeFormatter.ofLocalizedPattern()`. > > What does this "T" mean BTW ? > > > The `T` is part of the ISO 8601 format. It separates the date part from the time-of-day part. You may think of it as T for time since it denotes the start of the time part. If there is only a date (`2018-04-25`) or only a time-of-day (`21:45:00`), the `T` is not used, but when we have both, the `T` is required. You may think that the format might have been specified without the `T`, and you are probably right. When it comes to the format for periods/durations it is indispensable, however, and also needed when there are no days: `P3M` means a period of 3 months, while `PT3M` means 3 minutes. 
**Link:** Read more in the [Wikipedia article on ISO 8601](https://en.wikipedia.org/wiki/ISO_8601). Upvotes: 0 <issue_comment>username_3: Converting the `ZonedDateTime` to `OffsetDateTime` - as suggested in the other answers - works, but if you want to use a `DateTimeFormatter`, there's a built-in constant that does the job: ``` ZonedDateTime.now().format(DateTimeFormatter.ISO_OFFSET_DATE_TIME) ``` But it's important to note some differences between all the approaches. Suppose that the `ZonedDateTime` contains a date/time equivalent to `2018-03-15T23:47+01:00` (the seconds and milliseconds are zero). All the approaches covered in the answers will give you different results. `toString()` omits seconds and milliseconds when they are zero. So this code: ``` ZonedDateTime zdt = // 2018-03-15T23:47+01:00 zdt.toOffsetDateTime().toString() ``` prints: > > 2018-03-15T23:47+01:00 > > *only hour and minute, because seconds and milliseconds are zero* > > > The built-in formatter will omit only the milliseconds if it's zero, but it'll print the seconds, regardless of the value. So this: ``` zdt.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME) ``` prints: > > 2018-03-15T23:47:**00**+01:00 > > *seconds printed, even if it's zero; milliseconds ommited* > > > And the formatter that uses an explicit pattern will always print all the fields specified, regardless of their values. So this: ``` zdt.format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss.SSSxxx")) ``` prints: > > 2018-03-15T23:47:**00.000**+01:00 > > *seconds and milliseconds are printed, regardless of their values* > > > --- You'll also find a difference in values such as `2018-03-15T23:47:10.120+01:00` (note the `120` milliseconds). `toString()` and `ofPattern` will give you: > > 2018-03-15T23:47:10.120+01:00 > > > While the built-in `DateTimeFormatter.ISO_OFFSET_DATE_TIME` will print only the first 2 digits: > > 2018-03-15T23:47:10.12+01:00 > > > Just be aware of these details when choosing which approach to use. 
Upvotes: 1
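As a quick sanity check of the target format from another standard library: Python's `datetime.isoformat()` happens to emit exactly this ISO 8601 shape — date, the `T` separator, time, and a UTC offset with a colon:

```python
from datetime import datetime, timezone, timedelta

# An aware datetime in a +01:00 offset, matching the example in the question.
tz = timezone(timedelta(hours=1))
dt = datetime(2018, 3, 15, 23, 47, 15, tzinfo=tz)

# isoformat() omits fractional seconds when the microsecond field is zero,
# similar to OffsetDateTime.toString() truncated to seconds.
print(dt.isoformat())  # 2018-03-15T23:47:15+01:00
```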
2018/03/14
2,910
7,478
<issue_start>username_0: I am working with Azure Functions on Premises V2 (Runtime). Trying to test a simple function that is executed each minute and write in logs. However Function is not executed, and I am getting this error. It is running on a Virtual Machine with Windows server 2016. (what I think could be causing this problem). I am new wiht Docker and Hyper-V, any help will be appreciated. There are some command lets I can execute, to comprobate if Hyper V and Docker are running well ? Thank You in Advance. This is the Log I'm getting. 3/14/2018 5:42:44 PM Welcome, you are now connected to log-streaming service.3/14/2018 5:41:48 PM [myteslaVM] Function App 'MyContainerFuncApp' is starting...3/14/2018 5:41:49 PM [myteslaVM] Pull container image azfuncrun/azure-functions-runtime:preview2-latest-nanoserver3/14/2018 5:41:50 PM [myteslaVM] preview2-latest-nanoserver: Pulling from azfuncrun/azure-functions-runtime3/14/2018 5:41:50 PM [myteslaVM] Digest: sha256:dfdb47a7638d0cdd1a42c603e3e59311ee0d229222f88329a7c561fc3d8b2ff33/14/2018 5:41:50 PM [myteslaVM] Status: Image is up to date for azfuncrun/azure-functions-runtime:preview2-latest-nanoserver 3/14/2018 5:41:50 PM [myteslaVM] The container image azfuncrun/azure-functions-runtime:preview2-latest-nanoserver has been pulled successfully! 3/14/2018 5:41:50 PM [myteslaVM] Provisioning container MyContainerFuncApp\_03/14/2018 5:41:51 PM [myteslaVM] f2788b6c9c5d6115bba891f463759dd0df75c2344e6c217b0280a8b9f686fb32 3/14/2018 5:41:51 PM [myteslaVM] C:\Program Files\Azure Functions Runtime\docker\docker.exe: Error response from daemon: container f2788b6c9c5d6115bba891f463759dd0df75c2344e6c217b0280a8b9f686fb32 ``` encountered an error during CreateContainer: failure in a Windows system call: No hypervisor is present on this system. 
(0xc0351000) extra info: {"SystemType":"Container","Name":"f2788b6c9c5d6115bba891f463759dd0df75c2344e6c217b0280a8b9f686fb32","Owner":"docker","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\Program Files\\Azure Functions Runtime\\docker\\windows\\windowsfilter\\f2788b6c9c5d6115bba891f463759dd0df75c2344e6c217b0280a8b9f686fb32","Layers":[{"ID":"951f96e7-ad8c-5e5c-8533-4f70d49a4e6a","Path":"C:\\Program Files\\Azure Functions ``` "HostName":"f2788b6c9c5d","MappedDirectories":[{"HostPath":"c:\windows\temp\fwas\mycontainerfuncapp","ContainerPath":"c:\home","ReadOnly":false,"BandwidthMaximum":0,"IOPSMaximum":0,"CreateInUtilityVM":false}],"HvPartition":true,"EndpointList":["d6c4c527-5520-40f1-9c84-2f3d8a013900"],"HvRuntime":{"ImagePath":"C:\Program Files\Azure Functions Runtime\docker\windows\windowsfilter\545c1be2ad30f1933a07de48e4ffe611900bacafbf4b717de969b56fb8a725d6\UtilityVM"},"AllowUnqualifiedDNSQuery":true}. 3/14/2018 5:41:51 PM [myteslaVM] Issuing stop command to container: MyContainerFuncApp\_ ``` 03/14/2018 5:41:52 PM [myteslaVM] Error response from daemon: Cannot kill container: MyContainerFuncApp_0: No such container: MyContainerFuncApp_03/14/2018 5:41:52 PM [myteslaVM] Stop complete for container: MyContainerFuncApp_03/14/2018 5:41:52 PM [myteslaVM] ExecuteCommand called with C:\Program Files\Azure Functions Runtime\docker\docker.exe returned 125.3/14/2018 5:42:03 PM [myteslaVM] Function App 'MyContainerFuncApp' is starting...3/14/2018 5:42:04 PM [myteslaVM] Pull container image azfuncrun/azure-functions-runtime:preview2-latest-nanoserver3/14/2018 5:42:06 PM [myteslaVM] preview2-latest-nanoserver: Pulling from azfuncrun/azure-functions-runtime3/14/2018 5:42:06 PM [myteslaVM] Digest: sha256:dfdb47a7638d0cdd1a42c603e3e59311ee0d229222f88329a7c561fc3d8b2ff3 3/14/2018 5:42:06 PM [myteslaVM] Status: Image is up to date for azfuncrun/azure-functions-runtime:preview2-latest-nanoserver3/14/2018 5:42:06 PM [myteslaVM] The container image 
azfuncrun/azure-functions-runtime:preview2-latest-nanoserver has been pulled successfully!3/14/2018 5:42:06 PM [myteslaVM] Provisioning container MyContainerFuncApp_13/14/2018 5:42:07 PM [myteslaVM] 10dfbab15acef64780c34d5d01ded776a10d074500b33ba916b032920404d1d13/14/2018 5:42:07 PM [myteslaVM] C:\Program Files\Azure Functions Runtime\docker\docker.exe: ```<issue_comment>username_1: After some research I did next steps in order to try to resolve this error. 1.Windows Server 2016 Configuration <https://app.pluralsight.com/player?course=installing-windows-server-2016&author=greg-shields&name=installing-windows-server-2016-m2&clip=5&mode=live> ``` --Install Features and Roles. ``` 2.Install Hyper-V ``` https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/get-started/install-the-hyper-v-role-on-windows-server To check Hyper-V installed, run this cmd let in Power Shell. Get-WindowsFeature -ComputerName Get-WindowsFeature -ComputerName MyteslaVM To Install Hyper-V Add-WindowsFeature –name RSAT-Hyper-V-Tools https://social.technet.microsoft.com/Forums/windows/en-US/f750d5e3-69f8-4cbd-a7aa-98e2fd41c618/need-to-install-hyperv-management-tools-on-server-2012-vm?forum=winserverhyperv ``` 3.Execute Docker version. 4.Execute Dockerd "Daemon" . ``` https://www.bountysource.com/issues/40602674-can-t-start-docker-service-on-windows-server-vm dockerd --debug Stop-service docker Get-ContainerNetwork | Remove-ContainerNetwork -Force Start-service docker ``` 5. 
Get-VM | Set-VMProcessor -ExposeVirtualizationExtensions $true ``` https://github.com/Azure/Azure-Functions/issues/359 ``` 6.Install Docker ``` https://learn.microsoft.com/en-us/virtualization/windowscontainers/quick-start/quick-start-windows-server Install-Module -Name DockerMsftProvider -RequiredVersion 1.0.0.3 Install-Module -Name DockerMsftProvider -Repository PSGallery -Force Install-Package -Name docker -ProviderName DockerMsftProvider ``` 7.Install The latest azure-functions-core-tools: ``` npm i -g azure-functions-core-tools@core Run npm i -g npm **to update ``` 8. Install .NET Core 2.0 <https://www.microsoft.com/net/download/windows> 9.To enable nested virtualization, you need to run the following cmdlet in the Hyper-V host: ``` Get-VM | Set-VMProcessor -ExposeVirtualizationExtensions $true ``` 10.Run Script ``` https://github.com/moby/moby/issues/19685 ``` 11.Download azure-functions-runtime:2.0.0-nanoserver ``` docker pull microsoft/azure-functions-runtime:2.0.0-nanoserver-1709 ``` 12.One of the Hyper-V components is not running <https://social.technet.microsoft.com/Forums/en-US/b4d0761d-6048-4cb8-9a1f-1a2544b4ceb2/one-of-the-hyperv-components-is-not-running?forum=win10itprovirt> 13.Disable Windows defender. <https://www.windowscentral.com/how-permanently-disable-windows-defender-windows-10> Upvotes: 0 <issue_comment>username_1: It seems as Azuer Function runtime component is not working for Windows Server 2016, because containers. I tried it with a new Virtual Machine, with Windows 10 Pro. And it is now working. Steps: 1. Create Virtual Machine, with Windows 10 Pro 2. Install/enable Hyper-V 3. Install and configure Component for Azure Functions. [see image for Azure Function Running](https://i.stack.imgur.com/jLEsG.png) Upvotes: 0 <issue_comment>username_1: Researching more on the subject, I realized that it does not work for Basic Plan (SKU) for virtual Machines. Only works for SKU in Standard plan. Applies for both Windows Pro and Windows Server 2016. 
[![enter image description here](https://i.stack.imgur.com/GDCFo.jpg)](https://i.stack.imgur.com/GDCFo.jpg) Upvotes: 1
2018/03/14
708
2,371
<issue_start>username_0: Given this example class template:

```
template <class T>
class Stack {
    T * data;
    int size;
    int nextIndex;
public:
    Stack(int size = 100);
    Stack(const Stack& stack);
    ~Stack();
    Stack& operator=(const Stack& s);
    void push(const T& t);
    void pop();
    T& top();
    const T& top() const;
    int getSize() const;
    class Full { };
    class Empty { };
};

template <class T>
void Stack::push(const T& t) {
    if (nextIndex >= size) {
        throw Full();
    }
    data[nextIndex++] = t;
}

template <class T>
void Stack::pop() {
    if (nextIndex <= 0) {
        throw Empty();
    }
    nextIndex--;
}
```

Is the implementation of the `push` and `pop` methods ok? I don't understand if I need to write `void Stack<T>::push(const T& t)` instead of `void Stack::push(const T& t)` (and the same for the `pop` method).

NOTE: Eclipse (according to C++11) gives me the next error:

> Member declaration not found

because of these lines:

```
void Stack::push(const T& t) {
void Stack::pop() {
```
<issue_comment>username_1: > The part of the implementation of the push method and pop method — is it ok? I don't understand if I need to write `void Stack<T>::push(const T& t)` instead of `void Stack::push(const T& t)` (and the same for the pop method).

You need to use

```
template <class T>
void Stack<T>::push(const T& t) { ... }

template <class T>
void Stack<T>::pop() { ... }
```

The name `Stack` is the same as `Stack<T>` inside the class template definition when it is used as a typename. Outside the class template definition, you have to supply the template parameter explicitly.

Upvotes: 4 [selected_answer]<issue_comment>username_2: Both `push()` and `pop()` are (non-template) *member functions* of the `Stack` *class template*. Since this *class template*, `Stack`, is parameterized by a *type template parameter* (i.e.: `T`), so are those member functions as well. Therefore, the implementation of those member functions needs a type template parameter:

```
template <typename A>
void Stack<A>::push(const A& t) { ... }

template <typename B>
void Stack<B>::pop() { ... }
```

Note that the name of the template parameter is actually irrelevant (`A` and `B` above). Note as well that the name of the class template `Stack` not followed by any *template arguments* **inside the body of its definition** is equivalent to the class template with its *template parameter* as the *template argument*, i.e.: `Stack<T>`.

Upvotes: 2
2018/03/14
935
2,411
<issue_start>username_0: I want to split a binary string into chunks of a given size. When I try to split by 6 I get an error, because the length of the binary string is not a multiple of 6. The result must still show the last chunk even though the length is not a multiple of six.

**My code**

```
static String s="";

public static String countBit(String message) {
    int k = Integer.parseInt(Test2.s);
    String result="";
    long m=0L;
    for(int i = 0; i < message.length(); i += k) {
        result += message.substring(i, i + k) + " ";
        long n = Long.parseLong(message.substring(i, i+k), 2);
        m = n;
        System.out.print(m+" ");
    }
    return result;
}

public static void main(String[] args) throws IOException {
    String message = "0101100010000100011000000100010000010000010000000000000001101101";
    BufferedReader br=new BufferedReader(new InputStreamReader(System.in));
    System.out.print("Input bit split = ");
    s=br.readLine();
    System.out.println("\nBinary = "+countBit(message));
}
```

When I input a bit split of 8, it shows every binary chunk and its decimal value, but when I input 6 I get an error because the binary string length is not a multiple of 6. The output must be like this:

```
22 8 17 32 17 1 1 0 0 6 13
```

**Binary**

```
010110 001000 010001 100000 010001 000001 000001 000000 000000 000110 1101
```
<issue_comment>username_1: Just the for loop from your code:

```
for(int i = 0; i < message.length(); i += k) {
    result += message.substring(i, Math.min(i + k, message.length())) + " ";
    long n = Long.parseLong(message.substring(i, Math.min(i + k, message.length())), 2);
    m += n;
    System.out.print(n+" ");
}
```

Used `Math.min(i + k, message.length())` to ensure that the substring doesn't go out of bounds — note that the end index is `message.length()`, not `message.length() - 1`, since `substring`'s end index is exclusive and subtracting 1 would drop the last bit. Also, changed the `System.out` to print `n` instead of `m`.
Upvotes: 2 [selected_answer]<issue_comment>username_2: If you are using Java 8 you can use:

```
String message = "0101100010000100011000000100010000010000010000000000000001101101";
List<Long> result = Arrays.asList(message.split("(?<=\\G.{6})"))
        .stream().map(bin -> Long.parseLong(bin, 2))
        .collect(toList());
```

Outputs

```
22 8 17 32 17 1 1 0 0 6 13
```

---

**Details**

1. `message.split("(?<=\\G.{6})")` splits the string into chunks of 6 characters
2. `map(bin -> Long.parseLong(bin, 2))` converts each element from binary to Long
3. `collect(toList())` collects the result as a List of Long

Upvotes: 0
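The same chunk-then-parse logic is easy to cross-check in Python, where slicing past the end of a string is safe, so the trailing 4-bit chunk needs no special case (a sketch for illustration, not a translation of either answer):

```python
def split_bits(message, k):
    # Slice the bit string into chunks of k characters; the final slice
    # may be shorter when len(message) is not a multiple of k.
    chunks = [message[i:i + k] for i in range(0, len(message), k)]
    # Parse each chunk as a base-2 integer.
    return chunks, [int(chunk, 2) for chunk in chunks]

message = "0101100010000100011000000100010000010000010000000000000001101101"
chunks, values = split_bits(message, 6)
print(values)  # [22, 8, 17, 32, 17, 1, 1, 0, 0, 6, 13]
```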
2018/03/14
720
2,585
<issue_start>username_0: I am trying to do a basic chaining of RxJS HTTP calls where the first call is going to pass the ID of an object that was created server-side into the second call. What I am observing with my implementation is that only the first call is made and the second chained call is never executed. Below is roughly the code that I have. First call: ``` createItem(itemData) { return this.http.post(createUrl, itemData) .map((res) => res.json()) .catch(this.handleError); } ``` Second call: ``` addFileToItem(id, file): Observable { return this.http.post(secondUrl + id, file) .map((res) => log(res)) .catch(this.handleError); } ``` Function defined mapping one call into the other: ``` createAndUpload(itemData, file): Observable { return createItem(itemData, file) .map((res) => { if (file) { return addFileToItem(res.id, file); } }); } ``` And finally, where the observable is executed through a `subscribe()`: ``` someFunction(itemData, file) { createAndUpload(itemData, file) .subscribe(res => { log(res); //go do something else }; } ```<issue_comment>username_1: Apparently the `map()` function doesn't actually return anything, meaning that an `Observable` object is not returned. Instead, a transforming operator such as `mergeMap` is needed. Here is the code that ended up working, also using the "newer" `.pipe()` operator. The only place the code needed to be changed was in the Observable function where I defined the combination of the two separate Observables. In order to access the `mergeMap` operator, don't forget to import it from `rxjs/operators`. For a reason that I will eventually figure out, I couldn't access the `mergeMap` without the pipe... 
``` createAndUpload(itemData, file): Observable { return createItem(itemData, file) .pipe( mergeMap((res) => { if (file) { return addFileToItem(res.id, file); } }) ) } ``` Upvotes: 1 <issue_comment>username_2: The problem is in `createAndUpload` where `map` just turns the result from the first call into an Observable but you never subscribe to it. So simply said you just need to use `mergeMap` (or `concatMap` in this case it doesn't matter) instead of `map`. The `mergeMap` operator will subscribe to the inner Observable and emit its result: ``` createAndUpload(itemData, file): Observable { return createItem(itemData, file) .mergeMap((res) => { if (file) { return addFileToItem(res.id, file); } return Observable.empty(); // you can't return `undefined` from the inner Observable }); } ``` Upvotes: 3 [selected_answer]
2018/03/14
2,546
8,534
<issue_start>username_0: I am trying to create a stacked bar chart like in the following example: <http://jsfiddle.net/fct1p8j8/4/> The chart it self works just fine when hard coding the data, everything is good there. I am struggling to figure out how to get the data in the correct format from my database structure. Here is an example output of my data setup: ``` [ { "invDept": "Due Diligence", "programs": { "data": [ { "program": "Brand Risk Management", "total": "1847" }, { "program": "Due Diligence", "total": "2718" }, { "program": "SAR", "total": "17858" } ] } }, { "invDept": "Sanctions", "programs": { "data": [ { "program": "Brand Risk Management", "total": "500" }, { "program": "Due Diligence", "total": "2100" }, { "program": "SAR", "total": "16593" } ] } } ] ``` The x axis will be the `invDepartment` value which come from the object. The series data is what I need to craft into the necessary format for the chart. For each department, I need the value of each program in an array format. For example, `Brand Risk Management` is the program name and I need the value of it from both the `Due Diligence` department and the `Sanctions` department. I started with doing a basic loop to create the array structure like so: // Get our departments for the X Axis ``` $.each(data.data, function (key, value) { d = value; xAxis.push(value.invDept); // If an array for the department doesn't exist, create it now if (typeof res[d.invDept] == "undefined" || !(res[d.invDept] instanceof Array)) { res[d.invDept] = []; } }); ``` From here I have something like: ``` res['Due Diligence'] = [] ``` I am stuck at this point. Not quite sure how I need to set up my loops to get this data in the flat format. 
The final output would be like this: ``` series: [{ name: 'Brand Risk Management', data: [1847, 500] }, { name: 'Due Diligence', data: [2718, 2100] }, { name: 'SAR', data: [17858, 16593] }] ```<issue_comment>username_1: You can use `array#reduce` to iterate through your array and store your values in an object accumulator. Iterate `programs` `data` using `array#forEach` and populate the object accumulator. Then extract out all the values using `Object.values()` ```js var data = [ { "invDept": "Due Diligence", "programs": { "data": [ { "program": "Brand Risk Management", "total": "1847" }, { "program": "Due Diligence", "total": "2718" }, { "program": "SAR", "total": "17858" }, { "program": "Sanctions - WLM", "total": "885" }] } }, { "invDept": "Sanctions", "programs": { "data": [ { "program": "Brand Risk Management", "total": "500" }, { "program": "Due Diligence", "total": "2100" }, { "program": "SAR", "total": "16593" }, { "program": "Sanctions - WLM", "total": "443" }] } } ], result = Object.values(data.reduce((r,o) => { o.programs.data.forEach(({program, total}) => { r[program] = r[program] || {name: program, data: []}; r[program].data.push(total); }); return r; },{})), output = {series: result}; console.log(output); ``` ```css .as-console-wrapper{ max-height: 100% !important; top: 0;} ``` Upvotes: 0 <issue_comment>username_2: Use [`Array.concat()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat), [`Array.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) and the [spread syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) to flatten the data to a single array. 
Then [reduces](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) the array to a [Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) that merges objects with the same key into the desired result. When done, converts the Map back to array with [`Map.values()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/values) and the spread syntax. ```js const data = [{"invDept":"Due Diligence","programs":{"data":[{"program":"Brand Risk Management","total":"1847"},{"program":"Due Diligence","total":"2718"},{"program":"SAR","total":"17858"},{"program":"Sanctions - WLM","total":"885"}]}},{"invDept":"Sanctions","programs":{"data":[{"program":"Brand Risk Management","total":"500"},{"program":"Due Diligence","total":"2100"},{"program":"SAR","total":"16593"},{"program":"Sanctions - WLM","total":"443"}]}}] const result = [... // spread the iterator to a new array // flatten the array [].concat(...data.map(({ programs }) => programs.data)) // reduce the data into a map .reduce((r, { program: name, total }) => { // if key doesn't exist create the object r.has(name) || r.set(name, { name, data: [] }) // get the object, and add the total to the data array r.get(name).data.push(total) return r; }, new Map()) .values()] // get the Map's values iterator console.log(result) ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: You can use the function `reduce`. 
```js var array = [ { "invDept": "Due Diligence", "programs": { "data": [ { "program": "Brand Risk Management", "total": "1847" }, { "program": "Due Diligence", "total": "2718" }, { "program": "SAR", "total": "17858" } ] } }, { "invDept": "Sanctions", "programs": { "data": [ { "program": "Brand Risk Management", "total": "500" }, { "program": "Due Diligence", "total": "2100" }, { "program": "SAR", "total": "16593" } ] } }], result = { series: Object.values(array.reduce((a, c) => { c.programs.data.forEach((d) => (a[d.program] || (a[d.program] = {data: [], name: d.program})).data.push(d.total)); return a; }, {}))}; console.log(result); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 1 <issue_comment>username_4: You can use the `reduce` method and store each program in a new object as the key to its array. I couldn't tell if you wanted duplicates, but if not you would simply replace the `array object` for a `set object` and `push` for `add` ``` let myPrograms = data.reduce((accumulator, item) => { item && item.programs && item.programs.data && accumulateData(item.programs.data); function accumulateData(program_data) { for (let item of program_data) { accumulator[item.program] || (accumulator[item.program] = [item.total], accumulator[item.program].push(item.total)); } } return accumulator; }, {}); ``` You can access the array of data inside `myPrograms` like this: ``` myPrograms["name of program"]; ``` ```js let data = [{ "invDept": "Due Diligence", "programs": { "data": [{ "program": "Brand Risk Management", "total": "1847" }, { "program": "Due Diligence", "total": "2718" }, { "program": "SAR", "total": "17858" }, { "program": "Sanctions - WLM", "total": "885" } ] } }, { "invDept": "Sanctions", "programs": { "data": [{ "program": "Brand Risk Management", "total": "500" }, { "program": "Due Diligence", "total": "2100" }, { "program": "SAR", "total": "16593" }, { "program": "Sanctions - WLM", "total": "443" } ] } } ]; let myPrograms 
= data.reduce((accumulator, item) => { item&&item.programs&&item.programs.data&& accumulateData(item.programs.data); function accumulateData(program_data) { for (let item of program_data) { accumulator[item.program] || (accumulator[item.program] = [item.total], accumulator[item.program].push(item.total)); } } return accumulator; }, {}); console.log(myPrograms); console.log(myPrograms["Brand Risk Management"]); ``` Upvotes: 0
2018/03/14
1,992
6,705
<issue_start>username_0: With a multidimensional array of dynamic length, how can I get an array of just the specefic key values. Let's say I just want color values. ``` 0 { color => green, size => large, } 1 { color => green, size => small, } 2 { color => orange, size => small, } ``` For example the desired output for that array would be: {green,green,orange}<issue_comment>username_1: You can use `array#reduce` to iterate through your array and store your values in an object accumulator. Iterate `programs` `data` using `array#forEach` and populate the object accumulator. Then extract out all the values using `Object.values()` ```js var data = [ { "invDept": "Due Diligence", "programs": { "data": [ { "program": "Brand Risk Management", "total": "1847" }, { "program": "Due Diligence", "total": "2718" }, { "program": "SAR", "total": "17858" }, { "program": "Sanctions - WLM", "total": "885" }] } }, { "invDept": "Sanctions", "programs": { "data": [ { "program": "Brand Risk Management", "total": "500" }, { "program": "Due Diligence", "total": "2100" }, { "program": "SAR", "total": "16593" }, { "program": "Sanctions - WLM", "total": "443" }] } } ], result = Object.values(data.reduce((r,o) => { o.programs.data.forEach(({program, total}) => { r[program] = r[program] || {name: program, data: []}; r[program].data.push(total); }); return r; },{})), output = {series: result}; console.log(output); ``` ```css .as-console-wrapper{ max-height: 100% !important; top: 0;} ``` Upvotes: 0 <issue_comment>username_2: Use [`Array.concat()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat), [`Array.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) and the [spread syntax](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) to flatten the data to a single array. 
Then [reduces](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) the array to a [Map](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) that merges objects with the same key into the desired result. When done, converts the Map back to array with [`Map.values()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map/values) and the spread syntax. ```js const data = [{"invDept":"Due Diligence","programs":{"data":[{"program":"Brand Risk Management","total":"1847"},{"program":"Due Diligence","total":"2718"},{"program":"SAR","total":"17858"},{"program":"Sanctions - WLM","total":"885"}]}},{"invDept":"Sanctions","programs":{"data":[{"program":"Brand Risk Management","total":"500"},{"program":"Due Diligence","total":"2100"},{"program":"SAR","total":"16593"},{"program":"Sanctions - WLM","total":"443"}]}}] const result = [... // spread the iterator to a new array // flatten the array [].concat(...data.map(({ programs }) => programs.data)) // reduce the data into a map .reduce((r, { program: name, total }) => { // if key doesn't exist create the object r.has(name) || r.set(name, { name, data: [] }) // get the object, and add the total to the data array r.get(name).data.push(total) return r; }, new Map()) .values()] // get the Map's values iterator console.log(result) ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: You can use the function `reduce`. 
Upvotes: 1
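For the question as literally asked — pulling a single key's values out of an array of objects — a plain `Array.map` call is the direct route (the data is hard-coded here purely for illustration):

```javascript
// Collect one key's value from every object in the array.
const items = [
  { color: "green",  size: "large" },
  { color: "green",  size: "small" },
  { color: "orange", size: "small" }
];

const colors = items.map(item => item.color);
console.log(colors); // [ 'green', 'green', 'orange' ]
```

`map` preserves order and length; if some objects might lack the key, chain a `.filter(c => c !== undefined)` afterwards.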
2018/03/14
556
1,976
<issue_start>username_0: I need to go through an array of objects in javascript but no for, forEach loop, of filter function would recognize the array elements. Array length shows as 0. When I log the array variable in Chrome, it does contain in some way the elements I need to go through though and the array length shows as the correct one (6). [![Console logging in Chrome](https://i.stack.imgur.com/TnP16.png)](https://i.stack.imgur.com/TnP16.png) Any ideas how I can loop through these Drupal.Ajax elements?<issue_comment>username_1: Try using a Promise (or async/wait if you are using ES6+) to wait for the elements to fill the array. This way you never loop over the target array until there is an element in it. ``` let targetArray = []; let p = new Promise(resolve => { // get elements from a file or over the network // or the simplest is to just set a small timeout resolve(targetArray); }); p.then(arr => { if (arr.length > 0) { // loop over arr } }); ``` For older vanilla JS callbacks can be used also. Upvotes: 1 <issue_comment>username_2: Problem ------- As already answered in comments, the array is not filled when you try to operate on it, see the following example: ``` const foo = [] console.log(foo) foo.push(1) console.log(foo) // output [] 0: 1 length: 1 __proto__: Array(0) [1] 0: 1 length: 1 __proto__: Array(0) ``` Notice the difference in the first line, where chrome outputs the real value of the array: ``` [] [1] ``` But once you expand the output you can see that chrome evaluates the values and adds a **i** icon that alerts you about this fact. [![chrome console](https://i.stack.imgur.com/F9kb7.png)](https://i.stack.imgur.com/F9kb7.png) Solution -------- It all depends on your code, since you didn't share any code is hard to advance any solution. But the approach is to be sure that the value of the array is already populated before attempt any operations. Upvotes: 3 [selected_answer]
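The live-reference behaviour described in the accepted answer can be made visible without relying on Chrome's lazy evaluation: log a serialized snapshot of the array instead of the array itself. A minimal sketch:

```javascript
const arr = [];

// Chrome keeps a live reference: expanding this entry later shows the
// array's *current* contents, flagged with the small "i" icon.
console.log(arr);

// A serialized snapshot is fixed at the moment this line runs.
const before = JSON.stringify(arr); // "[]"

arr.push(1);

const after = JSON.stringify(arr);  // "[1]"
console.log(before, after);
```

If the two snapshots differ, the array really was empty when the first log line ran — which is exactly the situation where a loop over it does nothing.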
2018/03/14
799
2,941
<issue_start>username_0: I am new to AWS. I created a VPC and created 2 subnets (1 private and 1 public). Enabled the internet gateway and updated the Route table with internet gateway entries. Then created 2 Linux EC2 Instances for the private subnet and public subnet and associated security groups accordingly i.e. for public instance (SSH and HTTP) and for private (SSH, HTTP, ICMP, HTTPS) inbound rules. I am using putty as a windows user to ssh into my public instance using mypkv.ppk and successfully able to login into that. And i am successfully able to ping the private instance from this public instance using ``` command: ping private-ip-address ``` But I don't know how to ssh into this private instance through this public instance? I have tried the following command: ``` ssh ec2-user@private-ip-address -i mypvk.pem ssh ec2-user@private-ip-address -i mypvk.ppk ``` where mypvk.pem and mypvk.ppk are files stored onto my public instance which are private keys given my AWS while creating EC2 instances. This above command says: ``` Enter passphrase for key 'mypvk.pem': Enter passphrase for key 'mypvk.ppk': ``` I don't know the passphrase or what does that mean and how to ssh into private instance? Any help would be appreciated.<issue_comment>username_1: Passphrase is just another passoword to gain access to system, if you have tried converting pem to ppk then you would see that in putty ... since you don't know what passphrase is I can assume that you did not enter a passphrase for the key ssh ec2-user@private-ip-address -i mypvk.pem ; use this and not the ppk file as in the second command , can you get more logs on this .. 
probably with the verbose flag: ssh ec2-user@private-ip-address -i mypvk.pem -vv. Possibly you tried to set up the public key, but ended up setting it with a passphrase. Upvotes: 1 <issue_comment>username_2: Assuming both the public and private instances were launched with the same key, in order to gain access to the private instance from the public instance you will need to enable key forwarding with putty. There is a good answer on how to do this on [superuser](https://superuser.com/questions/878943/how-to-use-putty-for-forwarding-keys), but basically it involves running the agent (pageant.exe) and adding the appropriate key(s). Upvotes: 0 <issue_comment>username_3: 1. Enable agent forwarding in your putty. `Under “Connection” -> “SSH” -> “Auth”.` 2. Login to Public Instance and do `ssh ec2-user@private-ip-address` [![enter image description here](https://i.stack.imgur.com/QV51b.png)](https://i.stack.imgur.com/QV51b.png) [Image Source](https://www.howtogeek.com/125364/how-to-ssh-hop-with-key-forwarding-from-windows/) Upvotes: 2 [selected_answer]<issue_comment>username_4: copy the pem file to your instance in the public subnet, then try to connect to your private instance. Make sure there are no errors in the contents of the pem file, not even a single character. Upvotes: 0
2018/03/14
878
3,246
<issue_start>username_0: I have the below code that I am running ``` try: child = pexpect.spawn( ('some command --path {0} somethingmore --args {1}').format( ,something), timeout=300) child.logfile = open(file\_name,'w') child.expect('x\*') child.sendline(something) child.expect('E\*') child.sendline(something)) #child.read() child.interact() time.sleep(15) print child.status except Exception as e: print "Exception in child process" print str(e) ``` Now, the command in pexpect creates subprocess by taking the one of the input from a loop, now everytime it spins up a subprocess I try to capture the logs via the child.read, in this case it waits for that subprocess to complete before going to the loop again, how do I make it to keep running it in the background(I get the logs of command input/output that I enter dynamically, but not of the process that runs thereafter unless I use the read or interact? I used this [How do I make a command to run in background using pexpect.spawn?](https://stackoverflow.com/questions/34921807/how-do-i-make-a-command-to-run-in-background-using-pexpect-spawn) but it uses interact which again waits for that subprocess to complete .. since the loop will be iterated alomst more than 100 times I cannot wait on one to complete before moving to other, as the command in pexpect is an AWS lambda call, all I need to make sure is the command is triggered but I am not able to capture the process output of that call without waiting for it to complete.... Please let me know your suggestions
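The pattern being asked for is launch-everything-first, collect-output-later: keep each spawned child (and its log file) in a list instead of calling `interact()`/`read()` inside the loop, then reap them after the loop — with pexpect that would mean calling `child.expect(pexpect.EOF)` on each stored child at the end. The same shape, sketched here with the standard library's `subprocess` module so it runs anywhere (the commands and log paths are made up for illustration):

```python
import subprocess
import sys
import tempfile

jobs = range(3)  # stand-in for the real ~100-iteration loop
procs = []

for i in jobs:
    log = tempfile.NamedTemporaryFile(mode="w+", suffix=f"-job{i}.log", delete=False)
    # Launch without waiting: stdout/stderr stream into the per-job log file.
    proc = subprocess.Popen(
        [sys.executable, "-c", f"print('job {i} done')"],
        stdout=log,
        stderr=subprocess.STDOUT,
    )
    procs.append((proc, log))  # keep the handle; do NOT block here

# Every child is now running concurrently; reap them after the loop.
exit_codes = [proc.wait() for proc, _ in procs]

outputs = []
for _, log in procs:
    log.seek(0)          # the child advanced the shared file offset
    outputs.append(log.read())
    log.close()
```

If only "fire the command" matters (as with an AWS Lambda invocation), the reap step can even be skipped or deferred; the key point is that nothing inside the loop waits on a child.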
2018/03/14
722
2,932
<issue_start>username_0: I am wondering if it is possible how to add a similar to flattened layer for images of variable length. Say we have an input layer for our CNN as: ``` input_shape=(1, None, None) ``` After performing your typical series of convolution/maxpooling layers, can we create a flattened layer, such that the shape is: ``` output_shape=(None,...) ``` If not, would someone be able to explain why not?<issue_comment>username_1: I don't think you can because the compile step uses those dimensions to allocate fixed memory when your model is instanced for training or prediction. Some dimensions need to be known ahead of time, so the matrix dimensions can be allocated. I understand why you want variable-sized image input, the world is not (226, 226, 3). It depends on your specific goals, but for me, scaling up or windowing to a region of interest using say Single Shot Detection as a preprocessing step may be helpful. You could just start with Keras's ImageDataGenerator to scale all images to a fixed size - then you see how much of a performance gain you get from conditional input sizing or windowing preprocessing. @mikkola, I have found flatten to be very helpful for TimeDistributed models. You can add flatten after the convolution steps using: ``` your_model.add(Flatten()) ``` Upvotes: 0 <issue_comment>username_2: You can add `GlobalMaxPooling2D` and `GlobalAveragePooling2D`. These will eliminate the spatial dimensions and keep only the channels dimension. `Max` will take the maximum values, `Average` will get the mean value. I don't really know why you can't use a `Flatten` layer, but in fact you can't with variable dimensions. I understand why a `Dense` wouldn't work: it would have a variable number of parameters, which is totally infeasible for backpropagation, weight update and things like that. (PS: Dense layers act only on the last dimension, so that is the only that needs to be fixed). 
Examples: * A Dense layer requires the last dimension fixed * A Conv layer can have variable spatial dimensions, but needs fixed channels (otherwise the number of parameters will vary) * A recurrent layer can have variable time steps, but needs fixed features and so on Also, notice that: * For classification models, you'd need a fixed dimension output, so, how to flatten and still guarantee the correct number of elements in each dimension? It's impossible. * For models with variable output, why would you want to have a fixed dimension in the middle of the model anyway? If you're going totally custom, you can always use `K.reshape()` inside a `Lambda` layer and work with the tensor shapes: ``` import keras.backend as K def myReshape(x): shape = K.shape(x) batchSize = shape[:1] newShape = K.variable([-1],dtype='int32') newShape = K.concatenate([batchSize,newShape]) return K.reshape(x,newShape) The layer: Lambda(myReshape) ``` Upvotes: 1
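Why global pooling sidesteps the variable-size problem can be seen without any framework: each channel collapses to a single number, so the output length depends only on the channel count, never on height or width. A toy pure-Python illustration (not Keras code):

```python
def global_max_pool(feature_map):
    """Reduce a channels x height x width nested list to one value per channel."""
    return [max(max(row) for row in channel) for channel in feature_map]

# Two inputs with different spatial sizes but the same 2 channels:
small = [[[1, 2], [3, 4]],                      # channel 0 is 2x2
         [[5, 6], [7, 8]]]                      # channel 1 is 2x2
large = [[[1, 2, 3], [9, 4, 5], [0, 1, 2]],    # channel 0 is 3x3
         [[1, 1, 1], [1, 1, 1], [1, 2, 1]]]    # channel 1 is 3x3

print(global_max_pool(small))  # [4, 8]
print(global_max_pool(large))  # [9, 2] -- same output length either way
```

Both calls return a fixed-length vector, which is exactly what a following `Dense` layer needs.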
2018/03/14
1,497
5,515
<issue_start>username_0: I tried many ways to solve this problem, but I couldn't. My `tableView` jumps after it loads more data. I call the downloading method in `willDisplay`: ``` func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) { let lastObject = objects.count - 1 if indexPath.row == lastObject { page = page + 1 getObjects(page: page) } } ``` and insert rows here: ``` func getObjects(page: Int) { RequestManager.sharedInstance.getObjects(page: page, success: { objects in DispatchQueue.main.async(execute: { self.objects = self.objects + objects self.tableView.beginUpdates() var indexPaths = [IndexPath]() for i in 0...objects.count - 1 { indexPaths.append(IndexPath(row: i, section: 0)) } self.tableView.insertRows(at: indexPaths, with: .bottom) self.tableView.endUpdates() }); }) } ``` So what do I wrong? Why `tableView` jumps after inserting new rows?<issue_comment>username_1: I had a similar problem with `tableView`. Partially I decided this with [beginUpdates()](https://developer.apple.com/documentation/uikit/uitableview/1614908-beginupdates) and [endUpdates()](https://developer.apple.com/documentation/uikit/uitableview/1614890-endupdates) ``` self.tableView.beginUpdates() self.tableView.endUpdates() ``` But this didn't solve the problem. For iOS 11, the problem remained. I added an array with the heights of all the cells and used this data in the method [tableView(\_:heightForRowAt:)](https://developer.apple.com/documentation/uikit/uitableviewdelegate/1614998-tableview) ``` func tableView(_ tableView: UITableView, heightForRowAt indexPath: IndexPath) -> CGFloat { return cellHeights[indexPath.row] ?? 0 } ``` Also add this method [tableView(\_:estimatedHeightForRowAt:)](https://developer.apple.com/documentation/uikit/uitableviewdelegate/1614926-tableview) ``` func tableView(_ tableView: UITableView, estimatedHeightForRowAt indexPath: IndexPath) -> CGFloat { return cellHeights[indexPath.row] ?? 
0 } ``` After that, the jumps stopped. Upvotes: 3 <issue_comment>username_2: Because your loop runs from 0 to objects count: ``` for i in 0...objects.count - 1 { indexPaths.append(IndexPath(row: i, section: 0)) } ``` The indexpaths generated counting for row 0 till object's count. and hence the rows are getting added at top of table (i.e. at row 0) and hence causing tableview to jump as you are there at bottom of tableview. Try changing range as: ``` let rangeStart = self.objects.count let rangeEnd = rangeStart + (objects.count - 1) for i in rangeStart...rangeEnd { indexPaths.append(IndexPath(row: i, section: 0)) } ``` Hope it helps..!!! Upvotes: 0 <issue_comment>username_3: First, check your `tableView(_:estimatedHeightForRowAt:)` - this will never be accurate but the more likely the cell height ends up with this estimate the less work the table view will do. So if there are 100 cells in your table view, 50 of them you are sure will end up with a height of 75 - that should be the estimate. Also it's worth a while noting that there is no limit on the number of times the table view may ask its delegate of the exact cell height. So if you have a table view of 1000 rows there will a big performance issue on the layout out of the cells (delays in seconds) - implementing the estimate reduces drastically these calls. Second thing you need to revisit the cell design, are there any views or controls whose height need to calculated by the table view? Like an image with top and bottom anchors equivalent to some other view whose height changes from cell to cell? The more fixed heights these views/ controls have the easier it becomes for the table view to layout its cells. I had the same issue with two table views, one of them had a variable height image embedded into a stack view where I had to implement the estimate. The other didn't had fixed size images and I didn't need to implement the estimate to make it scroll smoothly. Both table views use pagination. 
Last but not least, arrays are structs. structs are value types. So maybe you don't want to store any heights in an array, see how many copies you're making? calculating the heights inside `tableView(_:heightForRowAt:)` is quite fast and efficient enough to work out really well. Upvotes: 1 <issue_comment>username_4: > > I have just find the solution to stop jumping the table view while > inserting multiple rows in the table View. Am facing this issue from > last few days so finally I resolved the issue. > > > We have to just set the **content offset** of table view while > inserting rows in the table view. You have to just pass your array of > IndexPath rest you can use this function. > > > Hope so this method will help you. ``` func insertRows() { if #available(iOS 11.0, *) { self.tableView.performBatchUpdates({ self.tableView.setContentOffset(self.tableView.contentOffset, animated: false) self.tableView.insertRows(at: [IndexPath], with: .bottom) }, completion: nil) } else { // Fallback on earlier versions self.tableView.beginUpdates() self.tableView.setContentOffset(self.tableView.contentOffset, animated: false) self.tableView.insertRows(at: [IndexPath], with: .right) self.tableView.endUpdates() } } ``` Upvotes: 4
2018/03/14
1,195
4,604
<issue_start>username_0: I'm trying to deploy my nginx on docker container with Cloudflare. docker-compose.yml ``` version: "3.5" services: nginx: image: xxx/panel-nginx:VERSION volumes: - type: volume source: panel_nginx_certs target: /etc/nginx/certs ports: - target: 443 published: 443 protocol: tcp mode: host networks: - panel_nginx stop_grace_period: 1m deploy: replicas: 1 update_config: parallelism: 1 delay: 180s restart_policy: condition: on-failure networks: panel_nginx: external: true volumes: panel_nginx_certs: external: true ``` nginx.conf ``` upstream panel-uwsgi { server panel_app:8000; } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name panel.xxx.com; ssl_certificate /etc/nginx/certs/panel.pem; ssl_certificate_key /etc/nginx/certs/panel.key; location / { include uwsgi_params; uwsgi_pass panel-uwsgi; } # location /media/ { # root /usr/share/nginx/html; # try_files $uri $uri/; # access_log off; # expires 30d; # } location /static/ { access_log off; expires 30d; } location = /favicon.ico { return 204; access_log off; log_not_found off; } # error_page 404 /404.html; # error_page 500 502 503 504 /50x.html; # location = /50x.html { # root /usr/share/nginx/html; # } } gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ssl_protocols TLSv1.2 TLSv1.3; ssl_prefer_server_ciphers on; ssl_ciphers "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305"; ssl_ecdh_curve secp384r1; ssl_dhparam /etc/nginx/certs/dhparam.pem; ssl_session_timeout 24h; ssl_session_cache shared:SSL:12m; ssl_session_tickets off; ssl_stapling on; ssl_trusted_certificate /etc/nginx/certs/cloudflare_origin_ecc.pem; ssl_stapling_verify on; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 5s; server_names_hash_bucket_size 64; ssl_client_certificate /etc/nginx/certs/cloudflare.crt; 
ssl_verify_client on; server_tokens off; charset utf-8; add_header X-Robots-Tag none; add_header Warning 'This computer system including all related equipment, network devices (specifically including Internet access), are provided only for authorized use. Unauthorized use may subject you to criminal prosecution. By accessing this system, you have agreed to the term and condition of use and your actions will be monitored and recorded.'; ``` Deploying with this config causes a 525 error: SSL handshake failed. I have a dedicated IP and SNI. I'm using chacha-poly ciphers. I'm publishing port 443. Any thoughts why I'm getting this error? Thanks Disclaimer: Stackoverflow, please remove the warning about too much code and not enough text. Thanks<issue_comment>username_1: If the record is gray clouded (not proxied through Cloudflare) are you able to connect to the server using SSL? It's likely an issue on the origin server in the config, but eliminating Cloudflare at least temporarily makes troubleshooting easier. You can also try curl -Ikv --resolve panel.backupner.com:443:your.ip.address https://panel.backupner.com for additional details which may be helpful in troubleshooting. Actually just checked... looks like your site is working now, if you figured out the root cause it would be helpful to post what the issue was. :) Upvotes: 0 <issue_comment>username_2: In my case, replacing OpenSSL with LibreSSL solved the problem. Upvotes: 2 [selected_answer]
2018/03/14
460
1,746
<issue_start>username_0: I am stuck in a query; if you have time can you let me know how to fix it? I have a (hypothetical) table with TypeID, SubTypeID, Option1, Option2 and Option3 columns; last three columns are Boolean. So, I might have something like this: ``` TypeID SubTypeID Option1 Option2 Option3 1 5 false false false 2 0 true false false 2 0 false true true 2 0 true true true ``` What I am trying to get, in case TypeID=2 is the following, using example above: ``` 'Option 1' 'Option 2, Option 3' 'Option1, Option 2, Option 3' ``` I tried this but getting syntax error and I am not even sure it is correct: ``` case when fd.TypeID=1 then ft.SubType else (case when fd.Option1=1 then 'Option 1,' else (case when fd.Option2=1 then 'Option 2,' else (case when fd.Option3=1 then 'Option 3' else '' ))) as SubType, ```
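The expression above fails because each nested `CASE` is missing its `END`, and the `ELSE` chaining means at most one option could ever be reported. Concatenating one self-contained `CASE` per flag produces the comma-separated list. A sketch of that logic run against SQLite via Python's stdlib `sqlite3` (table and column names follow the question; other dialects may want `+` or `CONCAT` instead of `||`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fd (TypeID INT, Option1 INT, Option2 INT, Option3 INT)")
conn.executemany(
    "INSERT INTO fd VALUES (2, ?, ?, ?)",
    [(1, 0, 0), (0, 1, 1), (1, 1, 1)],
)

# One CASE per flag, each closed with END, concatenated together;
# TRIM strips the trailing ", " when the later options are unset.
rows = conn.execute("""
    SELECT TRIM(
        CASE WHEN Option1 = 1 THEN 'Option 1, ' ELSE '' END ||
        CASE WHEN Option2 = 1 THEN 'Option 2, ' ELSE '' END ||
        CASE WHEN Option3 = 1 THEN 'Option 3'   ELSE '' END,
    ', ') AS SubType
    FROM fd WHERE TypeID = 2
""").fetchall()

for (subtype,) in rows:
    print(subtype)
# Option 1
# Option 2, Option 3
# Option 1, Option 2, Option 3
```

The `TypeID=1` branch from the question (returning `ft.SubType`) can wrap this whole concatenation in one outer `CASE ... END`.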
2018/03/14
818
3,175
<issue_start>username_0: **Please read the whole Q before disliking or commenting something. I have searched on internet before posting it here.** I'm having the below project structure. ``` pages(package) > Homepage.java test(package) > Flipkart.java ``` Inside **Homepage.java** i have declared all the WebElements using **POM Page Factory methods** and created **respective method to click on Electronics link.** ``` @FindBy(xpath = '//input[@title='Electronics']') private WebElement lnkElectronics; ``` Inside **Fipkart.java** I'm calling the **Electronics click method**. My doubt over here is the declared WebElement is specifically for Electronics. **Is there a way i can create a WebElement with type like mentioned below and pass value for %s dynamically from main method?** ``` @FindBy(xpath = '//input[@title='%s']') private WebElement lnkElectronics; ```<issue_comment>username_1: As per the *Test Design Consideration* following [*Page Object Design Pattern*](https://docs.seleniumhq.org/docs/06_test_design_considerations.jsp#page-object-design-pattern) : * A *Page Object* is an object-oriented class that serves as an interface to a page of the *Application Under Test*. Your `@Tests` uses the methods of this *Page Object* class whenever they need to interact with the *User Interface* of that page. The benefit is that if the UI changes for the page your `@Tests` themselves don’t needs to be changed. Only the code within the *Page Object* needs to be changed. Advantages : + Clean separation between test code and page specific code such as locators, methods and layout. + A single repository for the operations offered by the page rather than having these services scattered throughout the tests. Based on these *Page Factory* features you **won't be able** to create any generic *WebElement* for which you can pass value dynamically from `main()` or `@Test` annotated method e.g. 
``` @FindBy(xpath = '//input[@title='%s']') private WebElement lnkElectronics; ``` You can find almost a similar discussion in [Where should I define modal specific code in Selenium Page Object Model pattern](https://stackoverflow.com/questions/49001569/where-should-i-define-modal-specific-code-in-selenium-page-object-model-pattern/49002231#49002231) Upvotes: 1 <issue_comment>username_2: Answer referenced from [Page Object Model in Selenium](http://chercher.tech/java/page-object-model-selenium-webdriver) You cannot create a FindBy with Variable, FindBy accepts only constants. In case if you want to achieve that variability then you should write or find the element using normal findElement method Upvotes: 2 <issue_comment>username_3: **WorkAround : on page class you can define a method, and can pass the text on the fly from the calling class to click on specific tab** if you want to click any common text on the page. You can create a method as given below and can pass the text on the fly to click on that specific tab on that page ``` public void clickTab(String tabText){ String tabxpath = "//div[contains(text(), '" + tabText + "')]"; driver.findElement(By.xpath(tabxpath)).click(); } ``` Upvotes: 0
2018/03/14
780
3,024
<issue_start>username_0: I have a xaml which contains a tab control (Name="MyTabControl"). I'm a beginner with wpf and in code and I want to dynamically add tab items that I then add a list box to each created tab item. Below is the code I have so far. ``` ListBox listbox = new ListBox() TabItem tab = new TabItem() tab.AddChild(listbox) MyTabControl.Add(tab) ``` My issue is that I can't figure out how dynamically create new tabs that also would add a list box to each new tab and the new tabs then added to MyTabControl. Then, I would want to be able to access each list box control, individually, in each tab to edit the list box content. How is this done in code? How can i access the created list box controls to edit them?
2018/03/14
768
2,940
<issue_start>username_0: I'm opening a new html file on button press using this: ``` window.location = "menu.html"; ``` I want to go back to the index.html file in my "menu" activity after a button press. I tried using ``` window.location = "index.html"; ``` But it creates a new screen and if I click the return buton to go to my homepage it returns to the previous activities. Also ``` window.opener.location = '/redirect.html'; window.close(); ``` Doesn't work. So how do I close the menu activity to go back to my main activity. PS: should I use window.location or window.location.href to open new html file Thanks in advance!
2018/03/14
1,263
3,511
<issue_start>username_0: I am trying to publish an sbt plugin to a local file repo. In the plugin's build.sbt I have: ``` publishTo := Some(Resolver.file("localtrix", file("/Users/jast/repo/localtrix"))) ``` I run the `publish` task and it gets published fine to `/Users/jast/repo/localtrix/org/me/sbt-plugin_2.12_1.0/1.2.3` In another project, I want to resolve this plugin. in `project/plugins.sbt` I have: ``` resolvers += Resolver.file("localtrix", file("/Users/jast/repo/localtrix")) addSbtPlugin("org.me" % "sbt-plugin" % "1.2.3") ``` I try to run sbt in the this project and I get: ``` [info] Updating ProjectRef(uri("file:/Users/jast/playspace/untitled38/project/"), "untitled38-build")... [warn] module not found: org.me#sbt-plugin;1.2.3 [warn] ==== typesafe-ivy-releases: tried [warn] https://repo.typesafe.com/typesafe/ivy-releases/org.me/sbt-plugin/scala_2.12/sbt_1.0/1.2.3/ivys/ivy.xml [warn] ==== sbt-plugin-releases: tried [warn] https://repo.scala-sbt.org/scalasbt/sbt-plugin-releases/org.me/sbt-plugin/scala_2.12/sbt_1.0/1.2.3/ivys/ivy.xml/2017.2+4-3037ba82+20180314-1919/ivys/ivy.xml [warn] ==== local: tried [warn] /Users/jast/.ivy2/local/org.me/sbt-plugin/scala_2.12/sbt_1.0/1.2.3/ivys/ivy.xml [warn] ==== public: tried [warn] https://repo1.maven.org/maven2/org/me/sbt-plugin_2.12_1.0/1.2.3/sbt-plugin-1.2.3.pom [warn] ==== local-preloaded-ivy: tried [warn] /Users/jast/.sbt/preloaded/org.me/sbt-plugin/scala_2.12/sbt_1.0/1.2.3/ivys/ivy.xml [warn] ==== local-preloaded: tried [warn] file:////Users/jast/.sbt/preloaded/org/me/sbt-plugin_2.12_1.0/1.2.3/sbt-plugin-1.2.3.pom [warn] ==== localtrix: tried [warn] :::::::::::::::::::::::::::::::::::::::::::::: [warn] :: UNRESOLVED DEPENDENCIES :: [warn] :::::::::::::::::::::::::::::::::::::::::::::: [warn] :: org.me#sbt-plugin;1.2.3: not found [warn] :::::::::::::::::::::::::::::::::::::::::::::: ``` So how can I publish to a local repo it in a way that also gets resolved correctly? 
Note: `publishLocal` and resolving from `.ivy2/local` works, but I want to be able to publish to a repo that I can copy to another machine without messing with that directory.<issue_comment>username_1: You missed the Scala version in the name. You also have a strange suffix `_1.0` in the plugin name in your published artifact, so just fixing the Scala version may not be enough. This should work. ``` addSbtPlugin("org.me" % "sbt-plugin_2.12_1.0" % "1.2.3") ``` If you find out where this suffix `_1.0` came from, fixing the Scala version should help: ``` addSbtPlugin("org.me" %% "sbt-plugin" % "1.2.3") ``` **Update after comment** Ok, thanks, I did not know that plugins work differently. But try defining the resolver differently (works for me): ``` resolvers += "localtrix" at "file:///Users/jast/repo/localtrix" addSbtPlugin("org.me" % "sbt-plugin" % "1.2.3") ``` Upvotes: 1 <issue_comment>username_2: sbt plugins by default are published ivy-style, so when you refer to your local repository, use `Resolver.ivyStylePatterns`. To publish: ``` publishTo := Some(Resolver.file("localtrix", file("/Users/jast/repo/localtrix"))(Resolver.ivyStylePatterns)) ``` And to resolve: ``` resolvers += Resolver.file("localtrix", file("/Users/jast/repo/localtrix"))(Resolver.ivyStylePatterns) addSbtPlugin("org.me" % "sbt-plugin" % "1.2.3") ``` --- Alternatively you can set `publishMavenStyle := true` for the plugin, but I see that you already figured that out. Upvotes: 3 [selected_answer]
2018/03/14
1,386
4,965
<issue_start>username_0: I am implementing the json parsing using retrofit but I did not get any response from the url. The Json data is in the form of an array inside which there are objects. I tried to set the response coming from the url in the TextView. I have implemented it as follows: My JSON structure is as follows: ``` [{ "id": "2", "name": "Nbn", "code": "001", "ecozone": "ter", "devregion": "east" }, { "id": "3", "name": "hari", "code": "002", "ecozone": "hill", "devregion": "west" } ] ``` Api Service ``` public interface ApiService { @GET("/data_new.php?q=district") Call<District> getMyJSON(); } ``` RetrofitClient ``` public class RetrofitClient { private static final String url = "url/"; private static Retrofit getRetrofitInstance() { OkHttpClient client=new OkHttpClient(); return new Retrofit.Builder() .baseUrl(url) .client(client) .addConverterFactory(GsonConverterFactory.create()) .build(); } public static ApiService getApiService(){ return getRetrofitInstance().create(ApiService.class); } } ``` My ModelDto class ``` public class DistrictDTO { @SerializedName("id") @Expose String id; public String getId() { return id; } public void setId(String id) { this.id = id; } @SerializedName("name") @Expose String name; public String getName() { return name; } public void setName(String name) { this.name = name; } @SerializedName("code") @Expose String code; public String getCode() { return code; } public void setCode(String code) { this.code = code; } @SerializedName("ecozone") @Expose String ecozone; public String getEcozone() { return ecozone; } public void setEcozone(String ecozone) { this.ecozone = ecozone; } @SerializedName("devregion") @Expose String devregion; public String getDevregion() { return devregion; } public void setDevregion(String devregion) { this.devregion = devregion; } } ``` District class ``` public class District { public ArrayList<DistrictDTO> districtDTOS; public ArrayList<DistrictDTO> getDistrictDTOS() { return districtDTOS; } public void setDistrictDTOS(ArrayList<DistrictDTO> districtDTOS) {
this.districtDTOS = districtDTOS; } } ``` And Activity ``` public class DistrictActivity extends AppCompatActivity { private ArrayList<DistrictDTO> district; private TextView textView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_district); district = new ArrayList<>(); textView = (TextView) findViewById(R.id.textVi); ApiService api = RetrofitClient.getApiService(); retrofit2.Call<District> call = api.getMyJSON(); call.enqueue(new Callback<District>() { @Override public void onResponse(retrofit2.Call<District> call, Response<District> response) { if (response.isSuccessful()) { //got successfully district = response.body().getDistrictDTOS(); Log.e("Respose", "" + district); textView.setText(response.body().getDistrictDTOS().toString()); } } @Override public void onFailure(retrofit2.Call<District> call, Throwable t) { t.fillInStackTrace(); } }); } } ```<issue_comment>username_1: try ``` public interface ApiService { @GET("/data_new.php?q=district") Call<DistrictDTO[]> getMyJSON(); } ``` you are calling an array, not an object Edit: Actually, I think you should rename the class DistrictDTO to District, and delete the other one.
And, change all the parts where you want that array to `District[]` Upvotes: 1 <issue_comment>username_2: Just make these changes it should work(**return array instead of object**) ``` public class DistrictActivity extends AppCompatActivity { private List<DistrictDTO> district; private TextView textView; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_district); district = new ArrayList<>(); textView = (TextView) findViewById(R.id.textVi); ApiService api = RetrofitClient.getApiService(); retrofit2.Call<List<DistrictDTO>> call = api.getMyJSON(); call.enqueue(new Callback<List<DistrictDTO>>() { @Override public void onResponse(retrofit2.Call<List<DistrictDTO>> call, Response<List<DistrictDTO>> response) { if (response.isSuccessful()) { //got successfully district = response.body(); Log.e("Respose", "" + district); textView.setText(response.body().toString()); } } @Override public void onFailure(retrofit2.Call<List<DistrictDTO>> call, Throwable t) { Toast.makeText(DistrictActivity.this,t.toString(),Toast.LENGTH_SHORT).show(); t.fillInStackTrace(); } }); } } ``` and your interface should be like this ``` public interface ApiService { @GET("/data_new.php?q=district") Call<List<DistrictDTO>> getMyJSON(); } ``` In fact you should expect an array instead of an object like above; just return a list of `DistrictDTO`, the `District` wrapper class is not needed. FYI: when the response from the server is of the form `{..}` you should expect an object (like you did), but when it is like `[..]` it is a JSON array and hence you should expect an array (this is your case, like I did in my answer) Upvotes: 0
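The distinction both answers rely on can be seen outside of Retrofit as well. A small sketch with Python's stdlib `json` module (illustrative only; the payload is a trimmed copy of the question's sample): a response whose top-level element is `[...]` deserializes to a list of objects, not to a single object, so the target type must be a list/array.

```python
import json

# Trimmed copy of the question's sample response: a top-level JSON array.
payload = '[{"id": "2", "name": "Nbn"}, {"id": "3", "name": "hari"}]'

districts = json.loads(payload)

# A top-level array parses to a list; each element corresponds to one
# DistrictDTO-like object.
assert isinstance(districts, list)
names = [d["name"] for d in districts]
print(names)  # ['Nbn', 'hari']
```

Expecting an object (a dict / a single `District`) here would fail, which is exactly why the mapped type in the interface has to be a list or array.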
2018/03/14
2,029
5,626
<issue_start>username_0: I have three tables in SQL Server 2014, each table has millions of data and keep growing. I am trying to find discrepancies between the tables, for example: ``` DECLARE @ab TABLE ( k1 int, k2 int, val char(1) ) DECLARE @cd TABLE ( k1 int, k2 int, val char(1), add_cd varchar(50) ) DECLARE @ef TABLE ( k1 int, k2 int, val char(1), add_ef varchar(50) ) INSERT INTO @ab VALUES(1,1,'a'), (2, 2, 'c'), (3, 3, 'c'), (4, 4, 'd'), (5, 5, NULL), (7, 7, 'g') INSERT INTO @cd VALUES(1,1,'a', 'DSFS'), (2, 2, 'b', 'ASDF'), (4, 4, NULL, 'SDFE') INSERT INTO @ef VALUES(1,1,'a', 'SD1245'), (2, 2, 'b', 'EW3464'), (3, 3, 'd', 'DF3452'),(4, 4, 'd', 'FG4576'), (6, 6, 'e', 'RT3453') ``` The common Key columns for all three sets are k1 and k2, I would like pull only the discrepancies either the value of "val" should be different or the key combination shouldnt exist in all three sets. No need to compare the additional columns(add\_cd and add\_ef) just needed in the final result. The desired result is: ``` k1 K2 val k1 k2 val add_cd k1 k2 val add_ef 2 2 c 2 2 b ASDF 2 2 b EW3464 3 3 c NULL NULL NULL NULL 3 3 d DF3452 4 4 d 4 4 NULL SDFE 4 4 d FG4576 5 5 NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL NULL 6 6 e RT3453 7 7 g NULL NULL NULL NULL NULL NULL NULL NULL ``` I tried the below query, it gives desired result but works only with few thousands not with millions. Created indexes for the key columns but sill I see it uses table scan. Can anyone please advise on this? ``` SELECT a.*, c.*, e.* FROM @ab a FULL OUTER JOIN @cd c ON a.k1 = c.k1 AND a.k2 = c.k2 FULL OUTER JOIN @ef e ON (c.k1 = e.k1 AND c.k2 = e.k2 ) OR (a.k1 = e.k1 AND a.k2 = e.k2 ) WHERE (a.k1 IS NULL OR c.k1 IS NULL OR e.k1 IS NULL) OR (ISNULL(a.val, '') != ISNULL(c.val, '')) OR (ISNULL(c.val, '') != ISNULL(e.val, '')) OR (ISNULL(a.val, '') != ISNULL(e.val, '')) ```<issue_comment>username_1: Would something like this work for you? 
``` SELECT Z.k1, Z.k2, Z.val, Y.k1, Y.k2, Y.val, Y.add_cd, X.k1, X.k2, X.val, X.add_ef FROM @ab AS Z FULL OUTER JOIN @cd AS Y ON Z.k1 = Y.k1 AND Z.k2 = Y.k2 FULL OUTER JOIN @ef AS X ON X.k1 = Y.k1 AND X.k2 = Y.k2 WHERE NOT EXISTS ( SELECT A.k1, A.k2, A.val, C.k1, C.k2, C.val, C.add_cd, E.k1, E.k2, E.val, E.add_ef FROM @ab AS A INNER JOIN @cd AS C ON A.k1 = C.k1 AND A.k2 = C.k2 AND A.val = C.val INNER JOIN @ef AS E ON C.k1 = E.k1 AND C.k2 = E.k2 AND C.val = E.val WHERE Z.k1 = A.k1 AND Z.k2 = A.k2 AND Y.k1 = C.k1 AND Y.k2 = C.k2 AND X.k1 = E.k1 AND X.k2 = E.k2 ) ``` I'm worried there may be nuances with your NULLS that are or are not compared the way you want them to be... Upvotes: 0 <issue_comment>username_2: I think you're going down the right path with using a `full outer join`, just need to make the where clause work for ya. Might not be the most efficient answer, but will do the trick. ``` select * from @ab as ab full outer join @cd as cd on ab.k1 = cd.k1 and ab.k2 = cd.k2 full outer join @ef as ef on ab.k1 = ef.k1 and ab.k2 = ef.k2 where ( isnull(ab.val, 'X') <> isnull(cd.val, 'XX') or isnull(ab.val, 'X') <> isnull(ef.val, 'XX') or isnull(cd.val, 'X') <> isnull(ef.val, 'XX') or coalesce(ab.val, cd.val, ef.val) is NULL ) order by coalesce(ab.k1, cd.k1, ef.k1) , coalesce(ab.k2, cd.k2, ef.k2) ``` The parenthesis are around the entire `where` clause just in case you ever add another constraint (don't want the compiler confusing `and`/`or` because of syntax). And the `order by` clause is only to help match the order of the expected output shown in the question. Upvotes: 0 <issue_comment>username_3: Your existing query is the right approach. There are some small changes you can make to improve it. Your index for each table should be on `k1`, `k2`, `val`: EDIT (my original NULL handling was not correct. 
The correct approach appears long-winded, but is probably the most efficient solution that is logically correct): ``` SELECT a.*, c.*, e.* FROM @ab a FULL OUTER JOIN @cd c ON a.k1 = c.k1 AND a.k2 = c.k2 FULL OUTER JOIN @ef e ON (c.k1 = e.k1 AND c.k2 = e.k2 ) --OR (a.k1 = e.k1 --This condition is not needed and will only slow performance --AND a.k2 = e.k2 ) WHERE (a.k1 IS NULL OR c.k1 IS NULL OR e.k1 IS NULL) --OR (ISNULL(a.val, '') != ISNULL(c.val, '')) --Wrapping the val columns in ISNULL prevents the indexes from being used --OR (ISNULL(c.val, '') != ISNULL(e.val, '')) --OR (ISNULL(a.val, '') != ISNULL(e.val, '')) OR ((a.val != c.val) OR (a.val IS NULL AND c.val IS NOT NULL) OR (a.val IS NOT NULL AND c.val IS NULL)) OR ((a.val != e.val) OR (a.val IS NULL AND e.val IS NOT NULL) OR (a.val IS NOT NULL AND e.val IS NULL)) OR ((e.val != c.val) OR (e.val IS NULL AND c.val IS NOT NULL) OR (e.val IS NOT NULL AND c.val IS NULL)) ``` When you need to compare nullable columns, it may feel more elegant to compare ISNULL() results, but inline functions prevent the query engine from using indexes, forcing table scans, which is the worst thing you can do for performance. Upvotes: 3 [selected_answer]
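The NULL-safe comparison logic of the accepted answer can be checked against the sample data with a small Python sketch (for illustration only, not a replacement for the indexed SQL): index each table by its `(k1, k2)` key, then keep any key where a row is missing from one table or the `val` columns disagree.

```python
# Each table reduced to {(k1, k2): val}, matching the question's sample data.
ab = {(1, 1): "a", (2, 2): "c", (3, 3): "c", (4, 4): "d", (5, 5): None, (7, 7): "g"}
cd = {(1, 1): "a", (2, 2): "b", (4, 4): None}
ef = {(1, 1): "a", (2, 2): "b", (3, 3): "d", (4, 4): "d", (6, 6): "e"}

keys = set(ab) | set(cd) | set(ef)
missing = object()  # sentinel: key absent from a table (distinct from val = NULL)

# A key is a discrepancy when the three lookups do not all agree
# (NULL-safe: None == None holds, and "missing" differs from None).
discrepancies = sorted(
    k for k in keys
    if len({ab.get(k, missing), cd.get(k, missing), ef.get(k, missing)}) > 1
)
print(discrepancies)  # [(2, 2), (3, 3), (4, 4), (5, 5), (6, 6), (7, 7)]
```

The result matches the rows in the question's desired output, while `(1, 1)` — identical in all three tables — is excluded.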
2018/03/14
903
3,219
<issue_start>username_0: I'm having problems with my Printer-Counter School Problem. It's supposed to be a multithreading application and runs fine so far. But when I run it the second or third time it won't work anymore.. No error message. Looks like the threads sleep forever or so. Also when I test it with a JUnit test it won't work. But sometimes it does... which is already strange itself. ``` public class CounterPrinter { public static void main(String[] args) throws InterruptedException { if (args.length != 2) { System.out.println("Usage: CounterPrinter "); System.exit(1); } Storage s = new Storage(); Printer d = new Printer(s, Integer.parseInt(args[1])); Counter z = new Counter(s, Integer.parseInt(args[0]), Integer.parseInt(args[1])); z.start(); d.start(); z.join(); d.join(); Thread.sleep(5000); } } public class Printer extends Thread { private Storage storage; private Integer ende; Printer(Storage s, Integer ende) { this.storage = s; this.ende = ende; } @Override public void run() { while (storage.hasValue()) { try { System.out.print(storage.getValue(ende) + " "); Thread.sleep(50); } catch (InterruptedException e) { e.printStackTrace(); } } } } public class Counter extends Thread { private Storage storage; private int max, min; Counter(Storage s, int min, int max) { this.storage = s; this.max = max; this.min = min; } @Override public void run() { for (int i = min; i <= max; i++) { try { storage.setValue(i); } catch (InterruptedException e) { e.printStackTrace(); } } } } public class Storage implements StorageIf { private Integer wert; private boolean hasValue = false; @Override public synchronized Integer getValue(Integer ende) throws InterruptedException { if(wert.equals(ende)){ hasValue = false; return wert; }else { while (!hasValue()) { wait(); } hasValue = false; notifyAll(); return wert; } } @Override public synchronized void setValue(Integer wert) throws InterruptedException { while (hasValue()){ wait(); } hasValue = true; this.wert = wert;
notifyAll(); } @Override public boolean hasValue() { return hasValue; } } ``` Hope someone can spot a mistake I made. Thanks a lot!<issue_comment>username_1: ``` z.start(); z.sleep(100); d.start(); ``` Putting a delay between starting the two threads solved the problem for me. My computer was probably too fast down the road in thread z before it even started thread d. That's why it hung itself up 50% of the time. Thanks to everyone tho :) Upvotes: 0 <issue_comment>username_2: The problem is that you conflate 2 states : * there is currently a value available * there will be no more values Add a `hasEnded()` method to your Storage class, checking if the end value has been reached. Make sure to synchronize this method, as well as the `hasValue()` method. Synchronization needs to be done on both read and write access! Then make `Printer`'s while loop check `hasEnded`, rather than `hasValue`. Finally : get rid of all the `sleep()` calls. Your own answer, solving the problem with sleep, is not a real solution. A thread safe program does not depend on a computer's performance to function correctly. Upvotes: 1
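The fix username_2 describes — keeping "a value is available" and "there will be no more values" as separate states guarded by the same condition variable — can be sketched in Python's `threading` module (the names below are illustrative, not taken from the original Java code):

```python
import threading

class Storage:
    """Single-slot producer/consumer store with an explicit end-of-stream flag."""
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._has_value = False
        self._ended = False

    def set_value(self, value):
        with self._cond:
            while self._has_value:          # wait until the slot is free
                self._cond.wait()
            self._value = value
            self._has_value = True
            self._cond.notify_all()

    def end(self):
        with self._cond:
            self._ended = True              # separate state: no more values will come
            self._cond.notify_all()

    def get_value(self):
        """Return the next value, or None once the producer has ended."""
        with self._cond:
            while not self._has_value and not self._ended:
                self._cond.wait()
            if not self._has_value:         # ended and drained: consumer can stop
                return None
            self._has_value = False
            self._cond.notify_all()
            return self._value

storage = Storage()
printed = []

def counter(lo, hi):
    for i in range(lo, hi + 1):
        storage.set_value(i)
    storage.end()

def printer():
    while (v := storage.get_value()) is not None:
        printed.append(v)

t1 = threading.Thread(target=counter, args=(1, 5))
t2 = threading.Thread(target=printer)
t1.start(); t2.start()
t1.join(); t2.join()
print(printed)  # [1, 2, 3, 4, 5]
```

Because the consumer's loop condition checks the end flag under the lock rather than peeking at `has_value` outside it, no `sleep()` calls or start-up delays are needed and the result is deterministic regardless of machine speed.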
2018/03/14
660
2,422
<issue_start>username_0: I have an EC2 instance (Ubuntu 14). I've defined port 27017 in its security group to be accessed from anywhere. I'm trying to connect from my PC (Windows 10) to the EC2 instance, but getting the following error: > > MongoDB not running on the provided host and port > > > which is not true since I've made sure that MongoDB is running. These were the settings I specified: [![enter image description here](https://i.stack.imgur.com/aoRxm.png)](https://i.stack.imgur.com/aoRxm.png) I've also tried to use SSH (which is also defined in the security group and works well through terminal), but got the following error: > > Error creating SSH Tunnel: (SSH) Channel open failure: Connection > refused > > > These were the settings I specified: [![enter image description here](https://i.stack.imgur.com/k8yOk.png)](https://i.stack.imgur.com/k8yOk.png)<issue_comment>username_1: Alon, Checkpoints 1. See if there is a need to change the bind_ip variable in the /etc/mongodb.conf file. By default, it is locked to localhost. Try setting the value to 0.0.0.0 or assign the IP that will be able to connect to the DB. 2. Port is allowed in the security group attached to EC2 (You mentioned it's done) 3. Not using the private IP to connect (I guess you are using the right one, as you connected through the terminal) 4. Rare case, OS firewall I guess point 1 should do the trick. The rest of the points are for future reference. [Update 1] Doc link for bindIp <https://docs.mongodb.com/manual/reference/configuration-options/#net-options> Upvotes: 5 [selected_answer]<issue_comment>username_2: I am putting this as an answer although it is meant as a comment to the answer by username_1 above, but I do not have the necessary reputation to post comments yet. If you set the bind_ip variable to 0.0.0.0 as mentioned, MongoDB accepts connections from all IP addresses. This might be a security risk if you do not use access control.
On the other hand, if I am not wrong, the idea of an SSH tunnel is precisely that the incoming connection reaches MongoDB from localhost, so no change in the configuration should be necessary. That said, I came here because I could not connect either. I got the error 'Error creating SSH Tunnel: Timed out while waiting for forwardOut'. In my case the solution was to put 'localhost' in the hostname field at the top instead of the host IP. Upvotes: 2
2018/03/14
481
1,560
<issue_start>username_0: I'd like to select a row by searching for some words in a specific column. Here is what I came up with but it's obviously not working: ``` SELECT * FROM 'list' WHERE 'Name' LIKE '%cat%' AND '%bengal%' AND 'Color' LIKE '%navy%'` ``` Basically I want to find this row: ``` ID | Name | Color ---+------------------+------------- 1 | Stuff | Stuff 2 | cat weird bengal | navy -> THIS 3 | cat weird bengal | blue 4 | Other stuff | stuff 5 | dog bengal | navy ``` I searched on here but found only people who suggested `OR`, but, as you can see from the sample table, I cannot use it (or it would match with ID:5). Is there some way to say `LIKE` 'this' AND 'that'?<issue_comment>username_1: You could use the same column twice: ``` SELECT * FROM list WHERE Name LIKE '%cat%' AND Name LIKE '%bengal%' AND Color LIKE '%navy%'; ``` As [<NAME>](https://stackoverflow.com/users/460557/jorge-campos) mentioned you could use single `LIKE` if order is known in advance: ``` SELECT * FROM list WHERE Name LIKE '%cat%bengal%' AND Color LIKE '%navy%'; ``` Anyway your query is not-SARGable. I would suggest using `FULL TEXT INDEX` which is RDBMS specific. Upvotes: 4 [selected_answer]<issue_comment>username_2: Try below.It may help ``` SELECT * FROM list WHERE Name LIKE '%cat%' and Name LIKE '%bengal%' and Color LIKE '%navy%'; ``` Or As suggested by jorge ``` SELECT * FROM list WHERE Name LIKE '%cat%bengal%' and Color LIKE '%navy%'; ``` Upvotes: 0
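The accepted answer's "repeat the column with AND" pattern can be verified against the sample table with an in-memory SQLite database (an illustrative sketch; SQLite's `LIKE` is case-insensitive for ASCII, which does not matter for this data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE list (id INTEGER, Name TEXT, Color TEXT)")
con.executemany(
    "INSERT INTO list VALUES (?, ?, ?)",
    [
        (1, "Stuff", "Stuff"),
        (2, "cat weird bengal", "navy"),
        (3, "cat weird bengal", "blue"),
        (4, "Other stuff", "stuff"),
        (5, "dog bengal", "navy"),
    ],
)

# Repeating Name with AND requires BOTH words, so id 5 ("dog bengal", navy)
# is excluded even though it contains "bengal" and is navy.
rows = con.execute(
    "SELECT id FROM list "
    "WHERE Name LIKE '%cat%' AND Name LIKE '%bengal%' AND Color LIKE '%navy%'"
).fetchall()
print(rows)  # [(2,)]
```

Only row 2 survives, exactly as the question's expected output shows.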
2018/03/14
419
1,364
<issue_start>username_0: I want to have method that will give me resized image, if i call this method on object, how can i do this with Pillow? my model: ``` class Item(models.Model): title = models.CharField(max_length=255, null=True) img = models.ImageField() def __str__(self): return self.owner.first_name def get_image(self): image = Image.open(self.img) resized_image = image.resize((128, 128)) return resized_image ``` This one always gives me something like that ``` ```
2018/03/14
1,918
5,929
<issue_start>username_0: Below is my sample json. I have used ng-repeat="(key, value) as below to get desired output but it is not working. ``` | | | | --- | --- | | {{key}} | {{ value }} | ``` Sample JSON: ``` { "accessPointDetails": { "kernelVersion": "4.4.0", "videoAppVersion": "1.2.3", "zigbeeAppVersion": "1.2", "overrideFiles": [{ "path": "/some_dir/gateway.conf", "sizeBytes": 0 }], "sshKeyVersion": "ZA-1515092259", "ethInterfaces": [{ "macAddress": "", "dhcpProfile": "Management", "ipAddress": "", "hostName": "", "identifier": "eth0", "switchIpAddress": "", "switchPort": 12 }, { "macAddress": "", "dhcpProfile": "Zooter 1", "ipAddress": "", "hostName": "", "identifier": "eth1", "switchIpAddress": "", "switchPort": 12 }], "wlanInterfaces": [{ "dhcpProfile": "Gen3ZapA", "radioFrequency": "2.4", "radioVersion": "104" }, { "dhcpProfile": "Gen3Zap1", "radioFrequency": "5.8", "radioVersion": "108" }], "radioInterfaces": [{ "identifier": 1, "radioVersion": "123" }, { "identifier": 2, "radioVersion": "123" }, { "identifier": 3, "radioVersion": "123" }, { "identifier": 4, "radioVersion": "123" }] } } ``` Actual Output: [![enter image description here](https://i.stack.imgur.com/AuuRd.png)](https://i.stack.imgur.com/AuuRd.png) Desired Output: [![enter image description here](https://i.stack.imgur.com/PKvhf.png)](https://i.stack.imgur.com/PKvhf.png)<issue_comment>username_1: Have nested ng-repeat for evry loop keep checking if the object is array ```html (function () { var app = angular.module("testApp", ['ui.bootstrap', 'angular.filter']); app.controller('testCtrl', ['$scope', '$http', function ($scope, $http) { $scope.getKeys = function(val){ return Object.keys(val[0]) }; $scope.isArray = angular.isArray; $scope.data1 = { "accessPointDetails": { "kernelVersion": "4.4.0", "videoAppVersion": "1.2.3", "zigbeeAppVersion": "1.2", "overrideFiles": [{ "path": "/some\_dir/gateway.conf", "sizeBytes": 0 }], "sshKeyVersion": "ZA-1515092259", "ethInterfaces": [{ "macAddress": "", 
"dhcpProfile": "Management", "ipAddress": "", "hostName": "", "identifier": "eth0", "switchIpAddress": "", "switchPort": 12 }, { "macAddress": "", "dhcpProfile": "Zooter 1", "ipAddress": "", "hostName": "", "identifier": "eth1", "switchIpAddress": "", "switchPort": 12 }], "wlanInterfaces": [{ "dhcpProfile": "Gen3ZapA", "radioFrequency": "2.4", "radioVersion": "104" }, { "dhcpProfile": "Gen3Zap1", "radioFrequency": "5.8", "radioVersion": "108" }], "radioInterfaces": [{ "identifier": 1, "radioVersion": "123" }, { "identifier": 2, "radioVersion": "123" }, { "identifier": 3, "radioVersion": "123" }, { "identifier": 4, "radioVersion": "123" }] } }; }]); }()); | | | | | | | | | | | --- | --- | --- | --- | --- | --- | --- | --- | --- | | {{key1}} | | | | | | | | --- | --- | --- | --- | --- | | {{key2}} | {{value2}} | | {{th}} | | --- | | {{ x[th]}} | | {{value1}} | | | ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: This is working fine, but if I change my JSON to below format, then the output is again unformatted. I need to make something generic, in case JSON is nested within 3 keys, then also it should work fine. 
``` { "kernelVersion": "4.4.0", "videoAppVersion": "1.2.3", "zigbeeAppVersion": "1.2", "sshKeyVersion": "ZA-1515092259", "overrideFiles": [{ "path": "/some_dir/gateway.conf", "sizeBytes": 0 }], "ethInterfaces": [{ "macAddress": "", "dhcpProfile": "Management", "ipAddress": "", "hostName": "", "identifier": "eth0", "switchIpAddress": "", "switchPort": 12 }, { "macAddress": "", "dhcpProfile": "Zooter 1", "ipAddress": "", "hostName": "", "identifier": "eth1", "switchIpAddress": "", "switchPort": 12 }], "wlanInterfaces": [{ "dhcpProfile": "Gen3ZapA", "radioFrequency": "2.4", "radioVersion": "104" }, { "dhcpProfile": "Gen3Zap1", "radioFrequency": "5.8", "radioVersion": "108" }], "radioInterfaces": [{ "identifier": 1, "radioVersion": "123" }, { "identifier": 2, "radioVersion": "123" }, { "identifier": 3, "radioVersion": "123" }, { "identifier": 4, "radioVersion": "123" }] } ``` **Update code for the above json** ```html (function() { var app = angular.module("testApp", ['ui.bootstrap', 'angular.filter']); app.controller('testCtrl', ['$scope', '$http', function($scope, $http) { $scope.getKeys = function(val) { return Object.keys(val[0]) }; $scope.isArray = angular.isArray; $scope.data1 = {"kernelVersion":"4.4.0","videoAppVersion":"1.2.3","zigbeeAppVersion":"1.2","sshKeyVersion":"ZA-1515092259","overrideFiles":[{"path":"/some\_dir/gateway.conf","sizeBytes":0}],"ethInterfaces":[{"macAddress":"","dhcpProfile":"Management","ipAddress":"","hostName":"","identifier":"eth0","switchIpAddress":"","switchPort":12},{"macAddress":"","dhcpProfile":"Zooter 1","ipAddress":"","hostName":"","identifier":"eth1","switchIpAddress":"","switchPort":12}],"wlanInterfaces":[{"dhcpProfile":"Gen3ZapA","radioFrequency":"2.4","radioVersion":"104"},{"dhcpProfile":"Gen3Zap1","radioFrequency":"5.8","radioVersion":"108"}],"radioInterfaces":[{"identifier":1,"radioVersion":"123"},{"identifier":2,"radioVersion":"123"},{"identifier":3,"radioVersion":"123"},{"identifier":4,"radioVersion":"123"}]}; }]); }()); 
| | | | | | | --- | --- | --- | --- | --- | | {{key2}} | {{value2}} | | {{th}} | | --- | | {{ x[th]}} | | ``` Upvotes: 0
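The template logic in this thread hinges on telling scalar fields apart from arrays of objects while walking the JSON. The same split, sketched in Python on a trimmed copy of the question's data (illustrative only — the Angular answer does this in the view with `isArray` and `getKeys`):

```python
data = {
    "kernelVersion": "4.4.0",
    "overrideFiles": [{"path": "/some_dir/gateway.conf", "sizeBytes": 0}],
    "wlanInterfaces": [
        {"dhcpProfile": "Gen3ZapA", "radioFrequency": "2.4"},
        {"dhcpProfile": "Gen3Zap1", "radioFrequency": "5.8"},
    ],
}

# Scalar values become plain key/value rows.
scalars = {k: v for k, v in data.items() if not isinstance(v, list)}

# Each array becomes a header row (the keys of its first element, as the
# answer's getKeys() does) plus one data row per element.
tables = {
    k: (list(v[0].keys()), [[row[h] for h in v[0].keys()] for row in v])
    for k, v in data.items()
    if isinstance(v, list)
}

print(scalars)
print(tables["wlanInterfaces"])
```

This is the whole trick behind the nested `ng-repeat`: branch on "is this value an array?", and if so, derive the sub-table's headers from the first element's keys.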
2018/03/14
3,377
8,113
<issue_start>username_0: I want to split a matrix into square regions with the given dimensions (k) starting from the upper left corner and then sum the maximum value of each region. This is what I've done so far. ```js arr = [ [ 0, 8, 1, 1, 10, 6 ], [ 6, 8, 7, 0, 3, 9], [ 0, 7, 6, 8, 6, 5], [ 4, 0, 2, 7, 2, 0], [ 4, 4, 5, 7, 5, 1] ], l = console.log, j = JSON.stringify result = [0, 3].map(i => arr.map(a => a.slice(i, i+3))) // map over the indexes to split by to get the parts l(j(result )) ``` I'm not an expert in javascript but I want to learn. Any help would be appreciated.<issue_comment>username_1: ```js const arr = [ [ 0, 8, 1, 1, 10, 6 ], [ 6, 8, 7, 0, 3, 9], [ 0, 7, 6, 8, 6, 5], [ 4, 0, 2, 7, 2, 0], [ 4, 4, 5, 7, 5, 1] ]; const sizeX = 3, sizeY = 3; // Go over the different squares for(let minX = 0; minX < arr.length - sizeX; minX++){ for(let minY = 0; minY < arr[0].length - sizeY; minY++){ // Sum their content up var sum = 0; for(let x = minX; x < minX + sizeX; x++) for(let y = minY; y < minY + sizeY; y++) sum += arr[x][y]; // Do whatever with the sum console.log(sum) } } ``` Upvotes: 0 <issue_comment>username_2: You can split your matrix in regions using `reduce()` and `forEach()` methods ```js const arr = [ [0, 8, 1, 1, 10, 6], [6, 8, 7, 0, 3, 9], [0, 7, 6, 8, 6, 5], [4, 0, 2, 7, 2, 0], [4, 4, 5, 7, 5, 1] ], l = console.log, j = JSON.stringify function splitMatrix(data, n) { let row = 0; return data.reduce(function(r, a, i) { if (i && i % n == 0) row++; let col = 0; a.forEach(function(e, j) { if (j && j % n == 0) col++; if (!r[row]) r[row] = []; if (!r[row][col]) r[row][col] = []; r[row][col].push(e) }) return r; }, []) } const s = splitMatrix(arr, 3); console.log(j(s)) ``` And to get maximum value of each region you could do that inside foreach loop and then use `reduce()` to sum those values. 
```js const arr = [ [0, 8, 1, 1, 10, 6], [6, 8, 7, 0, 3, 9], [0, 7, 6, 8, 6, 5], [4, 0, 2, 7, 2, 0], [4, 4, 5, 7, 5, 1] ], l = console.log, j = JSON.stringify function splitMatrix(data, n) { let row = 0; return data.reduce(function(r, a, i) { if (i && i % n == 0) row++; let col = 0; a.forEach(function(e, j) { if (j && j % n == 0) col++; if (!r[row]) r[row] = []; if (!r[row][col]) r[row][col] = [e]; if (r[row][col][0] < e) r[row][col][0] = e; }) return r; }, []) } const max = splitMatrix(arr, 3) const result = [].concat(...max).reduce((r, [e]) => r + e, 0); console.log(j(max)) console.log(result) ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: In your example, the array is 6x5, so you couldn't split it evenly into square regions. That's a little besides the point though. If you want to go through the whole thing, the first step is to have a function that can take a certain chunk of a matrix and sum it up. Let's say we have an matrix, and we want to make a function that takes an x, y, width and height for what we want to sum up. ```js const m = [ [1,2,3,4,5], [2,3,4,5,6], [3,4,5,6,7], [8,9,10,11,12] ]; const sumArea = (m, x, y, w, h) => { let sum = 0; if (y + h > m.length) { throw new Error('Matrix is not tall enough for the requested area'); } if (x + w > m[0].length) { throw new Error('Matrix is not wide enough for the request area'); } for (let j = y; j < y + h; j++) { for (let i = x; i < x + w; i++) { sum += m[j][i]; } } return sum; } console.log(sumArea(m, 0, 0, 3, 3)); ``` With this, we can loop through an arbitrary area of a matrix and sum up everything in it. To do that, we just use a `for` loop from the starting point (`x` and `y`) to the end (`x + width`, `y + height`). I added a check if the area goes out of bounds and throw an error. You could alternately just count those as zero and skip them (by adding some conditions in your `for` loop. After that, to square all of them, you'd just iterate over each bit. 
```js const m = [ [1,2,3,4], [2,3,4,5], [3,4,5,6], [8,9,10,11] ]; const sumArea = (m, x, y, w, h) => { let sum = 0; if (y + h > m.length) { throw new Error('Matrix is not tall enough for the requested area'); } if (x + w > m[0].length) { throw new Error('Matrix is not wide enough for the request area'); } for (let j = y; j < y + h; j++) { for (let i = x; i < x + w; i++) { sum += m[j][i]; } } return sum; } const sumSquares = (m, w, h) => { let result = []; for (let y = 0; y < m.length / h; y++) { result[y] = []; for (let x = 0; x < m[0].length / w; x++) { result[y][x] = sumArea(m, x * w, y * h, w, h); } } return result; } console.log(sumSquares(m, 2, 2)); ``` For this, we loop through each square that we'll have. The number of squares we'll have is the size of the matrix divided by the width of the square (so, a matrix that is 4x4, with 2x2 squares would have 2 across and 2 down). Then I just put those into another matrix to show their total size. Upvotes: 0 <issue_comment>username_4: With a same length and height array, you could iterate the arrays and take the indices for the group for counting. ```js var array = [[0, 8, 1, 1, 10, 6], [6, 8, 7, 0, 3, 9], [0, 7, 6, 8, 6, 5], [4, 0, 2, 7, 2, 0], [4, 4, 5, 7, 5, 1], [0, 1, 0, 1, 0, 1]], result = array.reduce((r, a, i) => { a.forEach((b, j) => { r[Math.floor(i / 3)] = r[Math.floor(i / 3)] || []; r[Math.floor(i / 3)][Math.floor(j / 3)] = (r[Math.floor(i / 3)][Math.floor(j / 3)] || 0) + b; }); return r; }, []); console.log(result); ``` ```css .as-console-wrapper { max-height: 100% !important; top: 0; } ``` Upvotes: 0 <issue_comment>username_5: I know it bit complex but it is working. 
It will try to get a region what you want, but it will take whatever left if a region of specific size is not possible.(in our case, it will divide array in 2 region of 3\*3, and another 2 region of 2\*3)(Warning:- it will not work if all element in region is less than -100, if you want that it work please change value of 'runningMax' accordingly) ```js function max(arr){ var runningMax=-100; for(i = 0, i2 = arr.length; i arr.map(a => a.slice(i, i+3))); y=[0,3].map(i=>result.map(a=>a.slice(i,i+3))); var v=0; y.forEach(function(e){ e.forEach(function(q){ //console.log(max(q) + " "+ q); v+=max(q); }) }); console.log(v); ``` Upvotes: 2 <issue_comment>username_6: From what I understand you want to look at all possible 3x3 sub-matrices of the given matrix. So I assume that in your example matrix you want to look at: ``` [ 0, 8, 1 ] [ 8, 1, 1 ] [ 1, 1, 10] [ 1, 10, 6 ] [ 6, 8, 7 ] [ 8, 7, 0 ] [ 7, 0, 3 ] [ 0, 3, 9 ] [ 0, 7, 6 ] [ 7, 6, 8 ] [ 6, 8, 6 ] [ 8, 6, 5 ] [ 6, 8, 7 ] [ 8, 7, 0 ] [ 7, 0, 3 ] [ 0, 3, 9 ] [ 0, 7, 6 ] [ 7, 6, 8 ] [ 6, 8, 6 ] [ 8, 6, 5 ] [ 4, 0, 2 ] [ 0, 2, 7 ] [ 2, 7, 2 ] [ 7, 2, 0 ] [ 0, 7, 6 ] [ 7, 6, 8 ] [ 6, 8, 6 ] [ 8, 6, 5] [ 4, 0, 2 ] [ 0, 2, 7 ] [ 2, 7, 2 ] [ 7, 2, 0] [ 4, 4, 5 ] [ 4, 5, 7 ] [ 5, 7, 5 ] [ 7, 5, 1] ``` Then I understand you want to find the maxima (12 maxima for 12 matrices): ``` 8 8 10 10 8 8 8 9 7 8 8 8 ``` The sum of these is 100. So if that is indeed the outcome you want to have, here is a functional way to get it: ```js const arr = [ [ 0, 8, 1, 1, 10, 6 ], [ 6, 8, 7, 0, 3, 9], [ 0, 7, 6, 8, 6, 5], [ 4, 0, 2, 7, 2, 0], [ 4, 4, 5, 7, 5, 1] ]; const sum = arr.slice(2).reduce( (sum, row, i) => row.slice(2).reduce( (sum, a, j) => sum + Math.max(...arr[i].slice(j,j+3), ...arr[i+1].slice(j,j+3), ...row.slice(j,j+3)), sum ), 0 ) console.log(sum); ``` Upvotes: 0
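Taken together, the answers amount to one small algorithm: walk the matrix in k-by-k blocks from the upper-left corner (edge blocks may be smaller than k) and sum each block's maximum. A sketch of that combined idea in Python — not JavaScript, purely for illustration — using the question's array:

```python
def sum_region_maxima(matrix, k):
    """Sum the maximum of each k-by-k region, starting at the upper-left
    corner; regions on the right/bottom edge may be smaller than k."""
    rows, cols = len(matrix), len(matrix[0])
    total = 0
    for r0 in range(0, rows, k):
        for c0 in range(0, cols, k):
            total += max(matrix[r][c]
                         for r in range(r0, min(r0 + k, rows))
                         for c in range(c0, min(c0 + k, cols)))
    return total

arr = [[0, 8, 1, 1, 10, 6],
       [6, 8, 7, 0, 3, 9],
       [0, 7, 6, 8, 6, 5],
       [4, 0, 2, 7, 2, 0],
       [4, 4, 5, 7, 5, 1]]
print(sum_region_maxima(arr, 3))  # 8 + 10 + 5 + 7 = 30
```

For this 6x5 input the four regions have maxima 8, 10, 5 and 7, so the sum is 30.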
2018/03/14
412
1,097
<issue_start>username_0: How to display only non-duplicate words in the line of a text file using bash. For example, I have the following lines in the text file:

```
1001 1002 1003 1002 1003
```

I want to display only 1001. I don't want to display any word of the line which is duplicated. I am trying:

```
#!/bin/bash
file="/tmp/t1"
while IFS= read line
do
echo $line | xargs -n1 | sort -u | xargs
done < "$file"
```

output is:

```
1001 1002 1003
```
<issue_comment>username_1: You may use `awk` for this:

```
s='1001 1002 1003 1002 1003'
awk '{delete freq; for (i=1; i<=NF; i++) freq[$i]++; for (i in freq) if (freq[i] == 1) print i}' <<< "$s"
```

```
1001
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: a fix for your solution...

```
$ while IFS= read line; do echo $line | xargs -n1 | sort | uniq -u | xargs; done < file
```

**NB.** `uniq -u` and `sort -u` do different things. Essentially `sort | uniq` is equivalent to `sort -u`. However, `uniq -u` only returns non-repeated entries (its complement is `uniq -d`). Upvotes: 2
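For comparison, the per-line frequency counting that the awk answer performs can be sketched in Python; this is an illustrative equivalent, not a drop-in replacement for the shell pipeline:

```python
from collections import Counter

def non_duplicate_words(line):
    """Return the words of `line` that occur exactly once, in order."""
    words = line.split()
    counts = Counter(words)
    return [w for w in words if counts[w] == 1]

print(non_duplicate_words("1001 1002 1003 1002 1003"))  # ['1001']
```

Like `uniq -u` (and unlike `sort -u`), this keeps only words whose count is exactly one.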
2018/03/14
1,128
4,018
<issue_start>username_0: Trying to run the following command in an Ubuntu 16.04 docker container

```
root@mytest:/Linux_Insight_Agent# systemctl start ir_agent.service
```

I have installed the needed packages, but at the time of starting the service the following issue is coming up.

```
Failed to connect to bus: No such file or directory
```
<issue_comment>username_1: I had the same situation. In my case, I resolved it by running docker with the `--volume` parameter. example:

```
docker run -it \
 --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
 --rm IMAGE /bin/bash
```

I hope it works for you... Upvotes: 3 <issue_comment>username_2: That's because "systemctl" talks to the systemd daemon by using the d-bus. In a container there is no systemd daemon. Asking for a start will probably not quite do what you expect - the dev-mapping would need to be a bit longer. Another solution may be to avoid the usage of a systemd daemon with the help of the [docker-systemctl-replacement](https://github.com/gdraheim/docker-systemctl-replacement) script. It overwrites the default systemctl however. Upvotes: 3 <issue_comment>username_3: <https://itectec.com/ubuntu/ubuntu-systemctl-failed-to-connect-to-bus-docker-ubuntu16-04-container/> The link above mentions the following, which makes sense:

> Best Answer
>
> I assume you start your docker container with something like `docker run -t -i ubuntu:16.04 /bin/bash`. The problem now is that your init process PID 1 is /bin/bash, not systemd. Confirm with ps aux. In addition to that you are missing dbus, which would be the way to communicate. This is where your error message is coming from. But as your PID 1 is not systemd, it will not help to install dbus. Best would be to re-think the way you plan to use docker. Do not rely on systemd as a process manager but have the docker container run your desired application in the foreground.

* [docker systemctl replacement](https://github.com/gdraheim/docker-systemctl-replacement) This script may be used to overwrite "/usr/bin/systemctl".
It will execute the systemctl commands without SystemD! Upvotes: 3 <issue_comment>username_4: Answer for WSL -------------- In my case, I got this error in my WSL when running `systemctl --user start docker-desktop` as part of the [Docker Desktop Ubuntu setup](https://docs.docker.com/desktop/install/ubuntu/) ### Explanation `systemctl` is an application that is dependent on `systemd`, which again provides a system and a service manager running on PID 1. **By default, this is not a part of WSL**, and the `systemctl` command above failed. **Microsoft recently released support for `systemd` in WSL**, as announced [here](https://devblogs.microsoft.com/commandline/systemd-support-is-now-available-in-wsl/). Here they describe how you activate systemd, which I will reproduce here in a step-by-step version: ### How to activate `systemctl` and `systemd` on WSL 1. Make sure you are running WSL version 0.67.6 or higher by running `wsl --version` in your Windows terminal *(if this fails, need to upgrade to the [Store version](https://devblogs.microsoft.com/commandline/a-preview-of-wsl-in-the-microsoft-store-is-now-available/#how-to-install-and-use-wsl-in-the-microsoft-store))* 2. Open WSL and navigate to your Linux **/home** directory. Navigate to **/etc** 3. Open (and if necessary, create) the **wsl.conf** file with sudo privileges, e.g. through nano: `sudo nano wsl.conf` 4. Add the following lines: ``` [boot] systemd=true ``` 5. Save and close (in nano `CTRL + 0` and `CTRL + x`) 6. Shutdown WSL in your Windows Terminal with `wsl.exe --shutdown` 7. Wait 8 seconds (see [the 8 second rule](https://learn.microsoft.com/en-us/windows/wsl/wsl-config#the-8-second-rule)) 8. Open WSL 9. Make sure that `systemd` and `systemctl` is working by running e.g. `systemctl list-unit-files --type=service`, which should show your services’ status. As for me, I was now able to start docker-desktop with no error. Hopefully this can be helpful for someone else as well. Upvotes: 2
2018/03/14
1,296
4,427
<issue_start>username_0: Trying to lookup a value from multiple sheets. The first code is working. But the code from `IF FUNCTION` is throwing an error - > > Type mismatch runtime error 13 > > > after first lookup, for all `#NA` in the column E, i need to lookup the value in the second sheet. ``` Sub Vlookup() ' Vlookup Macro Worksheets("ORD_CS").Activate Dim LR As Long LR = Worksheets("ORD_CS").UsedRange.Rows.Count Range("E2:E" & LR).Formula = Application.Vlookup(Range("M2:M" & LR), Worksheets("WSS").Range("A2:C999999"), 3, False) If Range("E2:E" & LR) = "" Then Range("E2:E" & LR) = Application.Vlookup(Range("M2:M" & LR), Worksheets("IBC").Range("C2:F999999"), 4, False) End If End Sub ```<issue_comment>username_1: I had the same situation In my case, i resolve by running docker with `--volume` parameter. example: ``` docker run -it \ --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \ --rm IMAGE /bin/bash ``` I hope works for you... Upvotes: 3 <issue_comment>username_2: That's because "systemctl" talks to the systemd daemon by using the d-bus. In a container there is no systemd-daemon. Asking for a start will probably not quite do what you expect - the dev-mapping need to be a bit longer. Another solution may be to avoid the usage of a systemd daemon with the help of the [docker-systemctl-replacement](https://github.com/gdraheim/docker-systemctl-replacement) script. It overwrites default systemctl however. Upvotes: 3 <issue_comment>username_3: <https://itectec.com/ubuntu/ubuntu-systemctl-failed-to-connect-to-bus-docker-ubuntu16-04-container/> The link above mentioned the follows that makes sense: > > Best Answer > > > I assume you start your docker container with something like `docker run -t -i ubuntu:16.04 /bin/bash` The problem now is that your init process PID 1 is /bin/bash, not systemd. Confirm with ps aux. In addition to that you are missing dbus with would be the way to communicate. This is where your error message is coming from. 
But as your PID 1 is not systemd, it will not help to install dbus. Best would be to re-think the way you plan to use docker. Do not rely on systemd as a process manager but have the docker container run your desired application in the foreground. * [docker systemctl replacement](https://github.com/gdraheim/docker-systemctl-replacement) This script may be used to overwrite "/usr/bin/systemctl". It will execute the systemctl commands without SystemD! Upvotes: 3 <issue_comment>username_4: Answer for WSL -------------- In my case, I got this error in my WSL when running `systemctl --user start docker-desktop` as part of the [Docker Desktop Ubuntu setup](https://docs.docker.com/desktop/install/ubuntu/) ### Explanation `systemctl` is an application that is dependent on `systemd`, which again provides a system and a service manager running on PID 1. **By default, this is not a part of WSL**, and the `systemctl` command above failed. **Microsoft recently released support for `systemd` in WSL**, as announced [here](https://devblogs.microsoft.com/commandline/systemd-support-is-now-available-in-wsl/). Here they describe how you activate systemd, which I will reproduce here in a step-by-step version: ### How to activate `systemctl` and `systemd` on WSL 1. Make sure you are running WSL version 0.67.6 or higher by running `wsl --version` in your Windows terminal *(if this fails, need to upgrade to the [Store version](https://devblogs.microsoft.com/commandline/a-preview-of-wsl-in-the-microsoft-store-is-now-available/#how-to-install-and-use-wsl-in-the-microsoft-store))* 2. Open WSL and navigate to your Linux **/home** directory. Navigate to **/etc** 3. Open (and if necessary, create) the **wsl.conf** file with sudo privileges, e.g. through nano: `sudo nano wsl.conf` 4. Add the following lines: ``` [boot] systemd=true ``` 5. Save and close (in nano `CTRL + 0` and `CTRL + x`) 6. Shutdown WSL in your Windows Terminal with `wsl.exe --shutdown` 7. 
Wait 8 seconds (see [the 8 second rule](https://learn.microsoft.com/en-us/windows/wsl/wsl-config#the-8-second-rule)) 8. Open WSL 9. Make sure that `systemd` and `systemctl` is working by running e.g. `systemctl list-unit-files --type=service`, which should show your services’ status. As for me, I was now able to start docker-desktop with no error. Hopefully this can be helpful for someone else as well. Upvotes: 2
2018/03/14
769
2,344
<issue_start>username_0: > > Table1: > > > ``` ====================== | id |id_feature| ====================== | 1 | 4 | | 2 | 2 | | 3 | 9 | ====================== ``` > > Table\_feature4 > > > ``` ====================== | id | size | ====================== | 1 | 10000 | | 2 | 12000 | | 3 | 8000 | ====================== ``` > > Table\_feature2 > > > ``` ====================== | id | radius | ====================== | 1 | 0.9 | | 2 | 2 | | 3 | 3.1 | ====================== ``` MY QUERY: ``` SELECT * FROM Table1 LEFT JOIN (SELECT CONCAT("Table_feature",Table1.id_feature)) Feature ON Feature.id = Table1.id_feature ``` How can I make it work? Is there a way? Errors I receive: > > /\* SQL Error (1109): Unknown table 'Tabel1' in field list \*/ > > > And if I put 4 manualy in CONCAT and without alias it say > > /\* SQL Error (1248): Every derived table must have its own alias \*/ > > > If I'll alias "(SELECT CONCAT("Table\_feature",4)) Feature" in ON condition throw this error. > > /\* SQL Error (1054): Unknown column 'Feature.id' in 'on clause' \*/ > > > I think the problem is here: "(SELECT CONCAT("Table\_feature",4))" , it should return table name Table\_feature4 but it only add me a column called CONCAT("Table\_feature",4)<issue_comment>username_1: It's not clear what the expected output is but here is my go ``` SELECT t.id, CASE t.id_feature WHEN 2 THEN (SELECT size FROM feature2 f WHERE f.id = t.id) WHEN 4 THEN (SELECT radius FROM feature4 f WHERE f.id = t.id) END AS feature_value FROM Table1 t ``` I have a slightly different naming here but this will output the id and the value column for the corresponding id\_feature table Upvotes: 3 [selected_answer]<issue_comment>username_2: In your subquery you have not a table1 (in subquery the outer table name is not in scope) and Seems you missed the from clause in subquery and last you have not an id column or alias in subquery so you don't have a feature.id for ON clause try using somthings like eg: ``` SELECT * FROM Table1 
LEFT JOIN ( SELECT CONCAT("Table_feature", Table1.id_feature) , table1.id FROM Table1 ) Feature ON Feature.id = Table1.id_feature ``` Upvotes: 0
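Because a table name cannot be a runtime parameter in a plain query, the accepted answer dispatches with CASE. The same dispatch idea can be sketched outside SQL; the Python below is illustrative only, with the question's sample rows encoded as dictionaries:

```python
# One lookup table per feature, keyed by row id (data from the question).
feature_tables = {
    4: {1: 10000, 2: 12000, 3: 8000},  # Table_feature4.size
    2: {1: 0.9, 2: 2, 3: 3.1},         # Table_feature2.radius
}
table1 = [
    {"id": 1, "id_feature": 4},
    {"id": 2, "id_feature": 2},
    {"id": 3, "id_feature": 9},
]

def feature_value(row):
    """Dispatch on id_feature, like the CASE expression; None when no
    matching feature table exists (the LEFT JOIN no-match case)."""
    return feature_tables.get(row["id_feature"], {}).get(row["id"])

print([feature_value(r) for r in table1])  # [10000, 2, None]
```

Row 3 references feature 9, for which no table exists, so it yields None — the analogue of the NULLs a LEFT JOIN would produce.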
2018/03/14
340
1,225
<issue_start>username_0: I drop into debugger in my jupyter notebooks using:

```
from IPython.core.debugger import set_trace

set_trace()
```

But the text boxes (the ones following the `ipdb>` prompt) for entering commands are really short. How can I make them bigger by default? Thanks!
2018/03/14
531
1,849
<issue_start>username_0: I'm trying to parse the output of a program, which is given like this:

```
 Status : OK (97 ms)
```

Those are all spaces, no tabs. I don't know if that spacing will remain consistent over different versions, so I want to treat spaces *and* colons as delimiters. I'm well aware that the field separator can be declared as an arbitrarily complex regular expression, so I expect this would work:

```
echo " Status : OK (97 ms)" | awk -F'[ :]+' '/Status/{print $2}'
```

But it does not; instead it prints "Status", and `$1` is an empty string. Compare this with the output of the built-in delimiter, where leading delimiters seem to be ignored and `$1` is "Status":

```
echo " Status : OK (97 ms)" | awk '/Status/{print $1}'
```

It's easy enough to print `$3` instead, but it makes me wonder what I am doing wrong, or misunderstanding? I'm using GNU Awk 3.1.7
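The same behaviour is easy to reproduce outside awk. In Python's `re.split`, a separator match at the very start of the string yields an empty first field — which is exactly why `$1` is empty once FS is an explicit regex instead of awk's default rule of stripping leading and trailing blanks:

```python
import re

line = " Status : OK (97 ms)"
fields = re.split(r"[ :]+", line)
print(fields)  # ['', 'Status', 'OK', '(97', 'ms)'] — leading separator gives an empty field
```

With an explicit `[ :]+` field separator the leading space becomes a delimiter in its own right, pushing "Status" into the second field.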
2018/03/14
440
1,723
<issue_start>username_0: I have a site. I want to create Google AMP code for my site. What is the best way to transfer data between my existing site & the new google AMP site? I made a decision to create a new domain for my existing site. Is it good or bad?<issue_comment>username_1: > transfer data between my existing site & new google AMP site

well, not sure what you mean there... You don't really transfer data; you are just creating a new webpage that follows AMP rules, which is then linked to your old site's page with a meta tag. php is a server-side language. When somebody makes a request for index.php, the php code inside index.php will execute on your server and you will only remain with html, css and js, which are in the end served to the user. It is definitely good to implement AMP in my opinion, especially if ranking on top of google search is one of your top priorities. Upvotes: 1 <issue_comment>username_2: AMP is a client (browser) side framework. PHP is a server side program. So, the two are unrelated. Upvotes: 0 <issue_comment>username_3: **Can I use PHP code in AMP?** Yes, you can, because PHP files can contain text, HTML, CSS, JavaScript, and PHP code.

> **Note**: AMP pages are built with 3 core components: AMP HTML, AMP JS and AMP Cache; you can use embedded inline css between tag on head.

**What is the best way to transfer data between my existing site & new google AMP site?** Make a separate page for AMP and make your page discoverable with the `canonical` and `amphtml` tags. [**For more information visit the site**](https://www.ampproject.org/docs/fundamentals/converting/discoverable) I don't think you need a new domain for amp purposes. Upvotes: 1
2018/03/14
1,338
3,850
<issue_start>username_0: How to find values of weight1, weight2, and bias? What's generalized mathematical way to find these 3 values for any problem! ``` import pandas as pd weight1 = 0.0 weight2 = 0.0 bias = 0.0 test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)] correct_outputs = [False, False, False, True] outputs = [] for test_input, correct_output in zip(test_inputs, correct_outputs): linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias output = int(linear_combination >= 0) is_correct_string = 'Yes' if output == correct_output else 'No' outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string]) num_wrong = len([output[4] for output in outputs if output[4] == 'No']) output_frame = pd.DataFrame(outputs, columns=['Input 1', ' Input 2', ' Linear Combination', ' Activation Output', ' Is Correct']) if not num_wrong: print('Nice! You got it all correct.\n') else: print('You got {} wrong. Keep trying!\n'.format(num_wrong)) print(output_frame.to_string(index=False)) ```<issue_comment>username_1: Well in the case of the normal equations, you do not need a bias unit. Therefore, this may be what you are after (keep in mind I have recast your `True` and `False` values to `1` and `0`, respectively): ``` import numpy as np A = np.matrix([[0, 0], [0, 1], [1, 0], [1, 1]]) b = np.array([[0], [0], [0], [1]]) x = np.linalg.inv(np.transpose(A)*A)*np.transpose(A)*b print(x) ``` Yields: ``` [[ 0.33333333] [ 0.33333333]] ``` Further details on the solution are given [here](https://en.wikipedia.org/wiki/Tikhonov_regularization). Upvotes: 0 <issue_comment>username_2: The following worked for me: ``` weight1 = 1.5 weight2 = 1.5 bias = -2 ``` Will update when I better understand why Upvotes: 0 <issue_comment>username_3: The problem asks you to evaluate weight1, weight2, and bias when your inputs are [(0,0), (0,1), (1,0), (1,1)] in order to produce [False, False, False, True]. 
'False' in this context would be a result that is a negative number. In contrast, 'True' would be a result that is a positive number. So, you evaluate the following: > > x1\*weight1 + x2\*weight2 + bias' is positive or negative > > > For example, setting weight1=1, weight2=1, and bias=-1.1 (possible solution) you get for the first input: > > 0\*1 + 0\*1 + (-1.1) = -1.1 which is negative, meaning it evaluates to **False** > > > for the next input: > > 0\*1 + 1\*1 + (-1.1) = -0.1 which is negative, meaning it evaluates to **False** > > > for the next input: > > 1\*1 + 0\*1 + (-1.1) = -0.1 which is negative, meaning it evaluates to **False** > > > and for the last input: > > 1\*1 + 1\*1 + (-1.1) = +0.9 which is positive, meaning it evaluates to **True** > > > Upvotes: 1 <issue_comment>username_4: The following also worked for me: ``` weight1 = 1.5 weight2 = 1.5 bias = -2 ``` Upvotes: 1 <issue_comment>username_5: X1w1 + X2W2 + bias The test is: ``` linear_combination >= 0 ``` from the given input values: ``` test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)] ``` The AND value computes to true on the test only once, hence the output of a typical AND operation should be given as: ``` 1 1 True 1 0 False 0 1 False 0 0 False ``` Given, when we input the test inputs in the equation: X1w1 + X2W2 + bias, there should only be one true outcome. As noted above, our test is that the linear combinations of the equations should be greater or equal to zero. I believe what the question is looking for is for the this output to be true only one, as seen from the test run. To get the false value, therefore, the output should be a negative computation. The easiest way is to test the equation with small values, and a negative bias. I tried ``` weight1 = 1 weight2 = 1 bias = -2 ``` Upvotes: 0
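Any candidate triple from the answers can be checked mechanically against the AND truth table, using the same threshold rule as the question's code; a small verification sketch in Python:

```python
def is_and_perceptron(weight1, weight2, bias):
    """True when the linear threshold unit reproduces logical AND."""
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    expected = [False, False, False, True]
    return all(
        ((weight1 * x1 + weight2 * x2 + bias) >= 0) == want
        for (x1, x2), want in zip(inputs, expected)
    )

print(is_and_perceptron(1.0, 1.0, -1.1))  # True  (first answer's choice)
print(is_and_perceptron(1.5, 1.5, -2.0))  # True  (1.5/1.5/-2 from the other answers)
print(is_and_perceptron(0.0, 0.0, 0.0))   # False: 0 >= 0 fires on every input
```

Any weights with weight1 + weight2 + bias >= 0 while each weight alone stays below -bias will pass, which is why several different triples appear in the answers.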
2018/03/14
2,532
7,367
<issue_start>username_0: I have an object array with duplicates. The data at the root level of the objects are identical, but the nested array of objects are not. That is what I need to merge before running a lodash uniqBy or whatever function to remove the duplicates. This is the object array with duplicates. ``` [ { "id": "66E175A2-A29F-4F1A-AD81-2422B1EB00F6", "name": "College Park / Brookhaven", "mktId": 0, "status": "Unknown", "code": "197D6", "ownershipType": null, "series": [ { "id": "80004F2E-E3C8-4B6A-BCCC-81259AEAF22D", "name": "01", "productType": "Detached", "productClass": "Single Family", "salesStartDate": null, "modelOpenDate": null } ] }, { "id": "E053E656-4D14-4F2A-AD70-A37F65195CD1", "name": "College Park / Hampshire", "mktId": 0, "status": "Unknown", "code": "316D6", "ownershipType": null, "series": [ { "id": "46830FBD-CD68-4D4C-A095-FB9C3D93D01A", "name": "02,03", "productType": "Detached", "productClass": "Single Family", "salesStartDate": null, "modelOpenDate": null } ] }, { "id": "E053E656-4D14-4F2A-AD70-A37F65195CD1", "name": "College Park / Hampshire", "mktId": 0, "status": "Unknown", "code": "316D6", "ownershipType": null, "series": [ { "id": "1BC31692-AAB8-4A00-9D8D-9B8CF7E426E0", "name": "01", "productType": "Detached", "productClass": "Single Family", "salesStartDate": null, "modelOpenDate": null } ] }, { "id": "34F7C7AF-3D1B-4EE7-8271-C99294169C01", "name": "College Park / Hillsdale", "mktId": 0, "status": "Unknown", "code": "295D6", "ownershipType": null, "series": [ { "id": "807144A1-26ED-4657-9775-7DF7563107D3", "name": "02", "productType": "Detached", "productClass": "Single Family", "salesStartDate": null, "modelOpenDate": null } ] } ] ``` College Park / Hampshire is duplicated in this example. I need to find the 2 duplicates and return this expected result.. 
``` [ { "id": "66E175A2-A29F-4F1A-AD81-2422B1EB00F6", "name": "College Park / Brookhaven", "mktId": 0, "status": "Unknown", "code": "197D6", "ownershipType": null, "series": [ { "id": "80004F2E-E3C8-4B6A-BCCC-81259AEAF22D", "name": "01", "productType": "Detached", "productClass": "Single Family", "salesStartDate": null, "modelOpenDate": null } ] }, { "id": "E053E656-4D14-4F2A-AD70-A37F65195CD1", "name": "College Park / Hampshire", "mktId": 0, "status": "Unknown", "code": "316D6", "ownershipType": null, "series": [ { "id": "46830FBD-CD68-4D4C-A095-FB9C3D93D01A", "name": "02,03", "productType": "Detached", "productClass": "Single Family", "salesStartDate": null, "modelOpenDate": null }, { "id": "1BC31692-AAB8-4A00-9D8D-9B8CF7E426E0", "name": "01", "productType": "Detached", "productClass": "Single Family", "salesStartDate": null, "modelOpenDate": null } ] }, { "id": "34F7C7AF-3D1B-4EE7-8271-C99294169C01", "name": "College Park / Hillsdale", "mktId": 0, "status": "Unknown", "code": "295D6", "ownershipType": null, "series": [ { "id": "807144A1-26ED-4657-9775-7DF7563107D3", "name": "02", "productType": "Detached", "productClass": "Single Family", "salesStartDate": null, "modelOpenDate": null } ] } ] ``` vanilla javascript or lodash is fine.<issue_comment>username_1: Just set up a hashmap for the ids and a resulting array: ``` const hash = {}, result = []; ``` Then iterate over the array: ``` for(const el of array){ const { id, series } = el; ``` Now we can check if the id already appears in the hash, and if so just add the series: ``` if(hash[id]){ hash[id].series.push(...series); } else { ``` If the id did not appear yet, we need to add the object to our result and to the hashtable: ``` result.push(hash[id] = el) } } ``` And thats already it :) Upvotes: 1 [selected_answer]<issue_comment>username_2: I've created `HashMap` using `reduce`, in that hashMap, the key is the name of the item, the value is the item itself. 
```js
const data = [{
    "id": "66E175A2-A29F-4F1A-AD81-2422B1EB00F6",
    "name": "College Park / Brookhaven",
    "mktId": 0,
    "status": "Unknown",
    "code": "197D6",
    "ownershipType": null,
    "series": [{
      "id": "80004F2E-E3C8-4B6A-BCCC-81259AEAF22D",
      "name": "01",
      "productType": "Detached",
      "productClass": "Single Family",
      "salesStartDate": null,
      "modelOpenDate": null
    }]
  },
  {
    "id": "E053E656-4D14-4F2A-AD70-A37F65195CD1",
    "name": "College Park / Hampshire",
    "mktId": 0,
    "status": "Unknown",
    "code": "316D6",
    "ownershipType": null,
    "series": [{
      "id": "46830FBD-CD68-4D4C-A095-FB9C3D93D01A",
      "name": "02,03",
      "productType": "Detached",
      "productClass": "Single Family",
      "salesStartDate": null,
      "modelOpenDate": null
    }]
  },
  {
    "id": "E053E656-4D14-4F2A-AD70-A37F65195CD1",
    "name": "College Park / Hampshire",
    "mktId": 0,
    "status": "Unknown",
    "code": "316D6",
    "ownershipType": null,
    "series": [{
      "id": "1BC31692-AAB8-4A00-9D8D-9B8CF7E426E0",
      "name": "01",
      "productType": "Detached",
      "productClass": "Single Family",
      "salesStartDate": null,
      "modelOpenDate": null
    }]
  },
  {
    "id": "34F7C7AF-3D1B-4EE7-8271-C99294169C01",
    "name": "College Park / Hillsdale",
    "mktId": 0,
    "status": "Unknown",
    "code": "295D6",
    "ownershipType": null,
    "series": [{
      "id": "807144A1-26ED-4657-9775-7DF7563107D3",
      "name": "02",
      "productType": "Detached",
      "productClass": "Single Family",
      "salesStartDate": null,
      "modelOpenDate": null
    }]
  }
];

const hashMap = data.reduce((hash, item) => {
  const key = item.name;
  if (hash.hasOwnProperty(key)) {
    hash[key].series = hash[key].series.concat(item.series);
  } else {
    hash[key] = item;
  }
  return hash;
}, {});

const result = Object.keys(hashMap).map((key) => hashMap[key]);
console.log(result);
```
Upvotes: 1
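Both answers use the same hash-map pattern: group by `name` and concatenate `series`. The same merge can be sketched in Python (records trimmed to the two relevant fields for brevity):

```python
def merge_duplicates(items, key="name"):
    """Merge items sharing the same key, concatenating their `series`
    lists; the first occurrence wins for all other fields."""
    merged = {}
    for item in items:
        k = item[key]
        if k in merged:
            merged[k]["series"].extend(item["series"])
        else:
            merged[k] = {**item, "series": list(item["series"])}
    return list(merged.values())

data = [
    {"name": "College Park / Hampshire", "series": [{"id": "46830FBD", "name": "02,03"}]},
    {"name": "College Park / Hampshire", "series": [{"id": "1BC31692", "name": "01"}]},
    {"name": "College Park / Hillsdale", "series": [{"id": "807144A1", "name": "02"}]},
]
result = merge_duplicates(data)
print(len(result))               # 2
print(len(result[0]["series"]))  # 2 — both Hampshire series merged
```

Copying the series list on first insertion keeps the input records untouched, the same reason the hash-map answers do not mutate the original array's nested arrays.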
2018/03/14
530
2,107
<issue_start>username_0: Modal gets closed when clicked on the close modal button as well as when the mouse is clicked outside the modal. I want to call the same function that I call when I click on the close modal button also when I click outside the modal. I am not sure how to do this in Angular 2. Could you please share your ideas on this. Also my Modal is a part of my main html and it does not have a separate component or html... Thanks

```
import { Component, OnInit, NgModule, Input, Output, EventEmitter, ViewChild, ElementRef } from '@angular/core';
import * as jQuery from 'jquery';

export class Component implements OnInit {
  @ViewChild('newModal') newModal: ElementRef;

  public OpenModal(){
    jQuery(this.newModal.nativeElement).modal('show');
  }

  public CloseModal(){
    // Includes actions taken when the close modal button is clicked -
    // I want the same function to be called when I click outside the modal
  }
}
```

I tried the below in html but this is running whenever I click anywhere on the component other than the modal area.

```
(clickOutside)="CloseModal()";
```
<issue_comment>username_1: Yes, that's, depending on which modal you use, pretty easy. Normally each modal dialogue has an onDismiss()-Observable to which you can subscribe. To make sure that it has been instantiated already, perform your subscription inside AfterViewInit.

```
ngAfterViewInit() {
  // react on modal closed by clicking beside it
  this.newModal.onDismiss.subscribe(() => {
    // here goes your code then
  });
}
```

That works perfectly for me all the time. Upvotes: 1 <issue_comment>username_2: I solved it by adding a function for clicking outside the native element and changing some of the conditions, and it worked for me....
Upvotes: -1 [selected_answer]<issue_comment>username_3: if you are using NgbModel ``` import { NgbModal, ModalDismissReasons } from '@ng-bootstrap/ng-bootstrap'; ``` then call its inside constructor like this: ``` constructor(private modalService: NgbModal) ``` and call its own method: ``` this.modalService.dismissAll(); ``` no need to pass any argument Upvotes: 0
2018/03/14
946
3,502
<issue_start>username_0: I'm building a trivia game, and all the questions, answer choices and correct answers are to be stored in different multidimensional arrays arranged by category. example: historyArray contians all the history data, etc. I'm also using bootstrap from my front end UI and would like to be able to use a data attribute to reference a specific array, and dynamically load a question from that array into a modal that will launch when pressing a button. Here's what I have so far: **HTML:** ``` Launch Random Question #### Category Title ``` **JS:** ``` var historyCount = 0; $(document).ready(function(){ //$('#activeQuestion').modal('show'); var historyArray = { 'q0' : { 'question': 'Which U.S. President is on the 1,000 dollar bill?', 'a1': '<NAME>', 'a2': '<NAME>', 'a3': '<NAME>', 'correct': '1' } } }); $('#activeQuestion').on('show.bs.modal', function (event) { var button = $(event.relatedTarget); var aCat = button.data('category'); console.log(aCat); }) ``` Currently, the console.log only returns the value of the data-attribute, not the array. How can I return the array in the console.log so then I can parse through the array, grabbing the question, answer choices and correct answer so I can display them. I've tried using console.log(aCat[0]), but that only returns `'h'`, the first letter in the variable name.<issue_comment>username_1: Short of providing JSON string as `data-` attribute value, you can put questions on different topics in the same object (`topicArrays` below) and access them using `data-category` values as property keys. ``` var topicArrays = {}; var historyCount = 0; $(document).ready(function(){ //$('#activeQuestion').modal('show'); topicArrays.historyArray = { 'q0' : { 'question': 'Which U.S. 
President is on the 1,000 dollar bill?', 'a1': '<NAME>', 'a2': '<NAME>', 'a3': '<NAME>', 'correct': '1' } }; }); $('#activeQuestion').on('show.bs.modal', function (event) { var button = $(event.relatedTarget); var aCat = topicArrays[button.data('category')]; console.log(aCat); }); ``` Upvotes: 0 <issue_comment>username_2: There are a couple of misunderstandings here. First, you're mistaking arrays for objects: arrays are lists (`[]`) and objects are key-value pairs (`{something: somethingelse}`). To resolve your issue, you first need a way of accessing the correct list of questions. Change this in your HTML ``` data-category="historyObject" ``` and wrap your history object in an object called `questions`: ``` var questions = { historyObject: { 'q0' : { 'question': 'Which U.S. President is on the 1,000 dollar bill?', 'a1': '<NAME>', 'a2': '<NAME>', 'a3': '<NAME>', 'correct': '1' } } } ``` Now we're able to access the historyObject by `questions[aCat]`, but it won't work yet: your object is in its own scope, meaning you won't be able to access `questions` from your event listener unless you move ``` $('#activeQuestion').on('show.bs.modal', function (event) { var button = $(event.relatedTarget); var aCat = button.data('category'); console.log(questions[aCat]); }) ``` into your onload. Hope this helped. Upvotes: 2
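To make the lookup idea from the answers concrete, here is a small runnable sketch of the `questions[aCat]` pattern (the question text and answers below are placeholders, not the original data):

```javascript
// All categories live in one object, so the data-category string
// can be used directly as a property key.
const questions = {
  historyObject: {
    q0: {
      question: 'Placeholder history question?',
      a1: 'Answer one',
      a2: 'Answer two',
      a3: 'Answer three',
      correct: '1'
    }
  }
};

// aCat is what button.data('category') would return in the modal handler.
function getQuestion(aCat, id) {
  const category = questions[aCat];
  return category ? category[id] : undefined;
}
```

This keeps the per-category data in one place and avoids trying to turn a string attribute value into a variable name.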
2018/03/14
465
1,694
<issue_start>username_0: I'm using **React Native** and [this library](https://github.com/Elyx0/react-native-document-picker) to pick files in my app. Native **Google Drive** files (docs, sheets, slides) started appearing to be **unselectable** on **iOS**, meanwhile on Android they appear selectable as PDF (it seems they are automatically converted, but that's OK for me). [![unselectable files](https://i.stack.imgur.com/5VniAl.png)](https://i.stack.imgur.com/5VniAl.png) It used to work on iOS too. There were some Androids where this was also happening, but after cleaning the cache of the Drive app it started working; however, this workaround doesn't work on iOS. I would like to know a way to make it work again.<issue_comment>username_1: ``` DocumentPicker.show({ filetype: [DocumentPickerUtil.images()], // this may be where you specify the file type },(error,res) => { // Android console.log( res.uri, res.type, // mime type res.fileName, res.fileSize ); }); ``` Upvotes: 0 <issue_comment>username_2: This may be late but it will be helpful for others. I was also facing a similar issue. iOS does not allow opening iCloud Drive docs within the app. There are some System-Declared Uniform Type Identifiers which you have to define while using them, as per <https://developer.apple.com/library/archive/documentation/Miscellaneous/Reference/UTIRef/Articles/System-DeclaredUniformTypeIdentifiers.html> In my case I was selecting PDF and using the react-native-document-picker library. Below is the working code - ``` DocumentPicker.show({ filetype: [DocumentPickerUtil.pdf(),'public.composite-content'], },(error,res) => { console.log(res) }); ``` Upvotes: 1
2018/03/14
900
2,727
<issue_start>username_0: This returns only the array a. I need to build an array that is equal to the array a, but when an element is a multiple of 3 I need to add the next even number to it. Like a=[1,3,4,6,1]: the result would look like [1,7,4,6,1]. How would I do it? Thank you. ``` public static void main(String[] args) { int[] a = new int[]{10, 46, 78, 32, 3, 80, 97, 11, 39, 57}; System.out.println(Arrays.toString(a)); } public static int[] multiplos3 (int[] a){ int[] b = new int[a.length]; int j = 0; for (int i = 0 ; i < a.length; i++){ if (a[i] % 3 == 0){ if(a[i + 1] % 2 == 0) { b[j] = a[i] + a[i + 1]; j++; } } } System.out.println(Arrays.toString(b)); return b; } } ``` Based on a comment: ``` public static void multiplos3 (int[] a){ int[] b = new int[a.length]; System.arraycopy(a, 0, b, 0, a.length); for (int i = 0 ; i < a.length; i++){ if (a[i] % 3 == 0){ for(int j = i + 1; j < a.length; j++){ if(a[j] % 2 == 0) { b[i] = a[i] + a[j]; break; } } } } System.out.println(Arrays.toString(b)); } ``` <issue_comment>username_1: Instead of using `a[i + 1]`, try using another for loop to find the next even number ``` for(int j = i + 1; j < a.length; j++) if(a[j] % 2 == 0) { [...]
break; //stop the loop after the first even number } ``` Upvotes: 0 <issue_comment>username_2: The run-time complexity of the above-mentioned code is O(N^2). You can implement this in O(N) by first using an additional array: populate this next-even-numbers array by traversing from the end. Below is the code ``` public static void multiplos3 (int[] a){ int[] b = new int[a.length]; int[] nextEvnNosArr = new int[a.length]; int last = a.length - 1; nextEvnNosArr[last] = (a[last] % 2 == 0) ? a[last] : -1; // -1 means no even number at or after this index for (int i = a.length - 2; i >= 0; i--) { if (a[i] % 2 == 0) { nextEvnNosArr[i] = a[i]; } else { nextEvnNosArr[i] = nextEvnNosArr[i + 1]; } } int j = 0; for (int i = 0 ; i < a.length; i++){ if (a[i] % 3 == 0){ if (i != a.length - 1 && nextEvnNosArr[i + 1] != -1){ b[j] = a[i] + nextEvnNosArr[i + 1]; j++; } } } System.out.println(Arrays.toString(b)); } ``` Upvotes: -1
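For reference, here is a self-contained sketch of the behaviour the question asks for (copy `a`, and for every multiple of 3 add the first even element that follows it); the class and method names are illustrative, not from the answers above:

```java
import java.util.Arrays;

public class NextEvenDemo {
    public static int[] addNextEven(int[] a) {
        int[] b = Arrays.copyOf(a, a.length); // start from a copy of a
        for (int i = 0; i < a.length; i++) {
            if (a[i] % 3 == 0) {
                for (int j = i + 1; j < a.length; j++) {
                    if (a[j] % 2 == 0) {   // first even number after index i
                        b[i] = a[i] + a[j];
                        break;
                    }
                }
            }
        }
        return b;
    }

    public static void main(String[] args) {
        // 3 + 4 = 7; 6 is a multiple of 3 but has no even successor, so it stays 6
        System.out.println(Arrays.toString(addNextEven(new int[]{1, 3, 4, 6, 1})));
    }
}
```

This matches the expected output in the question: [1, 3, 4, 6, 1] becomes [1, 7, 4, 6, 1].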
2018/03/14
539
2,131
<issue_start>username_0: I have a Python dictionary containing recipes. I have to find key values inside it and then return results depending on where those keys were found. ``` recipes = { 'recipe1':{ 'name': 'NAME', 'components': { '1':{ 'name1':'NAME', 'percent':'4 %' }, '2':{ 'name2':'NAME', 'percent':'3 %' }, '3':{ 'name3':'NAME', 'percent':'1 %' }, }, 'time':'3-5 days', 'keywords':['recipe1', '1','etc'] } } ``` Each recipe has a list of `keywords`. How can I look up a recipe based on its `keywords` and some search input? Upon finding a recipe, I would need to return the name components and time that are specific for that recipe.<issue_comment>username_1: Given some input in a variable called `search`, you can do the following: ``` for v in recipes.values(): if search in v['keywords']: # Found the recipe of interest. return v['components'], v['time'] ``` Unfortunately, the way you are currently storing data prevents you from taking advantage of the `O(1)` lookup time in your dictionary. (This could affect performance if you have several recipes in the `recipes` dictionary.) So you'll have to iterate over the key-value pairs in `recipes` to find a recipe, unless you refactor your data structures. Upvotes: 2 <issue_comment>username_2: You can do it like below (note: avoid naming a variable `list`, since that shadows the built-in): ``` search_terms = ['etc', 'cc', 'ddd'] for x, y in recipes.items(): for m in y['keywords']: if m in search_terms: print('Name : ' + y['name']) print('Components : ') for i in y['components'].keys(): print(str(i), end=' ') for j in y['components'][i]: print(j + " " + y['components'][i][j], end=' ') print('') print('\nTime: ' + str(y['time'])) ``` **Output:** ``` Name : NAME Components : 1 name1 NAME percent 4 % 2 name2 NAME percent 3 % 3 name3 NAME percent 1 % Time: 3-5 days ``` Upvotes: 0
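If the O(1)-lookup refactor hinted at in the first answer is wanted, one way to sketch it is a keyword index built once and then reused; `build_keyword_index` and `find_recipe` are illustrative names, not from the answers above:

```python
def build_keyword_index(recipes):
    """Map every keyword to the recipe dict that declares it, for O(1) lookup."""
    index = {}
    for recipe in recipes.values():
        for keyword in recipe.get('keywords', []):
            index[keyword] = recipe
    return index


def find_recipe(index, search):
    """Return (components, time) for the recipe matching `search`, or None."""
    recipe = index.get(search)
    if recipe is None:
        return None
    return recipe['components'], recipe['time']
```

Building the index is a one-time O(total keywords) cost; every subsequent lookup is a single dict access instead of a scan over all recipes.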
2018/03/14
2,224
6,587
<issue_start>username_0: I am trying to build a war from a grails 2.5.4 project but I am getting an error. ``` .Error | WAR packaging error: error=2, No such file or directory ``` Here is the command I ran and the full trace. ``` grails dev war --plain-output --stacktrace --verbose Base Directory: /Users/anupshrestha/workspaces/biblio |Loading Grails 2.5.4 |Configuring classpath . |Environment set to development ................................. |Packaging Grails application .................................................. |Compiling 4 GSP files for package [biblio] .. |Compiling 2 GSP files for package [springSecurityCore] . [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage . |Building WAR file [copy] Copying 406 files to /Users/anupshrestha/workspaces/biblio/target/work/stage [copy] Copied 121 empty directories to 2 empty directories under /Users/anupshrestha/workspaces/biblio/target/work/stage ............... [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/plugins/resources-1.2.14 . [copy] Copying 7 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/plugins/resources-1.2.14 .......... [copy] Copying 5 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/grails-app [copy] Copied 4 empty directories to 1 empty directory under /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/grails-app . [copy] Copying 659 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/classes . [copy] Copying 1354 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/classes . [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/spring .. [copy] Copying 3 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/classes . [copy] Copying 152 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/lib . 
[copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF . [delete] Deleting: /Users/anupshrestha/workspaces/biblio/target/work/resources/web.xml . [copy] Copying 4 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/lib ...[propertyfile] Updating property file: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/classes/application.properties [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/mongodb-5.0.12.RELEASE . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/mongodb-5.0.12.RELEASE .. [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/cache-1.1.8 . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/cache-1.1.8 ... [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/webxml-1.4.1 . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/webxml-1.4.1 ... [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/resources-1.2.14 . [copy] Copying 2 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/resources-1.2.14 .. [copy] Copying 2 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/classes . [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/spring-security-cas-2.0-RC1 . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/spring-security-cas-2.0-RC1 ... [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/spring-security-core-2.0-RC4 . [copy] Copying 11 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/spring-security-core-2.0-RC4 ... 
[mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/spring-security-ldap-2.0.1 . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/spring-security-ldap-2.0.1 ... [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/ldap-0.8.2 . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/ldap-0.8.2 ... [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/xss-sanitizer-0.4.0 . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/xss-sanitizer-0.4.0 .. [copy] Copying 2 files to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/classes . [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/quartz-1.0.1 . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/quartz-1.0.1 ... [mkdir] Created dir: /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/codenarc-0.25.2 . [copy] Copying 1 file to /Users/anupshrestha/workspaces/biblio/target/work/stage/WEB-INF/plugins/codenarc-0.25.2 .... [delete] Deleting directory /Users/anupshrestha/workspaces/biblio/target/work/stage .Error | WAR packaging error: error=2, No such file or directory ```<issue_comment>username_1: There are many possibilities: 1. First try to restart your IDE *Not solved?* Check below steps: When we run `compile` or create `war` command it create files and stores them by default in the project's working directory. Where default directory name is `target` If our project workspace contains previously created war. It will not deleted automatically. It will overwritten with new. SO run the `clean` command and check the contents of the target directory. If still you notice that `target/work` directory still exists. 
run the `clean-all` command and check whether `target/work` has been deleted. Now try to create the `war` again. If you still get the issue, exclude the particular classes or jars that break the war build. Hope this helps you. Upvotes: 1 <issue_comment>username_2: I had a similar error with the message: ``` | Error WAR packaging error: /Users/xxx/.grails/2.5.3/projects/my-proyect/resources/grails-app does not exist. ``` The error was caused by removing the i18n directory. The fix is to re-instate the i18n directory and add an empty messages.properties file. [Reference](https://github.com/AtlasOfLivingAustralia/generic-hub/issues/3) Upvotes: 0
2018/03/14
537
1,989
<issue_start>username_0: is there a way to loop through two sets of JSON arrays and identify the additional array items from File 2 compared to File 1? Example below: File 1: ``` { "Cars": [{ "type": "Ford" }, { "type": "BMW" } ] } ``` File 2: ``` { "Cars": [{ "type": "Ford" }, { "type": "BMW" }, { "type": "Vauxhall" }, { "type": "Fiat" } ] } ``` Desired outcome: The additions are: ``` { "Cars": [{ "type": "Vauxhall" }, { "type": "Fiat" } ] } ``` I am struggling to get into the array in Python. Any help much appreciated.
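A minimal sketch of one way to do this in Python, assuming both files have already been parsed into dicts (e.g. with `json.load`); the function name is illustrative:

```python
def extra_cars(file1, file2):
    """Return the items under "Cars" in file2 that are absent from file1."""
    seen = file1.get('Cars', [])
    # dicts are not hashable, so compare by list membership rather than via a set
    return {'Cars': [car for car in file2.get('Cars', []) if car not in seen]}
```

For large lists, converting each item to a hashable form (e.g. a tuple of sorted key/value pairs) and using a set would avoid the quadratic membership checks.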
2018/03/14
2,210
6,475
<issue_start>username_0: I applied 10 Cross-Validation and the output is 10 folds on confusion matrix so how can I find the average of the folds by confusion matrix? and is my work is correct? this my code : ``` set.seed(100) library(caTools) library(caret) library(e1071) folds<-createFolds(wpdc$outcome, k=10) CV <- lapply(folds, function(x){ traing_folds=wpdc[-x,] test_folds=wpdc[x,] dataset_model_nb<-naiveBayes(outcome ~ ., data = traing_folds) dataset_predict_nB<-predict(dataset_model_nb, test_folds[-1]) dataset_table_nB<-table(test_folds[,1],dataset_predict_nB) accuracy<-confusionMatrix(dataset_table_nB, positive ="R") return(accuracy) }) outcome radius_mean texture_mean perimeter_mean area_mean smoothness_mean compactness_mean concavity_mean concave_points_mean symmetry_mean fractal_dimension_mean radius_se texture_se perimeter_se area_se smoothness_se 1 N 18.02 27.60 117.50 1013.0 0.09489 0.1036 0.1086 0.07055 0.1865 0.06333 0.6249 1.8900 3.972 71.55 0.004433 2 N 17.99 10.38 122.80 1001.0 0.11840 0.2776 0.3001 0.14710 0.2419 0.07871 1.0950 0.9053 8.589 153.40 0.006399 3 N 21.37 17.44 137.50 1373.0 0.08836 0.1189 0.1255 0.08180 0.2333 0.06010 0.5854 0.6105 3.928 82.15 0.006167 ```<issue_comment>username_1: I needed the same, then following the tip of @Stephen Handerson, what I was: 1. Define a list of matrices: * `rfConfusionMatrices <- list()` 2. Store each matrix inside that list: * `RrfConfusionMatrix[[i]] <- confMatrix` 3. 
Use the `Reduce` function to sum the matrices and divide by the folds: * `rfConfusionMatrixMean <- Reduce('+', rfConfusionMatrix) / nFolds` Upvotes: 2 <issue_comment>username_2: If you reorganize your code and store the per-fold tables of true and predicted labels as: ``` set.seed(100) library(caTools) library(caret) library(e1071) folds <- createFolds(wpdc$outcome, k=10) CV <- lapply(folds, function(x){ traing_folds=wpdc[-x,] test_folds=wpdc[x,] dataset_model_nb<-naiveBayes(outcome ~ ., data = traing_folds) dataset_predict_nB<-predict(dataset_model_nb, test_folds[-1]) dataset_table_nB<-table(test_folds[,1],dataset_predict_nB) return(dataset_table_nB) # return this fold's contingency table }) ``` You can combine them by reducing (summing the per-fold tables): ``` appended_table_nB <- Reduce('+', CV) ``` And then take the confusion matrix: ``` accuracy <- confusionMatrix(appended_table_nB, positive ="R") ``` It is the same as taking the average. The only difference is that you sum the data points in the conf matrix, but the accuracy and other metrics are in their average. If you'd like to see the conf matrix as an average, you can: `averaged_matrix <- as.matrix(accuracy) / nFold` Upvotes: 0 <issue_comment>username_3: I just googled to find out whether it is a common thing to calculate a mean from confusion matrices.
Just in case somebody is interested in a solution that can be adjusted to save more than just average values: I defined the following function to get mean and standard deviation from a `list` of confusion matrices or similar objects, given that all of those matrices have the same format: ``` average_matr <- function(matr_list){ if(class(matr_list[[1]])[1] == "confusionMatrix"){ matr_lst <- lapply(matr_list, FUN = function(x){x$table}) }else{ matr_lst <- matr_list } vals <- lapply(matr_lst, as.numeric) matr <- do.call(cbind, vals) #vec_mean <- apply(matr, MARGIN = 1, FUN = mean, na.rm = TRUE) vec_mean <- rowMeans(matr, na.rm = TRUE) matr_mean <- matrix(vec_mean, nrow = nrow(matr_lst[[1]])) vec_sd <- apply(matr, MARGIN = 1, FUN = sd, na.rm = TRUE) matr_sd <- matrix(vec_sd, nrow = nrow(matr_lst[[1]])) out <- list(matr_mean, matr_sd) return(out) } average_matr(confusion_matr) ``` If the objects in the list are of the `confusionMatrix` class, the function will only extract the values. If it is a list of matrices, it will calculate mean and standard deviation. Note that `rowMeans` is supposedly faster than `apply` with `FUN = mean`, however, as far as I know there is no `sd` function. While I used a similar syntax, `apply` with `mean` could be replaced, but for smaller data sets there should be no noticeable difference. Edit: Added both versions. 
**Additional: Include export as LaTeX table** ``` average_matr <- function(matr_list, latex_file = NA, metric = "sd", return = TRUE){ if(class(matr_list[[1]])[1] == "confusionMatrix"){ matr_lst <- lapply(matr_list, FUN = function(x){x$table}) }else{ matr_lst <- matr_list } vals <- lapply(matr_lst, as.numeric) matr <- do.call(cbind, vals) #vec_mean <- apply(matr, MARGIN = 1, FUN = mean, na.rm = TRUE) vec_mean <- rowMeans(matr, na.rm = TRUE) matr_mean <- matrix(vec_mean, nrow = nrow(matr_lst[[1]])) if(metric == "sd"){ vec_sd <- apply(matr, MARGIN = 1, FUN = sd, na.rm = TRUE) }else if(metric == "se"){ vec_sd <- apply(matr, MARGIN = 1, FUN = function(x){sd(x, na.rm = TRUE)/sqrt(length(x))}) }else{ vec_sd <- NA } if(length(vec_sd) > 1){ matr_sd <- matrix(vec_sd, nrow = nrow(matr_lst[[1]])) out <- list(matr_mean, matr_sd) }else{ out <- matr_mean } # generate latex table if(is.character(latex_file)){ if(dir.exists(dirname(latex_file))){ sink(latex_file) cat("\\hline\n") cat(paste(row.names(matr_lst[[1]]), collapse = " & "), "\\\\\n") cat("\\hline\n") if(length(vec_sd) > 1){ for(r in 1:nrow(matr_mean)){ cat(paste(formatC(matr_mean[r, ], digits = 1, format = "f"), formatC(matr_sd[r, ], digits = 1, format = "f"), sep = " \\(\\pm\\) ", collapse = " & "), "\\\\\n") } }else{ for(r in 1:nrow(matr_mean)){ cat(paste(formatC(matr_mean, digits = 1, format = "f"), collapse = " & "), "\\\\\n") } } cat("\\hline\n") sink() }else{ warning("Directory not found: ", latex_file) } } if(return){ return(out) } } ``` Upvotes: 0
2018/03/14
920
3,478
<issue_start>username_0: I am interested in finding the people whose birthday is today using ofbiz. This is the entity for the user: ``` ``` This is the **broken** code that searches for the users that have their birthday today: ``` SimpleDateFormat monthDayFormat = new SimpleDateFormat("MM-dd"); Calendar cal = Calendar.getInstance(); String today = monthDayFormat.format(cal.getTime()); List<GenericValue> people = dctx.getDelegator().findList("User", EntityCondition.makeCondition("dateOfBirth", EntityOperator.LIKE, "%" + today), null, null, null, false); ``` This obviously does not work because we are trying to compare a string and a date object. Another attempt at making it work was to create a view-entity, and either convert the date to two integers: day and month, OR convert the date to a string and use the above code. Obviously, I could not find any way of making it work.<issue_comment>username_1: The most efficient approach would be that of building a view entity. The following definition should work with several databases (e.g. [PostgreSQL](https://en.wikipedia.org/wiki/PostgreSQL)), i.e.
the ones that support the [`EXTRACT`](https://www.postgresql.org/docs/8.1/static/functions-datetime.html) function: ``` ``` With the above view, you could easily perform a query by constraining the two new fields: ``` List<GenericValue> users = EntityQuery.use(dctx.getDelegator()).from("UserView").where("dayOfBirth", day, "monthOfBirth", month).queryList(); ``` or, if you are coding a Groovy script, its equivalent in OFBiz DSL: ``` List<GenericValue> users = from("UserView").where("dayOfBirth", day, "monthOfBirth", month).queryList(); ``` Alternatively, following a completely different approach, you could extend your "User" entity by adding the "dayOfBirth" and "monthOfBirth" fields (of type "numeric"): ``` ``` Then you can define an eca rule to trigger the execution of a service to populate the two new fields every time the User record is created or updated with a non-null dateOfBirth field: ``` ``` The service definition for the populateDayAndMonthOfBirth service would look like: ``` ``` (please fill in the missing attributes like "engine", "location" and "invoke"). The service would simply select the record, extract the integers representing the day and month from its dateOfBirth field and would store them in the dayOfBirth and monthOfBirth fields.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Basically you can combine two condition like: ``` Calendar cal = Calendar.getInstance(); cal.setTime(date); cal.set(Calendar.HOUR_OF_DAY, 0); cal.set(Calendar.MINUTE, 0); cal.set(Calendar.SECOND, 0); cal.set(Calendar.MILLISECOND, 0); Date tmpDate = cal.getTime(); Timestamp from = new Timestamp(tmpDate.getTime()); EntityCondition condf = EntityCondition.makeCondition("dateOfBirth", EntityOperator.GREATER_THAN_EQUAL_TO, from); cal.set(Calendar.HOUR_OF_DAY, 23); cal.set(Calendar.MINUTE, 59); cal.set(Calendar.SECOND, 59); cal.set(Calendar.MILLISECOND, 999); tmpDate = cal.getTime(); Timestamp thru = new Timestamp(tmpDate.getTime()); EntityCondition condt = EntityCondition.makeCondition("dateOfBirth", EntityOperator.LESS_THAN_EQUAL_TO, thru); ``` And then find: ``` List people = dctx.getDelegator().findList("User", EntityCondition.makeCondition(condf, condt), null, null, null, false); ``` Upvotes: 0
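Whichever query strategy is used, the month/day comparison itself can be sketched independently of the entity engine with `java.time`; this is an illustrative sketch, not OFBiz API:

```java
import java.time.LocalDate;
import java.time.MonthDay;

public class BirthdayCheck {
    // True when dateOfBirth falls on the same month and day as today,
    // ignoring the year of birth.
    public static boolean isBirthday(LocalDate dateOfBirth, LocalDate today) {
        return MonthDay.from(dateOfBirth).equals(MonthDay.from(today));
    }
}
```

Comparing `MonthDay` values sidesteps both the string/date mismatch from the question and the year component that makes a plain timestamp range fail to match birthdays from earlier years.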
2018/03/14
2,431
7,929
<issue_start>username_0: I want to be able to assign an object property to a value given a key and value as inputs yet still be able to determine the type of the value. It's a bit hard to explain so this code should reveal the problem: ``` type JWT = { id: string, token: string, expire: Date }; const obj: JWT = { id: 'abc123', token: '<PASSWORD>', expire: new Date(2018, 2, 14) }; function print(key: keyof JWT) { switch (key) { case 'id': case 'token': console.log(obj[key].toUpperCase()); break; case 'expire': console.log(obj[key].toISOString()); break; } } function onChange(key: keyof JWT, value: any) { switch (key) { case 'id': case 'token': obj[key] = value + ' (assigned)'; break; case 'expire': obj[key] = value; break; } } print('id'); print('expire'); onChange('id', 'def456'); onChange('expire', new Date(2018, 3, 14)); print('id'); print('expire'); onChange('expire', 1337); // should fail here at compile time print('expire'); // actually fails here at run time ``` I tried changing `value: any` to `value: valueof JWT` but that didn't work. Ideally, `onChange('expire', 1337)` would fail because `1337` is not a Date type. How can I change `value: any` to be the value of the given key?<issue_comment>username_1: UPDATE: Looks like the question title attracts people looking for a union of all possible property value types, analogous to the way `keyof` gives you the union of all possible property key types. Let's help those people first. 
You can make a `ValueOf` analogous to `keyof`, by using [indexed access types](https://www.typescriptlang.org/docs/handbook/2/indexed-access-types.html) with `keyof T` as the key, like so: ``` type ValueOf<T> = T[keyof T]; ``` which gives you ``` type Foo = { a: string, b: number }; type ValueOfFoo = ValueOf<Foo>; // string | number ``` --- For the question as stated, you can use individual keys, narrower than `keyof T`, to extract just the value type you care about: ``` type sameAsString = Foo['a']; // look up a in Foo type sameAsNumber = Foo['b']; // look up b in Foo ``` In order to make sure that the key/value pair "match up" properly in a function, you should use [generics](https://www.typescriptlang.org/docs/handbook/2/generics.html) as well as indexed access types, like this: ``` declare function onChange<K extends keyof JWT>(key: K, value: JWT[K]): void; onChange('id', 'def456'); // okay onChange('expire', new Date(2018, 3, 14)); // okay onChange('expire', 1337); // error. 1337 not assignable to Date ``` The idea is that the `key` parameter allows the compiler to infer the generic `K` parameter. Then it requires that `value` matches `JWT[K]`, the indexed access type you need. Upvotes: 10 [selected_answer]<issue_comment>username_2: If anyone still looks for an implementation of `valueof` for any purpose, this is one I came up with: ``` type valueof<T> = T[keyof T] ``` Usage: ``` type actions = { a: { type: 'Reset' data: number } b: { type: 'Apply' data: string } } type actionValues = valueof<actions> ``` Works as expected :) Returns a union of all possible types Upvotes: 6 <issue_comment>username_3: Thanks to the existing answers, which solve the problem perfectly. Just wanted to add that a lib has included this utility type, if you prefer to import this common one.
<https://github.com/piotrwitek/utility-types#valuestypet> ``` import { ValuesType } from 'utility-types'; type Props = { name: string; age: number; visible: boolean }; // Expect: string | number | boolean type PropsValues = ValuesType<Props>; ``` Upvotes: 3 <issue_comment>username_4: There is another way to extract the union type of the object: ```ts const myObj = { a: 1, b: 'some_string' } as const; type Values = typeof myObj[keyof typeof myObj]; ``` Result union type for `Values` is `1 | "some_string"` It's possible thanks to the [const assertions](https://www.typescriptlang.org/docs/handbook/release-notes/typescript-3-4.html#const-assertions) (`as const` part) introduced in TS 3.4. Upvotes: 7 <issue_comment>username_5: Try this: ``` type ValueOf<T> = T extends any[] ? T[number] : T[keyof T] ``` It works on an array or a plain object. ``` // type TEST1 = boolean | 42 | "heyhey" type TEST1 = ValueOf<{ foo: 42, sort: 'heyhey', bool: boolean }> // type TEST2 = 1 | 4 | 9 | "zzz..." type TEST2 = ValueOf<[1, 4, 9, 'zzz...']> ``` Upvotes: 4 <issue_comment>username_6: One-liner: ```js type ValueTypesOfPropFromMyCoolType = MyCoolType[keyof MyCoolType]; ``` Example on a generic method: ``` declare function doStuff<V extends ValueTypesOfPropFromMyCoolType>(propertyName: keyof MyCoolType, value: V): void; ``` Upvotes: -1 <issue_comment>username_7: With the function below you can limit the value to be the one for that particular key.
``` function setAttribute<T, U extends keyof T>(obj: T, key: U, value: T[U]) { obj[key] = value; } ``` **Example** ``` interface Pet { name: string; age: number; } const dog: Pet = { name: 'firulais', age: 8 }; setAttribute(dog, 'name', 'peluche') <-- Works setAttribute(dog, 'name', 100) <-- Error (number is not string) setAttribute(dog, 'age', 2) <-- Works setAttribute(dog, 'lastname', '') <-- Error (lastname is not a property) ``` Upvotes: 5 <issue_comment>username_8: You could use the help of generics to define `T` as a key of JWT and the value to be of type `JWT[T]` ``` function onChange<T extends keyof JWT>(key: T, value: JWT[T]); ``` the only problem here is in the implementation: the following `obj[key] = value + ' (assigned)';` will not work because it will try to assign `string` to `string & Date`. The fix here is to change the index from `key` to `token` so the compiler knows that the target variable type is `string`. Another way to fix the issue is to use a Type Guard ``` // IF we have such a guard defined function isId(input: string): input is 'id' { if(input === 'id') { return true; } return false; } // THEN we could do an assignment in "if" block // instead of switch and compiler knows obj[key] // expects string value if(isId(key)) { obj[key] = value + ' (assigned)'; } ``` Upvotes: 2 <issue_comment>username_9: You can make a generic for yourself to get the types of values, **BUT** please consider that the object should be declared `as const`, like: ```js export const APP_ENTITIES = { person: 'PERSON', page: 'PAGE', } as const; <--- this `as const` is what I meant ``` Then the below generic will work properly: ```js export type ValueOf<T> = T[keyof T]; ``` Now use it like below: ```js const entity: ValueOf<typeof APP_ENTITIES> = 'P...'; // ...
means typing // it refers 'PAGE' and 'PERSON' to you ``` Upvotes: 4 <issue_comment>username_10: with [type-fest](https://github.com/sindresorhus/type-fest) lib, you can do that with `ValueOf` like that: ```js import type { ValueOf } from 'type-fest'; export const PATH_NAMES = { home: '/', users: '/users', login: '/login', signup: '/signup', }; interface IMenu { id: ValueOf; label: string; onClick: () => void; icon: ReactNode; } const menus: IMenu[] = [ { id: PATH\_NAMES.home, label: t('common:home'), onClick: () => dispatch(showHome()), icon: , }, { id: PATH\_NAMES.users, label: t('user:users'), onClick: () => dispatch(showUsers()), icon: , }, ]; ``` Upvotes: 1 <issue_comment>username_11: *I realize this is slightly off topic, That said every time I've looked for a solution to this. I get sent to this post. So for those of you looking for String Literal Type generator, here you go.* This will create a string Literal list from an object type. ``` export type StringLiteralList = T[keyof Pick]; type DogNameType = { name: "Bob", breed: "Boxer" } | { name: "Pepper", breed: "Spaniel" } | { name: "Polly", breed: "Spaniel" }; export type DogNames = StringLiteralList; // type DogNames = "Bob" | "Pepper" | "Polly"; ``` Upvotes: 0
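Several of the answers above combine `as const` with a `T[keyof T]` lookup. Here is a small, self-contained sketch (all names such as `STATUS` and `isStatus` are illustrative, not from any library) showing how the derived value union can also back a runtime type guard:

```typescript
// ValueOf<T> is the helper pattern from the answers above; STATUS and
// isStatus are hypothetical names used only for this demo.
const STATUS = { open: "OPEN", closed: "CLOSED" } as const;

type ValueOf<T> = T[keyof T];
type Status = ValueOf<typeof STATUS>; // "OPEN" | "CLOSED"

function isStatus(s: string): s is Status {
  // Object.values(STATUS) is the runtime counterpart of the type union
  return (Object.values(STATUS) as string[]).includes(s);
}

console.log(isStatus("OPEN")); // true
console.log(isStatus("open")); // false
```

Assigning `const s: Status = "open"` would be rejected at compile time (keys are not values), while `isStatus` performs the same membership check at runtime.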
2018/03/14
3,317
9,094
<issue_start>username_0: I know how to squash commits together and drop messages: ``` pick A do thing pick B debug fixup C more debugging pick D do another thing ``` This will result in 3 commits, with the 2 debugging commits squashed together into one, with the message "debug". However, I wanted to know if there is a simple way to `fixup` commits, but use the LATEST message, rather than the earliest. For example: ``` p A do thing p B debug 1 f C debug 2 f D fix bug. #also remove debug statements ``` By running the above, I'll end up with a single commit that fixes the bug, and there will be no remnants of the debug statements. The issue is that that commit will have the message "debug 1". I'd like it to have the message "fix bug". I know that I can `reword` the commit, but that opens the editor and forces me to retype the commit message. I've tried rewording and editing the message inside the rebase file, but when the `reword` editor opens, it still has the old message. Is it possible to either `fixup` commits but use the latest message, or to modify the commit messages inside the rebase editor, without having it open the editor for each individual commit I want to `reword`?<issue_comment>username_1: Change your todo list at the beginning of the rebase to reverse the order of the commits you want to fix up: ``` pick A do thing pick C more debugging fixup B debug pick D do another thing ``` Upvotes: 1 <issue_comment>username_2: I don't think you really have a choice. However, instead of using fixup & reword, you can use `squash`: ``` p A do thing p B debug 1 s C debug 2 s D fix bug. #also remove debug statements ``` You'll get the editor open with the messages of the three squashed-together commits, which you can edit. Without rebase ============== You can do it otherwise by using `reset`. 
``` # To clean the workspace before by saving the state git stash git reset --soft HEAD~3 # To restore the saved state before the reset operation git commit -am 'fix bug' git stash pop ``` With this solution the editor won't open Upvotes: 2 <issue_comment>username_3: For what it's worth, the best methodology I found for this was to simply edit the rebase file after running "git rebase --interactive --autosquash" and replace "fixup" with "squash" (or simply "s"). This will concatenate the two commit messages and allow you to edit the final message. Upvotes: 0 <issue_comment>username_4: Check if the latest Git 2.32 (Q2 2021) could help, considering "`rebase -i`" is getting enhanced with new options. See [commit 2c0aa2c](https://github.com/git/git/commit/2c0aa2ce2efcac181801957d8105c7007db5faf7), [commit bae5b4a](https://github.com/git/git/commit/bae5b4aea523388faedd3b850c862fe45198c6b2), [commit 1d410cd](https://github.com/git/git/commit/1d410cd8c25978c1591a7d35c9077745146c6129), [commit 9e3cebd](https://github.com/git/git/commit/9e3cebd97cbd47909e683e617d5ffa2781f0adaa), [commit 71ee81c](https://github.com/git/git/commit/71ee81cd9eea308aa72d41fed3ef1cd40b4cb89a), [commit ae70e34](https://github.com/git/git/commit/ae70e34f234e675878a34b0bd76c43ffe79b95af) (29 Jan 2021) by [<NAME> (`charvi-077`)](https://github.com/charvi-077). See [commit 7cdb968](https://github.com/git/git/commit/7cdb9682545d7865e832d092f900a8037898e907) (29 Jan 2021), and [commit 498bb5b](https://github.com/git/git/commit/498bb5b82e78ddf880ab8516d4e6ac4fc5f9b215), [commit eab0df0](https://github.com/git/git/commit/eab0df0e5b96ea8ca60f030dd8c5f6e5926988de) (19 Jan 2021) by [<NAME> (`phillipwood`)](https://github.com/phillipwood). 
(Merged by [<NAME> -- `gitster` --](https://github.com/gitster) in [commit ce4296c](https://github.com/git/git/commit/ce4296cf2b6f66d8717e8c3237c9b4be42fb2a1b), 26 Mar 2021) > > [`rebase -i`](https://github.com/git/git/commit/9e3cebd97cbd47909e683e617d5ffa2781f0adaa): add `fixup [-C | -c]` command > ------------------------------------------------------------------------------------------------------------------------ > > > Original-patch-by: <NAME> > > Mentored-by: <NAME> > > Mentored-by: <NAME> > > Signed-off-by: <NAME> > > > > > Add options to `fixup` command to fixup both the commit contents and message. > > > * `fixup -C` command is used to replace the original commit message and > * `fixup -c`, additionally allows to edit the commit message. > > > Combined with: > > [`rebase -i`](https://github.com/git/git/commit/bae5b4aea523388faedd3b850c862fe45198c6b2): teach `--autosquash` to work with `amend!` > ------------------------------------------------------------------------------------------------------------------------------------- > > > Original-patch-by: <NAME> > > Mentored-by: <NAME> > > Mentored-by: <NAME> > > Signed-off-by: <NAME> > > > > > If the commit subject starts with "`amend!`" then rearrange it like a "`fixup!`" commit and replace `pick` command with **`fixup -C` command, which is used to fixup up the content if any and replaces the original commit message with `amend!` commit's message.** > > > The [`git rebase -i` documentation](https://github.com/git/git/blob/fa153c1cd7a84accc83e97723af85cf0ab3869e7/Documentation/git-rebase.txt#L886-L899) now reads: > > If you want to fold two or more commits into one, replace the command > "`pick`" for the second and subsequent commits with "`squash`" or "`fixup`". > > > If the commits had different authors, the folded commit will be > attributed to the author of the first commit. 
> > > The suggested commit message for the folded commit is the concatenation of the first > commit's message with those identified by "`squash`" commands, omitting the > messages of commits identified by "`fixup`" commands, unless "`fixup -c`" > is used. > > In that case the suggested commit message is only the message of the "`fixup -c`" commit, and an editor is opened allowing you to edit the message. > > The contents (patch) of the "`fixup -c`" commit are still > incorporated into the folded commit. > > If there is more than one "`fixup -c`" commit, the message from the final one is used. > > > You can also use "`fixup -C`" to get the same behavior as "`fixup -c`" except without opening an editor. > > > [`git rebase -i`](https://github.com/git/git/blob/f07871d302c32777de25b3fde3c621be3b2e32c3/rebase-interactive.c#L47-L50) includes (from [commit f07871d](https://github.com/git/git/commit/f07871d302c32777de25b3fde3c621be3b2e32c3)): > > `f`, `fixup [-C | -c]` > ---------------------- > > > Like "`squash`" but keep only the previous commit's log message, unless `-C` is used, in which case keep only this commit's message; > > `-c` is same as `-C` but opens the editor\n" > > > --- "[`git rebase -i`](https://github.com/git/git/blob/dc154c39f7303baaf6cad8982a814b28a3a2027a/Documentation/git-rebase.txt#L501)"([man](https://git-scm.com/docs/git-rebase#Documentation/git-rebase.txt--i)) can mistakenly attempt to apply a `fixup` to a commit itself, which has been corrected with Git 2.39 (Q4 2022). See [commit 3e367a5](https://github.com/git/git/commit/3e367a5f2f53f19130b90a97c3e2668eca3461ae) (24 Sep 2022) by [<NAME> (`krobelus`)](https://github.com/krobelus). 
(Merged by [<NAME> -- `gitster` --](https://github.com/gitster) in [commit dc154c3](https://github.com/git/git/commit/dc154c39f7303baaf6cad8982a814b28a3a2027a), 10 Oct 2022) > > [`sequencer`](https://github.com/git/git/commit/3e367a5f2f53f19130b90a97c3e2668eca3461ae): avoid dropping fixup commit that targets self via commit-ish > ------------------------------------------------------------------------------------------------------------------------------------------------------- > > > Reported-by: <NAME> > > Signed-off-by: <NAME> > > > > > Commit [68d5d03](https://github.com/git/git/commit/68d5d03bc49a073be3b0e14b22d30d70e7ae686d) ("`rebase`: teach `--autosquash` to match on sha1 in addition to message", 2010-11-04, Git v1.7.4-rc0 -- [merge](https://github.com/git/git/commit/cbcf0a6981687b31635d806f45e77188c54b7cfc)) taught autosquash to recognize subjects like "fixup! 7a235b" where 7a235b is an OID-prefix. > > It actually did more than advertised: 7a235b can be an arbitrary commit-ish (as long as it's not trailed by spaces). > > > Accidental(?) use of this secret feature revealed a bug where we would silently drop a fixup commit. > > The bug can also be triggered when using an OID-prefix but that's unlikely in practice. > > > Let the commit with subject "`fixup! main`" be the tip of the "`main`" branch. > > When computing the fixup target for this commit, we find the commit itself. > > This is wrong because, by definition, a fixup target must be an earlier commit in the todo list. > > We wrongly find the current commit because we added it to the todo list prematurely. > > Avoid these fixup-cycles by only adding the current commit to the todo list after we have finished looking for the fixup target. > > > Upvotes: 1
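Tying the changelog quotes above back to the original question: with Git 2.32 or newer, the todo list itself can ask for the last commit's message to win. A sketch of the todo file only (assuming the `fixup -C` command described above is available in your Git version):

```
pick A do thing
pick B debug 1
fixup C debug 2
fixup -C D fix bug. #also remove debug statements
```

Commits B, C, and D fold into one, and `fixup -C` makes the folded commit keep D's message ("fix bug...") without opening an editor.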
2018/03/14
924
3,205
<issue_start>username_0: *Route Config* ``` /** * Author: Rahul * Date: 25 Feb 2018 * * Routes * @flow */ import React from 'react'; import { View, Text } from 'react-native'; import { StackNavigator } from 'react-navigation'; import LoginScreen from 'src/containers/login'; import HomeScreen from 'src/containers/home'; import FeedsScreen from 'src/containers/feeds'; import { AppLogo } from 'src/components'; import { background } from 'src/styles/'; import { SIGNED_IN, SIGNED_OUT, HOME, LOGIN, FEEDS } from './constants'; const navigationOptions = { navigationOptions: { headerLeft: ( Hamburger ), headerRight: ( ), headerStyle: { paddingHorizontal: 16, backgroundColor: background.color2, }, gesturesEnabled: false, }, }; const SignedOutRouteConfig = { [LOGIN]: { screen: LoginScreen }, }; const SignedInRouteConfig = { [HOME]: { screen: HomeScreen }, [FEEDS]: { screen: FeedsScreen }, }; const SignedOut = StackNavigator(SignedOutRouteConfig, navigationOptions); const SignedIn = StackNavigator(SignedInRouteConfig, navigationOptions); const createRootNavigator = (signedIn: boolean = false) => StackNavigator( { [SIGNED_IN]: { screen: SignedIn, navigationOptions: { gesturesEnabled: false, header: null, }, }, [SIGNED_OUT]: { screen: SignedOut, navigationOptions: { gesturesEnabled: false, header: null, }, }, }, { initialRouteName: signedIn ? SIGNED_IN : SIGNED_OUT, } ); export default createRootNavigator; ``` Adding screenshots for clarity: [![As you can observe, the header content is not center aligned](https://i.stack.imgur.com/4y2wE.png)](https://i.stack.imgur.com/4y2wE.png) [![Height of the header is 64](https://i.stack.imgur.com/cRl2V.png)](https://i.stack.imgur.com/cRl2V.png) [![Absolutely positioned header content is at the bottom](https://i.stack.imgur.com/viAE4.png)](https://i.stack.imgur.com/viAE4.png) How can I center the header content and get rid of the unnecessary space from the top? 
**P.S.** I have already tried setting the height in `headerStyle`.<issue_comment>username_1: Try placing this code in your App.js file: ``` import { Platform } from "react-native"; import { SafeAreaView } from "react-navigation"; if (Platform.OS === "android") { // removes extra space at top of header on android SafeAreaView.setStatusBarHeight(0); } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: In my case `headerMode: 'none'` solved the issue. It may be helpful: ``` const Routes = createStackNavigator( { Login: { screen: Login, }, // Profile: { screen: ProfileScreen }, }, { // initialRouteName: 'Login', headerMode: 'none' } ); ``` Upvotes: 2 <issue_comment>username_3: You can set **`headerForceInset: { top: 'never', bottom: 'never' }`** in the **navigationOptions** and that will remove the paddingTop. For more details: <https://github.com/react-navigation/react-navigation/issues/3184> Upvotes: 3 <issue_comment>username_4: If you use a translucent `StatusBar`, you need to set the `headerStatusBarHeight: 0` option in the `screenOptions` of your `Stack.Navigator`: ``` <Stack.Navigator screenOptions={{ headerStatusBarHeight: 0 }}> ``` Upvotes: 2
2018/03/14
1,233
4,048
<issue_start>username_0: I am trying to create an array of unique values based on the properties of a number of identical Objects. These properties will have identical values, but some of them will be `null`, like so: ``` Object obj1 = new Object("Value 1", "Value 2", null); Object obj2 = new Object("Value 1", null, "Value 3"); Object obj3 = new Object(null, "Value2", "Value3") ``` Object Class ``` public class Object { private String value1; private String value2; private String value3; // Constructor public Object(String value1, String value2, String value3){ // this.value1... } // Getters & Setters } ``` (These Objects can have `n` number of properties, but assume just 3 for this question) How can I take the above 3 objects (or any number of objects) and quickly combine (or sample) each of their properties to create the below array? ``` ["Value 1", "Value 2", "Value 3"] ``` I'm thinking a `Set` could be useful here, but I'm not quite sure how to approach it<issue_comment>username_1: I assume your Object looks like this: ``` class MyObject { private String[] inputs; public MyObject(String... inputs) { this.inputs = inputs; } // Getters & Setters } ``` If you are using Java 8 you can use: ``` MyObject obj1 = new MyObject("Value 1", "Value 2", null); MyObject obj2 = new MyObject("Value 1", null, "Value 3"); MyObject obj3 = new MyObject(null, "Value 2", "Value 3"); Set<String> result = Arrays.asList(obj1, obj2, obj3) .stream() .flatMap(item -> Stream.of(item.getInputs())) .filter(item -> item != null) .collect(Collectors.toSet()); System.out.println(result); ``` Outputs ``` [Value 3, Value 1, Value 2] ``` Upvotes: 1 <issue_comment>username_2: If your intention is just to pick all distinct values in each field of those objects, then you can work with lists of each field's values: ``` List<Object> objects = Arrays.asList(new Object("Value 1", "Value 2", null), new Object("Value 1", null, "Value 3"), new Object(null, "Value2", "Value3")); //This should give you the first list... 
List<String> values = objects.stream().map(o -> Arrays.asList(o.getVal1(), o.getVal2(), o.getVal3())) .distinct() .findFirst().get(); //If you need to prefer non-null values, then you can use //a sorted stream: List<String> values = objects.stream() .map(o -> Arrays.asList(o.getVal1(), o.getVal2(), o.getVal3())) .distinct().sorted(Comparator.comparingInt((List<String> list) -> (list.get(0) != null ? -1 : 1) + (list.get(1) != null ? -1 : 1) + (list.get(2) != null ? -1 : 1))) .findFirst() .get(); ``` The comparator implementation is just to make non-null values come at the beginning of the stream so that `findFirst` can hit them first. Of course you will need to check if the optional has a value. Upvotes: 0 <issue_comment>username_3: Try the solution below: ``` public class Main { public static void main(String[] args) { MyObject obj1 = new MyObject("Value 1", "Value 2", null); MyObject obj2 = new MyObject("Value 1", null, "Value 3"); MyObject obj3 = new MyObject(null, "Value 2", "Value 3"); System.out.println(myObjectPropToStringArray(obj1, obj2, obj3)); } public static Set<String> myObjectPropToStringArray(MyObject... myObject) { Set<String> result = new HashSet<>(); Arrays.stream(myObject).forEach(e -> { result.add(e.getValue1()); result.add(e.getValue2()); result.add(e.getValue3()); }); result.remove(null); // drop the null placeholders return result; } } class MyObject { private String value1; private String value2; private String value3; public MyObject(String value1, String value2, String value3) { super(); this.value1 = value1; this.value2 = value2; this.value3 = value3; } public String getValue1() { return value1; } public void setValue1(String value1) { this.value1 = value1; } public String getValue2() { return value2; } public void setValue2(String value2) { this.value2 = value2; } public String getValue3() { return value3; } public void setValue3(String value3) { this.value3 = value3; } } ``` Upvotes: 1
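For reference, here is a runnable consolidation of the `flatMap` approach from the answers above, using a `TreeSet` so the output order is deterministic. The class and method names are illustrative, and the value holder is renamed from the question's `Object` to avoid shadowing `java.lang.Object`:

```java
import java.util.*;
import java.util.stream.*;

public class Main {
    // Stand-in for the question's three-property value holder.
    static class MyObject {
        private final String value1, value2, value3;
        MyObject(String v1, String v2, String v3) { value1 = v1; value2 = v2; value3 = v3; }
        List<String> values() { return Arrays.asList(value1, value2, value3); }
    }

    public static void main(String[] args) {
        List<MyObject> objects = Arrays.asList(
                new MyObject("Value 1", "Value 2", null),
                new MyObject("Value 1", null, "Value 3"),
                new MyObject(null, "Value 2", "Value 3"));

        Set<String> distinct = objects.stream()
                .flatMap(o -> o.values().stream())
                .filter(Objects::nonNull)                        // drop the nulls
                .collect(Collectors.toCollection(TreeSet::new)); // sorted set

        System.out.println(distinct); // [Value 1, Value 2, Value 3]
    }
}
```

The same pipeline works for any number of objects and properties, as long as each object can expose its property values as a list.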
2018/03/14
1,382
4,305
<issue_start>username_0: I am using Draft-04 of JSON Schema. Is it possible to set dependencies based on the existence of a sub-property, and/or depend on a sub-property? Or am I forced to use `allOf` to manage these kinds of dependencies? I have the following (you can play with it at <https://repl.it/@neverendingqs/JsonSchemaNestedDependencies>): ``` 'use strict'; const Ajv = require('ajv'); const assert = require('chai').assert; // Using ajv@5.5.1 const draft4 = require('ajv/lib/refs/json-schema-draft-04.json'); const schema = { "$schema": "http://json-schema.org/draft-04/schema#", "type": "object", "properties": { "foo1": { "type": [ "object" ], "properties": { "bar1": { "type": "string" } } }, "foo2": { "type": [ "object" ], "properties": { "bar2": { "type": "string" } } } }, "dependencies": { "foo1": ["foo2"], // Is this possible? "foo1/bar1": ["foo2/bar2"] } }; const schemaName = 'my-schema'; const ajv = new Ajv(); ajv.addMetaSchema(draft4); ajv.addSchema(schema, schemaName); assert.isTrue( ajv.validate(schemaName, { "foo1": { "bar1": "a" }, "foo2": { "bar2": "c" } }), ajv.errorsText(ajv.errors, { dataVar: 'event' }) ); assert.isFalse(ajv.validate(schemaName, { "foo1": { "bar1": "a" } })); // Looking to cause this to pass assert.isFalse(ajv.validate(schemaName, { "foo1": { "bar1": "a" }, "foo2": {} })); ``` I am looking for Draft-04 answers, but am also interested in answers using later specifications. EDIT: **Draft-04** refers to the specifications under <http://json-schema.org/specification-links.html#draft-4>. Specifically, I am using `dependencies` which is defined under the **Validation** specification (<https://datatracker.ietf.org/doc/html/draft-fge-json-schema-validation-00>)<issue_comment>username_1: It's pretty tricky to achieve this in draft 4! You can use `required` in draft 4 to make a property required in an object... 
``` { "$schema": "http://json-schema.org/draft-04/schema#", "type": "object", "properties": { "foo1": { "type": [ "object" ], "properties": { "bar1": { "type": "string" } } }, "foo2": { "type": [ "object" ], "properties": { "bar2": { "type": "string" } }, "required": [ "bar2" ] } } } ``` I can't make this change re-run in repl.it, but I checked it against the schema you want to fail using <https://www.jsonschemavalidator.net> For draft-7 (which is latest at the time of writing), you can use `if`, `then`, `else`, which might be more intuative, but I think you'd still need to use `required` to achive this, as you want the subschema in `if` to pass or fail. [Keywords for Applying Subschemas Conditionally](http://json-schema.org/latest/json-schema-validation.html#rfc.section.6.6). Upvotes: 0 <issue_comment>username_2: It would be nice if `dependencies` supported a JSON Pointer, but it doesn't. You have to solve this using implication. I've broken it down using `definitions` to help make it more clear what is happening. First I define schemas for the cases we are checking for: `/foo1/bar1` is present and `/foo2/bar2` is present. With those two definitions, I use `anyOf` to say either `/foo1/bar1` is not present, or `/foo2/bar2` is required. In other words, `/foo1/bar1` implies `/foo2/bar2`. 
``` { "$schema": "http://json-schema.org/draft-04/schema#", "type": "object", "properties": { "foo1": { "type": [ "object" ], "properties": { "bar1": { "type": "string" } } }, "foo2": { "type": [ "object" ], "properties": { "bar2": { "type": "string" } } } }, "allOf": [{ "$ref": "#/definitions/foo1-bar1-implies-foo2-bar2" }], "dependencies": { "foo1": ["foo2"] }, "definitions": { "foo1-bar1-implies-foo2-bar2": { "anyOf": [ { "not": { "$ref": "#/definitions/foo1-bar1" } }, { "$ref": "#/definitions/foo2-bar2" } ] }, "foo1-bar1": { "properties": { "foo1": { "required": ["bar1"] } }, "required": ["foo1"] }, "foo2-bar2": { "properties": { "foo2": { "required": ["bar2"] } }, "required": ["foo2"] } } } ``` Upvotes: 1
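The `anyOf`/`not` construction above is just material implication: "A implies B" is the same as "not A, or B". A tiny hand-rolled check, with no ajv involved and purely for illustration (the `has` helper stands in for the `foo1-bar1` and `foo2-bar2` definitions), reproduces the behaviour the question's test cases expect:

```javascript
// has(data, "foo1", "bar1") mimics the "foo1-bar1" definition above:
// the outer property exists, is an object, and contains the inner key.
function has(data, outer, inner) {
  return data != null &&
    typeof data[outer] === "object" && data[outer] !== null &&
    inner in data[outer];
}

// "/foo1/bar1 implies /foo2/bar2"  ==  !A || B
function foo1Bar1ImpliesFoo2Bar2(data) {
  return !has(data, "foo1", "bar1") || has(data, "foo2", "bar2");
}

console.log(foo1Bar1ImpliesFoo2Bar2({ foo1: { bar1: "a" }, foo2: { bar2: "c" } })); // true
console.log(foo1Bar1ImpliesFoo2Bar2({ foo1: { bar1: "a" }, foo2: {} }));            // false
console.log(foo1Bar1ImpliesFoo2Bar2({ foo2: { bar2: "c" } }));                      // true
```

The schema's `anyOf: [ { not: A }, B ]` evaluates exactly this truth table, which is why the third test case in the question starts failing once the implication is added.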
2018/03/14
429
1,518
<issue_start>username_0: I'm trying to check if the sha256 hash in my sqlite database matches the sha256 hash of the password the user sent to my NodeJS server. The Auth() function should return either true or false. How do I access the variable "refpw" in the if statement? ```js function Auth(username, password){ db.get("SELECT passwordsha256 FROM users WHERE username=?", username, (err, row) => { var refpw = row.passwordsha256 }) if(sha256(password) === refpw){return true}else{return false} } ```<issue_comment>username_1: What you are trying to do is change an asynchronous method to a synchronous one. I would recommend passing in a callback to your Auth method, and then wherever you call the Auth method you pass in a callback function. ``` function Auth(username, password, callback){ db.get("SELECT passwordsha256 FROM users WHERE username=?", username, (err, row) => { var refpw = row.passwordsha256; callback(sha256(password) === refpw); }) } ``` And then call Auth like this ``` Auth(username, password, (success) => { if (success) { //do something here } }); ``` Upvotes: 1 <issue_comment>username_2: Use `async`/`await` with a promise: ``` async function Auth(username, password){ const refpw = await new Promise(function (resolve, reject) { db.get("SELECT passwordsha256 FROM users WHERE username=?", username, (err, row) => { if (err) return reject(err); // surface db errors instead of ignoring them resolve(row.passwordsha256); }) }) return sha256(password) === refpw; } ``` Upvotes: 0
2018/03/14
704
2,090
<issue_start>username_0: I have a blank array and I need to add the numbers from 1 to 20 to it. After that I need to sum all of those numbers. I am stuck here: ``` for(i=1;i<=20;i++){ push() } ``` What do you think, please answer. Thank you<issue_comment>username_1: Let's see... First you need to define an array like so: ``` var array = []; ``` And you also need to make a variable for the sum: ``` var sum = 0; ``` Now you use that for loop to add your numbers to the array: ``` for(var i = 1; i <= 20; i++) { array.push(i); sum += i; } ``` Hopefully this is what you were looking for. Upvotes: 2 [selected_answer]<issue_comment>username_2: If I got your question correctly, this should help: ``` var i = 1, sum = 0, numbers = []; // Filling the array : for(i=1; i<=20; i++){ numbers.push(i); } // Calculating the sum : numbers.forEach(function(number) { sum += number; }); ``` And to see the result: ``` alert(sum); ``` Upvotes: 0 <issue_comment>username_3: yeah you can achieve it simply just by pushing to the array variable like: ``` function helloWorld(){ var arr = []; var sum = 0; for(var i=0;i<20;i++){ arr.push(i+1); sum = sum+(i+1); } console.log(arr, sum); } ``` in the console you will get the result Upvotes: 0 <issue_comment>username_4: another way to do it is with the use of `reduce` ``` var arr = []; for(i=1;i<=20;i++){ //a for loop to create the array arr.push(i) } console.log("the value of the array is " + arr) var sum = arr.reduce(add, 0); function add(a, b) { // a function to calculate the total return a + b; } console.log("the total is " + sum) ``` Upvotes: 0 <issue_comment>username_5: This is an alternative using the function `Array.from` to initialize the array and the function `reduce` to sum the whole set of numbers. ```js var numbers = Array.from({ length: 20 }, (_, i) => i + 1); // [1, 2, 3, ..., 20] var sum = numbers.reduce((a, n) => a + n, 0); console.log(sum); ``` Upvotes: 0
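As a quick cross-check on the answers above, the loop/`reduce` result can be compared against the closed form n*(n+1)/2 for the sum of the first n integers (a small sketch, nothing framework-specific):

```javascript
// Build [1..20] and sum it, then verify against the closed form.
const numbers = Array.from({ length: 20 }, (_, i) => i + 1); // [1, 2, ..., 20]
const sum = numbers.reduce((total, n) => total + n, 0);

console.log(numbers.length);          // 20
console.log(numbers[0], numbers[19]); // 1 20
console.log(sum);                     // 210
console.log(sum === (20 * 21) / 2);   // true
```

If any of the loop-based versions produces something other than 210, the bounds of the loop are worth a second look (a common slip is starting at 0 instead of 1).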
2018/03/14
949
3,038
<issue_start>username_0: I apologize in advance if the answer to this question is somewhere on this site. I am new to this, and if the answer is on Stack Overflow, the main reason I couldn't find it is that I don't know the correct terminology. I have a very simple project in class, which is to make an animal in processing.js. My problem is that the transform "lines" stack: if you have two transform lines, they multiply. ```js noFill(); scale(2, 2); rect(100, 100, 100, 100); scale(3, 3); rect(100, 100, 100, 100); ``` The first rectangle will be double the size and the second rectangle will be 6x the size. What's intended (for me) is to have the second rectangle 3x the size, not 6x. I know I could just scale by a lower number, but this gets pretty tricky when I have 5 different objects scaled, and it also stacks for rotation, referencing the last rotational point. If anyone knows a line that would separate it as a different object, or remove the stacking as if it were at the top (lowest line numbers) of the code, it'd be much appreciated. Thanks for taking the time to read this. If you would like the code for my project, just ask!
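In Processing sketches, the usual fix is `pushMatrix()`/`popMatrix()`, which save and restore the current transform so consecutive `scale()` (and `rotate()`) calls do not multiply together. Those functions only exist inside a Processing environment, so here is a plain-JavaScript emulation of the matrix stack; the names mirror Processing's API, but the implementation is purely illustrative:

```javascript
// pushMatrix()/popMatrix() are real Processing calls; everything else
// here is a made-up, plain-JS stand-in so the idea can run anywhere.
let current = 1;  // current scale factor
const stack = [];
const scale = f => { current *= f; };               // scale() stacks, like Processing
const pushMatrix = () => stack.push(current);       // save the transform
const popMatrix = () => { current = stack.pop(); }; // restore it

// Without push/pop the factors multiply:
scale(2); const first = current;   // 2
scale(3); const stacked = current; // 6, not 3

// With push/pop each shape is isolated:
current = 1;
pushMatrix(); scale(2); const a = current; popMatrix();
pushMatrix(); scale(3); const b = current; popMatrix();
console.log(first, stacked, a, b); // 2 6 2 3
```

In the actual sketch, wrapping each animal part in `pushMatrix(); ... popMatrix();` gives every part its own independent scale and rotation; `resetMatrix()` is the blunter alternative that clears the transform entirely.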
2018/03/14
289
971
<issue_start>username_0: I'm wondering why I can't return a result with the getData() function: it is empty in the browser (PHP 7.1), and I'm trying to understand why. I can print/echo it just fine o.O ``` class testObject { public $data; function __construct($data) { $this->data = $data; } function showInfo() { print_r($this->data); } function getData() { $str = $this->data; return $str; } } $data = 'test'; $a = (new testObject($data))->getData(); ```<issue_comment>username_1: You must write `echo` or `print` before `$a`: your method returns the data, but you still have to tell PHP to output it, because returning a value does not display it. Upvotes: 2 [selected_answer]<issue_comment>username_2: This works fine for me just running it in [PHP Sandbox](http://sandbox.onlinephpfunctions.com/code/a4a09581a3432e16027623c4df97186f91cf8ffe). If you want to show it in the browser, `echo` or `print` the `$a` afterwards. Upvotes: 0