Columns: date (string, length 10) | nb_tokens (int64, 60-629k) | text_size (int64, 234-1.02M) | content (string, length 234-1.02M)
date: 2018/03/14 | nb_tokens: 555 | text_size: 2,217
<issue_start>username_0: is there a lifecycle hook so-called "ngAfterAllChildrenInit" because ngAfterViewInit is called before ngOninit of the children, I am trying to avoid hacking with setTimeOut or emitting events from all the children and collecting them and then do what I want ( that depend on the children initialization) it seems so common why is it not part of angular?<issue_comment>username_1: You could use decorators for that. I mean, Angular decorators, not custom ones. In your parent : ``` activated: any = {}; ``` In your children : ```js constructor(@Host() parent: ParentComponent) {} // Hook of your choice ngOnInit() { this.parent.activated[this.constructor.name] = true; } ``` This will create an `activated` object like this ```js activated = { FirstChildComponent: true // Second not initialized ? Then not here and activated[SecondChildComponent] = undefined }; ``` Upvotes: 1 <issue_comment>username_2: Angular Lifecycle hooks are very powerful, in order to achieve what you want, you can simply use the OnInit lifecycle hook. Angular is reading your template file and running your component in a vertical way (from top to down). So, you just have to do whatever you want in your last component's ngOnInit method. If your components are used in different parts of your application, you can check that the parent component is the one you want by using the @Host decorator (see the answer of @trichetriche). I created a stack blitz where there's 3 components used in the app component (parent) in this order: ``` ``` So the AppComponent OnInit hook will fire first, and after that the HelloComponent OnInit and finally the ChildComponent. See it in action on the console: <https://stackblitz.com/edit/angular-62xjux> **EDIT:** You can use the `ngAfterViewChecked` as the documentation specifies: > > Respond after Angular checks the component's views and child views / > the view that a directive is in. > > > Called after the ngAfterViewInit and every subsequent > ngAfterContentChecked(). > > > Upvotes: 0 <issue_comment>username_3: how about use a service. the last child publish an event through observable, and the parent subscribes to it. Upvotes: 0
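As an illustration of the registration idea in the first answer above, here is a self-contained TypeScript sketch. The `activated` map and the `@Host()` injection come from that answer; the selectors, templates, and the use of `ngAfterViewChecked` as the follow-up hook are assumptions of this sketch rather than code from the thread:

```ts
import { AfterViewChecked, Component, Host, OnInit } from '@angular/core';

@Component({
  selector: 'app-parent',
  template: '<app-child></app-child><app-child></app-child>',
})
export class ParentComponent implements AfterViewChecked {
  // each child marks itself here from its own ngOnInit
  activated: { [name: string]: boolean } = {};

  ngAfterViewChecked(): void {
    // by the time the view is checked, the children's ngOnInit hooks have run
    // (note: this hook fires on every change-detection pass)
    console.log('initialized children:', Object.keys(this.activated));
  }
}

@Component({
  selector: 'app-child',
  template: '<p>child</p>',
})
export class ChildComponent implements OnInit {
  // inject the parent component whose template hosts this child
  constructor(@Host() private parent: ParentComponent) {}

  ngOnInit(): void {
    this.parent.activated[this.constructor.name] = true;
  }
}
```

With distinct child component classes (as in the answer's FirstChildComponent/SecondChildComponent example), each class name appears as its own key in `activated`.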
date: 2018/03/14 | nb_tokens: 664 | text_size: 2,654
<issue_start>username_0: ``` unit MyFirstUnit; uses MyTranslateUnit; ... sText := Dictionary('text to translate', UnitName); ... end. ``` ``` unit AnotherUnit; uses MyTranslateUnit; ... sText := Dictionary('new text to translate', UnitName); ... end. ``` ``` unit MyTranslateUnit; function Dictionary(sTextToTranslate: string; sUnitName: string) begin // Here I need the UnitName of the caller Result := ... end; end. ``` There are quite many places in my program where I call **Dictionary(...)**. How can I avoid passing the UnitName as second parameter? Is it possible to get the UnitName of the caller within MyTranslateUnit without having a second parameter? I want to have a function like > > function Dictionary(sTextToTranslate: string) > > ><issue_comment>username_1: As long as the call happens inside a method of a class, you can simply write `UnitName`. Every Delphi `TObject` provides the `class function UnitName: string;` which gives the name of unit the class is declared in. This won't give you the possibility to omit the second parameter, but it simplifies the maintenance when units are renamed or code is copied or moved between units. **Edit:** There is a real dirty hack to make this work without the second parameter and it also works only within a method of a class. I suggest to make use of this only as a last resort! The benefit of removing one parameter can easily backfire in the future. Declare a *class helper* for TObject like this: ``` type TRealDirtyDontDoItObjectHelper = class helper for TObject public class function Dictionary(const sTextToTranslate: string): string; end; implementation class function TRealDirtyDontDoItObjectHelper.Dictionary(const sTextToTranslate: string): string; begin { whatever implementation should go here } Result := UnitName + ': ' + sTextToTranslate; end; ``` Now you can call something like ``` Caption := Dictionary('title'); ``` inside any method where `UnitName` gives the unit where the class the method belongs is declared in. Note that this means the class of the current instance and not necessarily some inherited class where the method is declared. I should also mention, that this *class helper for TObject* doesn't interfere with class helpers for any other class, even if these obviously inherit from `TObject`. Upvotes: 3 <issue_comment>username_2: Possible but only with additional code in each unit ``` type TDummy = class(TObject) end; function Dictionary(sTextToTranslate: string): string; begin Result := MyTranslateUnit.Dictionary(sTextToTranslate, TDummy.UnitName); end; ``` Upvotes: 0
date: 2018/03/14 | nb_tokens: 680 | text_size: 2,689
<issue_start>username_0: I'm a total beginner with JS and trying to learn by my self. I'm hoping I don't get bashed here but Stackoverflow looks like still place to ask questions. I have this code for calculation and it works. But I want to change prompt to input and show the results on other inputs with a submit button. Or! What would be amazing if the calculation happens on the fly while user types the numbers (I'm not even sure if that's possible.) Thanks in advance and this my JS code so far: ``` var incomePerYear = prompt("Income per year: "); var perYear = parseFloat(((incomePerYear) * 1 / 100) - (incomePerYear * 6 / 1000)); var perMonth = parseFloat(perYear / 12); document.write("Income per year: " + perYear.toLocaleString('se') + " "); document.write("Income per month: " + perMonth.toFixed(4)); ```<issue_comment>username_1: As long as the call happens inside a method of a class, you can simply write `UnitName`. Every Delphi `TObject` provides the `class function UnitName: string;` which gives the name of unit the class is declared in. This won't give you the possibility to omit the second parameter, but it simplifies the maintenance when units are renamed or code is copied or moved between units. **Edit:** There is a real dirty hack to make this work without the second parameter and it also works only within a method of a class. I suggest to make use of this only as a last resort! The benefit of removing one parameter can easily backfire in the future. Declare a *class helper* for TObject like this: ``` type TRealDirtyDontDoItObjectHelper = class helper for TObject public class function Dictionary(const sTextToTranslate: string): string; end; implementation class function TRealDirtyDontDoItObjectHelper.Dictionary(const sTextToTranslate: string): string; begin { whatever implementation should go here } Result := UnitName + ': ' + sTextToTranslate; end; ``` Now you can call something like ``` Caption := Dictionary('title'); ``` inside any method where `UnitName` gives the unit where the class the method belongs is declared in. Note that this means the class of the current instance and not necessarily some inherited class where the method is declared. I should also mention, that this *class helper for TObject* doesn't interfere with class helpers for any other class, even if these obviously inherit from `TObject`. Upvotes: 3 <issue_comment>username_2: Possible but only with additional code in each unit ``` type TDummy = class(TObject) end; function Dictionary(sTextToTranslate: string): string; begin Result := MyTranslateUnit.Dictionary(sTextToTranslate, TDummy.UnitName); end; ``` Upvotes: 0
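To sketch the "calculate while the user types" part of the question, one can listen for the `input` event on a number field and write the results into the page. The element IDs (`income`, `perYear`, `perMonth`) and the surrounding HTML are assumptions here; the arithmetic is the formula from the question:

```ts
const income = document.getElementById('income') as HTMLInputElement;
const perYearOut = document.getElementById('perYear') as HTMLSpanElement;
const perMonthOut = document.getElementById('perMonth') as HTMLSpanElement;

income.addEventListener('input', () => {
  const value = parseFloat(income.value);
  if (Number.isNaN(value)) {
    // empty or partially typed input: clear the outputs
    perYearOut.textContent = '';
    perMonthOut.textContent = '';
    return;
  }
  // same arithmetic as the prompt-based version in the question
  const perYear = value * 1 / 100 - value * 6 / 1000;
  const perMonth = perYear / 12;
  perYearOut.textContent = 'Income per year: ' + perYear.toLocaleString('se');
  perMonthOut.textContent = 'Income per month: ' + perMonth.toFixed(4);
});
```

The same handler could instead be attached to a submit button's `click` event if recalculating on every keystroke is not wanted.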
date: 2018/03/14 | nb_tokens: 941 | text_size: 3,096
<issue_start>username_0: Is there any builtin (or very simple) method of having a python dictionary work in a similar fashion as deque. I need it to have a maximum size, and when new keys are added, if the maximum length is reached then the key added first is deleted. It shouldn't be too difficult to implement as a custom class, but using builtins is always prefered. I am using Python 3.6 if that is of any help.<issue_comment>username_1: It sounds like you want a least recently used (LRU) cache. The [functools.lru\_cache decorator](https://docs.python.org/3/library/functools.html#functools.lru_cache) might be useful to you. If you want more control, then check out the package [pylru](https://pypi.python.org/pypi/pylru) or just google Python LRU cache. Upvotes: 3 <issue_comment>username_2: It sounds like a variant of `OrderedDict`. ``` class FixSizeOrderedDict(OrderedDict): def __init__(self, *args, max=0, **kwargs): self._max = max super().__init__(*args, **kwargs) def __setitem__(self, key, value): OrderedDict.__setitem__(self, key, value) if self._max > 0: if len(self) > self._max: self.popitem(False) ``` Upvotes: 5 [selected_answer]<issue_comment>username_3: Based on username_2's implementation, but updated to use the standard dict as well as avoiding keyword conflict on the max (now maxlen) parameter. CPython 3.6 (released on Dec 23 2016) introduced insertion order in dict and from 3.7 it's been a guarantee. No CPython version which does NOT support this behaviour is supported anymore, it's time to update :-) ``` class FixSizedDict(dict): def __init__(self, *args, maxlen=0, **kwargs): self._maxlen = maxlen super().__init__(*args, **kwargs) def __setitem__(self, key, value): dict.__setitem__(self, key, value) if self._maxlen > 0: if len(self) > self._maxlen: self.pop(next(iter(self))) fs_dict = FixSizedDict(maxlen=2) fs_dict['a'] = 1 fs_dict['b'] = 2 print(fs_dict) fs_dict['c'] = 3 print(fs_dict) ``` Output: ``` {'a': 1, 'b': 2} {'b': 2, 'c': 3} ``` Upvotes: 1 <issue_comment>username_4: There is a quite straight-forward library called [CircularDict](https://pypi.org/project/circular-dict/) that implements this behaviour. Besides limiting the maximum amount of items the `dict` can store, it also allows setting memory usage limits. To install: ``` pip install circular-dict ``` And then just: ``` from circular_dict import CircularDict # Initialize a CircularDict with a maximum length of 3 my_dict = CircularDict(maxlen=3) # You could also set maxsize_bytes=8*1024 bytes ``` Then, you can use it just as a standard dict, and it will manage that circular-queue behaviour. ``` # Fill it with 4 items my_dict['item1'] = 'value1' my_dict['item2'] = 'value2' my_dict['item3'] = 'value3' # When adding this 4th item, the 1st one will be dropped my_dict['item4'] = 'value4' print(circ_dict) ``` Ouptut will look like. ``` {'item2': 'value2', 'item3': 'value3', 'item4': 'value4'} ``` Upvotes: 0
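The accepted idea (evict the oldest key once a maximum size is exceeded) is not Python-specific. As a side note, here is a hedged TypeScript sketch of the same behaviour built on `Map`, which also preserves insertion order; the class name and API are invented for the example:

```ts
// Note: subclassing built-ins like Map requires an ES2015+ compile target.
class FixedSizeMap<K, V> extends Map<K, V> {
  constructor(private maxLen: number) {
    super();
  }

  set(key: K, value: V): this {
    super.set(key, value);
    // drop the oldest entry (first key in insertion order) when over capacity
    if (this.maxLen > 0 && this.size > this.maxLen) {
      const oldest = this.keys().next().value as K;
      this.delete(oldest);
    }
    return this;
  }
}

const cache = new FixedSizeMap<string, number>(2);
cache.set('a', 1).set('b', 2).set('c', 3);
console.log([...cache.entries()]); // [['b', 2], ['c', 3]]
```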
date: 2018/03/14 | nb_tokens: 1,257 | text_size: 4,754
<issue_start>username_0: I'm using dropwizard (jersey, jackson) to create a REST API and have stumbled upon some problems I can't seem to find the answer to. I would like to build an sql query based on a json file. This would be done via a map (criteria, value). I have some problems realising this: * Calling the DAO method getUserByCriteria(Map/JSONObject) will give me this type of error: UnsupportedOperationException: No type parameters found for erased type 'interface java.util.Map[K, V]'. To bind a generic type, prefer using bindByType. OR a "No Argument factory" error which I can't seem to reproduce atm Code: UserResource: ``` @POST @Path("/list") @Consumes(MediaType.APPLICATION_JSON) @Produces(MediaType.APPLICATION_JSON) public List getUser(@Auth UserToken token, JSONObject json) { return userDAO.getUserByCriteria(json); } ``` UserDao: ``` List getInvoiceByCriteria(@Bind("json") JSONObject json); ``` * When I do get this to work, how would I go about it? My code looks like this (can't seem to get the code block formatted for this one): @SqlQuery("SELECT \* FROM user LIMIT 10") @RegisterRowMapper(UserMapper.class) List getUserByCriteria(@Bind("json") Map json); And I would like to make it do something like this: ``` @SqlQuery("SELECT * FROM user WHERE crit1 = :crit1 AND crit2 = :crit2 LIMIT 10") @RegisterRowMapper(UserMapper.class) List getUserByCriteria(@Bind("json") Map json){ //EXTRACT VALUES OF MAP HERE // }; ``` I realise this is a pretty vague question. Problem is I'm a pretty big noob on this REST stuff and the problems I encounter aren't that common (or I'm searching for the wrong things). Any help/insight is greatly appreciated! Xtra question regarding http/rest: I feel like this should be a GET request instead of a POST, but my Advanced Rest Client doesn't allow for a body in the GET request. I found online that this is usually not done, but allowed. Is using POST ok here?<issue_comment>username_1: I will answer first to your Xtra question : The objective of the GET is to retrieve data with the URI and params sent in the URI. There is a difference b/w can or should. It's the same with Body with GET, you can send the body with GET but then you are not following the HTTP Guidelines also the purpose of GET and POST are mixed. <[Refer Here](https://www.rfc-editor.org/rfc/rfc2616#section-9.3)> and <[here](https://www.rfc-editor.org/rfc/rfc2616#section-4.3)> Now in your question. You are sending `JSONObject json` in ``` public List getUser(@Auth UserToken token, JSONObject json) { return userDAO.getUserByCriteria(json); } ``` but your are matching with Map. Map is basically entertains TypeErasure < which means while compiling the code your collection's generic will be replaced by bind objects of this> You can rather Insert type casts if necessary to preserve type safety. 
Also you can use something like this ``` List getInvoiceByCriteria(@Bind("json") Map json); ``` Upvotes: 1 <issue_comment>username_2: Here is a simple example of a DAO interface using a Mapper : ``` @RegisterMapper(EmployeeMapper.class) public interface EmployeeDao { @SqlQuery("select * from employee;") public List getEmployees(); @SqlQuery("select \* from employee where id = :id") public Employee getEmployee(@Bind("id") final int id); @SqlUpdate("insert into employee(name, department, salary) values(:name, :department, :salary)") void createEmployee(@BindBean final Employee employee); @SqlUpdate("update employee set name = coalesce(:name, name), " + " department = coalesce(:department, department), salary = coalesce(:salary, salary)" + " where id = :id") void editEmployee(@BindBean final Employee employee); @SqlUpdate("delete from employee where id = :id") int deleteEmployee(@Bind("id") final int id); @SqlQuery("select last\_insert\_id();") public int lastInsertId(); } ``` Here is the Employee Mapper Class used above: ``` public class EmployeeMapper implements ResultSetMapper { private static final String ID = "id"; private static final String NAME = "name"; private static final String DEPARTMENT = "department"; private static final String SALARY = "salary"; public Employee map(int i, ResultSet resultSet, StatementContext statementContext) throws SQLException { Employee employee = new Employee(resultSet.getString(NAME), resultSet.getString(DEPARTMENT),resultSet.getInt(SALARY)); employee.setId(resultSet.getInt(ID)); return employee; } } ``` I have explained how to use JDBI, create REST APIs in Dropwizard in simple steps in a blog post and there is also a sample working application that I have created on GitHub. Please check: <http://softwaredevelopercentral.blogspot.com/2017/08/dropwizard-mysql-integration-tutorial.html> Upvotes: 3 [selected_answer]
date: 2018/03/14 | nb_tokens: 248 | text_size: 1,053
<issue_start>username_0: Our team have been trying to develop some GUI for the ansible execution and I wanted to know if there is some way to pause the execution of the playbook midway with just command line argument. I am familiar with the Pause option but that needs to be added in the YAML, we don't want that. I am also familiar with the --step argument passed on the CLI, that's close to what we want but not specifically. Thanks.<issue_comment>username_1: I believe that you have found the only two options that would stop a playbook mid-play (pause and --step). As you're probably aware, Ansible is designed to run roles/playbooks start to finish without user intervention. Anything that allows you to stop mid-execution is really just for debugging. What are you trying to accomplish by building a custom GUI? Have you looked into [AWX](https://github.com/ansible/awx)? Upvotes: 0 <issue_comment>username_2: While the playbook is running you can press ctrl + s to freeze the console and then ctrl + q to resume it. Upvotes: 3 [selected_answer]
date: 2018/03/14 | nb_tokens: 1,705 | text_size: 4,766
<issue_start>username_0: I am trying to create a legend inside a fixed-width div, containing coloured checkboxes and a label for each checkbox. My problem is that longer labels break over two lines and push the label down below the checkbox, which looks bad: [![enter image description here](https://i.stack.imgur.com/yfqVL.png)](https://i.stack.imgur.com/yfqVL.png) How can I get the checkbox and the label on the same line, but retaining the line break, and ensuring that "France" and "territories" both have the same left indent, rather than "territories" having the same indent as the checkbox? JSFiddle: <https://jsfiddle.net/6yLd6j1b/10/> Full code: ``` France and its overseas territories Germany #legend { width: 200px; } #legend div { margin-right: 5px; } .checkbox { margin: 0 0 1.2em 0; clear: both; cursor: pointer; } .checkbox .tag { display: block; float: left; } .checkbox .area_type { display: none; } .area_type + label { cursor: pointer; -webkit-appearance: none; background-color: #fafafa; border: 1px solid #cacece; box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05), inset 0px -15px 10px -12px rgba(0, 0, 0, 0.05); padding: 7px; display: block; float: left; position: relative; top: 2px; overflow: hidden; border-radius: 5px; margin-right: 5px; } .area_type:checked + label:after { width: 100%; height: 100%; content: ""; position: absolute; left: 0px; top: 0px; } .france .area_type:checked + label:after { background-color: #2c98a0; } .germany .area_type:checked + label:after { background-color: #38b2a3; } ```<issue_comment>username_1: You can do it with the ***Flexbox*** if you add `display: inline-flex` and `align-items: flex-start` to the `.checkbox` class: ```css #legend { background-color: #ccc; border-radius: 3px; box-shadow: 0 1px 2px rgba(0,0,0,0.10); padding: 10px; position: absolute; top: 10px; right: 10px; z-index: 1; width: 200px; } #legend div { margin-right: 5px; } .checkbox { margin: 0 0 1.2em 0; clear: both; cursor: pointer; display: inline-flex; /* only takes the content's width, but in your case "display: flex" also works just fine */ align-items: flex-start; /* prevents the default vertical stretching of flex-items */ } .checkbox .tag { display: block; float: left; } .checkbox .area_type { display: none; } .area_type + label { cursor: pointer; -webkit-appearance: none; background-color: #fafafa; border: 1px solid #cacece; box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05), inset 0px -15px 10px -12px rgba(0, 0, 0, 0.05); padding: 7px; display: block; float: left; position: relative; top: 2px; overflow: hidden; border-radius: 5px; margin-right: 5px; } .area_type:checked + label:after { width: 100%; height: 100%; content: ""; position: absolute; left: 0px; top: 0px; } .france .area_type:checked + label:after { background-color: #2c98a0; } .germany .area_type:checked + label:after { background-color: #38b2a3; } ``` ```html France and its overseas territories Germany ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I took the liberty of correctly wrapping the `input[type="checkbox"]` inside the `label` so that it toggles the checkbox for better user experience. Obviously this required some markup changes as well. Bonus tip: I didn't want to mess *any more than necessary* with your initial code, but try using more semantic code and simpler logic - for example the `.checkbox` class should be applied on the checkbox itself and not around it. Also with the new markup the could become obsolete, since you might as well have used the `label.country` class to target specific elements. 
HTML: ``` France and its overseas territories Germany ``` CSS: ``` #legend { width: 200px; } #legend div { margin-right: 5px; } .checkbox { margin: 0 0 1.2em 0; clear: both; cursor: pointer; } .checkbox { position: relative; } .area_type { position: absolute; visibility: hidden; opacity: 0; } .area_type+span { display: block; margin-left: 25px; } .area_type+span:before { content: ""; cursor: pointer; -webkit-appearance: none; background-color: #fafafa; border: 1px solid #cacece; box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05), inset 0px -15px 10px -12px rgba(0, 0, 0, 0.05); padding: 7px; display: block; overflow: hidden; border-radius: 5px; margin-right: 5px; position: absolute; left: 0; top: 2px; } .france .area_type:checked+span:before { background-color: #2c98a0; } .germany .area_type:checked+span:before { background-color: #38b2a3; } ``` Here is a working fiddle: <https://jsfiddle.net/y74qanps/> Upvotes: 1
date: 2018/03/14 | nb_tokens: 814 | text_size: 2,662
<issue_start>username_0: I have to perform a search on a database using ransack. A few columns in the database have data stored in serialized arrays. I want to match the exact data stored in the arrays with data sent by user to perform search (users' data are also arrays). For example, in database, one column has data as (`c1`, `c2` are test cases): ``` c1.column_data = [1, 2, 3, 4, 5] c2.column_data = [] ``` User searches for data (`t1`, `t2`, `t3` are test cases): ``` t1.user_data = [1] t2.user_data = [1, 3] t3.user_data = [1, 2, 3, 4, 5] t4.user_data = [] ``` * For case `c1` with `t1`, `t2`, `t4`, it should return `no match found`. * With `t3`, it should result with `match found`. * For case `c2` with `t1`, `t2`, `t3`, it should return `no match found`. * With `t4`, it return `match found. I found [postgres\_ext gem](https://github.com/activerecord-hackery/ransack/issues/321), but I cannot make it work. Can anyone suggest how I can do this or suggest any alternative method for search like this? Any alternative solution is also welcome.<issue_comment>username_1: ok, with rails 5.1 simple Ransack search will works fine. For example assuming one of your columns with serialised array name is 'column\_data' ``` t.text :column_data, array: true, default: [] ``` you can search the array columns by following this syntax: ``` ‘{ val1, val2, … }’ ``` for example in console ``` some_search = Model.search(column_data_eq: '{1,3}') some_search.result ``` should give you the following results ``` SELECT "model".* FROM "mdole" WHERE "model"."column_data" = '{1,3}' LIMIT $1 [["LIMIT", 11]] ``` Upvotes: 1 <issue_comment>username_2: I am answering my own question. It may vary to the requirement. I had to take the user params and search in the database if any exact data matches then put the user in same group else make a new group with the searched data and at the end save each searched data for respective user also. As all the fields were multi select dropdowns checkboxes etc all params was coming as an array of strings. My workaround for the problem: 1. sorted the array params and joined it with comma and made string. ex: ``` a = [1,2,3,4,5,6] a.sort.join(',') "1,2,3,4,5,6" ``` I stored this string in the database column which is in text. No when user searches for a group I take its params and convert it again in comma separated string and using where clause query to database is done. In UI I convert this string to array again and show it to user. I dont know its a correct way of doing it or not but it is working for me. still any good solutions are welcome. Upvotes: 4 [selected_answer]
date: 2018/03/14 | nb_tokens: 767 | text_size: 2,743
<issue_start>username_0: In my database I have one columns in which the data is saved as float I now want to export this and other columns from my database into a datatabel and then convert each column of the datatable to a list. This workes fine for every column expect the one which is filled whit float values. My code: Getting the database into a Datatable: ``` MySqlDataAdapter adapter = new MySqlDataAdapter(command); DataSet dataset = new DataSet(); adapter.Fill(dataset); DataTable dt= dataset.Tables[0]; ``` Converting the data from the datatable to a list ("Nutzfläche" is the column which includees the float values): ``` foreach (DataRow row in dt.Rows) { string s_roomRoomNr = row["Raumnr"].ToString(); ls_roomRoomNr.Add(s_roomRoomNr); string s_roomRoomUser = row["Raumbereich Nutzer"].ToString(); ls_roomRoomUser.Add(s_roomRoomUser); string s_roomRoomArea = row["Nutzfläche"].ToString(); ls_roomRoomArea.Add(s_roomRoomArea); } ``` This workes fine untill I get to s\_roomRoomArea, then it throws an exeption Exeptiontype: System.ArgumentException<issue_comment>username_1: ok, with rails 5.1 simple Ransack search will works fine. For example assuming one of your columns with serialised array name is 'column\_data' ``` t.text :column_data, array: true, default: [] ``` you can search the array columns by following this syntax: ``` ‘{ val1, val2, … }’ ``` for example in console ``` some_search = Model.search(column_data_eq: '{1,3}') some_search.result ``` should give you the following results ``` SELECT "model".* FROM "mdole" WHERE "model"."column_data" = '{1,3}' LIMIT $1 [["LIMIT", 11]] ``` Upvotes: 1 <issue_comment>username_2: I am answering my own question. It may vary to the requirement. I had to take the user params and search in the database if any exact data matches then put the user in same group else make a new group with the searched data and at the end save each searched data for respective user also. As all the fields were multi select dropdowns checkboxes etc all params was coming as an array of strings. My workaround for the problem: 1. sorted the array params and joined it with comma and made string. ex: ``` a = [1,2,3,4,5,6] a.sort.join(',') "1,2,3,4,5,6" ``` I stored this string in the database column which is in text. No when user searches for a group I take its params and convert it again in comma separated string and using where clause query to database is done. In UI I convert this string to array again and show it to user. I dont know its a correct way of doing it or not but it is working for me. still any good solutions are welcome. Upvotes: 4 [selected_answer]
date: 2018/03/14 | nb_tokens: 2,281 | text_size: 5,345
<issue_start>username_0: Given a (dummy) vector ``` index=log(seq(10,20,by=0.5)) ``` I want to compute the running mean with centered window and **with tapered windows at each end**, i.e. that the first entry is left untouched, the second is the average of a window size of 3, and so on until the specified window size is reached. The answers given here: [Calculating moving average](https://stackoverflow.com/questions/743812/calculating-moving-average), seem to all produce a shorter vector cutting off the start and end where the window is too large, for example: ``` ma <- function(x,n=5){filter(x,rep(1/n,n), sides=2)} ma(index) Time Series: Start = 1 End = 21 Frequency = 1 [1] NA NA 2.395822 2.440451 2.483165 2.524124 2.563466 2.601315 [9] 2.637779 2.672957 2.706937 2.739798 2.771611 2.802441 2.832347 2.861383 [17] 2.889599 2.917039 2.943746 NA NA ``` same goes for ``` rollmean(index,5) ``` from the zoo package Is there a quick way of implementing tapered windows without resorting to coding up loops?<issue_comment>username_1: The `width` argument of `zoo::rollapply` can be a numeric vector. Hence, in your example, you can use: ``` rollapply(index, c(1, 3, 5, rep(5, 15), 5, 3, 1), mean) # [1] 2.302585 2.350619 2.395822 2.440451 2.483165 2.524124 2.563466 2.601315 2.637779 2.672957 2.706937 2.739798 2.771611 2.802441 2.832347 2.861383 # [17] 2.889599 2.917039 2.943746 2.970195 2.995732 ``` And if `n` is an odd integer, a general solution is: ``` w <- c(seq(1, n, 2), rep(n, length(index) - n - 1), seq(n, 1, -2)) rollapply(index, w, mean) ``` --- **Edit:** If you care about performance, you can use a custom Rcpp function: ``` library(Rcpp) cppFunction("NumericVector fasttapermean(NumericVector x, const int window = 5) { const int n = x.size(); NumericVector y(n); double s = x[0]; int w = 1; for (int i = 0; i < n; i++) { y[i] = s/w; if (i < window/2) { s += x[i + (w+1)/2] + x[i + (w+3)/2]; w += 2; } else if (i > n - window/2 - 2) { s -= x[i - (w-1)/2] + x[i - (w-3)/2]; w -= 2; } else { s += x[i + (w+1)/2] - x[i - (w-1)/2]; } } return y; }") ``` New benchmark: ``` n <- 5 index <- log(seq(10, 200, by = .5)) w <- c(seq(1, n, 2), rep(n, length(index) - n - 1), seq(n, 1, -2)) bench::mark( fasttapermean(index), tapermean(index), zoo::rollapply(index, w, mean) ) # # A tibble: 3 x 14 # expression min mean median max `itr/sec` mem_alloc n_gc n_itr total_time result memory time gc # # 1 fasttapermean(index) 4.7us 5.94us 5.56us 67.6us 168264. 5.52KB 0 10000 59.4ms # 2 tapermean(index) 53.9us 79.68us 91.08us 405.8us 12550. 37.99KB 3 5951 474.2ms # 3 zoo::rollapply(index, w, mean) 12.8ms 15.42ms 14.31ms 29.2ms 64.9 100.58KB 8 23 354.7ms ``` However if you care about (extreme) precision you should use the `rollapply` method because the built-in `mean` algorithm of R is more accurate than the naive sum-and-divide approach. Also note that the `rollapply` method is the only one that allows you to use `na.rm = TRUE` if needed. Upvotes: 4 [selected_answer]<issue_comment>username_2: As `rollapply` can be quite slow, it is often worth writing a simple bespoke function... ``` tapermean <- function(x, width=5){ taper <- pmin(width, 2*(seq_along(x))-1, 2*rev(seq_along(x))-1) #set taper pattern lower <- seq_along(x)-(taper-1)/2 #lower index for each mean upper <- lower+taper #upper index for each mean x <- c(0, cumsum(x)) #sum x once return((x[upper]-x[lower])/taper)} #return means ``` This is over 200x faster than the `rollapply` solution... 
``` library(microbenchmark) index <- log(seq(10,200,by=0.5)) #longer version for testing w <- c(seq(1,5,2),rep(5,length(index)-5-1),seq(5,1,-2)) #as in username_1s answer microbenchmark(tapermean(index), rollapply(index,w,mean)) Unit: microseconds expr min lq mean median uq max neval tapermean(index) 185.562 193.9405 246.4123 210.6965 284.548 590.197 100 rollapply(index,w,mean) 48213.027 49681.0715 52053.7352 50583.4320 51756.378 97187.538 100 ``` I rest my case! Upvotes: 3 <issue_comment>username_3: Similarly to `zoo::rollapply()`, you can also use `gtools::running()` and change the `width` argument. However, interestingly enough, @username_2's function is still faster. ``` require(tidyverse) require(gtools) require(zoo) require(rbenchmark) index <- rep(log(seq(10,20,by=0.5)),100) benchmark("rollapply" = { rollapply(index, c(1, 3, 5, rep(5, 15), 5, 3, 1), mean) }, "tapermean" = { tapermean(index) }, "running" = { running(index, fun=mean, width=c(1, 3, 5, rep(5, 15), 5, 3, 1), simplify=TRUE) }, replications = 1000, columns = c("test", "replications", "elapsed","user.self", "sys.self")) test replications elapsed user.self sys.self 1 rollapply 1000 17.67 17.57 0.01 3 running 1000 32.24 32.23 0.00 2 tapermean 1000 0.14 0.14 0.00 ``` Upvotes: 2
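As a language-neutral illustration of the tapered-window logic used by `tapermean` and the vector-width `rollapply` call above, here is a short TypeScript sketch. It assumes an odd, positive `width`, as in the answers, and is an algorithm sketch rather than a drop-in replacement for the R code:

```ts
// Tapered centered running mean: window widths 1, 3, ..., width, ..., 3, 1.
function taperedMean(x: number[], width = 5): number[] {
  const n = x.length;
  const out = new Array<number>(n);
  for (let i = 0; i < n; i++) {
    // shrink the window near both ends so it stays centered on x[i]
    const taper = Math.min(width, 2 * i + 1, 2 * (n - 1 - i) + 1);
    const half = (taper - 1) / 2;
    let sum = 0;
    for (let j = i - half; j <= i + half; j++) {
      sum += x[j];
    }
    out[i] = sum / taper;
  }
  return out;
}

console.log(taperedMean([1, 2, 3, 4, 5])); // [1, 2, 3, 4, 5]
```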
date: 2018/03/14 | nb_tokens: 1,248 | text_size: 4,523
<issue_start>username_0: I have serverless API which is working with serverless framework version 1.25 Due to security reason I want to add response header. Please help me how can I set below headers via serverless.yml file. Is it necessary to add this header for the security reason? • Content-Security-Policy: Include default-src 'self' • Strict-Transport-Security max-age=31536000; includeSubDomains; preload • X-Content-Type-Options: nosniff • X-XSS-Protection: 1 • Cache-Control: max- age=0; Expires=-1 or Expires: Fri, 01 Jan 1990 00:00:00 GMT; no-cache, must-revalidate Below is my serverless application serverless.yaml ``` service: myService provider: name: aws runtime: nodejs6.10 stage: dev region: eu-west-1 environment: REGION: ${self:provider.region} PROJECT_NAME: ${self:custom.projectName} SERVERLESS_STAGE: ${self:provider.stage} SERVERLESS_SERVICE: ${self:service} IP_ADDRESS: http://example.com functions: getMyFunction: handler: handler.getMyFunction timeout: 30 events: - http: method: get path: api/getMyFunction/v1 integration: lambda cors: true authorizer: name: authorizerFunc identitySource: method.request.header.Token authorizationType: AWS_IAM ```<issue_comment>username_1: You can use [Lambda Proxy Integration](https://serverless.com/framework/docs/providers/aws/events/apigateway/). based on the documentation, you need to create a function which will run when someone accesses your API endpoint. As an example : ``` module.exports.hello = function (event, context, callback) { console.log(event); // Contains incoming request data (e.g., query params, headers and more) const response = { statusCode: 200, headers: { "x-custom-header": "My Header Value" }, body: JSON.stringify({ "message": "Hello World!" }) }; callback(null, response); }; ``` And in your serverless.yml ``` functions: index: handler: handler.hello events: - http: GET hello ``` Upvotes: 3 <issue_comment>username_2: Since you use Lambda Integration, you have to put it in your `serverless.yml`. 
``` service: myService provider: name: aws runtime: nodejs6.10 stage: dev region: eu-west-1 environment: REGION: ${self:provider.region} PROJECT_NAME: ${self:custom.projectName} SERVERLESS_STAGE: ${self:provider.stage} SERVERLESS_SERVICE: ${self:service} IP_ADDRESS: http://example.com functions: getMyFunction: handler: handler.getMyFunction timeout: 30 events: - http: method: get path: api/getMyFunction/v1 integration: lambda cors: true authorizer: name: authorizerFunc identitySource: method.request.header.Token authorizationType: AWS_IAM response: headers: Content-Security-Policy: "'Include default-src 'self''" Strict-Transport-Security: "'max-age=31536000; includeSubDomains; preload'" X-Content-Type-Options: "'nosniff'" X-XSS-Protection: "'1'" Cache-Control: "'max-age=0; Expires=-1 or Expires: Fri, 01 Jan 1990 00:00:00 GMT; no-cache, must-revalidate'" ``` Reference: <https://serverless.com/framework/docs/providers/aws/events/apigateway#custom-response-headers> Upvotes: 2 <issue_comment>username_3: ``` service: myService provider: name: aws runtime: nodejs6.10 stage: dev region: eu-west-1 environment: REGION: ${self:provider.region} PROJECT_NAME: ${self:custom.projectName} SERVERLESS_STAGE: ${self:provider.stage} SERVERLESS_SERVICE: ${self:service} IP_ADDRESS: http://example.com functions: getMyFunction: handler: handler.getMyFunction timeout: 30 events: - http: method: get path: api/getMyFunction/v1 integration: lambda cors: true authorizer: name: authorizerFunc identitySource: method.request.header.Token authorizationType: AWS_IAM response: headers: Content-Security-Policy: "'Include default-src 'self''" Strict-Transport-Security: "'max-age=31536000; includeSubDomains; preload'" X-Content-Type-Options: "'nosniff'" X-XSS-Protection: "'1'" Cache-Control: "'max-age=0; Expires=-1 or Expires: Fri, 01 Jan 1990 00:00:00 GMT; no-cache, must-revalidate'" ``` Upvotes: 0
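The question uses `integration: lambda`, where username_2's `response.headers` block applies. If one instead used the default Lambda proxy integration (the approach username_1 shows), the headers listed in the question could be returned straight from the handler. Below is a TypeScript sketch using the `aws-lambda` type definitions; the handler name and exact header values are adapted from the question and should be treated as placeholders:

```ts
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

export const getMyFunction = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  return {
    statusCode: 200,
    headers: {
      'Content-Security-Policy': "default-src 'self'",
      'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload',
      'X-Content-Type-Options': 'nosniff',
      'X-XSS-Protection': '1',
      'Cache-Control': 'no-cache, must-revalidate, max-age=0',
    },
    body: JSON.stringify({ message: 'ok' }),
  };
};
```

With proxy integration, the `integration: lambda` line (and the `response:` block) would typically be dropped from `serverless.yml`.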
date: 2018/03/14 | nb_tokens: 1,504 | text_size: 5,664
<issue_start>username_0: I am trying to run cron schedules on DurableExecutorService on Hazelcast. My idea is that if one node goes down with its schedule, other nod having backup can pickup and resume the CRON. This is what I am doing ``` String cron = "0/5 * * * * *"; config.setInstanceName(name); config.getDurableExecutorConfig("exec").setCapacity(200).setDurability(2).setPoolSize(8); HazelcastInstance instance = Hazelcast.newHazelcastInstance(config); DurableExecutorService executorService = instance.getDurableExecutorService("exec"); executorService.executeOnKeyOwner(new EchoTask(name, cron), name); ``` The I use a Spring CRON scheduler to actually run the CRON job. ``` public class EchoTask implements Runnable, Serializable { private final String msg; private final String cronExpression; EchoTask(String msg, String cronExpression) { this.msg = msg; this.cronExpression = cronExpression; } public void run() { ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler(); scheduler.initialize(); scheduler.schedule(new Runnable() { @Override public void run() { System.out.println("Executing" + msg); } }, new CronTrigger(cronExpression)); } } ``` Now, I run this twice. So in effect, there are 2 instances running. 1. Executinginstance-1 on one instance 2. Executinginstance-2 on another instance Now, My understanding is that if I go and kill one instance, lets say 1, then the CRON of node1 should migrate to node2. However, this is not happening. I do get this message though when I kill the node > > INFO: [192.168.122.1]:5707 [dev] [3.9.3] Committing/rolling-back alive > transactions of Member [192.168.122.1]:5709 - > 26ed879b-8ce5-4d58-832c-28d2df3f7f87, UUID: > 26ed879b-8ce5-4d58-832c-28d2df3f7f87 > > > I am sure, I am missing something here. Can someone pls guide? **Edit 1:** I verified that for normal tasks this behavior works, but it does not for some reason work for Spring CRON **Edit 2** One doubt I have is, that ThreadPoolTaskScheduler is not serializable for some reason. > > Failed to serialize > org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler > > > I suspect, that why it is not getting persisted on the ringbuffer. Any idea how I can make it Serializable. I checked the code and ThreadPoolTaskScheduler already implements Serializable<issue_comment>username_1: `executorService.executeOnKeyOwner(new EchoTask(name, cron), name)` will run on the key owner node. If you do not have backups enabled and kill the owner node then Hazelcast has no way of knowing that the key ever existed in cluster, hence no durability. See the following code: ``` public class DurableExecutorServiceTest { DurableExecutorServiceTest() { Config config = new Config(); config.getDurableExecutorConfig("MyService").setDurability(2).setCapacity(200).setPoolSize(2); HazelcastInstance hc = Hazelcast.newHazelcastInstance(config); hc.getMap("MyMap").put("Key-1", "Value-1"); DurableExecutorService service = hc.getDurableExecutorService("MyService"); service.executeOnKeyOwner(new MyRunnable(), "Key-1"); } public static void main(String[] args) { new DurableExecutorServiceTest(); } } class MyRunnable implements Runnable, Serializable { public void run() { int i = 0; while(true) { try { TimeUnit.SECONDS.sleep(3); System.out.println("Printing in Durable executor service: "+i++); } catch (InterruptedException e) { e.printStackTrace(); } } } } ``` You first need to fire up one Hazelcast instance before executing this code. 
When you launch this, it joins the previously running node and the runnable gets executed on the owner node of the key. Now kill the node that is printing the message and see if the other remaining node picks up the runnable. If you are seeing something else in your setup then you might want to dig into your Spring cron jobs. Upvotes: 1 <issue_comment>username_2: As answered by <NAME> at <https://groups.google.com/forum/#!topic/hazelcast/id8AcvWyR5I> > > Hi, > > > As Jaromir said above, you are basically executing this line in your > runnable `scheduler.schedule(...)`. Once this instruction is finished, > your Spring scheduler is working in the background, but from the > DurableExecutor perspective the task is finished. Once a task is > finished, it gets replaced by its result, in your case there is no > result, so `null`. If you kill that owner member, the backup is > promoted, but there is no task anymore, since we replaced it in the > previous step. The backup knows, that the task completed. Does this > make sense? > > > I can't thing of any non-abusive way to accomplish what you are > looking for, except from maybe a naive `CronScheduler`inside > the`IScheduledExecutor`. Imagine a periodic task, that runs every > second or so, and it holds a `Map` in it. > During its `run` cycle, it checks the expression to assert if there is > any runnable ready, and if so, runs it another Executor (Durable or > Scheduled). Since this is a periodic task, it will work well with out > `IScheduledExecutor` and it will be durable upon failures. > > > Hope that helps! We will be looking in adding native support of > cron-expressions in `IScheduledExecutor` in the future. > > > Thanks > > > Upvotes: 0
date: 2018/03/14 | nb_tokens: 1,594 | text_size: 5,075
<issue_start>username_0: This is my string: ``` ================================================================================ INPUT FILE ================================================================================ NAME = CO-c0m1.txt | 1> ! HF def2-TZVP opt numfreq | 2> | 3> % scf | 4> convergence tight | 5> end | 6> | 7> * xyz 0 1 | 8> C 0 0 0 | 9> O 0 0 1 | 10> * | 11> | 12> ****END OF INPUT**** ================================================================================ ``` I want get this output: ``` ! HF def2-TZVP opt numfreq % scf convergence tight end * xyz 0 1 C 0 0 0 O 0 0 1 * ``` I've been trying to do for like 5 hours and can't do it, please help, this is my pregmatch: ``` $regx = '/INPUT FILE...................................................................................(.*?)........................END OF INPUT/s'; if(preg_match($regx, $source[$i], $matches)) { $input[$i] = preg_replace('/\s\s\s\s+/', "\n", $matches[1]); } ``` I'am very new to regex and seems to be so hard. Can someone please help me, thanks in advance :)!<issue_comment>username_1: ``` $p ="/[|]\s*\d*[>]\s(.+)/"; $t = "================================================================================ INPUT FILE ================================================================================ NAME = CO-c0m1.txt | 1> ! HF def2-TZVP opt numfreq | 2> | 3> % scf | 4> convergence tight | 5> end | 6> | 7> * xyz 0 1 | 8> C 0 0 0 | 9> O 0 0 1 | 10> * | 11> | 12> ****END OF INPUT**** ================================================================================"; preg_match_all($p,$t,$res); die(json_encode($res[1], JSON_PRETTY_PRINT)); /* Output: [ "! HF def2-TZVP opt numfreq", "% scf", " convergence tight", "end", "* xyz 0 1", "C 0 0 0", "O 0 0 1", "*", " ****END OF INPUT****" ] */ ``` Second item of `$res` is an array that have what you want. Upvotes: 2 <issue_comment>username_2: You need a regular expression that matches the lines that start with `|` followed by some spaces, then one or more digits then `>` and you need only the text that follows this prefix. The regular expression is: `/^\|\s*\d+>(.*)$/m`. It contains a capturing group for the text you need. [`preg_match_all()`](http://php.net/manual/en/function.preg-match-all.php) puts the capturing fragments in `$matches[1]`: ``` preg_match_all('/^\|\s*\d+>(.*)$/m', $source[$i], $matches); echo(implode("\n", $matches[1])); ``` You can then remove the line that contains `****END OF INPUT****` by other means ([`array_pop()`](http://php.net/manual/en/function.array-pop.php), [`array_filter()`](http://php.net/manual/en/function.array-filter.php), etc.) Check it in action: <https://3v4l.org/hUEBk> The `regex` explained: ``` / # regex delimiter ^ # match the beginning of the line \| # match '|' (it needs to be escaped because it is a meta-character) \s # match a whitespace character (space, tab) * # the previous (a whitespace) can appear zero or more times \d # match a digit (0..9) + # the previous (a digit) can appear one or more times > # match '>' ( # begin of a capturing group .* # match any character, any number of times ) # end of the capturing group $ # match the end of the line / # regex delimiter m # multiline (regex modifier); check the regex against each line of the input string ``` Read more about [Perl-Compatible Regular Expressions in PHP](http://php.net/manual/en/pcre.pattern.php). 
Upvotes: 3 [selected_answer]<issue_comment>username_1: You don't need to run first regex on your text, only run this regex: ``` preg_match_all("/[|]\s*\d*[>]\s(.+)/", $source[$i], $matches); echo(implode("\n", $matches[1])); ``` This works fine in my tests. Upvotes: 1 <issue_comment>username_3: You may have a single regex solution to get all those data in one go: ``` ^\|\h+\d+>(?!\h*\Q****END OF INPUT****\E)\h\K.+ ``` Breakdown: * `^` Match beginning of line * `\|\h+\d+>` Match up to `digit>` * `(?!` Start of a negative lookahead + `\h*` If horizental whitespace(s) exist(s) + `\Q****END OF INPUT****\E` And ends with *end of input* * `)` End of lookahead * `\h\K` Match a horizental whitespace then reset match * `.+` Match up to the end of line PHP code: ``` preg_match_all("~^\|\h+\d+>(?!\h*\Q****END OF INPUT****\E)\h\K.+~mi", $str, $matches); ``` [Live demo](https://3v4l.org/9MBQU) Output of `print_r($matches[0]);`: ``` Array ( [0] => ! HF def2-TZVP opt numfreq [1] => % scf [2] => convergence tight [3] => end [4] => * xyz 0 1 [5] => C 0 0 0 [6] => O 0 0 1 [7] => * ) ``` You need to do a `implode(PHP_EOL, $matches[0]);` to join values together. Upvotes: 1
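As an aside, the accepted pattern carries over to JavaScript/TypeScript regexes almost unchanged; a small sketch running it over a few of the lines quoted in the question with `String.prototype.matchAll`:

```ts
const input = [
  '|  1> ! HF def2-TZVP opt numfreq',
  '|  2>',
  '|  3> % scf',
  '| 12>              ****END OF INPUT****',
].join('\n');

// same pattern as the accepted answer, with the global flag for matchAll
const lines = [...input.matchAll(/^\|\s*\d+>(.*)$/gm)]
  .map(m => m[1])
  .filter(line => !line.includes('****END OF INPUT****'));

console.log(lines);
// [' ! HF def2-TZVP opt numfreq', '', ' % scf']
```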
date: 2018/03/14 | nb_tokens: 1,350 | text_size: 4,261
<issue_start>username_0: I am new to tensorflow, I am using CNN model descrined by <http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/> I am getting around 60% accuracy for 5 class and 80% accuracy for 2 class classification Now I want to visualize which word impact most to a particular classification, can anybody tell me how to do it<issue_comment>username_1: ``` $p ="/[|]\s*\d*[>]\s(.+)/"; $t = "================================================================================ INPUT FILE ================================================================================ NAME = CO-c0m1.txt | 1> ! HF def2-TZVP opt numfreq | 2> | 3> % scf | 4> convergence tight | 5> end | 6> | 7> * xyz 0 1 | 8> C 0 0 0 | 9> O 0 0 1 | 10> * | 11> | 12> ****END OF INPUT**** ================================================================================"; preg_match_all($p,$t,$res); die(json_encode($res[1], JSON_PRETTY_PRINT)); /* Output: [ "! HF def2-TZVP opt numfreq", "% scf", " convergence tight", "end", "* xyz 0 1", "C 0 0 0", "O 0 0 1", "*", " ****END OF INPUT****" ] */ ``` Second item of `$res` is an array that have what you want. Upvotes: 2 <issue_comment>username_2: You need a regular expression that matches the lines that start with `|` followed by some spaces, then one or more digits then `>` and you need only the text that follows this prefix. The regular expression is: `/^\|\s*\d+>(.*)$/m`. It contains a capturing group for the text you need. [`preg_match_all()`](http://php.net/manual/en/function.preg-match-all.php) puts the capturing fragments in `$matches[1]`: ``` preg_match_all('/^\|\s*\d+>(.*)$/m', $source[$i], $matches); echo(implode("\n", $matches[1])); ``` You can then remove the line that contains `****END OF INPUT****` by other means ([`array_pop()`](http://php.net/manual/en/function.array-pop.php), [`array_filter()`](http://php.net/manual/en/function.array-filter.php), etc.) Check it in action: <https://3v4l.org/hUEBk> The `regex` explained: ``` / # regex delimiter ^ # match the beginning of the line \| # match '|' (it needs to be escaped because it is a meta-character) \s # match a whitespace character (space, tab) * # the previous (a whitespace) can appear zero or more times \d # match a digit (0..9) + # the previous (a digit) can appear one or more times > # match '>' ( # begin of a capturing group .* # match any character, any number of times ) # end of the capturing group $ # match the end of the line / # regex delimiter m # multiline (regex modifier); check the regex against each line of the input string ``` Read more about [Perl-Compatible Regular Expressions in PHP](http://php.net/manual/en/pcre.pattern.php). Upvotes: 3 [selected_answer]<issue_comment>username_1: You don't need to run first regex on your text, only run this regex: ``` preg_match_all("/[|]\s*\d*[>]\s(.+)/", $source[$i], $matches); echo(implode("\n", $matches[1])); ``` This works fine in my tests. 
Upvotes: 1 <issue_comment>username_3: You may have a single regex solution to get all those data in one go: ``` ^\|\h+\d+>(?!\h*\Q****END OF INPUT****\E)\h\K.+ ``` Breakdown: * `^` Match beginning of line * `\|\h+\d+>` Match up to `digit>` * `(?!` Start of a negative lookahead + `\h*` If horizental whitespace(s) exist(s) + `\Q****END OF INPUT****\E` And ends with *end of input* * `)` End of lookahead * `\h\K` Match a horizental whitespace then reset match * `.+` Match up to the end of line PHP code: ``` preg_match_all("~^\|\h+\d+>(?!\h*\Q****END OF INPUT****\E)\h\K.+~mi", $str, $matches); ``` [Live demo](https://3v4l.org/9MBQU) Output of `print_r($matches[0]);`: ``` Array ( [0] => ! HF def2-TZVP opt numfreq [1] => % scf [2] => convergence tight [3] => end [4] => * xyz 0 1 [5] => C 0 0 0 [6] => O 0 0 1 [7] => * ) ``` You need to do a `implode(PHP_EOL, $matches[0]);` to join values together. Upvotes: 1
date: 2018/03/14 | nb_tokens: 690 | text_size: 2,595
<issue_start>username_0: I was studying C++ and came across [this](http://www.cplusplus.com/doc/tutorial/variables/) link for data types. In first table, character type is listed even under signed and unsigned integer category: character types char Exactly one byte in size. At least 8 bits. Integer types (signed) signed char Same size as char. At least 8 bits. Integer types (unsigned) unsigned char (same size as their signed counterparts) What does this mean? Does this mean that we could use char type in place of 8 bit integer as we typically do for 8 bit data in C? For example, below code is legal even in C. ``` unsigned char index; int ar1[]={4,5,6,7,8}; for(index=0;index<5;index++) cout< ``` [cppreference](http://en.cppreference.com/w/cpp/language/types) does not list char under integer category. Secondly, is the above usage of unsigned char for index is safe in C++? Are we going to save memory in this usage or will it not save anything? Or is there any performance issue if we use char type?<issue_comment>username_1: Using an unsigned char is perfectly fine for the example you gave. What will happen is when you call `ar1[index]` is the `unsigned char` will converted into a `size_t` before the call; and since the unsigned char is known to be smaller or the same size as size\_t there will be no warning issued by the compiler. > > Do you save any memory? > > > Well; yes outside the call to `[]`. Since the type is converted on the call, you will end up having to use more than that much memory later since you can't pass that item by reference. On the other hand if your `index` variable is to last the life of the program; and you plan to have a lot of them; then the space saving can make it worth while. Upvotes: 0 <issue_comment>username_2: I would like to point out the existence of data types such as `(u)int{n}_t` where n is size in bits, which since c++11 do signed and unsigned integers of specified sizes and are unlikely to be mistaken for characters. [cppreference](http://en.cppreference.com/w/cpp/types/integer) (cannot comment yet) Upvotes: 1 <issue_comment>username_3: There is a subtlety of language employed by the standard. It describes "signed integer types", "unsigned integer types", and "integral types" (a.k.a "integer types"), which are `bool`, `char`, `char16_­t`, `char32_­t`, `wchar_­t`, and the signed and unsigned integers. The headings in cppreference's description of the fundamental types are non-normative, and exist to highlight that character types represent character values, in addition to being integers. Upvotes: 0
date: 2018/03/14 | nb_tokens: 608 | text_size: 2,301
<issue_start>username_0: I have a text box that takes input and when i hit enter it stores the output and displays in an unordered list format. The function without IIFE works when i use onclick event. However, with IIFE its not functioning. Please help. ``` messagewithenter Message: (function () { var link = document.getElementById("submitenter"); link.addEventListener("click", function () { document.onkeypress = enter; function enter(e) { if (e.which == 13 || e.keyCode == 13) { addMessage(); } } }); function addMessage() { var message = document.getElementById("message").value; var output = "<li>" + message + "<li" + "<br>"; document.getElementById("list").innerHTML += output; } }()); ```<issue_comment>username_1: Using an unsigned char is perfectly fine for the example you gave. What will happen is when you call `ar1[index]` is the `unsigned char` will converted into a `size_t` before the call; and since the unsigned char is known to be smaller or the same size as size\_t there will be no warning issued by the compiler. > > Do you save any memory? > > > Well; yes outside the call to `[]`. Since the type is converted on the call, you will end up having to use more than that much memory later since you can't pass that item by reference. On the other hand if your `index` variable is to last the life of the program; and you plan to have a lot of them; then the space saving can make it worth while. Upvotes: 0 <issue_comment>username_2: I would like to point out the existence of data types such as `(u)int{n}_t` where n is size in bits, which since c++11 do signed and unsigned integers of specified sizes and are unlikely to be mistaken for characters. [cppreference](http://en.cppreference.com/w/cpp/types/integer) (cannot comment yet) Upvotes: 1 <issue_comment>username_3: There is a subtlety of language employed by the standard. It describes "signed integer types", "unsigned integer types", and "integral types" (a.k.a "integer types"), which are `bool`, `char`, `char16_­t`, `char32_­t`, `wchar_­t`, and the signed and unsigned integers. The headings in cppreference's description of the fundamental types are non-normative, and exist to highlight that character types represent character values, in addition to being integers. Upvotes: 0
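For reference, here is a sketch of how the question's snippet could be restructured so the IIFE listens for Enter directly on the text box. The `message` and `list` IDs come from the question's code; the `keydown`/`e.key` handling and the `<li>` construction are assumptions of this sketch:

```ts
(function () {
  const message = document.getElementById('message') as HTMLInputElement;
  const list = document.getElementById('list') as HTMLUListElement;

  message.addEventListener('keydown', (e: KeyboardEvent) => {
    if (e.key === 'Enter') {
      addMessage();
    }
  });

  function addMessage(): void {
    const li = document.createElement('li');
    // using textContent avoids the unclosed "<li" from the original string concatenation
    li.textContent = message.value;
    list.appendChild(li);
    message.value = '';
  }
})();
```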
date: 2018/03/14 | nb_tokens: 1,164 | text_size: 4,921
<issue_start>username_0: Every connection needs to be opened and closed. But what time is the best or it depends on the situation? 1. Open connection when initiate a windows(WPF), form(Windows form), webpage(ASP.NET) and close when the user close window/form/webpage. 2. Open connection only when user calls the database. For example the user click "Add" button and we run the SQL "INSERT..." then close the connection ASAP. Then if the user click Add/Edit again we have to reconnect to the database, run the SQL and close it ASAP again? Both solutions have their advantages. By keeping the connection, we will make the program more performance. By keep connecting and disconnecting, the performance will drop.<issue_comment>username_1: It is better to have a separate project which we call the data layer. The project can have different files for different modules (recommended) or all in one file as you wish. The database layer will expose the various methods like insert, update, get, delete, etc. It is always better to open the connection only for the particular method call and close it as soon as you get the result. You can use the using block like: ``` using (SqlConnection connection = new SqlConnection(connectionString)) { connection.Open(); //your code here insert/update/get/delete } ``` The connection will get disposed automatically when the control is out of the using block. Upvotes: 1 <issue_comment>username_2: Open the connection and do your operation such as Inserting, Update or Delete and as you get the response close the connection. to every task open and close connection do not leave connection open in any case Upvotes: 1 <issue_comment>username_3: ASP.NET / Web API / (...) * Usually tied to the unit of work pattern, a connection is created for an incoming request then closed when the request has been processed + deviations may occur when you have long running background tasks WPF / Window Forms * In this case I would suggest that it's more on a per action basis, even when polling for changes - if that scenario exists + Open, Query, Close, repeat Upvotes: 2 [selected_answer]<issue_comment>username_4: The correct answer is wrap them in a `using` for the query or queries you want to use them for. Don't hold on to connections for longer then you immediately need. Doing so creates more problems then its worth. In essence if you need a connection for a bunch of queries just wrap them in a `using` statement, if they are separated by long running tasks, close them and open them on a piecemeal basis. The worst thing you can so is try to keep them open and check for if they are still alive. [SQL Server Connection Pooling (ADO.NET)](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling) > > In practice, most applications use only one or a few different > configurations for connections. This means that during application > execution, many identical connections will be repeatedly opened and > closed. To minimize the cost of opening connections, *ADO.NET* uses an > optimization technique called connection pooling. > > > **Furthermore** > > Connection pooling reduces the number of times that new connections > must be opened. The pooler maintains ownership of the physical > connection. It manages connections by keeping alive a set of active > connections for each given connection configuration. Whenever a user > calls `Open` on a connection, the pooler looks for an available > connection in the pool. 
If a pooled connection is available, it > returns it to the caller instead of opening a new connection. When the > application calls `Close` on the connection, the pooler returns it to > the pooled set of active connections instead of closing it. Once the > connection is returned to the pool, it is ready to be reused on the > next `Open` call. > > > In the following example, three new SqlConnection objects are created, but only two connection pools are required to manage them. **Example from MSDN** ``` using (SqlConnection connection = new SqlConnection("Integrated Security=SSPI;Initial Catalog=Northwind")) { connection.Open(); // Pool A is created. } using (SqlConnection connection = new SqlConnection("Integrated Security=SSPI;Initial Catalog=pubs")) { connection.Open(); // Pool B is created because the connection strings differ. } using (SqlConnection connection = new SqlConnection("Integrated Security=SSPI;Initial Catalog=Northwind")) { connection.Open(); // The connection string matches pool A. } ``` For *Controllers* *WCF* services, and CQRS, they are usually short lived, so injecting them in a scoped life cycle is very common. However for things like button clicks in user applications, Service patterns calls, just use them when you need them. Never try to cache or pool. Upvotes: 2
2018/03/14
1,140
4,843
<issue_start>username_0: In my project I am using the Activiti API for process workflow. If I start an Activiti workflow by calling ProcessEngines.getDefaultProcessEngine(), Activiti will create a few tables in our database like ACT\_GE\_PROPERTY, ACT\_HI\_ACTINST etc. But our requirement is that we are not going to use the Activiti tables at all; we are using our own database tables. How do we avoid creating the Activiti tables in our database? Is it possible? I have read that we have to write our own implementation of ProcessEngineConfiguration. Could anyone provide the steps to override the implementation of ProcessEngineConfiguration? Thanks in advance.
2018/03/14
526
2,051
<issue_start>username_0: I'm using `platformWorkerAppDynamic` to render my Angular application. However, I need to manipulate canvas context. I can create canvas, using `Renderer2` but **I can't find a way to call `getContext('2d')` method** for example or **draw an image to canvas**. I get an error because `this.canvas.nativeElement` is an object of `WebWorkerRenderNode` but not the HTML element. Can anyone help me with this issue? Some tutorials of manipulation with DOM for Angular application which is rendered by means of webworker are also more than welcome.<issue_comment>username_1: You should consider using viewchild first declare a template variable to your canvas element ``` ``` import the following into your component ``` import { ViewChild, ElementRef } from '@angular/core'; ``` then, use it inside your component class ``` @ViewChild('myCanvas') canvasRef: ElementRef; ``` Now, declare it inside angular lifecycle hook ``` ngOnInit() { var ctx = this.canvasRef.nativeElement.getContext('2d'); ctx.fillStyle="red"; ctx.fillRect(0,0,600,600); } ``` Upvotes: -1 <issue_comment>username_2: Unfortunately, this is a native limitation of the web platform, there is simply no HTMLCanvasElement, and thus no canvas context in web workers, at least not in the way you are looking for. For worker threads, there is something called [OffscreenCanvas](https://developer.mozilla.org/en-US/docs/Web/API/OffscreenCanvas), but the problem is that its `getContext` implementation is extremely limited currently, it's basically for WebGL only (so no `getContext("2d")`, and no corresponding methods, such as beginPath, moveTo, stroke, etc.): ``` let canvas = new OffscreenCanvas(512, 512); let canvasCtx = canvas.getContext("webgl"); ... ``` Your second option is to compute the raw image data yourself in a web-worker, and transfer it back to the main thread for drawing on a canvas. None of these solve your problem directly, but if you want to do canvas-related stuff in a worker, these are your only options. Upvotes: 1
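To illustrate the second option from the answer above — computing the raw image data in a worker and handing it back to the main thread for drawing — here is a minimal, framework-free sketch; the file name, sizes and pixel logic are only placeholders:

```js
// worker.js — compute raw RGBA pixels off the main thread
self.onmessage = ({ data: { width, height } }) => {
  const pixels = new Uint8ClampedArray(width * height * 4);
  for (let i = 0; i < pixels.length; i += 4) {
    pixels[i] = 255;     // red channel
    pixels[i + 3] = 255; // alpha
  }
  // Transfer the underlying buffer instead of copying it
  self.postMessage({ buffer: pixels.buffer, width, height }, [pixels.buffer]);
};

// main thread — draw whatever the worker produced
const worker = new Worker('worker.js');
const ctx = document.querySelector('canvas').getContext('2d');
worker.onmessage = ({ data: { buffer, width, height } }) => {
  const imageData = new ImageData(new Uint8ClampedArray(buffer), width, height);
  ctx.putImageData(imageData, 0, 0);
};
worker.postMessage({ width: 256, height: 256 });
```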
2018/03/14
413
1,370
<issue_start>username_0: I have a Hibernate query: ``` getSession() .createQuery("from Entity where id in :ids") .setParameterList("ids", ids) .list(); ``` where `ids` is a Collection that can contain a lot of ids. Currently I get an exception when the collection is very large: `java.io.IOException: Tried to send an out-of-range integer as a 2-byte value`. I heard that PostgreSQL has some problem with this, but I can't find a way to rewrite the query in HQL.<issue_comment>username_1: Try adding parentheses around `:ids`, i.e.: ``` "from Entity where id in (:ids)" ``` Upvotes: -1 <issue_comment>username_2: The exception occurs because of PostgreSQL's limit of 32767 bind variables per statement. As a possible workaround, try to split your large query into smaller ones. Upvotes: 6 <issue_comment>username_3: If you don't want to split your large query into smaller ones, you may use JDBC: ResultSet executeQuery() Upvotes: -1 <issue_comment>username_4: In the newer PostgreSQL driver **42.4.0** you can now pass up to 65,535 parameters. > > fix: queries with up to 65535 (inclusive) parameters are supported now (previous limit was 32767) PR #2525, [Issue #1311](https://github.com/pgjdbc/pgjdbc/issues/1311#issuecomment-1143805011) > > > Source: <https://jdbc.postgresql.org/changelogs/2022-06-09-42.4.0-release/> Upvotes: 0
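A sketch of the "split it into smaller queries" workaround from the highest-voted answer — `Entity`, `getSession()` and the `Long` id type are assumed from the question, the chunk size is an arbitrary value safely below PostgreSQL's 32767 bind-variable limit, and the typed `createQuery(String, Class)` overload assumes Hibernate 5.2+:

```java
import java.util.ArrayList;
import java.util.List;

List<Entity> loadByIds(List<Long> ids) {
    final int chunkSize = 10_000;               // stays well under 32767 parameters
    List<Entity> result = new ArrayList<>();
    for (int i = 0; i < ids.size(); i += chunkSize) {
        List<Long> chunk = ids.subList(i, Math.min(i + chunkSize, ids.size()));
        result.addAll(getSession()
                .createQuery("from Entity where id in (:ids)", Entity.class)
                .setParameterList("ids", chunk)
                .list());
    }
    return result;
}
```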
2018/03/14
360
1,130
<issue_start>username_0: I have a form: ``` ``` and I want to make a specific production after clicking on any element of the form eg. ``` ``` How can this be done - and work? I read about the touched or dirty directive, but I don't know how to implement it?
2018/03/14
269
1,122
<issue_start>username_0: We are currently using NodeJs with Knex for connecting with MySQL. We have plans to migrate our database to Cloud Spanner. So wanted to know, if knexjs has support for cloud spanner. I did not see any related articles in their official website (<http://knexjs.org/>). If not, any ORM which has support to both MySQL and Cloud Spanner which will have minimal changes from knexjs<issue_comment>username_1: The Google public docs list the different libraries that can be used with Google [Cloud Spanner](https://cloud.google.com/spanner/docs/reference/libraries#installing_the_client_library). You can use node.js with Cloud Spanner so I believe the knexjs should also work. A recommendation is to modify your code so that knexjs outputs the SQL command to help with debugging in case certain commands don't work Upvotes: -1 <issue_comment>username_2: We continued using Knexjs for our Spanner operations. It is working fine so far. We build the queries using knex and convert it to raw queries using ``` querybuilder.toSQL() ``` and binding the parameters. Upvotes: 3 [selected_answer]
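A small sketch of the approach in the accepted answer — build the query with Knex, then pull out the SQL text and bindings yourself. The table and column names are invented, and mapping the positional bindings onto Cloud Spanner's named parameters is left out:

```js
const knex = require('knex')({ client: 'mysql' }); // no connection needed just to build queries

const builder = knex('users')
  .select('id', 'name')
  .where('status', 'active');

const { sql, bindings } = builder.toSQL();
// sql      -> "select `id`, `name` from `users` where `status` = ?"
// bindings -> ['active']

// sql + bindings can now be handed to whatever client actually executes the
// statement (e.g. the Cloud Spanner Node.js client), with the placeholders
// rewritten to that client's parameter syntax.
```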
2018/03/14
1,231
3,762
<issue_start>username_0: Wondering why this isn't possible: ``` class Test { init(key: T, value: U) { } } let array: [Test] = [ Test(key: "test", value: 42), Test(key: "test", value: []) ] ``` I'm getting an error: > > error: cannot convert value of type 'Test' to expected > element type 'Test' > > > **Update: following Brduca's answer** How come this works: ``` class Test { let key: T let value: U init(key: T, value: U) { self.key = key self.value = value } } let properties: [Test] = [ Test(key: "fontSize", value: []), Test(key: "textColor", value: 42) ] ``` But this doesn't: ``` class TestBlock { let key: String let block: (T, U) -> Void init(key: String, block: @escaping (T, U) -> Void) { self.key = key self.block = block } } let block1: (UILabel, CGFloat) -> Void = { $0.font = $0.font.withSize($1) } let block2: (UILabel, UIColor) -> Void = { $0.textColor = $1 } let propertiesWithBlock: [TestBlock] = [ TestBlock(key: "fontSize", block: block1), TestBlock(key: "textColor", block: block2) ] ``` I'm getting this error: ``` Cannot convert value of type 'TestBlock' to expected element type 'TestBlock' ```<issue_comment>username_1: No need to explicitly type: ``` class Test { init(key: T, value: U) { } } let array: [Test] = [ Test(key: "test", value: []), Test(key: "test", value: 42) ] ``` Update: ``` typealias tuple = (Any,Any) class TestBlock { let key: String let block: (tuple) -> Void init(key: String, block: @escaping (tuple) -> Void) { self.key = key self.block = block } } let block1: (tuple) -> Void = { (arg) in let (_label, _size) = arg let label = _label as! UILabel label.font = label.font.withSize((_size as! CGFloat)) } let block2: (tuple) -> Void = { (arg) in let (_label, _color) = arg let label = _label as! UILabel let color = _color as! UIColor label.textColor = color } let propertiesWithBlock: [TestBlock] = [ TestBlock(key: "fontSize", block: block1), TestBlock(key: "textColor", block: block2) ] ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You got this error, because `heterogeneous collection literal could only be inferred to '[Any]'`. That means that compiler is not able to resolve relations btw `Test`, `Test` and types. It is same as you would tried to put `Int` and `String` into array without specifying it as `[Any]`. As a solution you can use [Brduca's answer](https://stackoverflow.com/a/49274534/5501940) or you can mark your array as `[Any]` ``` let array: [Any] = [ Test(key: "test", value: 42.0), Test(key: "test", value: []) ] ``` Upvotes: -1 <issue_comment>username_3: I found "workaround" but nobody can like it. Basically you create base class for Test and put whatever function you want to be called on it when accessed from array. Also you can make it implement Equatable or create another reusable class that implements it: ``` class TestBase: CSObject { open func someFunction(){ fatalError() } } open class CSObject: Equatable { public static func ==(lhs: CSObject, rhs: CSObject) -> Bool { lhs === rhs } } //Now make your Test extend TestBase and use that in your Arrays: class Test : TestBase { init(key: T, value: U) { } @override func someFunction(){ //...do some work that is accessible from array } } let array: [TestBase] = [ Test(key: "test", value: 42.0), Test(key: "test", value: []) ] array[0].someFunction() // :) ``` Its like joke for me ... writing simple generics list for ages but swift just doesn't allow due to is milliard of stupid type-safety restrictions that make development more time consuming. Sure this is solution just for certain situations... 
Upvotes: 0
2018/03/14
3,179
14,382
<issue_start>username_0: I've read the docs and followed the examples but I am unable to get user claims into the access token. My client is not ASP.NET core, so the configuration of the MVC client is not the same as the v4 samples. Unless I have misunderstood the docs, the ApiResources are used to populate the RequestedClaimTypes in the profile service when creating the access token. The client should add the api resource to it's list of scopes to include associated userclaims. In my case they are not being connected. When ProfileService.GetProfileDataAsync is called with a caller of "ClaimsProviderAccessToken", the requested claim types are empty. Even if I set the context.IssuedClaims in here, when it is called again for "AccessTokenValidation" the claims on the context are not set. In the MVC app: ``` app.UseOpenIdConnectAuthentication( new OpenIdConnectAuthenticationOptions { UseTokenLifetime = false, ClientId = "portal", ClientSecret = "secret", Authority = authority, RequireHttpsMetadata = false, RedirectUri = redirectUri, PostLogoutRedirectUri = postLogoutRedirectUri, ResponseType = "code id_token", Scope = "openid offline_access portal", SignInAsAuthenticationType = "Cookies", Notifications = new OpenIdConnectAuthenticationNotifications { AuthorizationCodeReceived = async n => { await AssembleUserClaims(n); }, RedirectToIdentityProvider = n => { // if signing out, add the id_token_hint if (n.ProtocolMessage.RequestType == Microsoft.IdentityModel.Protocols.OpenIdConnect.OpenIdConnectRequestType.Logout) { var idTokenHint = n.OwinContext.Authentication.User.FindFirst("id_token"); if (idTokenHint != null) { n.ProtocolMessage.IdTokenHint = idTokenHint.Value; } } return Task.FromResult(0); } } }); private static async Task AssembleUserClaims(AuthorizationCodeReceivedNotification notification) { string authCode = notification.ProtocolMessage.Code; string redirectUri = "https://myuri.com"; var tokenClient = new TokenClient(tokenendpoint, "portal", "secret"); var tokenResponse = await tokenClient.RequestAuthorizationCodeAsync(authCode, redirectUri); if (tokenResponse.IsError) { throw new Exception(tokenResponse.Error); } // use the access token to retrieve claims from userinfo var userInfoClient = new UserInfoClient(new Uri(userinfoendpoint), tokenResponse.AccessToken); var userInfoResponse = await userInfoClient.GetAsync(); // create new identity var id = new ClaimsIdentity(notification.AuthenticationTicket.Identity.AuthenticationType); id.AddClaims(userInfoResponse.GetClaimsIdentity().Claims); id.AddClaim(new Claim("access_token", tokenResponse.AccessToken)); id.AddClaim(new Claim("expires_at", DateTime.Now.AddSeconds(tokenResponse.ExpiresIn).ToLocalTime().ToString())); id.AddClaim(new Claim("refresh_token", tokenResponse.RefreshToken)); id.AddClaim(new Claim("id_token", notification.ProtocolMessage.IdToken)); id.AddClaim(new Claim("sid", notification.AuthenticationTicket.Identity.FindFirst("sid").Value)); notification.AuthenticationTicket = new AuthenticationTicket(id, notification.AuthenticationTicket.Properties); } ``` Identity Server Client: ``` private Client CreatePortalClient(Guid tenantId) { Client portal = new Client(); portal.ClientName = "Portal MVC"; portal.ClientId = "portal"; portal.ClientSecrets = new List { new Secret("secret".Sha256()) }; portal.AllowedGrantTypes = GrantTypes.HybridAndClientCredentials; portal.RequireConsent = false; portal.RedirectUris = new List { "https://myuri.com", }; portal.AllowedScopes = new List { IdentityServerConstants.StandardScopes.OpenId, 
IdentityServerConstants.StandardScopes.Profile, "portal" }; portal.Enabled = true; portal.AllowOfflineAccess = true; portal.AlwaysSendClientClaims = true; portal.AllowAccessTokensViaBrowser = true; return portal; } ``` The API resource: ``` public static IEnumerable GetApiResources() { return new List { new ApiResource { Name= "portalresource", UserClaims = { "tenantId","userId","user" }, Scopes = { new Scope() { Name = "portalscope", UserClaims = { "tenantId","userId","user",ClaimTypes.Role, ClaimTypes.Name), }, } }, }; } ``` The Identity resource: ``` public static IEnumerable GetIdentityResources() { return new IdentityResource[] { // some standard scopes from the OIDC spec new IdentityResources.OpenId(), new IdentityResources.Profile(), new IdentityResources.Email(), new IdentityResource("portal", new List{ "tenantId", "userId", "user", "role", "name"}) }; } ``` UPDATE: Here is the interaction between the MVC app and the Identity Server (IS): ``` MVC: Owin Authentication Challenge IS: AccountController.LoginAsync - assemble user claims and call HttpContext.SignInAsync with username and claims) ProfileService.IsActiveAsync - Context = "AuthorizeEndpoint", context.Subject.Claims = all userclaims ClaimsService.GetIdentityTokenClaimsAsync - Subject.Claims (all userclaims), resources = 1 IdentityResource (OpenId), GrantType = Hybrid MVC: SecurityTokenValidated (Notification Callback) AuthorizationCodeReceived - Protocol.Message has Code and IdToken call to TokenClient.RequestAuthorizationCodeAsync() IS: ProfileService.IsActiveAsync - Context = "AuthorizationCodeValidation", context.Subject.Claims = all userclaims ClaimsService.GetAccessTokenClaimsAsync - Subject.Claims (all userclaims), resources = 2 IdentityResource (openId,profile), GrantType = Hybrid ProfileService.GetProfileDataAsync - Context = "ClaimsProviderAccessToken", context.Subject.Claims = all userclaims, context.RequestedClaimTypes = empty, context.IssuedClaims = name,role,user,userid,tenantid ClaimsService.GetIdentityTokenClaimsAsync - Subject.Claims (all userclaims), resources = 2 IdentityResource (openId,profile), GrantType = authorization_code MVC: call to UserInfoClient with tokenResponse.AccessToken IS: ProfileService.IsActiveAsync - Context = "AccessTokenValidation", context.Subject.Claims = sub,client_id,aud,scope etc (expecting user and tenantId here) ProfileService.IsActiveAsync - Context = "UserInfoRequestValidation", context.Subject.Claims = sub,auth_time,idp, amr ProfileService.GetProfileDataAsync - Context = "UserInfoEndpoint", context.Subject.Claims = sub,auth_time,idp,amp, context.RequestedClaimTypes = sub ```<issue_comment>username_1: You need to modify the code of "Notifications" block in MVC App like mentioned below: ``` Notifications = new OpenIdConnectAuthenticationNotifications { AuthorizationCodeReceived = async n => { var userInfoClient = new UserInfoClient(UserInfoEndpoint); var userInfoResponse = await userInfoClient.GetAsync(n.ProtocolMessage.AccessToken); var identity = new ClaimsIdentity(n.AuthenticationTicket.Identity.AuthenticationType); identity.AddClaims(userInfoResponse.Claims); var tokenClient = new TokenClient(TokenEndpoint, "portal", "secret"); var response = await tokenClient.RequestAuthorizationCodeAsync(n.Code, n.RedirectUri); identity.AddClaim(new Claim("access_token", response.AccessToken)); identity.AddClaim(new Claim("expires_at", DateTime.UtcNow.AddSeconds(response.ExpiresIn).ToLocalTime().ToString(CultureInfo.InvariantCulture))); identity.AddClaim(new Claim("refresh_token", 
response.RefreshToken)); identity.AddClaim(new Claim("id_token", n.ProtocolMessage.IdToken)); n.AuthenticationTicket = new AuthenticationTicket(identity, n.AuthenticationTicket.Properties); }, RedirectToIdentityProvider = n => { if (n.ProtocolMessage.RequestType == OpenIdConnectRequestType.LogoutRequest) { var idTokenHint = n.OwinContext.Authentication.User.FindFirst("id_token").Value; n.ProtocolMessage.IdTokenHint = idTokenHint; } return Task.FromResult(0); } } ``` (consider if any changes related to the version of identity server as this code was built for identity server 3.) Upvotes: 0 <issue_comment>username_2: As I'm not seeing what happens in your `await AssembleUserClaims(context);` I would suggest to check if it is doing the following: Based on the the access token that you have from either the `context.ProtoclMessage.AccessToken` or from the call to the `TokenEndpoint` you should create a new `ClaimsIdentity`. Are you doing this, because you are not mentioning it? Something like this: ``` var tokenClient = new TokenClient( IdentityServerTokenEndpoint, "clientId", "clientSecret"); var tokenResponse = await tokenClient.RequestAuthorizationCodeAsync( n.Code, n.RedirectUri); if (tokenResponse.IsError) { throw new Exception(tokenResponse.Error); } // create new identity var id = new ClaimsIdentity(n.AuthenticationTicket.Identity.AuthenticationType); id.AddClaim(new Claim("access_token", tokenResponse.AccessToken)); id.AddClaim(new Claim("expires_at", DateTime.Now.AddSeconds(tokenResponse.ExpiresIn).ToLocalTime().ToString())); id.AddClaim(new Claim("refresh_token", tokenResponse.RefreshToken)); id.AddClaim(new Claim("id_token", n.ProtocolMessage.IdToken)); id.AddClaims(n.AuthenticationTicket.Identity.Claims); // get user info claims and add them to the identity var userInfoClient = new UserInfoClient(IdentityServerUserInfoEndpoint); var userInfoResponse = await userInfoClient.GetAsync(tokenResponse.AccessToken); var userInfoEndpointClaims = userInfoResponse.Claims; // this line prevents claims duplication and also depends on the IdentityModel library version. It is a bit different for >v2.0 id.AddClaims(userInfoEndpointClaims.Where(c => id.Claims.Any(idc => idc.Type == c.Type && idc.Value == c.Value) == false)); // create the authentication ticket n.AuthenticationTicket = new AuthenticationTicket( new ClaimsIdentity(id.Claims, n.AuthenticationTicket.Identity.AuthenticationType, "name", "role"), n.AuthenticationTicket.Properties); ``` And one more thing - read [this](https://leastprivilege.com/2016/12/01/new-in-identityserver4-resource-based-configuration/) regarding the resources. In your particular case, you care about IdentityResources (but I see that you also have it there). So - when calling the `UserInfoEndpoint` do you see the claims in the response? If no - then the problem is that they are not issued. Check these, and we can dig in more. Good luck **EDIT** I have a solution that you may, or may not like, but I'll suggest it. In the IdentityServer project, in the `AccountController.cs` there is a method `public async Task Login(LoginInputModel model, string button)`. This is the method after the user has clicked the login button on the login page (or whatever custom page you have there). In this method there is a call `await HttpContext.SignInAsync`. This call accept parameters the user subject, username, authentication properties and **list of claims**. 
Here you can add your custom claim, and then it will appear when you call the userinfo endpoint in the `AuthorizationCodeReceived`. I just tested this and it works. Actually I figured out that this is the way to add custom claims. Otherwise - IdentityServer doesn't know about your custom claims, and is not able to populate them with values. Try it out and see if it works for you. Upvotes: 1 <issue_comment>username_3: You can try to implement your own IProfileService and override it following way: ``` services.AddIdentityServer() .//add clients, scopes,resources here .AddProfileService(); ``` For more information look up here: <https://damienbod.com/2016/10/01/identityserver4-webapi-and-angular2-in-a-single-asp-net-core-project/> Upvotes: -1 <issue_comment>username_4: Why do you have "portal" listed as an identity resource and Api resource? That could be causing some confusion. Also, before I switched to IdentityServer4 and asp.net core, my IdentityServer3 startup code looked very similar to what you have with MVC. You may want to look at the examples for IdentityServer3. Some suggestions I may give, in your "ResponseType" field for MVC, you could try "code id\_token token" Also, you are setting your claims on AuthorizationCodeReceived, instead use SecurityTokenValidated. But you shouldn't have to do anything custom like people are mentioning. IdentityServer4 handles custom ApiResources like you are attempting to do. Upvotes: 0 <issue_comment>username_5: 1. portal is not an identity resource: you should remove new IdentityResource("portal", new List{ "tenantId", "userId", "user", "role", "name"}) 2. Names for the api resources should be consistent: ``` public static IEnumerable GetApiResources() { return new List { new ApiResource { Name= "portal", UserClaims = { "tenantId","userId","user" }, Scopes = { new Scope("portal","portal") } }, ``` }; } 3. Try setting GrantTypes.Implicit in the client. Upvotes: -1
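For the `IProfileService` route suggested in one of the answers, a minimal sketch could look like the following (IdentityServer4; the claim filter is purely illustrative — real implementations usually load claims from the user store):

```csharp
using System.Linq;
using System.Threading.Tasks;
using IdentityServer4.Models;
using IdentityServer4.Services;

public class CustomProfileService : IProfileService
{
    public Task GetProfileDataAsync(ProfileDataRequestContext context)
    {
        // Copy the claims you care about from the subject onto the outgoing token
        var claims = context.Subject.Claims
            .Where(c => c.Type == "tenantId" || c.Type == "userId" || c.Type == "user")
            .ToList();
        context.IssuedClaims.AddRange(claims);
        return Task.CompletedTask;
    }

    public Task IsActiveAsync(IsActiveContext context)
    {
        context.IsActive = true; // replace with a real "is the user still valid" check
        return Task.CompletedTask;
    }
}

// Registration:
// services.AddIdentityServer()
//         .AddProfileService<CustomProfileService>();
```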
2018/03/14
1,326
4,284
<issue_start>username_0: I have a function that changes the background color on hover and this works fine. However, What I need to do is keep the hover function but also make the the class active on a click event. I am having trouble finding a solution for this and would be grateful for any assistance. Thanks PS: If it helps, I have bootstrap available. ``` $(function() { $(".SentmsgHdr").hover(function() { $(this).css("background-color", "#f1f0ee"); }, function() { $(this).css("background-color", "#fcfaf7"); }); }); $(function() { $(document).on('click', '.SentmsgHdr', function() { $(this).css('background', 'yellow'); var sentid = $(this).find(".Sentdynid"); var id = sentid.attr("id"); $.ajax({ type: "POST", dataType: "JSON", url: "/domain/admin/Sentmsg.php", data: { id: id }, success: function(data) { var date = new Date(data.date); var newDate = date.getDate() + '/' + (date.getMonth() + 1) + '/' + date.getFullYear() + ' '+ date.getHours() + ':' + date.getMinutes() + ':' + date.getSeconds() +'PM'; $('#Sentdiv').html( '![](/domain/admin/images/contact.png)' + '' + data.from + ' ' + '' + 'Ticket#: ' + '' + data.ticket + '' + '' + '' + 'Subject: ' + '' + data.subject + '' + '' + '' + 'Date: '+ '' + newDate + '' + '' + '' + '' + ' --- ' + data.message + '' + '' ); } }); }); }); ```<issue_comment>username_1: The correct way to do this is with CSS, not JavaScript, except for the click which should add a class. Here's an example, see comments: ```js // I haven't disallowed repeated clicks here, but they'll be no-ops. // If you want to disallow them, make the selector ".SentmsgHdr:not(.clicked)". $(document).on("click", ".SentmsgHdr", function() { // Obviously the .text(...) part of this is just for this demo, // you probably just want the class part $(this).addClass("clicked").text("I have the class and have been clicked"); }); ``` ```css /* Defines the color for them when not hovered */ .SentmsgHdr { background-color: #fcfaf7; } /* Defines the color for them when hovered */ .SentmsgHdr:hover { background-color: #f1f0ee; } /* Defines the color for them when they've been clicked */ .SentmsgHdr.clicked { background-color: yellow; } ``` ```html I don't have the class I have the class, click me I don't have the class I have the class, click me ``` Upvotes: 1 <issue_comment>username_2: You can add some css class to active alement to separeate styles from hover to active. Here's a simple example. ```js $('.btn').click(function(ev) { $('.btn.active').removeClass('active'); $(this).addClass('active') }) ``` ```css nav a { display: block; padding: 2rem ; font-size: 1.1rem; background-color:#ccc; text-decoration: none; text-align: center; border: solid 1px #fff; } nav a:hover { background-color:green; color: #fff; } nav a.active { background-color: blue; color: #fff; } nav a.active:hover { } ``` ```html [One](#) [two](#) [three](#) ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: Your main Problem is, that the hover styles INSTANTLY overwrite your click event styles. The solutions given by the others here should solve your Problem, because if you use a class on the click event instead of css, the hover styles will not overwrite your click event styles. 
Upvotes: 0 <issue_comment>username_4: Try this ; or you can delete `active:hover` from css and just edit `.SentmsgHdr:hover` class with background `!important` ```js $(function() { /* $(".SentmsgHdr").hover(function() { $(this).css("background-color", "#fcfaf7"); }, function() { $(this).css("background-color", "#f1f0ee"); }); */ }); $(function() { $(document).on('click', '.SentmsgHdr', function() { //$(this).css('background', 'yellow'); $(this).addClass('active'); console.log('clicked'); }); }); ``` ```css .SentmsgHdr{ background-color: #fcfaf7; } .SentmsgHdr:hover { background-color: #f1f0ee; } .SentmsgHdr.active { background-color: yellow; } .SentmsgHdr.active:hover { background-color: #f1f0ee; } ``` ```html Click Me! ``` Upvotes: 1
2018/03/14
473
1,588
<issue_start>username_0: I currently have 4 websites running off my home desktop PC using XAMPP. They are running on ports 80, 81, 7733, and 25293. The first three run fine when accessed from an external network, however the last (25293) won't load. (This site can't be reached. ERR\_CONNECTION\_FAILED) I am port forwarding all 4 ports the exact same way. Just as soon as I'm not on my local network, the page stops loading. [![port forward](https://i.stack.imgur.com/SnLTt.png)](https://i.stack.imgur.com/SnLTt.png) I attempted to open up the port in my firewall as well however that achieved nothing. What can I do to resolve this? The error I receive upon visiting the port on an external network: [![External Access Error](https://i.stack.imgur.com/RU5RI.jpg)](https://i.stack.imgur.com/RU5RI.jpg)<issue_comment>username_1: Do you have another computer on the same network? If so, can that computer access your webserver? Try, because that must be your first step, then worry about being visible to the outside world. I just saw this [link](https://irvingduran.com/2013/02/how-to-access-xampp-from-external-address-http-and-mysql/) and [this](https://null-byte.wonderhowto.com/how-to/access-xampp-server-remotely-0168340/) one. Try to see if it solves your problem. Upvotes: 0 <issue_comment>username_2: This might be a common issue because you are using 5 digits port number, you may need port validation. For example this was known issue for Drupal: <https://www.drupal.org/project/link/issues/182916> Are you running Linux, or Windows server? Upvotes: 3 [selected_answer]
2018/03/14
437
1,466
<issue_start>username_0: I am searching an event field in a file but it is giving the wrong output. I am searching the gpio-keys event in input devices, for which I have written a script, but I'm unable to print anything in the output file (in my case I am writing to a button device file and it is always null). Please help me figure this out. Where am I going wrong in the script file? Bash script: ``` #!/bin/bash if grep -q "gpio-keys" /proc/bus/input/devices ; then EVENT=$(cat /proc/bus/input/devices | grep "Handlers=kbd") foo= `echo $EVENT | awk '{for(i=1;i<=NF;i++) if($i=="evbug")printf($(i-1))}'` #foo=${EVENT:(-7)} echo -n $foo > /home/ubuntu/Setups/buttonDevice fi ```
2018/03/14
602
2,363
<issue_start>username_0: I need to copy any selection from a webpage to the clipboard, it can be on a div, text input, password input, span, etc. I have the following function that does this for me, but the challenge is have the returned value from the function to be set on the clipboard ``` const getSelectionText = () => { let text = ""; let activeDomElement = document.activeElement; let activeDomElementTagName = activeDomElement ? activeDomElement.tagName.toLowerCase() : null; if ( (activeDomElementTagName === 'textarea') || (activeDomElementTagName === 'input' && /^(?:text|search|password|tel|url)$/i.test(activeDomElement.type)) && (typeof activeDomElement.selectionStart === "number") ) { text = activeDomElement.value.slice(activeDomElement.selectionStart, activeDomElement.selectionEnd); } else if (window.getSelection) { text = window.getSelection().toString(); } return text; } ``` Any idea, or links to resources would be helpful, thanks<issue_comment>username_1: Have a look at the example from w3schools.It is a basic example. <https://www.w3schools.com/howto/howto_js_copy_clipboard.asp> You can also use this - Just call myFunction with your returned text as an argument. ``` function myFunction(arg) { var x = document.createElement("INPUT"); x.setAttribute("type", "text"); x.setAttribute("value",arg); x.select(); document.execCommand("Copy"); alert("Copied the text: " + x.value); document.removeChild(x); } ``` Upvotes: 1 <issue_comment>username_2: `document.execCommand("copy")` this will copied all your selected text ! ```js function copySelectionText(){ var copysuccess // var to check whether execCommand successfully executed try{ copysuccess = document.execCommand("copy") // run command to copy selected text to clipboard } catch(e){ copysuccess = false } return copysuccess } document.body.addEventListener('mouseup', function(e){ var copysuccess = copySelectionText() // copy user selected text to clipboard if(copysuccess) { document.getElementById('paste').style.display = "block"; } }, false) ``` ```html ### Late dsfjdslkfjdslkfjdsklfj `Cdoing is godo for a wite` ``` Upvotes: 1 [selected_answer]
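Building on the `execCommand`-based answers above, one way to wire the question's `getSelectionText()` to the clipboard is a small helper that copies an arbitrary string via a temporary textarea (a sketch — it assumes `getSelectionText()` from the question is in scope and that the selection has already been captured before the textarea steals focus):

```js
const copyToClipboard = (text) => {
  const helper = document.createElement('textarea');
  helper.value = text;
  document.body.appendChild(helper);
  helper.select();
  let ok = false;
  try {
    ok = document.execCommand('copy'); // returns false if copying is not allowed
  } catch (e) {
    ok = false;
  }
  document.body.removeChild(helper);
  return ok;
};

// Usage: grab whatever is currently selected and push it to the clipboard
const copied = copyToClipboard(getSelectionText());
```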
2018/03/14
779
2,695
<issue_start>username_0: I have values `department=apple, department1=orange, department2=banana, department3=tomato, department4=potato` How do i tell in the following SQL where clause something like this? ``` u.department = ifContain(department or department1 or department2 or department3 or department4) OR ...... OR u.department4 = ifContain(department or department1 or department2 or department3 or department4) ``` SQL: ``` SELECT u.id, u.username, u.status, callstatus, callername, u.alias as alias, u.access1, u.access2 FROM sh_av_users u INNER join sh_av_profile p on u.profileid=p.id WHERE u.groups='abcd' and u.xyz='xyz' and ( u.department='{$anyValueMatchOutOf_departments}' OR u.department1='{$anyValueMatchOutOf_departments}' OR u.department2='{$anyValueMatchOutOf_departments}' OR u.department3='{$anyValueMatchOutOf_departments}' OR u.department4='{$anyValueMatchOutOf_departments}' ) ```<issue_comment>username_1: I'm not sure if that is what you want but you could use `REGEX` like. ``` ... u.department REGEX 'apple|orange|banana|tomato|potato' OR u.department1 REGEX 'apple|orange|banana|tomato|potato' OR u.department2 REGEX 'apple|orange|banana|tomato|potato' OR u.department3 REGEX 'apple|orange|banana|tomato|potato' OR u.department4 REGEX 'apple|orange|banana|tomato|potato' ... ``` but i did not test this. Upvotes: -1 <issue_comment>username_2: I think you need the [`IN (value,...)` function](https://dev.mysql.com/doc/refman/5.7/en/comparison-operators.html#function_in) Simple example: ``` SELECT u.id, u.username, u.status, callstatus, callername, u.alias as alias, u.access1, u.access2 FROM sh_av_users u INNER join sh_av_profile p on u.profileid=p.id WHERE u.groups='abcd' and u.xyz='xyz' and ( u.department IN ('banana', 'potato') OR u.department1 IN ('banana', 'potato') OR u.department2 IN ('banana', 'potato') OR u.department3 IN ('banana', 'potato') OR u.department4 IN ('banana', 'potato') ) ``` You can even run a nested query: ``` SELECT u.id, u.username, u.status, callstatus, callername, u.alias as alias, u.access1, u.access2 FROM sh_av_users u WHERE u.department in (select dep from departments where location = 'US') ``` Upvotes: 2 [selected_answer]
2018/03/14
361
1,133
<issue_start>username_0: The problem is that the language button (on the right of the site body), which is made with a Bootstrap v4 dropdown, somehow shows inside its div and goes underneath instead of on top, and I can't figure out how to fix it; even `z-index: 10000` didn't help. Page with a sample: <http://protasov.by/contacts>, the button is `#pm_language`<issue_comment>username_1: Here you have to remove some `css` and add some `html` code. Firstly, remove `overflow: hidden;` from this id ``` #wb_page_heading { position: relative; // padding-left: 20px; // top: 10px; width: 100%; display: block; z-index: 1; border-bottom: 0.16rem solid get_color_transparent('gray', 0.25); @include material_box_shadow(0, 0.1rem, 4px, 0.25); // overflow: hidden; /*Remove this*/ float: none; } ``` And wrap the inner content in `.clearfix` ``` Контакты -------- Language [English](#) [Русский](#) ``` Upvotes: 1 <issue_comment>username_2: So the solution, after a couple of hours of experimenting, is to remove `overflow: hidden` from the parent div element and set a manual height on it; that makes the dropdown body appear on top of the site canvas :) Upvotes: 0
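A small sketch of the fix described in the answers — the selector comes from the first answer and the height value is only an example:

```css
#wb_page_heading {
  /* overflow: hidden;   <- removed so the dropdown can escape the header box */
  height: 4rem;           /* explicit height keeps the layout from collapsing */
}
```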
2018/03/14
398
1,337
<issue_start>username_0: I have a Component, which has a child component. At ngOnInit() I'm calling a Web API and getting a list of data. The initial length of the list is 10, but it will have more. I need to execute some method (task|process|job) in the background to fetch the rest of the data 10 by 10 in a loop, which would run in parallel to other tasks in the background no matter what the user is currently doing or which component he/she is interacting with. And execute that method so that it doesn't block others. What is the correct way to do this?
2018/03/14
2,000
6,330
<issue_start>username_0: We have a Java 8 application served by an Apache Tomcat 8 behind an Apache server, which is requesting multiple webservices in parallel using CXF. From time to time, there's one of them which lasts exactly 3 more seconds than the rest (which should be about 500ms only). I've activated CXF debug, and now I have the place inside CXF where the 3 seconds are lost: ``` 14/03/2018 09:20:49.061 [pool-838-thread-1] DEBUG o.a.cxf.transport.http.HTTPConduit - No Trust Decider for Conduit '{http://ws.webapp.com/}QueryWSImplPort.http-conduit'. An affirmative Trust Decision is assumed. 14/03/2018 09:20:52.077 [pool-838-thread-1] DEBUG o.a.cxf.transport.http.HTTPConduit - Sending POST Message with Headers to http://172.16.56.10:5050/services/quertServices Conduit :{http://ws.webapp.com/}QueryWSImplPort.http-conduit ``` As you could see, there're three seconds between these two lines. When the request is ok, it usually takes 0ms in between these two lines. I've been looking into the CXF code, but no clue about the reason of these 3 secs... The server application (which is also maintained by us), is served from another Apache Tomcat 6.0.49, which is behind another Apache server. The thing is that it seems that the server's Apache receives the request after the 3 seconds. Anyone could help me? **EDIT:** We've monitored the server's send/received packages and it seems that the client's server is sending a negotiating package at the time it should, while the server is replying after 3 seconds. These are the packages we've found: ``` 481153 11:31:32 14/03/2018 2429.8542795 tomcat6.exe SOLTESTV010 SOLTESTV002 TCP TCP:Flags=CE....S., SrcPort=65160, DstPort=5050, PayloadLen=0, Seq=2858646321, Ack=0, Win=8192 ( Negotiating scale factor 0x8 ) = 8192 {TCP:5513, IPv4:62} 481686 11:31:35 14/03/2018 2432.8608381 tomcat6.exe SOLTESTV002 SOLTESTV010 TCP TCP:Flags=...A..S., SrcPort=5050, DstPort=65160, PayloadLen=0, Seq=436586023, Ack=2858646322, Win=8192 ( Negotiated scale factor 0x8 ) = 2097152 {TCP:5513, IPv4:62} 481687 11:31:35 14/03/2018 2432.8613607 tomcat6.exe SOLTESTV010 SOLTESTV002 TCP TCP:Flags=...A...., SrcPort=65160, DstPort=5050, PayloadLen=0, Seq=2858646322, Ack=436586024, Win=256 (scale factor 0x8) = 65536 {TCP:5513, IPv4:62} 481688 11:31:35 14/03/2018 2432.8628380 tomcat6.exe SOLTESTV010 SOLTESTV002 HTTP HTTP:Request, POST /services/consultaServices {HTTP:5524, TCP:5513, IPv4:62} ``` So, it seems is the server's Tomcat is the one which is blocked with something. Any clue? **EDIT 2:** Although that happened yesterday (the first server waiting 3s for the ack of the second), this is not the most common scenario. What it usually happens is what I described at the beginning (3 seconds between the two CXF's logs and the server receiving any request from the first one after 3 seconds. There has been some times when the server (the one which receives the request), hangs for 3 seconds. For instance: 1. Server 1 sends 5 requests at the same time (suppossedly) to server 2. 2. Server 2 receives 4 of them, in that same second, and start to process them. 3. Server 2 finish processing 2 of those 4 requests in 30ms and replies to server 1. 4. More or less at this same second, there's nothing registered in the application logs. 5. After three seconds, logs are registered again, and the server finish to process the remaining 2 requests. So, although the process itself is about only some milliseconds, the response\_time - request\_time is 3 seconds and a few ms. 6. 
At this same time, the remaining request (the last of the 5 requests that were sent) is registered in the network monitor and is processed by the application in just a few milliseconds. However, the overall processing time still ends up being a bit more than 3s, as it reached the server 3 seconds after being sent. So there is something like a hang in the middle of the process. 2 requests were successfully processed before this hang and answered in just a fraction of a second. 2 other requests lasted a little longer, the hang happened, and they ended with a processing time of 3 seconds. The last one reached the server just when the hang happened, so it didn't get into the application until after the hang. It sounds like a GC stop-the-world pause... but we have analyzed the GC logs and there's nothing wrong with them... could there be any other reason? Thanks! **EDIT 3:** Looking at the TCP flags like the one I pasted last week, we've noticed that there are lots of packets with the CE flag, which is a notification of TCP congestion. We're not network experts, but we have found that this could lead to a 3-second delay before packet retransmission... could anyone give us some help with that? Thanks. Kind regards.<issue_comment>username_1: > > The thing is that it seems that the server's Apache receives the > request after the 3 seconds. > > > How did you figure this out? If you're looking at Apache logs, you can be misled by wrong time stamps. I first thought that your Tomcat 6 takes 3 seconds to answer instead of 0 to 500ms, but from the question and the comments, that is not the case. **Hypothesis 1**: Garbage Collector. The GC is known for introducing latency. Highlight the GC activity in your logs by using the [GC verbosity parameters](https://stackoverflow.com/a/1818039/7748072). If it is too difficult to correlate, you can use the [jstat](https://docs.oracle.com/javase/7/docs/technotes/tools/share/jstat.html) command with the gcutil option and compare it easily with Tomcat's log. **Hypothesis 2**: network timeout. Although 3s is a very short time (in comparison with the 21s TCP default timeout on Windows, for example), it could be a timeout. To track the timeouts, you can use the netstat command. With `netstat -an`, look for the `SYN_SENT` connections, and with `netstat -s` look for the error counters. Please check if there is any network resource that must be resolved or accessed by this guilty webservice caller. Upvotes: 0 <issue_comment>username_2: In the end, it was all caused by the network congestion we discovered by looking at the TCP flags. Our network admins have been working on the problem, trying to reduce the congestion and lower the retransmission timeout. Upvotes: 1
2018/03/14
676
2,252
<issue_start>username_0: How to fix this error? Splunk Mint: Archiving "MyApp" to "/tmp/splunk-mint-dsyms/MyApp.zip" adding: MyApp zip error: Interrupted (aborting) Splunk Mint: Failed to archive dSYMs for "MyApp" to "/tmp/splunk-mint-dsyms" Command /bin/sh failed with exit code 252 Second error is Splunk Mint: Archiving "MyApp" to "/tmp/splunk-mint-dsyms/MyApp.zip" adding: MyApp (deflated 68%) Splunk Mint: ERROR "400" while uploading "/tmp/splunk-mint-dsyms/MyApp.zip"<issue_comment>username_1: Please remove run script added for auto upload DSYM file. And manually upload DSYM for crash report symbolication. Please check for how to upload DSYM manually. **Download dSYM bundles using Xcode Organizer after archiving your app** 1. In Xcode, from the Xcode menu select Window > Organizer. 2. In Xcode Organizer, click the Archives tab. 3. Under iOS Apps, select your app from the list. 4. From the Version column, select the archive for your app, displayed as App Version (Build Uuid). 5. Click Download dSYMs to download the dSYM bundles from Apple. **Compress and upload dSYM bundles to MINT** 6. In Finder, find the dSYM bundles (archive files) you just downloaded. 7. Right-click the archive file and select Show package contents. 8. Open the dSYMs folder. 9. Compress each dSYM file named with your app's build UUID to a ZIP file. 10. Open MINT Management Console by logging in to mint.splunk.com. 11. Select your app project. 12. Click the Settings dashboard. 13. Under Project Settings, click dSYMs. 14. Click Browse & Upload, then navigate to and select the dSYM bundles you compressed. For more info check out [docs](https://docs.splunk.com/Documentation/MintIOSSDK/5.2.x/DevGuide/Configureyourprojectforsymbolication) Upvotes: 2 [selected_answer]<issue_comment>username_2: The answer that worked for me was contained here: [Run Script Phase after dSYM is generated with Xcode 10 (on build)](https://stackoverflow.com/questions/53205108/run-script-phase-after-dsym-is-generated-with-xcode-10-on-build) Adding `sleep 5` to the top of: ``` iOS/Pods/SplunkMint/SplunkMint.framework/Resource/splunkmint_postbuild_dsym_upload_script.sh ``` A hack, yes, but better than having to disable the upload completely IMO. Upvotes: 0
2018/03/14
707
2,496
<issue_start>username_0: Given a component: ``` Vue.component('my-comp', { props: ['input'], watch: { input: function(){...} }, }); ``` What is the programmatic method for the following? ``` map[key] change triggers watch ``` I have tried: ``` new (Vue.component('my-comp'))({ propsData: { input:map[key] }, // map[key] change doesn't trigger watch }); ``` The context for this is inserting zero-to-many components into markdown-generated HTML. I call `.$mount()` for each component, and move its node with a native DOM `replaceChild()` call when markdown is re-rendered. See also [Vue components in user-defined markdown](https://stackoverflow.com/questions/49139819/vue-components-in-user-defined-markdown)
2018/03/14
728
2,579
<issue_start>username_0: I have a table that has a lot of inserts going into it. After I get to, let's say, 1,000,000 rows, I don't care anymore about the first rows, and want to delete them. Unfortunately, because of the high rate of inserts the delete is taking too long. To solve this I wanted to know if the following is possible - pseudo code: ``` on insert: if id % 1,000,000 != id: do replace instead of insert with id = (id % 1,000,000) ``` In this way the table would sort of "cycle" back from the top, and the delete would not be necessary. Any solution has to take into account the possibility of concurrent inserts. Is such a thing possible? EDIT: For all future suggestions of delete methods - I have already tried using all the advice from <http://mysql.rjweb.org/doc.php/deletebig>
2018/03/14
540
1,667
<issue_start>username_0: For example, I have one function. In this function as a parameter I have to pass some UIControl like UILabel, UITextFile, UIButton, etc. How can I do this: ``` func passDicData(controll : UIControll){ //code } ``` Used like: ``` passDicData(UILabel) ```<issue_comment>username_1: It is `UIControl`, not `UIControll`. Moreover, the common ancestor of the `UILabel`, `UITextField`, and `UIButton` is `UIView`, see for example this [scheme](https://iosdevelopmenttutorials.files.wordpress.com/2013/07/uikit_classes.jpg). ``` func passDicData(view: UIView){ //code } ``` Upvotes: 0 <issue_comment>username_2: //Use UIView ``` func passDicData(control:UIView){ //code } passDicData(UILabel(frame: .zero)) ``` Upvotes: 2 <issue_comment>username_3: You can use this way, ``` func passDicData(control: AnyClass){ print(control.class()) } ``` and call it like this way, ``` passDicData(control: UILabel.self) ``` Upvotes: 0 <issue_comment>username_4: Yes, you can use `UIControl` as a parameter: ``` func passDicData(controll : UIControl){ //code } ``` However, you will **NOT** be able to call this function like this: ``` passDicData(UILabel) ``` That's because 1. In your example, you are not creating an instance of an UILabel, you are just using its type. In order to pass the instance you should initialize an `UILabel` first, e.g., `passDicData(UILabel())` 2. UILabel is not a subclass of `UIControl` Instead you could use `UIView` or `UIResponder`, both of them are super classes of `UILabel` ``` func passDicData(responder : UIResponder){ //code } passDicData(UILabel()) ``` Upvotes: 0
2018/03/14
544
1,691
<issue_start>username_0: If I have a string in this format: ``` string placeholder = "[[Ford:Focus(blue)]]"; ``` or ``` string placeholder = "[[Ford:Focus(light blue)]]"; ``` What regex could I use to populate the three variables below? ``` string make = ? string model = ? string colour = ? ``` Any ideas? Thanks,<issue_comment>username_1: You can try ``` ([^:\[\]]*):([^(]*)\(([^)]*)\) ``` [Demo](http://regex101.com/r/1J048G/3) Explanation: * `([^:\[\]]*)` captures make (anything that is not a square bracket nor `:`) * `:` separates make and model * `([^(]*)` matches model * `\(([^)]*)\)` matches colour (everything but a closing bracket) inside the brackets Upvotes: 1 <issue_comment>username_2: If the brackets are part of the string, you could use something like this ``` \[\[(.*):(.*)\((.*)\) ``` Upvotes: 0 <issue_comment>username_3: This can be done using Regex and named groups: ``` string strRegex = @"\[(?<make>\w*):(?<model>\w*)\((?<colour>.*)\)"; Regex myRegex = new Regex(strRegex, RegexOptions.None); string strTargetString = @"[[Ford:Focus(blue)]]"; foreach (Match myMatch in myRegex.Matches(strTargetString)) { if (myMatch.Success) { var make = myMatch.Groups["make"].Value; var model = myMatch.Groups["model"].Value; var colour = myMatch.Groups["colour"].Value; } } ``` Upvotes: 2 <issue_comment>username_4: If you want to avoid RegEx - another approach with `Split()` ``` string placeholder = "[[Ford:Focus(light blue)]]"; string[] result = placeholder.Split(new[] { '[', ']', '(', ')', ':' }, StringSplitOptions.RemoveEmptyEntries); string make = result[0]; string model = result[1]; string colour = result[2]; ``` <https://dotnetfiddle.net/0iKNgJ> Upvotes: 2 [selected_answer]
2018/03/14
1,211
3,944
<issue_start>username_0: I have a base class `Tag` and a child class `TagSet`, that inherits from `Tag`. ``` class Tag { public: Tag(std::string); std::string tag; }; std::ostream & operator <<(std::ostream &os, const Tag &t); class TagSet : public Tag { public: TagSet(); }; std::ostream & operator <<(std::ostream &os, const TagSet &ts); ``` and their implementations ``` Tag::Tag(std::string t) : tag(t) {} std::ostream & operator <<( std::ostream &os, const Tag &t ) { os << "This is a tag"; return os; } TagSet::TagSet() : Tag("SET") {} std::ostream & operator <<(std::ostream &os, const TagSet &ts) { os << "This is a TagSet"; return os; } ``` I want to include a third class `TagList` that has a member `std::vector`, which can hold either `Tag*` instances or `TagSet*` instances. I want to define the `<<` operator for `TagList` such that it uses the `Tag` version of `operator<<` if the element is a `Tag` or the `TagSet` version of `operator<<` if the element is a `TagSet`. This is my attempt: ``` std::ostream & operator <<(std::ostream &os, const TagList &ts) { for (auto t : ts.tags) { if (t->tag == "SET") { TagSet * tset = dynamic_cast(t); os << \*tset << ", "; } else os << t->tag << ", "; } } ``` The code crashes at runtime. I checked the `tset` pointer and it isn't null. Probably it's a bad cast. What is the correct way to do this? Is the problem something to do with consts in the `operator<<` function? Other suggests for how to achieve this are welcome. The rest of the `TagList` implementation is here for completeness: ``` class TagList { public: TagList(std::vector taglist); std::vector tags; typedef std::vector::const\_iterator const\_iterator; const\_iterator begin() const { return tags.begin(); } const\_iterator end() const { return tags.end(); } }; std::ostream & operator <<(std::ostream &os, const TagList &ts); ``` and ``` TagList::TagList(std::vector tagvec) : tags(tagvec.begin(), tagvec.end()) {} ```<issue_comment>username_1: If I may suggest a different solution to the problem of outputting your `Tag` objects, then have only a single operator overload, for `Tag const&`, and then have that call a virtual `output` function in the `Tag` structure. Then override that function in the inherited classes. Perhaps something like ``` struct Tag { ... virtual std::ostream& output(std::ostream& out) { return out << "This is Tag\n"; } friend std::ostream& operator<<(std::ostream& out, Tag const& tag) { return tag.output(out); } }; struct TagSet : Tag { ... std::ostream& output(std::ostream& out) override { return out << "This is TagSet\n"; } }; ``` Then to output the list ``` for (auto t : ts.tags) std::cout << *t; ``` Upvotes: 3 <issue_comment>username_2: You cannot do that, because `operator<<` is not virtual. Define a virtual `print` method instead, and use that in `operator<<`, e.g. ``` class Tag { public: virtual void print(std::ostream &f) const; }; std::ostream & operator <<(std::ostream &os, const Tag &t) { t->print(os); return os; } ``` Now you can use method `print()` in `TagList` as well without any cast at all: ``` std::ostream & operator <<(std::ostream &os, const TagList &ts) { for (auto t : ts.tags) { t->print(os); os << ", "; } } ``` or implicit ``` for (auto t : ts.tags) { os << *t << ", "; } ``` Upvotes: 1 <issue_comment>username_3: Your current approach is essentially a hand-written RTTI with extra memory overhead. 
Mixing it with built-in RTTI by using `dynamic_cast` won't work, since the classes `Tag` and `TagSet` are not polymorphic and built-in RTTI for objects of these types is not available at runtime to perform such a cast. If you insist on using hand-written RTTI, then you need to perform a `static_cast`:

```
TagSet & tset{static_cast<TagSet &>(*t)};
```

Upvotes: 0
2018/03/14
715
2,443
<issue_start>username_0: This is my function in foreach loop for creating object which has property and value as word and its count, but i want to convert it in map according to es6

```
function harmlessRamsonNote(noteText,magazineText) {
  var noteArr = noteText.split(' ');
  var magazineArr = magazineText.split(' ');
  var magazineObj = {};
  magazineArr.forEach(word => {
    if(!magazineObj[word]) {
      magazineObj[word] = 0;
    }
    magazineObj[word]++;
  });
  console.log(magazineObj);
};
```
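A minimal sketch of the ES6 `Map` version the question asks about — the same counting logic, but the counts live in a `Map` instead of a plain object. The sample sentence in the usage lines is made up for illustration.

```
function wordCounts(magazineText: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const word of magazineText.split(' ')) {
    // Map.get returns undefined for unseen words, so default to 0 before incrementing
    counts.set(word, (counts.get(word) ?? 0) + 1);
  }
  return counts;
}

// Usage
const counts = wordCounts('give me one grand today night give');
console.log(counts.get('give')); // 2
```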
2018/03/14
684
2,285
<issue_start>username_0: The error : Error:Execution failed for task ':app:transformDexArchiveWithExternalLibsDexMergerForDebug'.

> java.lang.RuntimeException: com.android.builder.dexing.DexArchiveMergerException: Unable to merge dex

[![enter image description here](https://i.stack.imgur.com/2tD33.png)](https://i.stack.imgur.com/2tD33.png)
2018/03/14
753
2,995
<issue_start>username_0: I created a custom class that is based on Apache Flink. The following are some parts of the class definition: ``` public class StreamData { private StreamExecutionEnvironment env; private DataStream data ; private Properties properties; public StreamData(){ env = StreamExecutionEnvironment.getExecutionEnvironment(); } public StreamData(StreamExecutionEnvironment e , DataStream d){ env = e ; data = d ; } public StreamData getDataFromESB(String id, int from) { final Pattern TOPIC = Pattern.compile(id); Properties properties = new Properties(); properties.setProperty("bootstrap.servers", "localhost:9092"); properties.setProperty("group.id", Long.toString(System.currentTimeMillis())); properties.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer"); properties.setProperty("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer"); properties.put("metadata.max.age.ms", 30000); properties.put("enable.auto.commit", "false"); if (from == 0) properties.setProperty("auto.offset.reset", "earliest"); else properties.setProperty("auto.offset.reset", "latest"); StreamExecutionEnvironment e = StreamExecutionEnvironment.getExecutionEnvironment(); DataStream stream = env .addSource(new FlinkKafkaConsumer011<>(TOPIC, new AbstractDeserializationSchema() { @Override public byte[] deserialize(byte[] bytes) { return bytes; } }, properties)); return new StreamData(e, stream); } public void print(){ data.print() ; } public void execute() throws Exception { env.execute() ; } ``` Using class `StreamData`, trying to get some data from Apache Kafka and print them in the main function: ``` StreamData stream = new StreamData(); stream.getDataFromESB("original_data", 0); stream.print(); stream.execute(); ``` I got the error: ``` Exception in thread "main" org.apache.flink.api.common.InvalidProgramException: The implementation of the FlinkKafkaConsumer010 is not serializable. The object probably contains or references non serializable fields. Caused by: java.io.NotSerializableException: StreamData ``` As mentioned [here](https://stackoverflow.com/questions/34118469/flink-using-dagger-injections-not-serializable), I think it's because of some data type in `getDataFromESB` function is not serializable. But I don't know how to solve the problem!<issue_comment>username_1: Your AbstractDeserializationSchema is an anonymous inner class, which as a result contains a reference to the outer StreamData class which isn't serializable. Either let StreamData implement Serializable, or define your schema as a top-level class. Upvotes: 3 [selected_answer]<issue_comment>username_2: It seems that you are importing FlinkKafkaConsumer010 in your code but using FlinkKafkaConsumer011. Please use the following dependency in your sbt file: ``` "org.apache.flink" %% "flink-connector-kafka-0.11" % flinkVersion ``` Upvotes: 0
2018/03/14
689
2,260
<issue_start>username_0: how can I replace only the text inside without removing any other element inside ? It look actually like this : ``` "Old text 1 " ![](http://urlofimage.com) "Old 2" ``` And I would like to get that : ``` "New text 1 " ![](http://urlofimage.com) "New text 2" ```<issue_comment>username_1: ``` New text 1 ![](http://urlofimage.com) New text 1 Change Text function myFunction() { document.getElementById("text1").innerHTML = "Hello there"; } ``` Enclose the text with and unique id. Upvotes: 1 <issue_comment>username_2: Iterate `childNodes` of `p` and look for **text nodes** ``` var p = document.querySelector( ".text p" ); Array.from( p.childNodes ).forEach( s => { if ( s.nodeType == Node.TEXT_NODE && s.textContent.trim().length > 0 ) { //change the value here } }); ``` **Demo** ```js var p = document.querySelector(".text p"); Array.from(p.childNodes).forEach( (s, i) => { if (s.nodeType == Node.TEXT_NODE && s.textContent.trim().length > 0) { s.textContent = "new value" + i; } }); ``` ```html "Old text 1 " ![](http://urlofimage.com) "Old 2" ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: ``` data = $('.text p').html().trim().split('\n') ``` This will give you array of size 3 in which image component will be at 2nd index so what you can simply do is add your updated values at data[0] and data[2] and then using .toString() function you can get simple string of updated contents which you can update at $('.text p'). ``` data[0] = '"New text 1 "'; data[2] = '"New text 2"'; var updatedString = data.toString(); $('.text p').html(updatedString); ``` There might be some typo because I didn't executed this script but this logic should work. **++++++++++++++ Update +++++++++++++++++** It's working fine on my machine, just added code to handle comma, here is the complete code : ``` $(document).ready(function(){ data = $('.text p').html().trim().split('\n') data[0] = '"New text 1 "'; data[2] = '"New text 2"'; console.log(data); var updatedString = data.toString(); updatedString = updatedString.replace(/,/g, "\n"); $('.text p').html(updatedString); }) ``` Upvotes: 0
2018/03/14
628
2,155
<issue_start>username_0: Is it possible to load a packaged spacy model (i.e. `foo.tar.gz`) directly from the tar file instead of installing it beforehand? I would imagine something like: ``` import spacy nlp = spacy.load(/some/path/foo.tar.gz) ```<issue_comment>username_1: No, that's currently not possible. The main purpose of the `.tar.gz` archives is to make them easy to install via `pip install`. However, you can always extract the model data from the archive, and then load it in from a path – [see here for more details](https://spacy.io/usage/models#download-manual). ``` nlp = spacy.load('/path/to/en_core_web_md') ``` Using the [`spacy link` command](https://spacy.io/api/cli#link) you can also create "shortcut links" for your models, i.e. symlinks that let you load in models using a custom name instead of the full path or package name. This is especially useful if you're working with large models and multiple environments (and don't want to install the data in each of them). ``` python -m spacy link /path/to/model_data cool_model ``` The above shortcut link would then let you load your model like this: ``` nlp = spacy.load('cool_model') ``` Alternatively, if you *really* need to load models from an archive, you could always write a simple wrapper for `spacy.load` that takes the file, extracts the contents, reads the [model meta](https://spacy.io/api/top-level#util.get_model_meta), gets the path to the data directory and then calls [`spacy.util.load_model_from_path`](https://spacy.io/api/top-level#util.load_model_from_path) on it and returns the `nlp` object. Upvotes: 5 [selected_answer]<issue_comment>username_2: Its not the direct answer but it might be helpful in order to load compressed models directly with `SpaCy`. This can be done by using `pickle`. First, you need to load your `SpaCy` Model and dump it compressed with `pickle`: ``` import spacy import pickle s = spacy.load("en_core_web_sm", parse=False) pickle.dump(s, open("save.p", "wb")) ``` Afterwards, you can load easily somewhere else the pickle dump directly as `SpaCy` model: ``` s = pickle.load(open("save.p", "rb")) ``` Upvotes: 2
2018/03/14
1,320
4,444
<issue_start>username_0: I am trying to mount a NFS share (outside of k8s cluster) in my container via DNS lookup, my config is as below ``` apiVersion: v1 kind: Pod metadata: name: service-a spec: containers: - name: service-a image: dockerregistry:5000/centOSservice-a command: ["/bin/bash"] args: ["/etc/init.d/jboss","start"] volumeMounts: - name: service-a-vol mountPath: /myservice/por/data volumes: - name: service-a-vol nfs: server: nfs.service.domain path: "/myservice/data" restartPolicy: OnFailure ``` nslookup of `nfs.service.domin` works fine from my container. This is achiveded via `StubDomain` . However when creating the container it fails to resolve the nfs server. Error: ``` Warning FailedMount kubelet, worker-node-1 MountVolume.SetUp failed for volume "service-a-vol" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/44aabfb8-2767-11e8-bcf9-fa163ece9426/volumes/kubernetes.io~nfs/service-a-vol --scope -- mount -t nfs nfs.service.domain:/myservice/data /var/lib/kubelet/pods/44aabfb8-2767-11e8-bcf9-fa163ece9426/volumes/kubernetes.io~nfs/service-a-vol Output: Running scope as unit run-27293.scope. mount.nfs: Failed to resolve server nfs.service.domain: Name or service not known mount.nfs: Operation already in progress ``` If i modify `server: nfs.service.domain` to `server: 10.10.1.11` this works fine! So to summarise 1. DNS resolution of the service works fine 2. Mounting via DNS resolution does not 3. Mounting via specific IP address works 4. I have tried `Headless Service` instead of StubDomain but the same issue exists Any help much appreciated *Update 1*: If i add an entry in the /etc/hosts files of worker/master nodes `10.10.1.11 nfs.service.domain` then my configuration above `server: nfs.service.domain` works. This is obviously not a desired workaround...<issue_comment>username_1: As pointed out by <NAME> and as referenced in [this github ticket](https://github.com/kubernetes/kubernetes/issues/44528/) among others this is currently not possible as the node needs to be able to resolve the DNS entry and it does not resolve kube-dns. Two possible solutions are: 1. Update `/etc/hosts` of each kubernetes node to resolve the NFS endpoint (as per update above). This is a primitive solution. 2. A more robust fix that would work for this NFS service and any other remote service in the same domain (as NFS) is to add the remote DNS server to the kubernetes nodes `resolv.conf` `someolddomain.org service.domain xx.xxx.xx nameserver 10.10.0.12 nameserver 192.168.20.22 nameserver 8.8.4.4` Upvotes: 3 [selected_answer]<issue_comment>username_2: I'm using the **full service name** and it's working fine for me. Like this: ``` apiVersion: v1 kind: Pod metadata: name: alpine labels: app: alpine spec: containers: - name: alpine image: "alpine:latest" imagePullPolicy: "Always" command: [ "tail", "-f", "/dev/null" ] resources: limits: cpu: 100m memory: 100Mi volumeMounts: - mountPath: /nfs name: nfs-vol volumes: - name: nfs-vol nfs: path: /exports server: nfs-server-svc.nfs-test.svc.cluster.local ``` Before that I was trying to use only the "simple" service name and it wasn't working. I'm working on GKE, with the version "v1.20.10-gke.1600". You can see more details [here](https://github.com/kubernetes/kubernetes/issues/44528#issuecomment-373017042). Thanks. Upvotes: 1 <issue_comment>username_3: try to use without " cluster.local " and use only the name of nfs service name. 
Upvotes: -1 <issue_comment>username_4: Try to use the full service name, as follows: "[service-name].[service-namespace].svc.cluster.local". Upvotes: -1 <issue_comment>username_5: Facing a similar issue. I am trying to implement a `busybox` init container (or a sidecar) that resolves the hostname, and will update this post once the init-container implementation is ready. Workarounds that are working for me:

1. Use the service's IP instead of the FQDN
2. Manually update the node's hosts file (/etc/hosts) to make the FQDN work

busybox example:

```
kubectl create -f https://k8s.io/examples/admin/dns/busybox.yaml
kubectl exec -ti busybox -- nslookup nfs-service-tcp.default.svc.cluster.local
```

Upvotes: 0
2018/03/14
640
2,261
<issue_start>username_0: This question has been asked before but non of the solutions work for me. This my code ``` ``` and the script ``` $(document).ready(function () { $(".comment-rate-wrapper a img").on('click', function (e) { e.preventDefault(); var item\_id = 1; var url = "{{route('like.voteHandler', ':id')}}"; url = url.replace(':id', item\_id); //alert(url); I am sure thr url is correct and it outputs correctly $.ajaxSetup({ headers: { 'X-CSRF-TOKEN': $('meta[name="csrf-token"]').attr('content') } }); $.ajax({ method: 'POST', // Type of response and matches what we said in the route url: url, // This is the url we gave in the route data: { 'item\_id': item\_id }, success: function (result) { // What to do if we succeed console.log(result); }, error: function (jqXHR, textStatus, errorThrown) { // What to do if we fail console.log(JSON.stringify(jqXHR)); console.log("AJAX error: " + textStatus + ' : ' + errorThrown); } }); }) }) ``` and this is the controller function ``` public function voteHandler($item_id) { echo "sas"; return "hi"; } ``` But it always returns an empty array and I have no idea why it is not working. Thanks<issue_comment>username_1: Try this out in your controller ``` public function voteHandler(Request $request) { return response()->json(["item_id" => $request->item_id]); } ``` **Edit:** This was actually an error with the request method, change your route declaration to match your ajax request method `POST` ``` $router->post('{item_id}/voteHandler', 'LikeController@voteHandler')->name('like.voteHandler'); ``` or change your ajax method to `GET` Upvotes: 2 [selected_answer]<issue_comment>username_2: Change the Parameter and print the inputs ``` public function voteHandler(Request $request) { print_r($request->all()); } ``` Make sure it reaches this method. Use firebug to debug the same. Upvotes: 0 <issue_comment>username_3: You can check Your Ajax request like below way.You should try to below way. ``` public function voteHandler(Request $request) { if(Request::ajax()){ return response()->json(['status'=>'Ajax request']); } return response()->json(['status'=>'Http request']); } ``` Upvotes: 1
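For completeness, a hedged sketch of the same request without jQuery, using `fetch`; it only illustrates that the CSRF token header and a POST body have to reach the POST route. The URL shape below is an assumption, not the app's real route.

```
async function vote(itemId: number): Promise<void> {
  const token = (document.querySelector('meta[name="csrf-token"]') as HTMLMetaElement).content;
  const res = await fetch(`/like/${itemId}/voteHandler`, {   // illustrative URL, not the real route
    method: 'POST',
    headers: { 'X-CSRF-TOKEN': token, Accept: 'application/json', 'Content-Type': 'application/json' },
    body: JSON.stringify({ item_id: itemId }),
  });
  console.log(await res.json());
}
```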
2018/03/14
534
1,945
<issue_start>username_0: Latest chrome browser version (65.0.3325.162) is not supporting webdriverio Browser is getting launched and also hits the url successfully. but while performing operation on UI is throwing different error. Here are some error I have got: Case 1: Typing in a Text box Method Used: setValue() unknown error: call function result missing 'value' [chrome #0-0] Error: An unknown server-side error occurred while processing the command. [chrome #0-0] at elementIdValue("0", "text123") Case 2: Selecting a value from drop down Method used: selectByVisibleText() stale element reference: element is not attached to the page document [chrome #0-0] Error: An element command failed because the referenced element is no longer attached to the DOM. [chrome #0-0] at elementIdClick("1") Note: Same code was working fine with the previous version of Chrome browser (64.0.3282.186) Since there is no way we can downgrade chrome version we have to go for the latest chrome browser only. Since it works well in Firefox and Chrome previous version I don't think it is a problem with webdriverIO. Just let me know is there anybody else have faced the same problem or if anybody can reproduce this issues and give some solution to the problem stated.<issue_comment>username_1: I just found exactly the same problem. I was about to post about it when your post came up. I tried the following code. Whilst it works with firefox it fails with chrome throwing the error you mention. ``` browser.url('http://www.google.com') .element('[name="q"]') .setValue('webdriver') .element('[name="btnK"]') .click() ``` EDIT: In this issue <https://github.com/webdriverio/webdriverio/issues/2631> they mention that upgrading the chromedriver to the version 2.36.0 it fixes the issue. It works now! Upvotes: 1 <issue_comment>username_2: Upgrading "wdio-selenium-standalone-service" from 0.0.9 to 0.0.10 fixed it for me. Upvotes: 0
2018/03/14
355
1,356
<issue_start>username_0: Let me explain a bit the scenario: I have hundreds of hive tables stored on S3 (ORC, Parquet), so just to be clear no HDFS. Now, I am interested in migrating some of them to Redshift to run some performance tests. I know that redshift does not support ORC, Parquet so I need to create some CSV/JSON to be able to use the COPY command. I am thinking of using Hive itself to create temporary CSV tables and then migrate to Redshift. I was also thinking of using Spark to move this data. Anyone with experience in this scenario?<issue_comment>username_1: There is a simple way for migrating the data into redshift. So first of all you need to load that parquet or orc into Spark (pyspark, java or scala) then you can directly insert those data into redshift using databricks package. Below is the link for databricks package which includes some examples. <https://github.com/databricks/spark-redshift> Upvotes: 2 [selected_answer]<issue_comment>username_2: You can set up Redshift Spectrum so that your S3 tables look like Redshift tables, you can then query the data directly or bring it in to internal Redshift tables. <https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-external-tables.html> ORC and Parquet are fully supported. <https://docs.aws.amazon.com/redshift/latest/dg/c-spectrum-data-files.html> Upvotes: 3
2018/03/14
372
1,481
<issue_start>username_0: I add the forms via dataPush function

```
dataPush(dataType) {
  let dataObject;
  if (dataType == 'bankInfo') {
    dataObject = {
      bankIndex: 0,
      bankOwner: '',
      bankName: [],
      bankAccount: '',
    }
  }
}
```

this is for deleting function, but I have no idea to pass the what is checked form?

```
dataPull(dataType, index) {
  this[dataType].splice(index, 1);
},
```

buttons are locationed outside v-for , so I have no idea how I can pass indexes it's checked Here are more details:
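One possible way to wire this up, sketched under the assumption that each row rendered by the `v-for` has a checkbox bound (via `v-model`) to a `checkedIndexes` array — the single delete button outside the `v-for` then only needs the collection name. All names below are illustrative, not the poster's real data model.

```
const vm = {
  bankInfo: [{ bankIndex: 0 }, { bankIndex: 1 }, { bankIndex: 2 }],
  checkedIndexes: [1, 2] as number[],   // filled by the checkboxes inside the v-for

  dataPull(dataType: 'bankInfo'): void {
    // splice from the highest index down so earlier removals don't shift later ones
    [...this.checkedIndexes].sort((a, b) => b - a).forEach(i => this[dataType].splice(i, 1));
    this.checkedIndexes = [];
  },
};

vm.dataPull('bankInfo');
console.log(vm.bankInfo); // [{ bankIndex: 0 }]
```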
2018/03/14
1,185
4,139
<issue_start>username_0: I have a very similar question to [this one](https://stackoverflow.com/questions/24325786/table-columns-is-not-a-function-in-datatable-js). I create a datatable, then the user can use the search function, click a button and the selected data will then be sent to another function where it is further processed. Initializing the table works fine, also sending the selected data [works as intended](https://stackoverflow.com/questions/33169649/how-to-get-filtered-data-result-set-from-jquery-datatable). However, I fail to access the column names correctly. That's the datatables version I use: ``` ``` A bit of background: By clicking on a link (`#calculate`), a table is created and the column headers look fine, e.g. like this: [![enter image description here](https://i.stack.imgur.com/DxomG.png)](https://i.stack.imgur.com/DxomG.png) The user can then select entries (here `88`), and the filtered data are then sent back using a second link (`#send_data`). The data used to initialize the column headers look like this (taken from `console.log(data.columns);`): ``` […] 0: Object { name: "C1", title: "C1", sName: "C1", … } 1: Object { name: "C2", title: "C2", sName: "C2", … } length: 2 __proto__: Array [] ``` and I can also access `table.columns()`, however, when I try `table.columns().names()` or `table.columns().title()` I receive > > TypeError: table.columns(...).names is not a function. > > > How can I access and pass the displayed column headers i.e. what goes to `col_names` in the code below? The relevant code looks as follows: ``` $(document).ready(function() { var table = null; $('#calculate').bind('click', function() { $.getJSON('/\_get\_table', { a: $('#a').val(), b: $('#b').val() }, function(data) { console.log(data.columns); $("#elements").text(data.number\_elements); if (table !== null) { table.destroy(); table = null; $("#a\_nice\_table").empty(); } table = $("#a\_nice\_table").DataTable({ data: data.my\_table, columns: data.columns }); }); return false; }); $('#send\_data').bind('click', function() { //console.log(table.columns()); //console.log(table.columns().title()); //console.log(table.columns().names()); $.getJSON('/\_selected\_data', { sel\_data: JSON.stringify(table.rows( { filter : 'applied'} ).data().toArray()), col\_names: //what goes here? }, function(data) { alert('This worked') }); return false; }); }); ```<issue_comment>username_1: Here is a possible solution: ``` table.columns().header().toArray().map(x => x.innerText) ``` I used the API docs from [DataTable](https://datatables.net/reference/api/columns().header()). Replacing `innerText` with `innerHTML` also works. Upvotes: 3 [selected_answer]<issue_comment>username_2: Extracting the table headers from the HTML table seems kind of hackish; based on your code, I think you can do something like this instead; at the part where you initialize the table variable just add one more line to retain the column list from the JSON data you obtained: ``` table = $("#a_nice_table").DataTable({ data: data.my_table, columns: data.columns }); table.columns = data.columns; // dynamically add a property, store columns ``` Then you should be able to just do: ``` col_names: JSON.stringify(table.columns) ``` Or some variant of that, depending on exactly what kind of data structure is contained in `data.columns` (if it's a simple array, it should work fine). Upvotes: 2 <issue_comment>username_3: I had to get an array of all column names using the DataTables `.settings()` method. 
I ended up doing the following: ``` // Initialise your table and set up column names. const yourTable = $('#yourTableID').DataTable({ columns: [ {name: 'FirstName'}, {name: 'LastName'}, ] }); // Get your table's settings const settings = yourTable.settings(); // Map through the settings.aoColumns object and return each column.name const columnNames = settings.aoColumns.map((column) => { return column.name; }); console.log(columnNames) // prints: ['FirstName', 'LastName'] ``` Upvotes: 0
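Putting the header-extraction one-liner from the accepted answer back into the question's `send_data` handler might look roughly like this. It assumes `table` is the DataTable instance created earlier; the `declare` lines only stand in for the page's real globals.

```
declare const table: any;   // the DataTable instance from the question
declare const $: any;       // jQuery, already on the page

const colNames: string[] = table.columns().header().toArray().map((h: HTMLElement) => h.innerText);

$.getJSON('/_selected_data', {
  sel_data: JSON.stringify(table.rows({ filter: 'applied' }).data().toArray()),
  col_names: JSON.stringify(colNames),
}, (data: unknown) => console.log(data));
```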
2018/03/14
363
1,159
<issue_start>username_0: I am currently learning Sass. I got a minor problem, I receive this error: "End of line expected". Why do I get this error and what can I do about it? ``` .admin-table tr: nth-child(odd) // Generates the error background-color: $white .admin-table tr: nth-child(even) // Generates the error background-color: $white ```<issue_comment>username_1: take off the spaces betwen `tr:` and `nth-child` . Furthermore, add `;` at the end of each property as following : ``` .admin-table tr:nth-child(odd){ background-color: $white; } .admin-table tr:nth-child(even){ background-color: $white; } ``` Upvotes: 0 <issue_comment>username_2: For Sass syntax you'll need to remove the spaces in `: nth-child)`, also a good idea to add semi-colons `;` to the end of each property ``` .admin-table tr:nth-child(odd) background-color: $white; .admin-table tr:nth-child(even) background-color: $white; ``` For Scss syntax you'll additionally need to add braces ``` .admin-table tr:nth-child(odd) { background-color: $white; } .admin-table tr:nth-child(even) { background-color: $white; } ``` Upvotes: 2 [selected_answer]
2018/03/14
1,657
4,725
<issue_start>username_0: I want to output data of the lyrics\_body key from the following URL.[API Link](https://api.musixmatch.com/ws/1.1/matcher.lyrics.get?callback=format=jsonp&q_track=O%20Saathi&q_artist=Arijit%20Singh&apikey=c12b7dffb9a86d5b6d70e0f0fdeab589) But the JSON text on that URL is like that following. ``` { "message": { "header": { "status_code": 200, "execute_time": 0.0047609806060791 }, "body": { "lyrics": { "lyrics_id": 16894378, "can_edit": 0, "locked": 0, "published_status": 8, "action_requested": "", "verified": 0, "restricted": 0, "instrumental": 0, "explicit": 1, "lyrics_body": "Kis Tarah Main Bataoon\nKee Adhura Main Hoon\nYah Yakeen Dilaoon\nBana Tere Lie Hee Main Hoon\nMamam. Ab Yah Hee Hai Meree Khvaahish\nIs Pal Ko Toh Main Jee Loon\n\nTu Iss Jagah Hai Khada\nPhir Bhi Hai Door Tu Haan\nKuch Na Raha Darmiyaan\nPhir Kyun Dil Keh Raha\n\nO Saathi…Itna Toh Bas Kar De\n...\n\n******* This Lyrics is NOT for Commercial use *******", "lyrics_language": "", "lyrics_language_description": "", "lyrics_copyright": "Lyrics powered by www.musixmatch.com. This Lyrics is NOT for Commercial use and only 30% of the lyrics are returned.", "writer_list": [], "publisher_list": [], "backlink_url": "https://www.musixmatch.com/lyrics/Arijit-Singh/O- Saathi?utm_source=application&utm_campaign=api&utm_medium=", "updated_time": "2017-07-26T07:05:50Z" } } } } ``` Now how can I output text from only `lyrics_body` key to a div element which has `id="result"`.I have tried many methods but no outputs.<issue_comment>username_1: Access nested `json` keys via dot notation: `json.message.body.lyrics.lyrics_body`. Use `.innerText` to append text to the element. Use `.getElementById("")` to find an element by its `id` attribute. ```js var json = { "message": { "header": { "status_code": 200, "execute_time": 0.0047609806060791 }, "body": { "lyrics": { "lyrics_id": 16894378, "can_edit": 0, "locked": 0, "published_status": 8, "action_requested": "", "verified": 0, "restricted": 0, "instrumental": 0, "explicit": 1, "lyrics_body": "Kis Tarah Main Bataoon\nKee Adhura Main Hoon\nYah Yakeen Dilaoon\nBana Tere Lie Hee Main Hoon\nMamam. Ab Yah Hee Hai Meree Khvaahish\nIs Pal Ko Toh Main Jee Loon\n\nTu Iss Jagah Hai Khada\nPhir Bhi Hai Door Tu Haan\nKuch Na Raha Darmiyaan\nPhir Kyun Dil Keh Raha\n\nO Saathi…Itna Toh Bas Kar De\n...\n\n******* This Lyrics is NOT for Commercial use *******", "lyrics_language": "", "lyrics_language_description": "", "lyrics_copyright": "Lyrics powered by www.musixmatch.com. This Lyrics is NOT for Commercial use and only 30% of the lyrics are returned.", "writer_list": [], "publisher_list": [], "backlink_url": "https://www.musixmatch.com/lyrics/Arijit-Singh/O-Saathi?utm_source=application&utm_campaign=api&utm_medium=", "updated_time": "2017-07-26T07:05:50Z" } } } } document.getElementById("result").innerText = json.message.body.lyrics.lyrics_body; ``` Upvotes: 2 <issue_comment>username_2: 1st you need to store those data into a variable. ``` var myData = { //your json data } ``` then `console.log(myData.message.body.lyrics.lyrics_body)` see if you are getting the right value. Upvotes: 0 <issue_comment>username_3: Get JSON via .getJSON. 
After parse set #result to `data.message.body.lyrics.lyrics_body` ```js var url = 'https://crossorigin.me/https://api.musixmatch.com/ws/1.1/matcher.lyrics.get?callback=format=jsonp&q_track=O%20Saathi&q_artist=Arijit%20Singh&apikey=c12b7dffb9a86d5b6d70e0f0fdeab589' $.getJSON(url, function(data) { $('#result').text(data.message.body.lyrics.lyrics_body); }); ``` ```html #Result ``` Upvotes: 2 <issue_comment>username_4: ```js $( "#target" ).click(function() { var url = 'https://crossorigin.me/https://api.musixmatch.com/ws/1.1/matcher.lyrics.get?callback=format=jsonp&q_track=O%20Saathi&q_artist=Arijit%20Singh&apikey=c12b7dffb9a86d5b6d70e0f0fdeab589' $.getJSON(url, function(result) { console.log(result.message.body.lyrics.lyrics_body) }); }); ``` ```html Click here ``` Upvotes: 3 [selected_answer]
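A hedged variant of the same lookup using `fetch` and optional chaining, so a response without a `lyrics` body doesn't throw. The URL is whatever endpoint is actually reachable — the proxy used in the answers may no longer exist.

```
async function showLyrics(url: string): Promise<void> {
  const json = await (await fetch(url)).json();
  const lyrics: string | undefined = json?.message?.body?.lyrics?.lyrics_body;
  document.getElementById('result')!.innerText = lyrics ?? 'No lyrics returned';
}
```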
2018/03/14
1,013
3,226
<issue_start>username_0: How can I get the current `--mode` specified in *package.json* inside *webpack.config.js*? (For instance, for pushing some plugins.) ``` package.json "scripts": { "dev": "webpack --mode development", "build": "webpack --mode production" } ``` What I did in Webpack 3: ``` package.json "scripts": { "build": "cross-env NODE_ENV=development webpack", "prod": "cross-env NODE_ENV=production webpack" }, ``` Then, I was able to get environment in Webpack with `process.env.NODE_ENV`. Of course, I can pass `NODE_ENV` with `--mode` but I prefer to avoid duplication.<issue_comment>username_1: Try this one package.json ``` "scripts": { "dev": "webpack --mode development", "build": "webpack --mode production --env.production" } ``` so if you are using the `env` inside `webpack config`, that looks something like this ``` module.exports = env => { const inProduction = env.production return { entry: {...}, output: {...}, module: {...} } } ``` more details to set up your `webpack.config.js`. ([Environment Variables for webpack 4](https://webpack.js.org/guides/environment-variables/)) Upvotes: 2 <issue_comment>username_2: You want to avoid duplication of options passed on the script. When you export a function, the function will be invoked with 2 arguments: an environment `env` as the first parameter **and an options map `argv` as the second parameter.** *package.json* ``` "scripts": { "build-dev": "webpack --mode development", "build-prod": "webpack --mode production" }, ``` *webpack.config.js* ``` module.exports = (env, argv) => { console.log(`This is the Webpack 4 'mode': ${argv.mode}`); return { ... }; } ``` These are the results: For `npm run build-dev`: ``` > webpack --mode development This is the Webpack 4 'mode': development Hash: 554dd20dff08600ad09b Version: webpack 4.1.1 Time: 42ms Built at: 2018-3-14 11:27:35 ``` For `npm run build-prod`: ``` > webpack --mode production This is the Webpack 4 'mode': production Hash: 8cc6c4e6b736eaa4183e Version: webpack 4.1.1 Time: 42ms Built at: 2018-3-14 11:28:32 ``` Upvotes: 8 [selected_answer]<issue_comment>username_3: I ended up (ab)using `npm_lifecycle_script` to set the mode in the `DefinePlugin`: ``` MODE: JSON.stringify(process.env.npm_lifecycle_script.substr(process.env.npm_lifecycle_script.indexOf('--mode ') + '--mode '.length, process.env.npm_lifecycle_script.substr(process.env.npm_lifecycle_script.indexOf('--mode ') + '--mode '.length).search(/($|\s)/))) ``` This takes the value of the `--mode` parameter from the issued `webpack` command. Upvotes: 1 <issue_comment>username_4: To test if is in production mode, inside `webpack.config.js` file I use this: ``` const isProduction = process.argv[process.argv.indexOf('--mode') + 1] === 'production'; const config = { ... }; if (isProduction) { config.plugins.push(new MiniCssExtractPlugin()); } else { // isDev config.devtool = /*'source-map'*/ 'inline-source-map'; } module.exports = config; ``` Stop trying `NODE_ENV`, is old school ( *webpack 3* ). And this is more compatible to work with `import / webpack resolver` Upvotes: 4
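Tying this back to the original goal ("pushing some plugins"), a sketch of a config function that branches on `argv.mode`; the entry path and the commented-out plugin are placeholders, not part of the answers above.

```
module.exports = (env: Record<string, unknown>, argv: { mode?: string }) => {
  const isProd = argv.mode === 'production';

  const plugins: unknown[] = [];
  if (isProd) {
    // plugins.push(new MiniCssExtractPlugin());   // example of a production-only plugin
  }

  return {
    entry: './src/index.js',                       // placeholder entry
    devtool: isProd ? false : 'inline-source-map',
    plugins,
  };
};
```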
2018/03/14
715
2,281
<issue_start>username_0: I am trying to add a new key on the JSON array return from the mongoose query ``` find({chatBotId:req.params.chatBotId}). populate('userPlan'). .exec(function(err,result){ result.map(function(e){ e.transactionObject =null e.taxAmount = 100; return e; }); }) ``` I am adding a new key `taxAmount` but it does not appear the array, however, `transactionObject=null` works fine it's an existing key<issue_comment>username_1: Quoting [MDN's page on `Array.prototype.map()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map) (emphasis mine): > > The `map()` method creates a **new** array > > > Your code ``` result.map(function(e){ e.transactionObject =null e.taxAmount = 100; return e; }); ``` doesn't actually do anything. If you want to use the output of the `.map` above you have to re-assign it: ``` arrayWithNewKeys = result.map(function (e) { e.transactionObject = null e.taxAmount = 100; return e; }); ``` --- If you are using `result` later, expecting it to have the new keys `transactionObject` and `taxAmount`, you can re-assign `result` without needing to change your later code: ``` result = result.map(function (e) { e.transactionObject = null e.taxAmount = 100; return e; }); ``` Upvotes: 0 <issue_comment>username_2: What worked for me was to do `result = JSON.parse(JSON.stringify(result))` or do: ``` result = result.map(function (e) { e = e.toJSON(); // toJSON() here. e.transactionObject = null; e.taxAmount = 100; return e; }); ``` Or even better use `lean()` (<https://mongoosejs.com/docs/api.html#query_Query-lean>): ``` Model.find().lean() ``` Upvotes: 2 <issue_comment>username_3: You should use `lean()` to prevent mongoose to [hydrate](https://mongoosejs.com/docs/api.html#model_Model.hydrate) the document. Also `map()` function gets and returns an array. So I think it's better to use `forEach()` for the results. ```js results = Document.find({chatBotId:req.params.chatBotId}) .populate('userPlan') .lean() .exec(function(err,results){ results.forEach(function(e){ e.transactionObject = null e.taxAmount = 100 }); }) ``` Upvotes: 2
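An async/await sketch that combines the `lean()` suggestion with the map that adds the new keys. `Model`, `req` and `res` are assumed to be the surrounding Mongoose model and Express request/response, which the question doesn't show.

```
declare const Model: any;          // the Mongoose model behind find()
declare const req: any, res: any;  // Express request/response

async function handler(): Promise<void> {
  const results = await Model.find({ chatBotId: req.params.chatBotId })
    .populate('userPlan')
    .lean()   // plain objects, so extra keys stick
    .exec();

  const decorated = results.map((e: any) => ({ ...e, transactionObject: null, taxAmount: 100 }));
  res.json(decorated);
}
```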
2018/03/14
582
2,217
<issue_start>username_0: I'm trying to create an application which user can change primary color and for that I've created this class ``` const mode = true; export default class Colors { static primary() { return mode ? '#fff' : '#000'; } static accent() { return mode ? '#fff' : '#000'; } } ``` and in Components ``` const styles = StyleSheet.create({ header: { height: 100, backgroundColor: Colors.primary() } }); ``` so when user clicks on changing the theme the mode will change. The problem is how can I force all Components and Pages to refresh and use the new styles when the `mode` changes?<issue_comment>username_1: If you are using redux(or something like that) * Define a variable in reducer * Bind mapStateToProps function to component * Then change that variable in reducer when you want Upvotes: 0 <issue_comment>username_2: Read This: <https://medium.com/@jonlebensold/getting-started-with-react-native-redux-2b01408c0053> then you can access state on any Screen of page via mapStateToProp. But if you want to store to store theme permanently then you have to store in userDefault ``` import React , {PureComponent} from 'react'; import { View, Text, } from 'react-native'; import { connect } from 'react-redux'; import { bindActionCreators } from 'redux'; const mapStateToProps = (() => { return { colorStyle: colorTheme.style, }; }); class ColorTheme extends PureComponent { constructor(props) { super(props); } render() { console.log(this.props.colorStyle) } } export default connect(mapStateToProps, null)(ColorTheme); ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: I don't know if you like using global variables or not. But they can really help you here, you don't need a state. Just create some global variables in your index.js such as ``` global.themeColorDark = "rgb(0,106,176)"; global.themeColorLight = "rgb(10,143,192)"; global.themeColorBackground = "white"; ``` and then, you can access and edit these variables from anywhere in your app. without importing any file. Upvotes: 1
2018/03/14
657
2,271
<issue_start>username_0: I send POST request with JSON string in JSP to PHP but the received JSON cannot be decoded. Here is my html file: ``` var jsonObj = { "merchID": "0000", "amount": "test" }; var jsonString = JSON.stringify(jsonObj); document.getElementById('test').value = jsonString; ``` Here is my PHP file: ``` php echo file_get_contents("php://input"); $data = json_decode(file_get_contents("php://input")); echo $data['amount']; ? ``` The output of **echo file\_get\_contents("php://input");** is > > DO=%7B%22merchID%22%3A%220000%22%2C%22amount%22%3A%22test%22%7D > > > which means the JSON objecy has been successfully received. Is there any solution for this problem?<issue_comment>username_1: You're not sending a request with a pure JSON request body, you're sending a regular url-encoded form request. As your output shows, the request body contains the JSON string *inside* a form-encoded string. You need to first URL-decode that and then pick your JSON string from it. Fortunately PHP has already done that for you and the data is available inside `$_POST`: ``` $data = json_decode($_POST['DO']); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: when you don't specify the enctype attribute on a `which doesn't contain any file uploads, it defaults to `application/x-www-form-urlencoded`-encoding, when php receive an `application/x-www-form-urlencoded`-encoded POST request, it parses the request and put the data in the `$_POST` variable, because your element containing the javascript had the name "test", your json ends up in the variable `$_POST['test']`, thus this should work: `json_decode($_POST['test']);` - a reason for `json_decode(file_get_contents("php://input"));` not working, include the fact that the data in php://input is `application/x-www-form-urlencoded`-encoded, NOT json-encoded. given that PHP has already decoded it for you in the $\_POST variable, you don't need to decode that manually, but if you for some reason would WANT to do so, php has the `parse_str` function to decode `application/x-www-form-urlencoded`-encoded data, so you could do this too:` ``` parse_str(file_get_contents("php://input"),$data); $data=$data['test']; $data=json_decode($data); ``` Upvotes: 0
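The client-side counterpart of the accepted answer, as a sketch: send a real JSON body (not a form field), and `php://input` will then contain raw JSON that `json_decode` can parse directly. The endpoint path is an assumption.

```
async function sendPayment(): Promise<void> {
  const payload = { merchID: '0000', amount: 'test' };
  await fetch('/payment.php', {                       // illustrative endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },  // no url-encoding, so php://input stays raw JSON
    body: JSON.stringify(payload),
  });
}
```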
2018/03/14
1,625
6,068
<issue_start>username_0: I'll like that everytime an hospital is created, A corresponding User account is created likewise, allowing the newly created account to login.... models.py ``` class Hospital(models.Model): """ Model representing individual Hospitals """ id = models.UUIDField(primary_key=True, unique=True, db_index=True, default=uuid.uuid4, editable=False) hospital_name = models.CharField(help_text="full name of hospital", max_length=200) slug = models.SlugField(max_length=200) hospital_logo = models.ImageField(upload_to='hospital_logo',) hospital_address = models.CharField(max_length=500) hospital_email = models.EmailField(max_length=254) hospital_website = models.CharField(max_length=200) hospital_rc_number = models.PositiveSmallIntegerField() date_added = models.DateField(auto_now=True) ``` forms.py ``` class HospitalForm(forms.ModelForm): """ Forms for Hospital creation """ class Meta: model = Hospital fields = ('hospital_name', 'hospital_address', 'hospital_email', 'hospital_website', 'hospital_rc_number','hospital_logo') widgets = { 'hospital_name': forms.TextInput(attrs={'class': 'form-control'}), 'hospital_address': forms.TextInput(attrs={'class': 'form-control'}), 'hospital_email': forms.TextInput(attrs={'class': 'form-control'}), 'hospital_website': forms.TextInput(attrs={'class': 'form-control'}), 'hospital_rc_number': forms.TextInput(attrs={'class': 'form-control'}), } def save(self): instance = super(HospitalForm, self).save(commit=False) instance.slug = slugify(instance.hospital_name) instance.save() return instance ``` views.py ``` class HospitalCreate(CreateView): model = Hospital form_class = HospitalForm template_name = 'hospital/hospital_add.html' pk_url_kwarg = 'hospital_id' success_url = reverse_lazy('hospital_list') user_form = UserForm ```<issue_comment>username_1: One option is to add user creation to your views [`form_valid` method](https://docs.djangoproject.com/en/1.11/ref/class-based-views/mixins-editing/#django.views.generic.edit.ModelFormMixin.form_valid) ``` class HospitalCreate(CreateView): def form_valid(self, form): form.save() # create user here return HttpResponseRedirect(self.get_success_url()) ``` Upvotes: 0 <issue_comment>username_2: I dont know if this is the best way to go about this, But it seems to be doing what i want. I'm yet to fully check if other issues will pop-up. I created more fields in the hospital model that corresponded to the fields in the User class, and i was able to use the `object.create_user` method to create instances of the hospital model in the User class. 
models.py ``` class Hospital(models.Model): """ Model representing individual Hospitals """ id = models.UUIDField(primary_key=True, unique=True, db_index=True, default=uuid.uuid4, editable=False) admin_username = models.CharField(unique=True, max_length=15) first_name = models.CharField(max_length=30) last_name = models.CharField(max_length=30) password = models.CharField(max_length=50) password2 = models.CharField(max_length=50) user = models.IntegerField() hospital_name = models.CharField(help_text="full name of hospital", max_length=200) slug = models.SlugField(max_length=200) hospital_logo = models.ImageField(upload_to='hospital_logo',) hospital_address = models.CharField(max_length=500) hospital_email = models.EmailField(max_length=254) hospital_website = models.CharField(max_length=200) hospital_rc_number = models.PositiveSmallIntegerField() date_added = models.DateField(auto_now=True) ``` forms.py ``` class HospitalForm(forms.ModelForm): """ Forms for Hospital creation """ class Meta: model = Hospital fields = ('hospital_name', 'hospital_address', 'hospital_email', 'hospital_website', 'hospital_rc_number', 'first_name','last_name','hospital_logo','admin_username','password','<PASSWORD>') widgets = { 'admin_username': forms.TextInput(attrs={'class': 'form-control'}), 'first_name': forms.TextInput(attrs={'class': 'form-control'}), 'last_name': forms.TextInput(attrs={'class': 'form-control'}), 'password': forms.PasswordInput(attrs={'class': 'form-control'}), 'password2': forms.PasswordInput(attrs={'class': 'form-control'}), 'hospital_name': forms.TextInput(attrs={'class': 'form-control'}), 'hospital_address': forms.TextInput(attrs={'class': 'form-control'}), 'hospital_email': forms.TextInput(attrs={'class': 'form-control'}), 'hospital_website': forms.TextInput(attrs={'class': 'form-control'}), 'hospital_rc_number': forms.TextInput(attrs={'class': 'form-control'}), } def save(self): instance = super(HospitalForm, self).save(commit=False) instance.slug = slugify(instance.hospital_name) instance.save() return instance ``` And Views.py ``` class HospitalCreate(CreateView): model = Hospital form_class = HospitalForm template_name = 'hospital/hospital_add.html' pk_url_kwarg = 'hospital_id' success_url = reverse_lazy('hospital_list') def form_valid(self, form): username = form.instance.admin_username password = <PASSWORD> email = form.instance.hospital_email first_name = form.instance.first_name last_name = form.instance.last_name user_profile = User.objects.create_user(username=username, password=<PASSWORD>, email=email,first_name=first_name, last_name=last_name, is_staff='0', is_active='1') user_profile.save() form.instance.user = int(user_profile.id) return super(HospitalCreate, self).form_valid(form) ``` Upvotes: 1
2018/03/14
1,167
4,172
<issue_start>username_0: I am trying to use ionic native from Android. The problem is, in console it says [object Object] Uploaded Successfully, but nothing is uploaded on my server. I checked network tab in browser, it is not even calling the upload URL. Below is my `home.html` code: ``` {{imageURI}} Get Image #### Image Preview ![Ionic File]({{imageFileName}}) Upload ``` **home.ts** code: ``` import { Component } from '@angular/core'; import { NavController, LoadingController, ToastController } from 'ionic-angular'; import { FileTransfer, FileUploadOptions, FileTransferObject } from '@ionic-native/file-transfer'; import { Camera, CameraOptions } from '@ionic-native/camera'; @Component({ selector: 'page-home', templateUrl: 'home.html' }) export class HomePage { imageURI:any; imageFileName:any; constructor(public navCtrl: NavController, private transfer: FileTransfer, private camera: Camera, public loadingCtrl: LoadingController, public toastCtrl: ToastController) {} getImage() { const options: CameraOptions = { quality: 100, destinationType: this.camera.DestinationType.FILE_URI, sourceType: this.camera.PictureSourceType.PHOTOLIBRARY } this.camera.getPicture(options).then((imageData) => { this.imageURI = imageData; }, (err) => { console.log(err); this.presentToast(err); }); } uploadFile() { let loader = this.loadingCtrl.create({ content: "Uploading..." }); loader.present(); const fileTransfer: FileTransferObject = this.transfer.create(); let options: FileUploadOptions = { fileKey: 'ionicfile', fileName: 'ionicfile', chunkedMode: false, mimeType: "image/jpeg", headers: {} } fileTransfer.upload(this.imageURI, 'http://example.com/upload2.php', options) .then((data) => { console.log(data+" Uploaded Successfully"); // this.imageFileName = "http://192.168.0.7:8080/static/images/ionicfile.jpg" loader.dismiss(); this.presentToast("Image uploaded successfully"); }, (err) => { console.log(err); loader.dismiss(); this.presentToast(err); }); } presentToast(msg) { let toast = this.toastCtrl.create({ message: msg, duration: 6000, position: 'bottom' }); toast.onDidDismiss(() => { console.log('Dismissed toast'); }); toast.present(); } } ```<issue_comment>username_1: This is because you are not using an Exact API call. In the code : `fileTransfer.upload(this.imageURI, 'http://example.com/upload2.php', options)` You have to use Api call which is pointing to your api endpoint. Try to read more about api calls. Instead of `http://example.com/upload2.php` Your call should be like `http://example.com/upload2.php/api/uploadmyimage` where in upload2.php you have defined the get or post method for an `/api/uploadmyimage` Upvotes: 0 <issue_comment>username_2: This is the script the works for me! in html ``` ``` in ts file ``` upload(str:any) { const formData = new FormData(); this.image=str.target.files[0]; formData.append('files[]', this.image); console.log(formData,this.image); this.http.post("http://localhost/test/test.php",formData) .subscribe((data:any)=>{ console.log(data); }) console.log(str); } ``` Bonus here is the PHP file: ``` php if ($_SERVER['REQUEST_METHOD'] === 'POST') { if (isset($_FILES['files'])) { $errors = []; $path = 'uploads/'; $extensions = ['jpg', 'jpeg', 'png', 'gif']; $all_files = count($_FILES['files']['tmp_name']); $file_tmp = $_FILES['files']['tmp_name'][0]; $file_type = $_FILES['files']['type'][0]; $file_size = $_FILES['files']['size'][0]; $file_ext = strtolower(end(explode('.', $_FILES['files']['name'][0]))); $file_name = uniqid().".".$file_ext; $file = $path . 
$file_name; if (empty($errors)) { move_uploaded_file($file_tmp, $file); } if ($errors) print_r($errors); } } ``` Upvotes: 2
2018/03/14
740
2,338
<issue_start>username_0: I tried to install **pillow** on **python 3.7** using command **pip3 install pillow**. The package downloaded **fine** but during installation I got below error. I tried to download using wheel too but it failed too. Any advice? Thanks. ``` Command "c:\...\python37\python.exe -u -c "import setuptools, tokenize;__file__='C:\\...\\A~1.M\\AppData\ \Local\\Temp\\pip-build-phpvx_2j\\pillow\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\...\pip-lde0t8n9-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\...\pip-build-phpvx_2j\pillow\ ```<issue_comment>username_1: There seems to be **no support** for **Pillow** for **Python 3.7**. I installed **python 3.6** and then installed pillow successfully. **pip3 install pillow** Upvotes: 2 [selected_answer]<issue_comment>username_2: This also means that ReportLab can neighter be installed in Python 3.7 since there is a direct dependency to pillow in there. I use reportlab to generate pdf files. I sincerely hope this will be resolved soon. Upvotes: 1 <issue_comment>username_3: Hmm, I installed Pillow on Python 3.7 all right. This is what I did: **Install Python 3.7 64 bit** In the command prompt in Windows, I ran: ``` python -m pip install Pillow ``` That should work. Upvotes: 3 <issue_comment>username_4: I'm using **Python 3.7** in windows and I tried ``` python -m pip install Pillow ``` It didn't work, then I tried ``` pip install pillow ``` which didn't work either. so what I did is I tried ``` pip install Pillow==6.2.1 ``` it worked, then I keep installing till the latest version that's ``` pip install Pillow==7.1.1 ``` And now I have the latest version which is working fine. Upvotes: 2 <issue_comment>username_5: I think I will be able to answer this. My system info are follows: * Windows 10 * Django version 3.2 * Python version 3.7 I found on the Pillow documentation that the tested for Pillow version is 7.1.0 You can read [here.](https://pillow.readthedocs.io/en/stable/installation.html#other-platforms) for this, you need to launch cmd or anaconda prompt and type ``` pip uninstall Pillow pip install Pillow==7.1.0 ``` Upvotes: -1
2018/03/14
533
1,886
<issue_start>username_0: I'm struggling to find a very simple example of how to add a marker(s) to a Google Map when a user left clicks on the map using React-google-maps in components based. Need help. ``` const Map = withScriptjs(withGoogleMap((props) => {props.isMarkerShown && } )) export default class MapContainer extends React.Component { constructor (props) { super(props) this.state = { } } render () { return ( } containerElement={} mapElement={} placeMarker={this.placeMarker} /> ) } } ```<issue_comment>username_1: Check this code with the edited version which add the marker ``` const InitialMap = withGoogleMap(props => { var index = this.marker.index || []; return( {props.markers.map(marker => ( props.onMarkerRightClick(marker)} /> ))} ) }); export default class MapContainer extends Component{ constructor(props){ this.state = { markers:[{ position:{ lat: 255.0112183, lng:121.52067570000001, } }] } } render(){ return( } mapElement={ } markers={this.state.markers} /> ) } } ``` Upvotes: 2 <issue_comment>username_2: This is a generic example that demonstrates how to display marker on map click: ``` const Map = compose( withStateHandlers(() => ({ isMarkerShown: false, markerPosition: null }), { onMapClick: ({ isMarkerShown }) => (e) => ({ markerPosition: e.latLng, isMarkerShown:true }) }), withScriptjs, withGoogleMap ) (props => {props.isMarkerShown && } ) export default class MapContainer extends React.Component { constructor(props) { super(props) } render() { return ( } containerElement={} mapElement={} /> ) } } ``` [Live demo](https://stackblitz.com/edit/react-umnzy4) Upvotes: 3
2018/03/14
642
2,405
<issue_start>username_0: I have two collections:

```
cc.tabGeneralValuesCollection = [{label: 'Creation Date', name: 'CreationDate', direction: 'asc', type: 'DATETIME'},
    {label: 'Modifier', name: 'Modifier', direction: 'asc', type: 'STRING'},
    {label: 'Subject', name: 'Subject', direction: 'asc', type: 'STRING'}];

cc.tabPropertiesValuesCollection = [{label: 'Group Permission', name: 'GroupPermission', direction: 'asc', type: 'DATETIME'},
    {label: 'World Permission', name: 'WorldPermission', direction: 'asc', type: 'STRING'},
    {label: 'Object ID', name: 'ObjectID', direction: 'asc', type: 'STRING'},
    {label: 'ACL Object Name', name: 'ACLObjectName', direction: 'asc', type: 'STRING'}];
```

I have to dynamically specify which collection to use in `ng-repeat`.

```
 **{{col.label}}**
```

The variable `selectedTab` has the collection name, for example: "tabGeneralValuesCollection". How do I get this to work?
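One way to read this: `ng-repeat` accepts any Angular expression, so the collection can be looked up with bracket notation on the controller alias. A minimal sketch, assuming `selectedTab` is exposed on the same `cc` controller (the surrounding table markup is illustrative only):

```html
<!-- cc.selectedTab holds e.g. "tabGeneralValuesCollection" -->
<table>
  <tr>
    <th ng-repeat="col in cc[cc.selectedTab]">{{col.label}}</th>
  </tr>
</table>
```

If `selectedTab` lives on `$scope` rather than on the controller, the expression becomes `cc[selectedTab]`; alternatively, resolve it in the controller (for example `cc.activeCollection = cc[cc.selectedTab]`) and repeat over that.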
2018/03/14
591
2,255
<issue_start>username_0: I've some trouble compiling this program. The error message I get says that "Forside" abstract cannot be instantiated". I have troubles fixing this. Here's my code and thank you in advance. ``` import javax.swing.*; import java.awt.*; import java.awt.event.*; public abstract class Forside implements ActionListener { private Forside() { JFrame jfrm = new JFrame("BasisBaren - Sudentersamfundet"); jfrm.setLayout(new FlowLayout()); jfrm.setSize(1500, 1000); jfrm.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); JButton jbtnOel = new JButton("OEL"); JButton jbtnAndreDrikkevarer = new JButton("<NAME>"); JButton jbtnTender = new JButton("Tender"); JButton jbtnSnacks = new JButton("Snacks"); jbtnOel.addActionListener(this); jbtnAndreDrikkevarer.addActionListener(this); jbtnTender.addActionListener(this); jbtnSnacks.addActionListener(this); jfrm.add(jbtnOel); jfrm.add(jbtnAndreDrikkevarer); jfrm.add(jbtnTender); jfrm.add(jbtnSnacks); } public static void main (String args[]) { SwingUtilities.invokeLater (Forside::new); } } ```<issue_comment>username_1: You have to remove the `abstract` modifier from the class declaration in order to instantiate it. Upvotes: 0 <issue_comment>username_2: Abstract classes cannot be instantiated. You need a concrete class for instantiation to happen which can be done by removing the `abstract` modifier or creating a new subclass. > > An abstract class is a class that is declared abstract—it may or may not include abstract methods. Abstract classes cannot be instantiated, but they can be subclassed. > > > Read up the documentations [here](https://docs.oracle.com/javase/tutorial/java/IandI/abstract.html). Upvotes: 1 <issue_comment>username_3: The message says it all: *you can't instantiate an abstract class*. You **either** have to **remove** the `abstract` from your class definition or you have to create another class to **extend your abstract class** in case you feel you will need different versions of your "Forside" class: ``` public class ConcreteForside extends Forside { } ``` Upvotes: 0
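To make the second and third answers concrete: because `Forside` implements `ActionListener`, simply deleting `abstract` also requires implementing `actionPerformed`, otherwise the compiler will complain again. A minimal sketch (the button handling and the call to `setVisible` are additions, not part of the original code):

```java
import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

public class Forside implements ActionListener {   // no longer abstract

    private Forside() {
        JFrame jfrm = new JFrame("BasisBaren - Sudentersamfundet");
        jfrm.setLayout(new FlowLayout());
        jfrm.setSize(1500, 1000);
        jfrm.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        JButton jbtnOel = new JButton("OEL");
        jbtnOel.addActionListener(this);
        jfrm.add(jbtnOel);
        // ... create and add the remaining buttons the same way ...

        jfrm.setVisible(true);                      // the original never showed the frame
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        // placeholder: react to the button clicks here
        System.out.println("Clicked: " + e.getActionCommand());
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(Forside::new);
    }
}
```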
2018/03/14
1,889
6,139
<issue_start>username_0: I created a UI5 master-detail page: ### Master ```xml ``` ```js onItemPress: function(oEvent) { var oUserContext = oEvent.getSource().getBindingContext("som"); var oUser = oUserContext.getObject(); this.getRouter().navTo("userDetails", {userId: oUser.Id}); } ``` ### Detail ```js onInit: function () { var route = this.getRouter().getRoute("userDetails"); route.attachPatternMatched(this._onObjectMatched, this); }, _onObjectMatched: function (oEvent) { var sUserId = oEvent.getParameter("arguments").userId; this.getView().bindElement({ path: "som>/Users('"+sUserId+"')", model: "som" }); }, reload: function() { this.getView().getModel("som").refresh(); }, ``` ```xml ``` Everything is working fine. But when I change a user in the detail view, it is being updated but not in the master view! With the reload method, I can manually refresh it. But how can I fire this automatically after a change? Can I bind a change event on the SimpleForm?<issue_comment>username_1: V4 model works with promise. When submit the changes you can use .then(), so you can refresh the model. ``` ... var oView = this.getView(); function refreshModel() { oView.getModel().refresh(); } oView.getModel().submitBatch(sGroupId).then(refreshModel); ... ``` <https://sapui5.hana.ondemand.com/#/api/sap.ui.model.odata.v4.ODataModel/methods/submitBatch> Upvotes: 0 <issue_comment>username_2: See <https://embed.plnkr.co/qatUyq/?show=preview:%3Fsap-ui-xx-componentPreload%3Doff> [![https://embed.plnkr.co/qatUyq/](https://i.stack.imgur.com/lZ4Lf.gif)](https://i.stack.imgur.com/lZ4Lf.gif) Minimal example using the OData V4 *TripPin* service Keep in mind that `**v4**.ODataModel` is still work in progress. The **synchronization mode has to be `"None"`** currently. > > ### [*synchronizationMode*](https://openui5.hana.ondemand.com/api/sap.ui.model.odata.v4.ODataModel#constructor) > > > Controls synchronization between different bindings which refer to the same data for the case data changes in one binding. Must be set to 'None' which means **bindings are not synchronized at all** [...]. > > > ***Update:** According to the [UI5 roadmap](https://roadmaps.sap.com/board?PRODUCT=73554900100800001361&range=FIRST-LAST#;INNO=6EAE8B28C5D91EDABEB530FB8D6620ED) (SAP Community account required), the data binding synchronization support is "planned for Q2 2021".* Therefore, the application itself has to identify related bindings and refresh them manually. To make it efficient, we can send such GET requests together with the update request via batch group ID which is what `v2.ODataModel` automatically does (unless `refreshAfterChange` is disabled). In the example above, I used the following settings and APIs: 1. `$$updateGroupId: "myGroupId"` for the context binding (detail page). 2. After user presses `Submit`, call [`refresh("myGroupId")`](https://openui5.hana.ondemand.com/api/sap.ui.model.odata.v4.ODataListBinding#methods/refresh) from the list binding (master list). 3. And finally [`submitBatch("myGroupId")`](https://openui5.hana.ondemand.com/api/sap.ui.model.odata.v4.ODataModel#methods/submitBatch). If we then inspect the request payload, we can [see that the PATCH and GET requests are bundled](https://i.stack.imgur.com/LxX5a.gif) together. Hence, the master list is refreshed at the same time. --- Q&A === 1. What is the default binding mode in `v4.ODataModel`? 
* [It's `"TwoWay"`](https://github.com/SAP/openui5/blob/5fdbf3e9846a72ca29f2db4ed7b457f955d38c46/src/sap.ui.core/src/sap/ui/model/odata/v4/ODataModel.js#L334) (Unless `sharedRequests` is enabled) - The UI changes the model data and the other way around. When the change is **not** stored in a batch queue, the corresponding request is sent to the backend immediately. 2. How do I know if I'm using batch mode or not? * Pretty much all bindings in v4, as well as the model itself, support the parameter `$$groupId` for read, and certain bindings support `$$updateGroupId` for update requests. + If the ID is set to `"$direct"`, the corresponding requests are sent *directly* without batch. Those requests are easier to diagnose and [cacheable](https://developer.mozilla.org/docs/Glossary/cacheable) by the browser. + If the ID is a custom string (like in the example above), the corresponding requests are to be sent explicitly by the API `submitBatch`. + If there is no such ID defined, the default value is set to `"$auto"`, meaning by default, requests are sent as batch automatically after all the related controls are rerendered. * Take a look at the documentation topic [Batch Control](https://openui5.hana.ondemand.com/topic/74142a38e3d4467c8d6a70b28764048f). 3. How can the application make sure that refreshing the list is always applied after the update is done even though those two requests (GET & PATCH) are bundled as one? * The model already takes care of this: > > The OData V4 model automatically puts all **non**-GET requests into a single change set, which is located at the beginning of a batch request. All GET requests are put **after** it. > > > 4. Why can I call `submitBatch` without adding the groups to "deferred groups" beforehand like I did with `v2.ODataModel`? * V4 has simplified dealing with batch groups: > > Application groups are **by default deferred**; there is no need to set or get deferred groups. You just need the `submitBatch` method on the model to control execution of the batch. [[doc]](https://openui5.hana.ondemand.com/topic/abd4d7c7548d4c29ab8364d3904a6d74) > > > 5. Models usually support events such as `requestSent` and `requestCompleted`. Why can't I make use of them in `v4.ODataModel`? * It's just not supported by design. See [Unsupported Superclass Methods and Events](https://openui5.hana.ondemand.com/topic/1232241b99d7437ba3614698d53dfa4b). --- Hope I made some things clearer. The latest documentation about OData V4 in UI5 can be found at: <https://openui5nightly.hana.ondemand.com/topic/5de13cf4dd1f4a3480f7e2eaaee3f5b8> Upvotes: 3
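Pulling the accepted answer's pieces into one place, a sketch of the detail controller (the group id `"userChanges"`, the handler names and how the master list is reached are assumptions, not part of the answer):

```js
// Detail controller: park the user's edits in an application group
_onObjectMatched: function (oEvent) {
    var sUserId = oEvent.getParameter("arguments").userId;
    this.getView().bindElement({
        path: "som>/Users('" + sUserId + "')",
        model: "som",
        parameters: { $$updateGroupId: "userChanges" } // PATCHes wait for submitBatch
    });
},

// Triggered by a Save button; oMasterList is the master view's sap.m.List
// (how you obtain that reference depends on your app structure)
onSave: function (oMasterList) {
    // the GET for the list joins the same group ...
    oMasterList.getBinding("items").refresh("userChanges");
    // ... so PATCH and GET travel in one $batch
    this.getView().getModel("som").submitBatch("userChanges");
}
```

With this, the master list shows the edited user as soon as the batch returns, without a manual full-model `refresh()`.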
2018/03/14
1,112
2,439
<issue_start>username_0: I have problem to analyse vincenty distance because the format is `object` and have `km` metrics in there, I want to analyse further. I want to convert vincenty distance to `float` format Here's the data ``` customer_id lat_free long_free lat_device long_device radius timestamp 7509 -6.283468 106.857636 -7.802388 110.368660 1264.000000 2017-12-14 21:18:40.327 7509 -6.283468 106.857636 -7.804296 110.367192 14.000000 2017-12-15 20:02:21.923 ``` Here's my code ``` from geopy.distance import vincenty df['Vincenty_distance'] = df.apply(lambda x: vincenty((x['lat_free'], x['long_free']), (x['lat_device'], x['long_device'])), axis = 1) ``` This is the result ``` customer_id lat_free long_free lat_device long_device radius timestamp Vincenty_distance 7509 -6.283468 106.857636 -7.802388 110.368660 1264.000000 2017-12-14 21:18:40.327 422.7123873310482 km 7509 -6.283468 106.857636 -7.804296 110.367192 14.000000 2017-12-15 20:02:21.923 422.64674499172787 km ``` I need to convert `Vincenty_distance` to float<issue_comment>username_1: The best is add `.km`: ``` df['Vincenty_distance'] = df.apply(lambda x: vincenty((x['lat_free'], x['long_free']), (x['lat_device'], x['long_device'])).km, axis = 1) ``` Or use after processing - convert to `string`, remove last letters and cast to `float`s: ``` df['Vincenty_distance'] = df['Vincenty_distance'].astype(str).str[:-3].astype(float) ``` --- ``` print (df) customer_id lat_free long_free lat_device long_device radius \ 0 7509 -6.283468 106.857636 -7.802388 110.368660 1264.0 1 7509 -6.283468 106.857636 -7.804296 110.367192 14.0 timestamp Vincenty_distance 0 2017-12-14 21:18:40.327 422.712361 1 2017-12-15 20:02:21.923 422.646709 print (df.dtypes) customer_id int64 lat_free float64 long_free float64 lat_device float64 long_device float64 radius float64 timestamp object Vincenty_distance float64 dtype: object ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You can use `str.replace` to remove "km" and use `apply` to set float to the series. **Ex:** ``` df["Vincenty_distance"] = df["Vincenty_distance"].str.replace(" km", "").apply(float) ``` Upvotes: 2
2018/03/14
732
2,119
<issue_start>username_0: I have a very basic question. I am trying to pass variable using url. So, I faced one issue as undefined index. To solve that i used isset() but value is coming as blank. can anyone tell me why this is happening? Code: Page 1: ``` [Print](print.php?id=$row[id]&rname=$row[rname]) ``` Page 2: ``` php if(isset($\_GET['rname'])){ $name=$\_GET['rname']; echo $name; ? | ``` Thanks!!!<issue_comment>username_1: You have written id and name in url of Page1, but in Page2, you have written rname, it should be **name**, that is from url of Page1. So, it should be ``` if(isset($_GET['name'])){ $name=$_GET['name']; echo $name; ?> ``` Upvotes: 1 <issue_comment>username_2: In your first page modify the content `&name=$row[rname]` to `&rname=$row[rname]` and you will be able to `echo` the name in second page. Upvotes: 1 <issue_comment>username_3: Well, this is your code ``` [Print](print.php?id=$row[id]&name=$row[rname]) ``` So you will only get value if you search `$_GET['id']` (equal to `$row[id]`) and `$_GET['name']` (equal to `$row[rname]`). To get the value only if not empty, null or isset try to use `empty` maybe : ``` if(!empty($_GET['rname'])){ $name=$_GET['rname']; echo $name; } ``` And don't forget to change your code by ``` if(!empty($_GET['name'])){ // change rname by name $name=$_GET['name']; // change rname by name echo $name; } ``` OR ``` [Print](print.php?id=$row[id]&rname=$row[rname]) ``` As said in comment, I would use `= ...; =` too to be sure to get the php value (it's an `php echo ... ?` equivalent : ``` //change name by rname to use $_GET['rname'] : [Print](print.php?id=<?= $row["id"]; ?>&rname=<?= $row["rname"]; ?>) ``` so you can use `$_GET['rname']` Upvotes: 2 [selected_answer]<issue_comment>username_4: First print php value with `= ?` and rename get param from `name` to `rname` ``` [Print](print.php?id=<?= $row['id']?>&rname=<?= $row['rname'] ?>) ``` Then you can get it in php like below ``` php if(isset($\_GET['rname'])){ $name=$\_GET['rname']; echo $name; } ? | ``` Upvotes: 0
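The short echo tag (`<?= ... ?>`) that the last two answers refer to is what actually prints the row values into the link; without it the literal text `$row[id]` ends up in the URL. A minimal sketch of both pages, with `urlencode`/`htmlspecialchars` added as general hygiene rather than something the answers require:

```php
<?php
// page1.php - build the link; $row is assumed to come from your own query
$url = 'print.php?id=' . urlencode($row['id']) . '&rname=' . urlencode($row['rname']);
?>
<a href="<?= htmlspecialchars($url) ?>">Print</a>

<?php
// print.php (page 2) - read the parameter back
if (isset($_GET['rname'])) {
    $name = $_GET['rname'];
    echo htmlspecialchars($name);
}
```

Note that the parameter name in the link (`rname`) must match the key you read from `$_GET`, which is the mismatch the first two answers are pointing at.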
2018/03/14
5,075
15,924
<issue_start>username_0: This is a little awkward to explain so i shall try my best. I have written a query in SSMS 17. The query runs fine and returns the data correctly from 1990-01-01 to 2018-03-04. This is correct and 2018-03-04 is the most recent case for this query to pull. When dropping this exact query into SSRS (Visual Studio) i originally put a between data parameter which cascaded to two further options to select of case type and location. I put these as seperate datasets to link the parameters to cascade them. This all worked fine. It was only on my final checks of the report did i realise i couldn't get any data past 2017-10-27. **There is no filters on any of the data.** In the end i have now removed all my parameters so it is the main data set only and put a code in to pull the last 2 years instead. This still ends at 2017-10-27. Does anyone have any ideas how the same query run in SSMS returns the data correctly but copied and pasted in SSRS VS it suddenly wont pull past 2017-10-27? There are only 1,615 rows in that time so its not like there is masses of data and i have hit some limit. Truly stumped with this one. I haven't posted my code as it works in SSMS without issue so i dont believe the issue lies there. This is the code used currently. The long Case When for the Dates is to counter for UTC database time to BST. ``` SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; SET NOCOUNT ON; SET ARITHABORT ON; WITH cvr AS ( SELECT DISTINCT hcx.mps_Person, cpx.mps_Name [Name], CASE WHEN hcx.mps_SchemeCode IS NULL THEN LTRIM(RTRIM(hcx.mps_MembershipNumber)) WHEN hcx.mps_SchemeCode = '' THEN LTRIM(RTRIM(hcx.mps_MembershipNumber)) WHEN hcx.mps_SchemeCode IS NOT NULL THEN LTRIM(RTRIM(hcx.mps_SchemeCode))+'/'+LTRIM(RTRIM(hcx.mps_MembershipNumber)) ELSE NULL END AS [Membership Number], hcx.mps_MemberCoverStatus [Cover Status], cmd.[Country of Incident] [Case Country], CASE WHEN hcx.mps_CoverSource = 0 THEN 'Cover Account' WHEN hcx.mps_CoverSource = 1 THEN 'SAM Cover' WHEN hcx.mps_CoverSource = 2 THEN 'MDU Transfer Cover' ELSE NULL END AS [Cover Source], CASE WHEN hcx.mps_startdate > (DATEADD(HH,1,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,hcx.mps_startdate) AS CHAR) + '/03/01'),30))/7*7,'19000107')))) AND hcx.mps_startdate < (DATEADD(HH,2,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,hcx.mps_startdate) AS CHAR) + '/10/01'),30))/7*7,'19000107')))) THEN (DATEADD(HH,1,hcx.mps_startdate)) ELSE hcx.mps_startdate END AS [CM Cover Start Date], CASE WHEN hcx.mps_enddate > (DATEADD(HH,1,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,hcx.mps_enddate) AS CHAR) + '/03/01'),30))/7*7,'19000107')))) AND hcx.mps_enddate < (DATEADD(HH,2,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,hcx.mps_enddate) AS CHAR) + '/10/01'),30))/7*7,'19000107')))) THEN (DATEADD(HH,1,hcx.mps_enddate)) ELSE hcx.mps_enddate END AS [CM Cover End Date], cmd.[Case Number] AS [Case Number], cmd.IncidentId, cmd.[Medical/Dental] AS [Medical/Dental], cmd.[Primary Case Type] AS [Incident Primary Case Type], cmd.[Case Types] AS [Incident Case Types], cpx.mps_dn_CaseTypesInvolved AS [Member Involved Case Types], CASE WHEN cpx.mps_involvedfrom > (DATEADD(HH,1,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,cpx.mps_involvedfrom) AS CHAR) + '/03/01'),30))/7*7,'19000107')))) AND cpx.mps_involvedfrom < 
(DATEADD(HH,2,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,cpx.mps_involvedfrom) AS CHAR) + '/10/01'),30))/7*7,'19000107')))) THEN (DATEADD(HH,1,cpx.mps_involvedfrom)) ELSE cpx.mps_involvedfrom END AS [Involved From], CASE WHEN cpx.mps_involvedto > (DATEADD(HH,1,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,cpx.mps_involvedto) AS CHAR) + '/03/01'),30))/7*7,'19000107')))) AND cpx.mps_involvedto < (DATEADD(HH,2,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,cpx.mps_involvedto) AS CHAR) + '/10/01'),30))/7*7,'19000107')))) THEN (DATEADD(HH,1,cpx.mps_involvedto)) ELSE cpx.mps_involvedto END AS [Involved To], CASE WHEN cpx.mps_claimsmadenotificationdate > (DATEADD(HH,1,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,cpx.mps_claimsmadenotificationdate) AS CHAR) + '/03/01'),30))/7*7,'19000107')))) AND cpx.mps_claimsmadenotificationdate < (DATEADD(HH,2,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,cpx.mps_claimsmadenotificationdate) AS CHAR) + '/10/01'),30))/7*7,'19000107')))) THEN (DATEADD(HH,1,cpx.mps_claimsmadenotificationdate)) ELSE cpx.mps_claimsmadenotificationdate END AS [Claims Made Notification Date], CASE WHEN cmd.[MPS Claim Date] > (DATEADD(HH,1,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,cmd.[MPS Claim Date]) AS CHAR) + '/03/01'),30))/7*7,'19000107')))) AND cmd.[MPS Claim Date] < (DATEADD(HH,2,(DATEADD(DAY,DATEDIFF(DAY,'19000107',DATEADD(MONTH,DATEDIFF(MONTH,0,CAST(DATEPART(YEAR,cmd.[MPS Claim Date]) AS CHAR) + '/10/01'),30))/7*7,'19000107')))) THEN (DATEADD(HH,1,cmd.[MPS Claim Date])) ELSE cmd.[MPS Claim Date] END AS [MPS Claim Date] ,cmd.[Total Claim Payments (Sterling Equivalent in £)] AS [TotalClaimPayments] ,cmd.[Total Non-Claim Payments (Sterling Equivalent in £)] AS [TotalNonClaimPayments] ,FLOOR(cmd.[MPS Apportionment %]*100) AS [Liability] FROM OneMPS_MSCRM.dbo.mps_historiccoveraccountExtensionBase AS hcx INNER JOIN OneMPS_MSCRM.dbo.mps_historiccoveraccountBase AS hca ON hca.mps_historiccoveraccountId = hcx.mps_historiccoveraccountId AND hca.statuscode = 1 /*Active*/ INNER JOIN OneMPS_MSCRM.dbo.mps_casepartyExtensionBase AS cpx ON hcx.mps_Person = cpx.mps_Person INNER JOIN OneMPS_MSCRM.dbo.mps_casepartyBase AS cpb ON cpx.mps_casepartyId = cpb.mps_casepartyId AND cpb.statuscode = 1 /*Active*/ AND cpx.mps_PrimaryRole = 0 INNER JOIN dbo.CasesMasterData AS cmd ON cmd.incidentid = cpx.mps_Case WHERE hcx.mps_CoverBasis = 'Claims Made' AND hcx.mps_IsSuperseded = 0 /*Not Superseded*/ AND cpx.mps_involvedto >= hcx.mps_StartDate /*Intersects with Period of Involvement*/ AND cpx.mps_involvedfrom <= hcx.mps_EndDate /*Intersects with Period of Involvement*/ AND cpx.mps_InvolvedFrom >= '1990-01-01 00:00:00.000' ) ,cts AS ( SELECT cvr.mps_Person, cvr.[Cover Source], cvr.Name, cvr.[Membership Number], cvr.[Cover Status], cvr.[CM Cover Start Date], cvr.[CM Cover End Date], cvr.[Case Number], cvr.IncidentId, cvr.[Involved From], cvr.[Involved To], cvr.[Claims Made Notification Date], cvr.[MPS Claim Date], cvr.[Medical/Dental], cvr.[Case Country], cvr.[Incident Primary Case Type], cvr.[Incident Case Types], cvr.[Member Involved Case Types], CASE WHEN LEAD(cvr.[CM Cover Start Date],1,0) OVER (PARTITION BY cvr.mps_person, cvr.[Membership Number], cvr.[Case Number] ORDER BY cvr.[CM Cover Start Date]) = cvr.[CM Cover Start Date] AND LEAD(cvr.[CM Cover End 
Date],1,0) OVER (PARTITION BY cvr.mps_person, cvr.[Membership Number], cvr.[Case Number] ORDER BY cvr.[CM Cover Start Date]) = cvr.[CM Cover End Date] THEN NULL ELSE DATEDIFF(d,cvr.[CM Cover Start Date],cvr.[CM Cover End Date]) END [CM Cover Days], (SELECT MIN(cvr_sd.[CM Cover Start Date]) FROM cvr cvr_sd WHERE cvr_sd.mps_Person = cvr.mps_Person AND cvr_sd.[Membership Number] = cvr.[Membership Number] AND cvr_sd.[Case Number] = cvr.[Case Number]) [CM Cover Start Date (Min)], (SELECT MAX(cvr_ed.[CM Cover End Date]) FROM cvr cvr_ed WHERE cvr_ed.mps_Person = cvr.mps_Person AND cvr_ed.[Membership Number] = cvr.[Membership Number] AND cvr_ed.[Case Number] = cvr.[Case Number]) [CM Cover End Date (Max)], (SELECT COUNT(cvr_rw.mps_Person) FROM cvr cvr_rw WHERE cvr_rw.mps_Person = cvr.mps_Person AND cvr_rw.[Membership Number] = cvr.[Membership Number] AND cvr_rw.[Case Number] = cvr.[Case Number]) [Total Rows], cvr.TotalClaimPayments, cvr.TotalNonClaimPayments, cvr.Liability FROM cvr ) ,chk AS ( SELECT cts.mps_Person, cts.[Case Number], cts.IncidentId, cts.[Membership Number], SUM(cts.[CM Cover Days]) AS [CM Cover Days], DATEDIFF(d,cts.[CM Cover Start Date (Min)],cts.[CM Cover End Date (Max)])-MAX(cts.[Total Rows]) AS [CM Cover Days (If Unbroken)] FROM cts GROUP BY cts.mps_Person, cts.[Case Number], cts.IncidentId, cts.[Membership Number], cts.[CM Cover Start Date (Min)], cts.[CM Cover End Date (Max)], cts.[Case Country] ) SELECT cts.Name, cts.[Membership Number], cts.[Cover Status], cts.[Cover Source], cts.[CM Cover Start Date], cts.[CM Cover End Date], CASE WHEN chk.[CM Cover Days] >= DATEDIFF(d,cts.[CM Cover Start Date (Min)],cts.[CM Cover End Date (Max)])-cts.[Total Rows] THEN cts.[CM Cover Start Date (Min)] ELSE NULL END [CM Continuous Cover Start Date], CASE WHEN chk.[CM Cover Days] >= DATEDIFF(d,cts.[CM Cover Start Date (Min)],cts.[CM Cover End Date (Max)])-cts.[Total Rows] THEN cts.[CM Cover End Date (Max)] ELSE NULL END [CM Continuous Cover End Date], a2a.[A2A Decision], a2a.[A2A Decision Reason], cts.[Case Number], cts.[Involved From] AS [Involved From Date], cts.[Involved To] AS [Involved To Date], cts.[Claims Made Notification Date] AS [Claims Made Notification Date], cts.[MPS Claim Date], cts.[Medical/Dental], cts.[Case Country], cts.[Incident Primary Case Type], cts.[Incident Case Types], cts.[Member Involved Case Types], mbr.[# Members Involved], SUM(ISNULL(cts.TotalClaimPayments,0)+ISNULL(cts.TotalNonClaimPayments,0)) AS [TotalSpend], ISNULL(cts.TotalClaimPayments,0) AS [Claim Payments - Total (£)], CAST(cts.Liability*(cts.TotalClaimPayments/100) AS NUMERIC(14,2)) AS [Claim Payments - Apportioned to Member (£)], ISNULL(cts.TotalNonClaimPayments,0) AS [Non-Claim Payments - Total (£)], cts.Liability AS [MPSClaimLiability] FROM cts INNER JOIN chk ON chk.[Case Number] = cts.[Case Number] AND chk.[Membership Number] = cts.[Membership Number] AND chk.mps_Person = cts.mps_Person OUTER APPLY ( SELECT TOP 1 CASE WHEN adx.mps_Decision = 0 THEN 'No' WHEN adx.mps_Decision = 1 THEN 'Yes' WHEN adx.mps_Decision = 2 THEN 'Pending' WHEN adx.mps_Decision = 3 THEN 'No - Member Declined' WHEN adx.mps_Decision = 4 THEN 'Yes - Ex Gratia' WHEN adx.mps_Decision = 5 THEN 'No - Member Uncontactable/Not Responding' ELSE NULL END AS [A2A Decision], adx.mps_DecisionReason AS [A2A Decision Reason] FROM OneMPS_MSCRM.dbo.mps_authoritytoassistdecisionBase AS adb INNER JOIN OneMPS_MSCRM.dbo.mps_authoritytoassistdecisionExtensionBase adx ON adb.mps_authoritytoassistdecisionId = adx.mps_authoritytoassistdecisionId AND 
adb.statuscode IN (2) /*2=Valid*/ INNER JOIN OneMPS_MSCRM.dbo.mps_casepartyExtensionBase cpx ON cpx.mps_casepartyId = adx.mps_CaseParty INNER JOIN OneMPS_MSCRM.dbo.mps_casepartyBase cpb ON cpx.mps_casepartyId = cpb.mps_casepartyId AND cpb.statuscode = 1 /*Active*/ AND cpx.mps_PrimaryRole = 0 WHERE cpx.mps_Case = cts.IncidentId ORDER BY adx.mps_DecisionOn DESC ) AS a2a OUTER APPLY ( SELECT TOP 1 COUNT(cpx.mps_casepartyId) AS [# Members Involved] FROM OneMPS_MSCRM.dbo.mps_casepartyExtensionBase cpx INNER JOIN OneMPS_MSCRM.dbo.mps_casepartyBase cpb ON cpx.mps_casepartyId = cpb.mps_casepartyId AND cpb.statuscode = 1 /*Active*/ AND cpx.mps_PrimaryRole = 0 WHERE cpx.mps_Case = cts.IncidentId ) AS mbr GROUP BY cts.Name, cts.[Membership Number], cts.[Cover Status], cts.[Cover Source], cts.[CM Cover Start Date], cts.[CM Cover End Date], CASE WHEN chk.[CM Cover Days] >= DATEDIFF(d,cts.[CM Cover Start Date (Min)],cts.[CM Cover End Date (Max)])-cts.[Total Rows] THEN cts.[CM Cover Start Date (Min)] ELSE NULL END, CASE WHEN chk.[CM Cover Days] >= DATEDIFF(d,cts.[CM Cover Start Date (Min)],cts.[CM Cover End Date (Max)])-cts.[Total Rows] THEN cts.[CM Cover End Date (Max)] ELSE NULL ENd, a2a.[A2A Decision], a2a.[A2A Decision Reason], cts.[Case Number], cts.[Involved From], cts.[Involved To] , cts.[Claims Made Notification Date], cts.[MPS Claim Date], cts.[Medical/Dental], cts.[Case Country], cts.[Incident Primary Case Type], cts.[Incident Case Types], cts.[Member Involved Case Types], mbr.[# Members Involved], cts.TotalClaimPayments , CAST(cts.Liability*(cts.TotalClaimPayments/100) AS NUMERIC(14,2)), cts.TotalNonClaimPayments, cts.Liability ORDER BY cts.[Involved From] ASC ``` Sample Data shortened. Its Essentially a Data Extract. ``` Name|Membership Number|CoverStatus|Cover Date |Involved From Bob |984684638 |Active |2017-03-01 00:00:00.000|2017-10-27 00:00:00.000 Test|135486968 |Active |2017-07-01 00:00:00.000|2018-03-04 00:00:00.000 ``` The first row will show in SSRS fine, the second row will not be visible at all. The main dates this would work off is the last one [Involved From]. The only thing i can think of is the BST conversion is maybe somehow affecting it as its the only thing i do with the date.<issue_comment>username_1: Load this Query into SP and Give the Source from SP,I hope this will solve the issue since the code is huge. Upvotes: 0 <issue_comment>username_2: > > You have SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; > > May be Same query returning two different data sets due to this isolation level session. As if any table is updated (insert or update or delete) under a transaction and same transaction is not completed that is not committed or roll backed then uncommitted values will displaly (Dirty Read) in select query of "Read Uncommitted" isolation transaction sessions. > > So, SET TRANSACTION ISOLATION LEVEL SNAPSHOT; > > > Upvotes: 0 <issue_comment>username_3: Hi all apologies i appear to have well over thought this. Thank you for those that responded with ideas. It turned out to be the Shared Data Source, we have a live and a test environment. The Data Source said it was pointing at live, but when i checked the connection string (why i didn't think of it earlier) it was actually looking up the test server which has limited data. Couldn't see the wood for the trees with this one :) so lesson learned. REMEMBER TO CHECK YOUR CONNECTION STRING AND NOT JUST THE NAME. Upvotes: 2 [selected_answer]
2018/03/14
332
1,153
<issue_start>username_0: I am new to VB and Json. Here is the code of obtaining the value of the key `ID`. Can I get it directly instead using a loop? ``` data_Obj = JsonConvert.DeserializeObject(Of Dictionary(Of String, Object))(data_result) For Each obj In data_Obj If obj.Key.Equals("ID") Then str_id = obj.Value End If Next ``` Thank you.<issue_comment>username_1: There is no need to have a loop. One of the main reasons to have a Dictionary is the very fast access to its elements. You could use [TryGetValue](https://msdn.microsoft.com/en-us/library/bb347013(v=vs.110).aspx) ``` data_Obj = JsonConvert.DeserializeObject(Of Dictionary(Of String, Object))(data_result) Dim str_id as String if data_Obj.TryGetValue("ID", str_id) Then ' SUCCESS else ' FAILURE end if ``` This is safest method because it doesn't trigger an exception in case your key doesn't exist in the returned dictionary. Upvotes: 2 <issue_comment>username_2: try this ``` data_Obj = JsonConvert.DeserializeObject(Of Dictionary(Of String, Object))(data_result) str_id = data_Obj.item("ID") ``` as if data\_Obj is a normal Dictionary Upvotes: 0
2018/03/14
1,135
3,810
<issue_start>username_0: ``` try { List stockistIds=updateStockMapper.getStockInvoiceId(stockmodel); String myList = new String(); for (UpdateStockModel x : stockistIds) { //System.out.println("stockist list id.." + x.getStockInvoiceIds()); myList = myList.concat(x.getStockInvoiceIds()); myList = myList.concat(","); } List list = new ArrayList(); list.add(myList); System.out.println("list.." + list); System.out.println("stockInvoiceId is.." + stockmodel.getStockInvoiceIds()); System.out.println("list status.." +list.contains(stockmodel.getStockInvoiceIds())); if (list.contains(stockmodel.getStockInvoiceIds()) ==true){ return true; } else { return true; } } ``` Output: ``` list..[47,49,50,51,52,53,54,55,56,57,58,59,60,62,64,65,66,67,68,69,70,71,72,74,75,] stockInvoiceId is..66 list status..false return else ``` Here 66 existed in list but return false.I need to get true<issue_comment>username_1: The problem is in the `if` statement Change: ``` if (list.contains(stockmodel.getStockInvoiceIds()) ==true){ return true; } else { return true; } ``` to: ``` if (list.contains(stockmodel.getStockInvoiceIds()) ==true){ return true; } else { return false; } ``` Upvotes: -1 <issue_comment>username_2: It looks like you setup the list varible as as an Arry with one entry. replacing the statement ``` list.contains(..) ``` with ``` stockistIds.contains(..) ``` should do the trick. Upvotes: -1 <issue_comment>username_2: Try to initalize the list as follows: ``` List list = new ArrayList(); for (UpdateStockModel x : stockistIds) { //System.out.println("stockist list id.." + x.getStockInvoiceIds()); list.add(x.getStockInvoiceIds()); } ``` Then you can compare agains a list and not against a String. Upvotes: 0 <issue_comment>username_3: I think this is what you're after: ``` Set ids = updateStockMapper.getStockInvoiceId(stockmodel) .stream() .map(usm -> usm.getStockInvoiceIds()) .collect(Collectors.toSet()); String id = stockmodel.getStockInvoiceIds(); return ids.contains(id); ``` Upvotes: 2 <issue_comment>username_4: Okay lets get it straight here. You got a List full of objects which contain a ID. You get the IDs from the object and concate them to a single large String. Later on you add this single String to an ArrayList and expect the `List.contains()` method to find a proper match for you. This is **not** how it works. You can either fix this, by calling `list.get(0).contains(...)` which will work since you will retrieve your string from the list and check it for the ID or even better,you add the Strings themself to an ArrayList. Doing so would end up similiar to this: ``` List stockistIds=updateStockMapper.getStockInvoiceId(stockmodel); List myList = new ArrayList<>(); for (UpdateStockModel x : stockistIds) { myList.add(x.getStockInvoiceIds()); } ``` Doing so will replace the following part: ``` //This all becomes useless since you will already have a list with proper objects. List list = new ArrayList(); list.add(myList); System.out.println("list.." + list); System.out.println("stockInvoiceId is.." + stockmodel.getStockInvoiceIds()); System.out.println("list status.." +list.contains(stockmodel.getStockInvoiceIds())); ``` It's not rocket science. Think of Lists as they were just more dynamic and flexible Arrays. Upvotes: 2 <issue_comment>username_5: ``` // if the element that you want to check is of string type String value= "66";// you use element instead of 66 Boolean flag=false; if (list.contains(value)){ flag=true; } else { flag=false; } //you can use flag where you want ``` Upvotes: 0
2018/03/14
1,341
4,013
<issue_start>username_0: ``` State City DL,UP DELHI: Karol Bag,Ashok Nagr UttarPradesh: Noida,Lucknow ``` OutPut ``` State City DL KarolBag DL Ashok Nagr UP Noida UP Lucknow ``` i have created a function to split the value, but while cross applying this, it is giving umnappropriate result, Like DL-Lucknow. I want the exact result.<issue_comment>username_1: Try this: I don't think it's an efficient way. But it gives solution if your data stored in the same pattern. ``` DECLARE @City VARCHAR(50) = 'DL,UP', @Str VARCHAR(150)='DELHI: Karol Bag,Ashok Nagr UttarPradesh: Noida,Lucknow' DECLARE @STr1 VARCHAR(150),@STr2 VARCHAR(150) SELECT @Str1=REVERSE(RIGHT(REVERSE(LEFT(@Str,LEN(@Str)-CHARINDEX(':',REVERSE(@Str)))),LEN(REVERSE(LEFT(@Str,LEN(@Str)-CHARINDEX(':',REVERSE(@Str)))))-CHARINDEX(' ', REVERSE(LEFT(@Str,LEN(@Str)-CHARINDEX(':',REVERSE(@Str))))))) ,@STr2=RIGHT(@Str,LEN((RIGHT(REVERSE(LEFT(@Str,LEN(@Str)-CHARINDEX(':',REVERSE(@Str)))),LEN(REVERSE(LEFT(@Str,LEN(@Str)-CHARINDEX(':',REVERSE(@Str)))))-CHARINDEX(' ', REVERSE(LEFT(@Str,LEN(@Str)-CHARINDEX(':',REVERSE(@Str))))))))) SELECT @STr1=RIGHT(@STr1,LEN(@STr1)-(CHARINDEX(':', @STr1)+1)),@STr2=RIGHT(@STr2,LEN(@STr2)-(CHARINDEX(':', @STr2)+1)) SELECT A.value [State],D.value City FROM dbo.fn_Split(@City,',') A INNER JOIN( SELECT 0 ID,* FROM dbo.fn_Split(@STr1,',') UNION SELECT 1,* FROM dbo.fn_Split(@STr2,',') )D ON D.ID=A.idx ``` **OutPut:** ``` State City DL Karol Bag DL Ashok Nagr UP Noida UP Lucknow ``` Upvotes: 1 <issue_comment>username_2: I will not say that is a good idea to store data in that way, but if so, you can try this solution (based on your example) ``` -------------------------------------------------------------------------------- --create sample data set DECLARE @tbl AS TABLE([State] VARCHAR(100), City VARCHAR(1000)); INSERT INTO @tbl(State, City) VALUES('DL,UP,Some', 'DELHI: Karol Bag,Ashok Nagr UttarPradesh: Noida,Lucknow SomeState: Same City1, Some City2'); ---------------------------------------------------------- --Create sequence (can be replaced by recursive cte) DECLARE @Tally TABLE(N INT); DECLARE @i AS INT=1; WHILE @i !=1000 BEGIN INSERT INTO @Tally(N)VALUES(@i); SET @i+=1; END; ---------------------------------------------------------- --Query (2 cte to split each fied and assign ID) ;WITH StatesSplit AS (SELECT StateID=ROW_NUMBER() OVER (ORDER BY N), StateShort=REPLACE(REPLACE(P1, LEAD(P1, 1, '') OVER (ORDER BY N), ''), ',', '') FROM @tbl AS E INNER JOIN @Tally AS T ON SUBSTRING(','+E.State, T.N, 1)=',' CROSS APPLY(SELECT STUFF(','+E.State, 1, n, '') AS P1) AS Stage1 ), CitiesSplit AS (SELECT StateLong = REPLACE(States, ':', ''), ComboCities = REPLACE(REPLACE(P2, LEAD(P2, 1, '') OVER (ORDER BY N), ''), States, ''), StateID=ROW_NUMBER() OVER (ORDER BY N) FROM @tbl AS E INNER JOIN @Tally AS T ON SUBSTRING(E.City, T.N, 1)=':' CROSS APPLY(SELECT REVERSE(STUFF(E.City, n+1, 999, '')) AS P1) AS Stage1 CROSS APPLY(SELECT REVERSE(STUFF(P1+' ', CHARINDEX(' ', P1+' '), 999, '')) AS States) AS Stage2 CROSS APPLY(SELECT STUFF(' '+City, 1, PATINDEX('%'+States+'%', City), '') AS P2) AS Stage3 ) --Final Output SELECT S.StateShort, C.StateLong, Cities=LTRIM(REPLACE(REPLACE(P1, LEAD(P1, 1, '') OVER (PARTITION BY S.StateShort ORDER BY N), ''), ',', '')) FROM StatesSplit AS S JOIN CitiesSplit AS C ON S.StateID=C.StateID INNER JOIN @Tally AS T ON SUBSTRING(','+C.ComboCities, T.N, 1)=',' CROSS APPLY(SELECT STUFF(','+C.ComboCities, 1, n, '') AS P1) AS Stage1 ORDER BY 1 ``` Test below: 
<https://data.stackexchange.com/stackoverflow/query/822004/ccccombo> [![enter image description here](https://i.stack.imgur.com/pfzUQ.png)](https://i.stack.imgur.com/pfzUQ.png) Upvotes: 0
2018/03/14
1,596
5,522
<issue_start>username_0: I'm new to R so please bear with me! I have a dataframe named `mydata`. Here's a sample of the relevant columns:

```
Backlog.Item.Type       State        Task.Initial.Hours.Estimate   Task.Completed.Hours
Epic                    In Progress  NA                            NA
Feature                 New          NA                            NA
Product Backlog Item    Done         NA                            NA
Task                    Done         5.00                          0.50
Task                    Done         3.00                          0.50
Task                    Done         5.50                          6.50
Task                    Done         2.50                          3.00
Task                    Done         2.00                          5.50
Task                    Done         2.00                          3.00
Product Backlog Item    Done         NA                            NA
Product Backlog Item    Done         NA                            NA
Product Backlog Item    Approved     NA                            NA
Task                    In Progress  NA                            NA
```

Now, what I want to accomplish is the following: I want to select the rows where the value for `Backlog.Item.Type` = Task, `State` = Done and `Task.Initial.Hours.Estimate` & `Task.Completed.Hours` are not NA or 0.00. Once the rows that meet these conditions have been selected, I want to perform the following calculation on them: `Task.Completed.Hours` / (divided by) `Task.Initial.Hours.Estimate` x (multiplied by) 100. I then want to store this new value in a new column and calculate the mean of this entire column. Thanks in advance, I hope I have been clear enough and formulated my question in an understandable manner!
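A base-R sketch of the calculation described above (the new column name `Estimate.Accuracy` is an arbitrary choice):

```r
# keep only completed Tasks with usable, non-zero hour values
keep <- with(mydata,
             Backlog.Item.Type == "Task" &
             State == "Done" &
             !is.na(Task.Initial.Hours.Estimate) & Task.Initial.Hours.Estimate != 0 &
             !is.na(Task.Completed.Hours)        & Task.Completed.Hours != 0)

tasks <- mydata[which(keep), ]   # which() drops any NA rows from the logical index

# completed hours as a percentage of the initial estimate
tasks$Estimate.Accuracy <- tasks$Task.Completed.Hours / tasks$Task.Initial.Hours.Estimate * 100

mean(tasks$Estimate.Accuracy)
```

The same thing in dplyr would be a `filter()` followed by `mutate()` and `summarise(mean(...))`.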
2018/03/14
696
1,924
<issue_start>username_0: I am having trouble adjusting the icon background to be transparent but the parent container to have black on the sides of the transparent border. The attached picture says a thousand words. I need the final work to look like the picture below. You can change the markup. [![enter image description here](https://i.stack.imgur.com/jlLUa.jpg)](https://i.stack.imgur.com/jlLUa.jpg) ```css * { font-size: 16px; } .home-contact { background-image: url('https://images.unsplash.com/photo-1518666452233-523dfa23d45e?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&cs=tinysrgb&w=400&fit=max&ixid=eyJhcHBfaWQiOjE0NTg5fQ&s=ac4741c99f65e732e43ab8abb770fbbc'); width: 300px; margin: 0 auto; padding: 1rem; } .contact-num { display: flex; color: #999; } .icon-hold { background: transparent; border-radius: 100%; border: 4px solid #000; color: #000; } .icon-hold span { padding: 1rem 1rem 0 1rem ; font-size: 3rem; } .icon-text { background: #000; display: flex; flex-direction: column; align-items: center; justify-content: center; } .icon-text span { padding: 0 1rem; text-transform: uppercase; letter-spacing: 0.1rem; } .triangle-right { width: 0; height: 0; border-top: 50px solid transparent; border-left: 50px solid black; border-bottom: 50px solid transparent; } ``` ```html Book Now 0701 000 659 ```<issue_comment>username_1: Something along the lines of ``` Book Now 0701 000 659 ``` That will give you a transparent background for the FontAwesome-Icon. That fancy rounded thing around it you will have to figure out for yourself ;-) Upvotes: -1 <issue_comment>username_2: Try to build an svg with <http://editor.method.ac/>. I built it within 5 minutes and got this: ``` background Layer 1 ``` Maybe you need to edit it a little more, but I think this will help you out a lot. Upvotes: 0
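One common trick for the mock-up's "transparent circle punched out of a black band" is to keep the icon circle transparent and let an oversized `box-shadow` flood everything around it, clipped by the row. A rough sketch on top of the existing classes (sizes and colours are guesses, not taken from the design):

```css
.contact-num {
  display: flex;
  overflow: hidden;                 /* clips the huge shadow to the bar */
}

.icon-hold {
  background: transparent;          /* the photo stays visible inside the circle */
  border: 4px solid #000;
  border-radius: 50%;
  box-shadow: 0 0 0 9999px #000;    /* fills everything around the circle with black */
}
```

The triangle on the right can then stay as it is, or be replaced by the SVG approach from the second answer.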
2018/03/14
714
2,466
<issue_start>username_0: Consider the following class: ``` class Person attr_accessor :first_name def initialize(█) instance_eval(█) if block_given? end end ``` When I create an instance of Person as follows: ``` person = Person.new do first_name = "Adam" end ``` I expected the following: ``` puts person.first_name ``` to output "Adam". Instead, it outputs only a blank line: the first\_name attribute has ended up with a value of nil. When I create a person likes this, though: ``` person = Person.new do @first_name = "Adam" end ``` The first\_name attribute is set to the expected value. The problem is that I want to use the attr\_accessor in the initialization block, and not the attributes directly. Can this be done?<issue_comment>username_1: Try this: ``` person = Person.new do |obj| obj.first_name = "Adam" end puts person.first_name ``` Upvotes: 0 <issue_comment>username_2: Ruby setters cannot be called without an explicit receiver since local variables take a precedence over method calls. You don’t need to experiment with such an overcomplicated example, the below won’t work as well: ``` class Person attr_accessor :name def set_name(new_name) name = new_name end end ``` only this will: ``` class Person attr_accessor :name def set_name(new_name) # name = new_name does not call `#name=` self.name = new_name end end ``` --- For your example, you must explicitly call the method on a receiver: ``` person = Person.new do self.first_name = "Adam" end ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: If the code is run with warnings enabled (that is `ruby -w yourprogram.rb`) it responds with : "warning: assigned but unused variable - first\_name", with a line-number pointing to `first_name = "Adam"`. So Ruby interprets `first_name` as a variable, not as a method. As others have said, use an explicit reciever: `self.first_name`. Upvotes: 2 <issue_comment>username_4: > > I want to use the attr\_accessor in the initialization block, and not the attributes directly > > > `instance_eval` undermines encapsulation. It gives the block access to instance variables and private methods. Consider passing the person instance into the block instead: ``` class Person attr_accessor :first_name def initialize yield(self) if block_given? end end ``` Usage: ``` adam = Person.new do |p| p.first_name = 'Adam' end #=> # ``` Upvotes: 0
2018/03/14
606
2,187
<issue_start>username_0: Is there any reason why a Xamarin.Forms search bar wouldn't show on Android (currently running Android 7.0). I read that it might be a good idea to do a HeightRequest but even after trying that, the search bar still doesn't show up. Here's what I have in my xaml to initialize the search bar: ``` ``` Any idea how to move forward? UPDATE: The whole layout looks like this: ``` xml version="1.0" encoding="utf-8" ? ``` But it's only set up like this currently in order to see the search bar but it's still not visible<issue_comment>username_1: Your problem is that you have more than one `View` in the root of your `ContentPage`. Group your `View`s under a parent control, an example: ``` xml version="1.0" encoding="utf-8" ? ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: In case that you want a more general way to resolve this and not adding manually a `HeightRequest` you can implement a `SearchBarRenderer` ``` using Android.OS; using AppTTM_AVI.Droid; using Xamarin.Forms; using Xamarin.Forms.Platform.Android; [assembly: ExportRenderer(typeof(SearchBar), typeof(CustomSearchBarRenderer))] namespace App.Droid { /// /// Workaround for searchBar not appearing on Android >= 7 /// public class CustomSearchBarRenderer : SearchBarRenderer { public CustomSearchBarRenderer() { } protected override void OnElementChanged(ElementChangedEventArgs e) { base.OnElementChanged(e); if (e.OldElement != null || Element == null) { return; } if (Build.VERSION.SdkInt >= BuildVersionCodes.N) { Element.HeightRequest = 42; } } } } ``` Source: <https://forums.xamarin.com/discussion/comment/296772/#Comment_296772> Upvotes: 0 <issue_comment>username_3: Android 7.0 and 7.1 Search bar appears first time after installation once I logout or changes the orientation to landscape mode its gone, Have implemented this code as well. ``` protected override void OnElementChanged(ElementChangedEventArgs e) { base.OnElementChanged(e); if (e.OldElement != null || Element == null) return; if(Build.VERSION.SdkInt >= BuildVersionCodes.N) Element.HeightRequest = 40; } ``` Upvotes: 0
2018/03/14
264
1,016
<issue_start>username_0: I'm a bit confused as to how I can specify another .config file in my web.config while retaining parts of the original web config. I want to put my connection strings in another file but when I build the project I get an error about there being multiple `appsettings` elements. I have this: ``` ``` then further down, because it's a Crystal Reports application, these settings are specified. I don't want these keys in my connnectionstrings.config file as they're not relevant. ``` ``` How do I keep my seperate config file and the Crystal settings above, without putting them all in the connectionstrings.config file?<issue_comment>username_1: Try this, maybe! ``` ``` Upvotes: 0 <issue_comment>username_2: You main configuration file(web.config) should look like this ``` xml version="1.0"? ``` Further your separate appSettings.config should look like this ``` xml version="1.0" encoding="utf-8"? ``` This is how we have worked in our project. Upvotes: 2 [selected_answer]
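What usually resolves the "multiple appSettings elements" error is leaving `appSettings` (with the Crystal keys) in `web.config` and moving only `connectionStrings` out via `configSource`. A sketch with placeholder names and connection details:

```xml
<!-- web.config: Crystal keys stay in appSettings, connection strings move out -->
<configuration>
  <appSettings>
    <!-- ... the existing Crystal Reports keys remain here unchanged ... -->
  </appSettings>
  <connectionStrings configSource="connectionStrings.config" />
</configuration>
```

```xml
<!-- connectionStrings.config, placed next to web.config -->
<connectionStrings>
  <add name="MyDb"
       connectionString="Data Source=.;Initial Catalog=MyDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```

`configSource` replaces the whole section, so the external file must contain the complete `<connectionStrings>` element and nothing else.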
2018/03/14
1,243
4,844
<issue_start>username_0: Here I am first retrieving an image from firebase then adding it to an locationImage array,which will be later added to collectionView. ``` import UIKit import Firebase import FirebaseStorage class ViewController: UIViewController, UICollectionViewDataSource, UICollectionViewDelegate { var locationImage = [UIImage(named: "hawai"), UIImage(named: "mountain")] override func viewDidLoad() { super.viewDidLoad() retrieveData() } func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { print(locationImage.count) return locationImage.count } func retrieveData(){ let database = FIRDatabase.database().reference() let storage = FIRStorage.storage().reference() let imageRef = storage.child("blue blurr.png") imageRef.data(withMaxSize: (1*1000*1000)) { (data, error) in if error == nil{ let tempImage = UIImage(data: data!) self.locationImage.append(tempImage) print("HELLLLLOOOO WOOOOORRRLLLDDDD") print(self.locationImage.count) } else{ print(error?.localizedDescription) } } return } } ``` **Here the retrieveData() function is calling before collectionView()**.Instead viewdidload should be called first,how can I do that,can someone help ?<issue_comment>username_1: You don't want the collectionView to be called before ViewDidLoad? ``` override func viewDidLoad() { super.viewDidLoad() collectionView.dataSource = self collectionView.delegate = self } ``` But this shouldn't worry you, because if the array you are using to initialise the CollectionView is empty, it wouldn't matter if the call goes to the numberOfItemsInSection method. What you require here is to call a reload after you have data in your locationImage. So right after your self.locationImage.append(tempImage), add: ``` self.collectionView.reloadData() ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: ``` import UIKit import Firebase import FirebaseStorage class ViewController: UIViewController, UICollectionViewDataSource, UICollectionViewDelegate { var locationImage = [UIImage(named: "hawai"), UIImage(named: "mountain")] override func viewDidLoad() { super.viewDidLoad() retrieveData() //Add these two line here remove delegate and datasource from storyborad collectionView.dataSource = self collectionView.delegate = self } func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { print(locationImage.count) return locationImage.count } func retrieveData(){ let database = FIRDatabase.database().reference() let storage = FIRStorage.storage().reference() let imageRef = storage.child("blue blurr.png") imageRef.data(withMaxSize: (1*1000*1000)) { (data, error) in if error == nil{ let tempImage = UIImage(data: data!) self.locationImage.append(tempImage) print("HELLLLLOOOO WOOOOORRRLLLDDDD") print(self.locationImage.count) } else{ print(error?.localizedDescription) } } return } } ``` Upvotes: 0 <issue_comment>username_3: **100% working** Where is your IBOutlet for `UICollectionView`? ``` @IBOutlet weak var collectionView: UICollectionView! 
``` Set first IBOutlet & try this code: ``` import UIKit import Firebase import FirebaseStorage class ViewController: UIViewController, UICollectionViewDataSource, UICollectionViewDelegate { var locationImage = [UIImage(named: "hawai"), UIImage(named: "mountain")] override func viewDidLoad() { super.viewDidLoad() //Add these two line here remove delegate and datasource from storyborad collectionView.dataSource = self collectionView.delegate = self retrieveData() } func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { print(locationImage.count) return locationImage.count } func retrieveData(){ let database = FIRDatabase.database().reference() let storage = FIRStorage.storage().reference() let imageRef = storage.child("blue blurr.png") imageRef.data(withMaxSize: (1*1000*1000)) { (data, error) in if error == nil{ let tempImage = UIImage(data: data!) self.locationImage.append(tempImage) print("HELLLLLOOOO WOOOOORRRLLLDDDD") print(self.locationImage.count) } else{ print(error?.localizedDescription) } } return } ``` Upvotes: 0 <issue_comment>username_4: a simple solution is that put sleep your function for some sec something like this ``` func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell { do { sleep(1) } ``` EDIT: formatted function Upvotes: -1
2018/03/14
750
2,576
<issue_start>username_0: Basically, I create a checkbox range with pure JS. What I can't figure out, how to make checked range impossible to uncheck. For example you check **2** and **5**. So number **3** and **4** should be impossible to uncheck, but **2** and **5** is possible to uncheck. ``` * One * Two * Three * Four * Five * Six * Seven // JavaScript var el = document.getElementsByTagName("input"); var lastChecked = null; // Iterate through every 'input' element and add an event listener = click for(var i = 0; i < el.length; i++){ el[i].addEventListener("click", myFunction); } function myFunction() { // In this statement we check, which input is checked..✓ if (!lastChecked) { lastChecked = this; return; } // Declaring 'from' and 'to' and getting index number of an array-like inputs var from = [].indexOf.call(el, this); var to = [].indexOf.call(el, lastChecked); /* Here we will know which 'check' will be the min and which will be the max with the help of Math metods */ var start = Math.min(from, to); var end = Math.max(from, to) + 1; // Here we create an array from NodeList var arr = Array.prototype.slice.call(el); // Here we get a new array, where we declare the start and the end // Start is the min, end is the max var slicedArr = arr.slice(start, end); // Now we iterate through every sliced input, and set its attribute to checked for(var j = 0; j < slicedArr.length; j++){ slicedArr[j].checked = lastChecked.checked; } lastChecked = this; } ``` Thank you for any help guys. It is my first post on stackoverflow by the way, so I'm verry sorry if I didn't post it correctly. Thank you.<issue_comment>username_1: Get the element via attribute selector ``` for (let i=start; i<=end; i++){ document.querySelector(`[value="${i}"]`).disabled = true; } ``` Here it is building the string in the selector with template literals <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Template_literals> And just setting the disabled attribute in the checkboxes via Javascript ``` document.querySelector('[value="7"]').disabled = true; ``` it will get an HTML like ``` ``` <https://www.w3schools.com/jsref/prop_checkbox_disabled.asp> Upvotes: 2 [selected_answer]<issue_comment>username_2: It's impossible to make checkboxes uncheckable. Sure, you can use `document.getElementById("elemnt-id").disabled = true;` or via `disabled` attribute in `html`, but there's always a way to uncheck them manually via `Developer tools`. So you should have some validation on server too. Upvotes: 2
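Building on the first answer, the in-between boxes can be locked directly inside the existing fill loop by disabling every index except the two endpoints. A sketch of the changed loop (and, as the second answer points out, `disabled` is only a UI hint, so anything important still needs server-side validation):

```js
for (var j = 0; j < slicedArr.length; j++) {
  slicedArr[j].checked = lastChecked.checked;

  // the two boxes the user actually clicked stay toggleable,
  // everything strictly between them is locked while the range is checked
  var isEndpoint = (j === 0 || j === slicedArr.length - 1);
  slicedArr[j].disabled = lastChecked.checked && !isEndpoint;
}
```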
2018/03/14
589
2,046
<issue_start>username_0: I have the following pattern of code in a lot of methods: ``` $attempts = 0; do { $response = $this->DOSOMETHIING($data, $info); sleep(1); $attempts++; } while ($attempts < 5); ``` What I would like to do is have a helper method for this while loop that can somehow be sent a specific method call. So something like this: ``` $response = $this->execute($this->DOSOMETHIING($data, $info)); ``` The helper method: ``` function execute($method){ $attempts = 0; do { $response = $method(); <<< I know! sleep(1); $attempts++; } while ($attempts < 5); return $response; } ``` The trouble is that the method call being sent to the helper method will be one of 3 different method calls and they all have a different number of parameters, so it's not like I can send the method and the parameters separately.<issue_comment>username_1: You could make use of [call\_user\_func\_array](http://php.net/manual/en/function.call-user-func-array.php) which will return the value of your callback. Upvotes: 1 <issue_comment>username_2: Looks like you need closure pattern : <http://php.net/manual/en/class.closure.php> bellow code use same "execute" function for two kind of traitment : ``` public function __construct() { } public function execute($method) { $attempts = 0; do { $response = $method(); sleep(1); $attempts++; } while ($attempts < 5); return $response; } public function foo($data, $info) { //Do something return array_merge($data,$info); } public function bar($other) { echo 'Hello '.$other; } public function main() { $data = ['foo' => 'bar']; $info = ['some' => 'info']; $other = 'world'; $return = $this->execute(function() use ($data, $info) { return $this->foo($data,$info); }); var_dump($return); $this->execute(function() use ($other) { $this->bar($other); }); } } $tester = new Foo(); $tester->main(); ``` Upvotes: 3 [selected_answer]
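Since the three methods take different numbers of parameters, the helper can also be written with variadics plus `call_user_func_array` (the approach the first answer hints at), which avoids building a closure at every call site. A sketch with a stand-in method name; the retry/sleep loop is kept as in the question:

```php
class Client
{
    // $method is any callable: [$this, 'doSomething'], a closure, 'strlen', ...
    public function execute(callable $method, ...$args)
    {
        $attempts = 0;
        do {
            $response = call_user_func_array($method, $args); // PHP 7+: $method(...$args)
            sleep(1);
            $attempts++;
        } while ($attempts < 5);

        return $response;
    }

    public function doSomething($data, $info)
    {
        // placeholder for the real work
        return [$data, $info];
    }

    public function run($data, $info)
    {
        return $this->execute([$this, 'doSomething'], $data, $info);
    }
}
```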
2018/03/14
330
1,124
<issue_start>username_0: I'm trying to convert the dates in my date column from: `yyyy-mm-dd` to `Month dd, yyyy` So far I've tried the below: ``` SELECT To_char(sq2.date_column, Month DD, YYYY) ``` but get the below error, despite my pattern [aligning to the documentation](https://www.postgresql.org/docs/current/static/functions-formatting.html) ``` ERROR: syntax error at or near "DD" ``` I noticed in the documentation that `To_char` isn't explicitly referenced for date conversion (only time). Also, I'm not necessarily trying to alter the DATE type to VARCHAR. I would be happy to keep the data in the date\_column as DATE type and just change the pattern.<issue_comment>username_1: You were damn close. You just missed the `'` quotes around the second parameter. `SELECT To_char('2018-01-20'::date, 'Month DD, YYYY')` Using your table column: `SELECT To_char(sq2.date_column, 'Month DD, YYYY')` Upvotes: 1 <issue_comment>username_2: You have to wrap the format pattern in single quotes, before Month and after YYYY. You can use the following query for your required result: SELECT To\_char(sq2.date\_column, 'Month DD, YYYY') Upvotes: 0
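One small addition, since the question also asks about keeping the column as a DATE: a DATE column stores no display pattern at all, so the column type stays DATE and `to_char` is applied only in the query output. A sketch (the table name is a hypothetical stand-in for the question's `sq2` source; `FMMonth` just trims the blank padding that plain `Month` adds):

```sql
-- date_column remains of type DATE; to_char only formats the output.
SELECT date_column,
       to_char(date_column, 'FMMonth DD, YYYY') AS formatted_date
FROM   some_table        -- hypothetical source standing in for sq2
ORDER  BY date_column;
```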
2018/03/14
305
787
<issue_start>username_0: I would like to create a number with N digits. *Example;* `myVar=543` and I would like it with 6 digits so myVar will be `000543` ``` myVar=44345 will be 044345 myVar=1 will be 0000001 ... ``` I do that with Batch, so just with Windows Batch commands.<issue_comment>username_1: add 6 zeros at the front of the number and then cut the last 6 characters: ``` set myVar=51 echo 1: %myVar% set myVar=000000%myvar% echo 2: %myVar% set myVar=%myVar:~-6% echo 3: %myVar% ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: username_1's method works. It can also be done using PowerShell formatting. ``` SET "N=123.4" FOR /F %%n IN ('powershell -NoLogo -NoProfile -Command "([double]%N%).ToString('000000')"') DO (SET "NZ=%%n") ECHO %NZ% ``` Upvotes: 0
2018/03/14
1,356
4,359
<issue_start>username_0: i try to make a trigger button from a div children with the same class,but my code below seems not working properly, i try to make a looping at first to get the class box then try to get the children from each box, but it seems not working. ```js var container = document.querySelector(".container"); var box = document.getElementsByClassName("box"); for(var i = 0; i < box.length; i++){ var x = box[i].children; x.addEventListener("click", function(){ console.log('hello world') }) } ``` ```css .container { border: 1px solid black; padding: 10px; } .box { width: 100px; height: 100px; background: red; } ``` ```html JS Bin click me click me click me click me ```<issue_comment>username_1: > > try to make a looping at first to get the class box then try to get > the children from each box, but it seems not working. > > > `children` returns **List of Elements**, you need to access the first one (**button**) ``` var x = box[i].children[0]; ``` **Demo** ```js var container = document.querySelector(".container"); var box = document.getElementsByClassName("box"); for(var i = 0; i < box.length; i++){ var x = box[i].children[0]; x.addEventListener("click", function(){ console.log('hello world') }) } ``` ```css .container { border: 1px solid black; padding: 10px; } .box { width: 100px; height: 100px; background: red; } ``` ```html JS Bin click me click me click me click me ``` **Note** * If there are multiple `button`s inside each `box`, then you need to iterate `children`. Upvotes: 1 <issue_comment>username_2: `box[i].children` is returning a collection (see <https://developer.mozilla.org/en-US/docs/Web/API/ParentNode/children> ) You need to loop over it to get each nodes. Or better, access the node you want using `querySelector` : ```js var container = document.querySelector(".container"); var box = document.getElementsByClassName("box"); for(var i = 0; i < box.length; i++){ var button = box[i].querySelector('button'); button.addEventListener("click", function(){ console.log('hello world') }) } ``` ```css .container { border: 1px solid black; padding: 10px; } .box { width: 100px; height: 100px; background: red; } ``` ```html JS Bin click me click me click me click me ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: If you use [`querySelectorAll`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll) you can simplify that a lot. [`querySelectorAll`](https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelectorAll) takes a CSS selector as a parameter and makes it very easy to narrow the nodelist down to just the elements you want, e.g. the `button`'s. 
Stack snippet ```js var buttons = document.querySelectorAll(".container .box button"); for(var i = 0; i < buttons.length; i++){ buttons[i].addEventListener("click", function(){ console.log('hello world') }) } ``` ```css .container { border: 1px solid black; padding: 10px; } .box { width: 100px; height: 100px; background: red; } ``` ```html JS Bin click me click me click me click me ``` --- Or like this, using the `btn` class Stack snippet ```js var buttons = document.querySelectorAll(".container .btn"); for(var i = 0; i < buttons.length; i++){ buttons[i].addEventListener("click", function(){ console.log('hello world') }) } ``` ```css .container { border: 1px solid black; padding: 10px; } .box { width: 100px; height: 100px; background: red; } ``` ```html JS Bin click me click me click me click me ``` Upvotes: 1 <issue_comment>username_4: `children` method returns an array of matching elements, you have to specify to which element you want to addEventListened, even if it's one child you need to indicate `x[0].addEventListener()` That suppose to work for any amount of children inside: ``` var container = document.querySelector(".container"); var box = document.getElementsByClassName("box"); for(var i = 0; i < box.length; i++){ var x = box[i].children; for (let i=0; i < x.length; i++) { x[i].addEventListener("click", function(){ console.log('hello world') }) } } ``` Upvotes: 0
2018/03/14
425
1,379
<issue_start>username_0: I have a simple todo list app. I want to increase the `width` of the text field when it is `hovered` on, over the course of `3 seconds`. I have used the `transition` property but it is not working correctly. The **problem** is that the `width` increases instantly when the mouse pointer hovers over the text field instead of increasing over the course of `3 seconds`. What am I doing wrong here? Here's the code [codepen](https://codepen.io/anon/pen/NYxJEE)<issue_comment>username_1: You have to declare the changing CSS property (`width`) on the `input` itself, so the browser has a starting value to transition from, like this. ``` input{ width: 100px; transition: width 3s ease-out; } input:hover{ width: 300px; } ``` Your code is missing a `width` in the `input` rule. Upvotes: 1 <issue_comment>username_2: It's because your `input` does not have a `width` property in its default state, so it is `auto`. A transition will not work from `auto` to a fixed value. Just add a `width` property to the `input` and it will work. **Here is the working code.** ```css .main-container { border: 1px solid #000; width: 45%; text-align: center; } input { width: 120px; padding: 8px; border-radius: 7px; transition: width 3s; border: 2px solid #222; } input:hover { border: 2px solid green; width: 300px; } ``` ```html To-Do List ========== Add Item Remove Last Item ``` Upvotes: 3 [selected_answer]
2018/03/14
381
1,203
<issue_start>username_0: While I compile and debug every procedure I'm getting error SEVERE: NULL Preferences during startup And oracle SQL developer getting hung. ![enter image description here](https://i.stack.imgur.com/SkTTj.png) ![enter image description here](https://i.stack.imgur.com/b0XRe.png)
2018/03/14
556
1,330
<issue_start>username_0: In my MongoDB I have a Date column. I want to display the dates in descending order, month and year wise. Date column: `14-03-2018 12-03-2018 13-03-2018 11-03-2018 10-02-2018 06-01-2018 09-01-2018 08-02-2018 07-01-2017` Expected result: `14-03-2018 13-03-2018 12-03-2018 11-03-2018 10-02-2018 08-02-2018 09-01-2018 07-01-2017` I used **tr dir-paginate date on dates orderBy:'LastFollowupDate':'desc'** but it is not displaying what I expect. How can I achieve this in AngularJS?<issue_comment>username_1: Try this ``` $scope.sortDate= function(dt) { var date = new Date(dt); return date; }; ``` and ``` | date | | --- | | {{date}} | ``` [**DEMO**](https://codepen.io/anon/pen/xWZePX) Upvotes: 1 <issue_comment>username_2: LastFollowDate holds a date such as 14-03-2018: ``` var Sort_date = $scope.GetInformations[i].LastFollowDate.split('-'); // split the date $scope.GetInformations[i].oderdate = Sort_date[2] + Sort_date[1] + Sort_date[0]; ``` The code above joins the parts into one sortable value: 14-03-2018 becomes 20180314 and 15-03-2018 becomes 20180315, so a later date always produces a larger value. To display the most recent date first, order the **ng-repeat** by this field in descending order: > > orderBy:'oderdate':'desc' > > > Upvotes: 0
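A compact sketch of the second approach end to end, assuming the rows live in a `$scope.GetInformations` array and the field is the question's `LastFollowupDate` (the answer calls it `LastFollowDate`; adjust the name to whichever your data actually uses):

```js
// Build a numeric yyyymmdd key once, then let the template sort on it.
angular.forEach($scope.GetInformations, function (item) {
  var parts = item.LastFollowupDate.split('-');                    // "14-03-2018" -> ["14","03","2018"]
  item.orderDate = parseInt(parts[2] + parts[1] + parts[0], 10);   // 20180314
});

// In the template (dir-paginate or a plain ng-repeat):
//   <tr dir-paginate="item in GetInformations | orderBy:'-orderDate'"> ... </tr>
```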
2018/03/14
560
1,759
<issue_start>username_0: I have a PHP script which creates JSON from the data returned by the SQL query that is run. The JSON data is created using this call: ``` echo json_encode($res->fetch_all(MYSQLI_ASSOC), JSON_UNESCAPED_UNICODE | JSON_UNESCAPED_SLASHES | JSON_PRETTY_PRINT ); ``` Now I want to plot a chart using FusionCharts, for which I need to convert this data into an array. I tried a sample JS code to convert the JSON data into a JS array and it worked ``` var data = { "timestamp": "2016-09-23", "value1": "0", "value2": "0", ........., "value49": "0", "value50": "0" }; var arr = []; for (elem in data) { arr.push(data[elem]); } console.log(arr); ``` Now I need to pass the data from the PHP code into the **data** variable. This is just one of the records, which I entered manually. There are over a million records and I need to convert all of them. How do I do this?
2018/03/14
538
1,952
<issue_start>username_0: I am expecting a successful JSON response from the server: > > {"...."} > > > Actual behavior: I get > > ***SyntaxError: Unexpected token b in JSON at position 0*** > > > b is the first letter of the word "badlogin", which the server returns when a wrong combination of userName and password is sent. But when I use Postman with the same key/value combination on the same address, I get the correct response from the server. Steps to reproduce: ``` fetch('http://....', { method: 'POST', headers: { 'Accept': 'application/json', 'Content-Type': 'application/json', }, body: JSON.stringify({ userName: "react", password: "123", }) }).then((response) => response.json()) .then((responseJson) => { console.log(responseJson.message); if(responseJson.success === true) { alert('successufuly loged'); } else{ console.log(responseJson.message); alert(responseJson.message); } }) } ``` }<issue_comment>username_1: You are trying to parse a plain string, and that is the error. Instead of always parsing the body as JSON, add a clause to check whether the request succeeded: ``` }).then((response) => { if(!response.ok) { // handle your error here and return to avoid parsing the string return } return response.json() }) .then() ``` Upvotes: 1 <issue_comment>username_2: It looks like the response you got is not `json`. Try checking what response you are actually getting first: ``` .then((response) => response.text()) .then((responseJson) => { console.log(responseJson); }) ``` Upvotes: 1 <issue_comment>username_3: I solved this issue by using FormData to prepare the data for sending: ``` ...... login = () => { var formData = new FormData(); formData.append('username', 'react'); formData.append('password', '123'); fetch('http://......', { method: 'POST', body: formData ........ ``` Upvotes: 1 [selected_answer]
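Combining the first two answers into one hedged sketch of the login call: branch on the HTTP status and only parse JSON when the server actually sent JSON, falling back to the plain-text body (such as "badlogin") otherwise. The URL and field names are placeholders copied from the question:

```js
fetch('http://your-api/login', {   // placeholder URL
  method: 'POST',
  headers: { 'Accept': 'application/json', 'Content-Type': 'application/json' },
  body: JSON.stringify({ userName: 'react', password: '123' }),
})
  .then((response) => {
    if (!response.ok) {
      // The error body is plain text (e.g. "badlogin"), so read it as text, not JSON.
      return response.text().then((text) => { throw new Error(text); });
    }
    return response.json();
  })
  .then((responseJson) => console.log(responseJson))
  .catch((err) => console.log('login failed:', err.message));
```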
2018/03/14
392
1,376
<issue_start>username_0: Due to reasons outside my control, I have to write code to perform the following action: * If a condition is met, execute the function on a separate thread and wait for it to finish later in the code. * Otherwise, execute the program sequentially by simply calling the function without using a new thread. ```cpp boost::thread thread; if (condition) { thread = boost::thread(function); } else { function(); } thread.join(); ``` Is there a better way of doing this than the above code? Unfortunately, I am quite restricted with what solutions I can use but any advice/idea/opinion is welcome.<issue_comment>username_1: A future that is either built ready or running an async task. Upvotes: 2 <issue_comment>username_2: As alternative, `std::future` might help, something like: ``` std::future fut; if (condition) { fut = std::async(std::launch::async, function); } else { fut = std::async(std::launch::deferred, function); fut.wait(); } // some more code here fut.wait() ``` Upvotes: 2 <issue_comment>username_3: Another take on using futures ``` std::future fut; if (condition) { fut = std::async(std::launch::async, function); } else { function(); std::promise prom; fut = prom.get\_future(); prom.set\_value(); // or std::experimental::make\_ready\_future() if avaliable } // some more code here fut.wait() ``` Upvotes: 2
2018/03/14
391
1,317
<issue_start>username_0: When customising the markup of a gallery component (Pollozen SimpleGallery), passing in a slug manually works fine. (The gallery component is used inside a layout). ``` [Gallery] idGallery = 0 markup = "user" slug = "test" ``` When trying to assign a variable that is set by a static page (Rainlab Pages) nothing happens ``` [Gallery] idGallery = 0 markup = "user" slug = {{page.galleryId}} ``` `{{page.galleryId}}` is displayed fine as "test" inside the actual layout. How should I assign viewBag variables to components to make this work?
2018/03/14
481
1,590
<issue_start>username_0: I have a problem using async await in react, this is my .babelrc ``` { "presets": [ ["es2015", {"modules": false}], "stage-2", "react" ], "plugins": [ "react-hot-loader/babel", "transform-async-to-generator", ["transform-runtime", { "polyfill": false, "regenerator": true }] ] } ``` I don't have problem using async await to call api but can't setState because the `this` is not defined ``` class App extends Component { testAsync = async () => { const { body } = await requestOuter('GET', 'https://api.github.com/users') console.log(body) //working, body is bunch of array of object this.setState({body}) //this is not defined error? } render() { return this.testAsync()}>test async {(this.state.body || []).map(o => o.login)} } } ```
2018/03/14
1,183
4,439
<issue_start>username_0: I am working on a JavaFX desktop application and I have one button that should read from the memory of an embedded device and print that into a JSON. I have implemented a Task that does that, and this Task is passed as argument to a new thread in the button event handler. The problem is, this only works once. After that, even though new threads are generated on button click, the call() method of the Task is never called again. Here is the code: The Task definition: ``` Task readValDaemon = new Task() { @Override public Void call() { //This functions reads from memory and writes the JSON readDataHI(connection,commandListHI,statusHI); return null; } }; ``` The Thread creation: ``` readData.setOnMouseClicked(new EventHandler() { @Override public void handle(MouseEvent event) { Thread readValThread = new Thread(readValDaemon); readValThread.setDaemon(true); readValThread.start(); } }); ```<issue_comment>username_1: `Task` is kind of the wrong tool for this. It's very purposefully only designed to run once because it's a kind of [future](https://en.wikipedia.org/wiki/Futures_and_promises). It stores the result (in your case null) as a kind of [memoization](https://en.wikipedia.org/wiki/Memoization) to avoid doing expensive operations more times than is necessary. So `Task` is best suited for situations where an expensive computation must be done *just once*, and usually you would want a result from it at some point down the line. The [documentation for `Task`](https://docs.oracle.com/javafx/2/api/javafx/concurrent/Task.html) is very thorough so I would give that a read. In your case, just use a plain `Runnable`. You can use a lambda expression: ``` readData.setOnMouseClicked(new EventHandler() { @Override public void handle(MouseEvent event) { Thread readValThread = new Thread(() -> readDataHI(a, b, c)); readValThread.setDaemon(true); readValThread.start(); } }); ``` As an aside, creating threads manually isn't considered very good practice in modern Java. Strongly consider an [`ExecutorService`](https://docs.oracle.com/javase/9/docs/api/java/util/concurrent/ExecutorService.html) instead. Upvotes: 0 <issue_comment>username_2: As observed in other answers, a [`Task`](https://docs.oracle.com/javase/9/docs/api/javafx/concurrent/Task.html) is an implementation of [`FutureTask`](https://docs.oracle.com/javase/9/docs/api/java/util/concurrent/FutureTask.html). From the `Task` documentation: > > As with `FutureTask`, a `Task` is a one-shot class and cannot be reused. See `Service` for a reusable `Worker`. > > > So you cannot reuse a task. Second and subsequent attempts to run it will just silently fail. You could just create a new task directly every time: ``` private Task createReadValTask() { return new Task() { @Override public Void call() { //This functions reads from memory and writes the JSON readDataHI(connection,commandListHI,statusHI); return null; } }; } ``` and then do ``` readData.setOnMouseClicked(new EventHandler() { @Override public void handle(MouseEvent event) { Thread readValThread = new Thread(createReadValTask()); readValThread.setDaemon(true); readValThread.start(); } }); ``` You could also consider using a [`Service`](https://docs.oracle.com/javase/9/docs/api/javafx/concurrent/Service.html), which is designed for reuse. It basically encapsulates the "create a new task every time" functionality, but adds in a lot of useful UI callbacks. 
A `Service` also manages a thread pool for you (via an `Executor`), so you no longer need to worry that you may be creating too many thread. (The `Executor` can also be specified, if you want to control it.) So, e.g.: ``` Service readValDaemon = new Service() { @Override protected Task createTask() { return new Task() { @Override public Void call() { //This functions reads from memory and writes the JSON readDataHI(connection,commandListHI,statusHI); return null; } }; } }; ``` and then ``` readData.setOnMouseClicked(new EventHandler() { @Override public void handle(MouseEvent event) { readValThread.restart(); } }); ``` If the mouse is clicked while the service is already running, this will automatically cancel the already running task, and restart a new one. You could add in checks if you wanted, or bind the `disable` state of `readData` to the state of the `Service`, if you wanted. Upvotes: 2 [selected_answer]
2018/03/14
1,226
4,887
<issue_start>username_0: **TLDR; It seems that my POSTs (to DRF endpoints) are only CSRF protected, if the client has an authenticated session. This is wrong, and leaves the application option to [login CSRF](https://en.wikipedia.org/wiki/Cross-site_request_forgery#Forging_login_requests) attacks. How can I fix this?** I'm starting to build a django rest framework API for a ReactJS frontend, and we want everything, including the authentication, to be handled via API. We are using SessionAuthentication. If I have an authenticated session, then CSRF works entirely as expected (when auth'd the client should have a CSRF cookie set, and this needs to be paired with the csrfmiddlewaretoken in the POST data). However, when **not** authenticated, no POSTs seem to be subject to CSRF checks. Including the (basic) login APIView that has been created. This leaves the site vulnerable to [login CSRF](https://en.wikipedia.org/wiki/Cross-site_request_forgery#Forging_login_requests) exploits. **Does anyone know how to enforce CSRF checks even on unathenticated sessions?** and/or how DRF seems to bypass CSRF checks for login? Below is my rough setup ... settings.py: ``` REST_FRAMEWORK = { 'DEFAULT_AUTHENTICATION_CLASSES': [ 'rest_framework.authentication.SessionAuthentication', ], 'DEFAULT_PERMISSION_CLASSES': [ 'rest_framework.permissions.IsAuthenticated', ], } ``` views.py: ``` class Login(APIView): permission_classes = (permissions.AllowAny,) @method_decorator(csrf_protect) # shouldn't be needed def post(self, request, format=None): user = authenticate( request, username=request.POST['username'], password=request.POST['<PASSWORD>'] ) # ... perform login logic ... def get(self, request, format=None): """ Client must GET the login to obtain CSRF token """ # Force generation of CSRF token so that it's set in the client get_token(request) return Response(None) ``` urls.py: ``` urlpatterns = [ url(r'^login/$', views.Login.as_view(), name='login'), ] ``` expected behaviour: ``` login_url = reverse('login') login_details = { 'username': self.user.email, 'password': <PASSWORD>, } client = APIClient(enforce_csrf_checks=True) # Try to just POST to a CSRF protected view with no CSRF response = client.post(reverse('login'), login_details) # response status should be 403 Missing or incorrect CSRF # GET the login API first to obtain CSRF client.get(reverse('login')) login_details['csrfmiddlewaretoken'] = client.cookies.get('csrftoken').value # Now POST to the login API with the CSRF cookie and CSRF token in the POST data response = client.post(reverse('login'), login_details) # response status should now be 200 (and a newly rotated CSRF token delivered) ``` actual behaviour: ``` client = APIClient(enforce_csrf_checks=True) # Try to just to a CSRF protected view with no CSRF response = client.post(reverse('login'), login_details) # BROKEN: response status is 200, client is now logged in # Post to the exact same view again, still with no CSRF response = client.post(reverse('login'), login_details) # response status is now 403 # BROKEN: This prooves that this view is protected against CSRF, but ONLY for authenticated sessions. ```<issue_comment>username_1: Django REST Framework is disabling CSRF token requirement when using SessionAuthentication and user is not authenticated. 
This is by design, so as not to break other authentication methods that don't require CSRF validation (because they are not cookie-based), and you should ensure yourself that CSRF is validated on the login request; this is mentioned in the last paragraph of the [`SessionAuthentication` documentation](http://www.django-rest-framework.org/api-guide/authentication/#sessionauthentication). It is advised to either use a non-API login process or ensure that the API-based login process is fully protected. You can check how [DRF's `SessionAuthentication`](https://github.com/encode/django-rest-framework/blob/master/rest_framework/authentication.py#L112-L142) enforces CSRF validation when you are logged in and base your view on that. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can create a child class of APIView that forces CSRF. ``` from rest_framework import views class ForceCRSFAPIView(views.APIView): @classmethod def as_view(cls, **initkwargs): # Force enables CSRF protection. This is needed for unauthenticated API endpoints # because DjangoRestFramework relies on SessionAuthentication for CSRF validation view = super().as_view(**initkwargs) view.csrf_exempt = False return view ``` Then all you need to do is change your login view to descend from this ``` class Login(ForceCRSFAPIView): # ... ``` Upvotes: 2
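For completeness, another common way to get the effect described in the accepted answer is to subclass `SessionAuthentication` and run its CSRF check even when no user is logged in (DRF exposes this as the `enforce_csrf` helper). This is only an illustrative sketch, not code from the original post; the `Login` body is the one from the question:

```python
from rest_framework import permissions
from rest_framework.authentication import SessionAuthentication
from rest_framework.views import APIView


class CsrfEnforcedSessionAuthentication(SessionAuthentication):
    def authenticate(self, request):
        # Run the CSRF check unconditionally, even for anonymous sessions,
        # so a login POST without a valid token is rejected.
        self.enforce_csrf(request)
        return super().authenticate(request)


class Login(APIView):
    authentication_classes = (CsrfEnforcedSessionAuthentication,)
    permission_classes = (permissions.AllowAny,)
    # ... the post()/get() methods from the question go here unchanged ...
```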
2018/03/14
340
1,119
<issue_start>username_0: I tried to uninstall my custom module by using the ``` './odoo.py -d db_name --uninstall module_name(s)' ``` command. But I get the error: > > 'bash: ./odoo.py: No such file or directory' > > > How can I fix it? And how can I tell whether a module is installed or not? Thanks<issue_comment>username_1: There is currently no option to uninstall an Odoo module directly from the command prompt. You can list the additional command-line options that are available (addons-path and so on) using ``` ./odoo-bin --help ``` Upvotes: 0 <issue_comment>username_2: In Odoo 12, I ran the command below in psql: ``` UPDATE ir_module_module SET state = 'to remove' WHERE name = 'app_library'; ``` Once it's updated, restart the Odoo server: ``` python odoo-bin -d demodb (windows) ./odoo-bin -d demodb (linux) ``` The previously installed module is then completely uninstalled. Upvotes: 1 <issue_comment>username_3: Try executing this command: ``` UPDATE ir_module_module SET state = 'uninstalled' WHERE name = 'module_name'; ``` in postgres. Upvotes: 1
2018/03/14
596
2,188
<issue_start>username_0: The [http-proxy-middleware](https://github.com/chimurai/http-proxy-middleware) Node.js module provides a way of re-targeting a request using a function in the option.router parameter. As described [here](https://github.com/chimurai/http-proxy-middleware#http-proxy-middleware-options): ``` router: function(req) { return 'http://localhost:8004'; } ``` I need to implement a process that checks some aspect of the request (headers, URLs... all that information is at hand in the `req` object that the function receives) and returns a 404 error in some cases. Something like this: ``` router: function(req) { if (checkRequest(req)) { return 'http://localhost:8004'; } else { // Don't proxy and return a 404 to the client } } ``` However, I don't know how to implement that `// Don't proxy and return a 404 to the client` part. Looking at http-proxy-middleware it is not obvious (or at least I haven't found the way...). Any help/feedback on this is welcome!<issue_comment>username_1: In the end I solved it by throwing an exception and using the default Express error handler (I didn't mention it in the question, but the proxy lives in an Express-based application). Something like this: ``` app.use('/proxy/:service/', proxy({ ... router: function(req) { if (checkRequest(req)) { // Happy path ... return target; } else { throw 'awful error'; } } })); ... // Handler for global and uncaught errors app.use(function (err, req, res, next) { if (err === 'awful error') { res.status(404).send(); } else { res.status(500).send(); } next(err); }); ``` Upvotes: 1 [selected_answer]<issue_comment>username_2: You can do this in `onProxyReq` instead of throwing and catching an error: ``` app.use('/proxy/:service/', proxy({ ... onProxyReq: (proxyReq, req, res) => { if (checkRequest(req)) { // Happy path ... return target; } else { res.status(404).send(); } } })); ``` Upvotes: 2
2018/03/14
575
2,360
<issue_start>username_0: Let's say I have a Value that is deserialized from a class. ``` public class MyValue { public string MyPropertyA { get; set; } public string MyPropertyB { get; set; } public string DeserializationClass { get; } = typeof(MyValue).Name; } ``` I serialize this using `JsonConvert` class. `MyValue` class has a property `DeserializationClass` that should be used as info from which class the string was serialized from. In other words, when I deserialize the string into an object, this property serves as info which class should be used to deserialize the string. However, I am kinda stuck here as I am not sure how to get back the class from the string. Can anybody help me here? ``` public class Program { void Main() { var serialized = Serialize(); var obj = Deserialize(serialized); } string Serialize() { var objValue = new MyValue { MyPropertyA="Something", MyPropertyB="SomethingElse" }; return JsonConvert.SerializeObject(value); } object Deserialize(string serialized) { //How to deserialize based on 'DeserializationClass' property in serialized string? return = JsonConvert.Deserialize??(serialized); } } ``` EDIT: Modified example to make it more clear what I need as I don't have access to objValue when I need to deserialize the string.<issue_comment>username_1: There is an overload If your `Type` is in form of a Namespace, you can obtain the type from a string representation: ``` Type objValueType = Type.GetType("Namespace.MyValue, MyAssembly"); object deserialized = JsonConvert.Deserialize(objValueType, serialized); ``` Upvotes: 1 <issue_comment>username_2: probably you might need to use JsonSerializerSettings. What you might need to do is ``` JsonSerializerSettings setting = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.All, }; ``` and then while serializing use this setting. ``` var serialized = JsonConvert.SerializeObject(objValue,setting); ``` this will give you Json like this > > > ``` > {"$type":"WPFDatagrid.MyValue, WPFDatagrid","MyPropertyA":"Something","MyPropertyB":"SomethingElse","DeserializationClass":"MyValue"} > > ``` > > from this you can find the name of the class used it to actually get your type. Hope this helps !! Upvotes: 2
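Another possible direction, sketched with Json.NET and a hypothetical type registry (the registry is my own addition, not part of the question): parse the string once as a `JObject`, read the `DeserializationClass` property, and then materialise the matching type. A lookup table is a bit safer than scanning assemblies by name:

```csharp
using System;
using System.Collections.Generic;
using Newtonsoft.Json.Linq;

object Deserialize(string serialized)
{
    var parsed = JObject.Parse(serialized);
    var className = (string)parsed["DeserializationClass"];   // e.g. "MyValue"

    // Hypothetical registry mapping the stored name to a concrete type.
    var knownTypes = new Dictionary<string, Type>
    {
        { nameof(MyValue), typeof(MyValue) },
    };

    return parsed.ToObject(knownTypes[className]);
}
```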
2018/03/14
1,405
4,732
<issue_start>username_0: I'm trying to make a long array composed of the digits 0 - 9 in a random order, meaning there would be no duplicates of the same digit. I'm a novice coder, and this is what I tried to come up with. ``` public static void shuffle() { long[] rn = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0}; Random rand = new Random(); for (int i = 0; i < 10; i++) { int n; n = rand.nextInt(10) + 0; for (int j = 0; j < 10; j++) { if (n != rn[j]) { j++; } else if (n == rn[j]) { n = rand.nextInt(10) + 0; if (j > 0) { j--; } } } rn[i] = n; } for (int l = 0; l < 10; l++) { System.out.print(rn[l]); } System.out.println(); } ``` By my understanding, this shouldn't let any duplicate digits pass, yet it does. Did I do something wrong? PS: I'm not allowed to use collections or array lists. I'm using the Ready To Program IDE which apparently doesn't support those.<issue_comment>username_1: So second one. There are many flaws in it. You should never change your for variable inside of your code only when you really know what you do. ``` if (n != rn[j]){ j++; } ``` That is wrong => you will skip a few elements because you already increase the variable in the for loop. So in each iteration you increase it +2. But because you want to check all other elements from the beginning again when one test failed you have to put in the else part j=-1. Why -1? Because in the last step of the for loop the for loop will increase it by 1 so you will start with 0 again. ``` j = -1; ``` When you change the one above so that you don't increase it you will run into a never ending loop. The reason is, that you are prefilling it with zeros. Would you fill it with 10 or some other numbers (not 0-9) it won't happen. So one way is to fill the array with other numbers or only check the first i numbers you have filled with random numbers: ``` for (int j = 0; j < i; j++) ``` In the end this should work: ``` for (int i = 0; i < 10; i++) { int n; n = rand.nextInt(10); for (int j = 0; j < i; j++) { if (n == rn[j]) { n = rand.nextInt(10); j = -1; } } rn[i] = n; } ``` Upvotes: 0 <issue_comment>username_2: You are indeed doing some peculiar things. * **Changing the loop variable** — Most of the time, one should only modify the loop variable (`i` and `j` in your case) when some serious magic is needed, otherwise, the loop variable should be left alone. * **Voodooish `+ 0`** — Adding 0 to a value is pointless. Regarding your shuffle method, I'm proposing a whole other approach. You have a loop within a loop, so the execution time will exponentially increase when the array is larger. Instead, one could also select a random *index* from the source array, and then fill the next position of the new array with that value. Of course, in order to avoid a value from the source array to be picked twice, we will swap that value with the value on the last position of the source array and decrement its size. ```none index | 0 | 1 | 2 | 3 | 4 | value | 2 | 3 | 5 | 7 | 11 | arraySize: 5 Step 1. Pick random index, where index is less than arraySize (in this example index 1 has been picked) index | 0 | 1 | 2 | 3 | 4 | value | 2 | 3 | 5 | 7 | 11 | arraySize: 5 pickedIndex: 1 Step 2. Swap value at pickedIndex with value at index arraySize − 1 index | 0 | 1 | 2 | 3 | 4 | value | 2 | 11 | 5 | 7 | 3 | arraySize: 5 Step 3. 
Decrease array size, so the value previously on the picked position won't be picked again index | 0 | 1 | 2 | 3 | value | 2 | 11 | 5 | 7 | array size: 4 ``` The code looks like this: ``` private static int[] shuffle(int[] array) { int[] availableNumbers = new int[array.length]; int availableNumbersLength = availableNumbers.length; System.arraycopy(array, 0, availableNumbers, 0, array.length); int[] shuffledArray = new int[availableNumbers.length]; Random r = new Random(); for (int i = 0; i < availableNumbers.length; i++) { int index = r.nextInt(availableNumbersLength); shuffledArray[i] = availableNumbers[index]; availableNumbers[index] = availableNumbers[availableNumbersLength - 1]; availableNumbersLength--; } return shuffledArray; } ``` Note that I copied the input array so that the source array is left unchanged and a new array containing the elements in a random order, is returned. You could also, of course, shuffle the original array. Upvotes: -1
2018/03/14
1,807
5,700
<issue_start>username_0: I want to slide my image to left when click on the right arrow of the image slider. I cant apply the slide left animation.When click right arrow current image hide and next image is showing but not slide animation is happening ```js $('#image-content img:first').addClass('active'); //ON CLICK ON RIGHT ARROW DISPLAY NEXT currentIndex = $('#image-content img').index(this) + 1; $('.right-arrow').on('click', function() { if($('#image-content img.active').index() < ($('#image-content img').length - 1)){ currentIndex++; $('#image-content img.active').animate({width: 'toggle'}).removeClass('active').next('#image-content img').addClass('active'); } else { currentIndex = 1; $("#image-content img").removeClass("active"); $('#image-content img').eq(currentIndex - 1).addClass('active'); } }); //ON CLICK LEFT ARROW DISPLAY PREVIOUS IMAGE $('.left-arrow').on('click', function() { if ($('#image-content img.active').index() > 0) { currentIndex--; $('#image-content img.active').removeClass('active').prev('#image-content img').addClass('active'); } else { currentIndex = $('#image-content img').length; $("#image-content img").removeClass("active"); $('#image-content img').eq(currentIndex - 1).addClass('active') } }); ``` ```css #image-content{ height: 400px; width: 100%; } #image-content img{ position: absolute; height: auto; width: auto; max-height: 380px; max-width: 100%; top: 50%; left: 50%; transform: translate(-50%,-50%); display: none } #image-content img.active{ display: block } ``` ```html ![](https://d30y9cdsu7xlg0.cloudfront.net/png/136304-200.png) ![](http://letssunday.com/assets/upload/product/5aa63613ea17a107011.jpg) ![](http://letssunday.com/assets/upload/productImageGallary/5a5da97dc88ad258479.jpeg) ![](http://letssunday.com/assets/upload/productImageGallary/5a5da97d45e75220450.jpeg) ![](http://letssunday.com/assets/upload/productImageGallary/5a5da97dcf94f110046.jpeg) ![](http://letssunday.com/assets/upload/productImageGallary/5a5da97e6268c505542.jpeg) ![](https://cdn4.iconfinder.com/data/icons/ionicons/512/icon-ios7-arrow-right-128.png) ``` I just want to slide it to left side.Currently working fine except animation.Help Please<issue_comment>username_1: css seems to work, i think there's an issue on youor js. 
```js $('#image-content img:first').addClass('active'); //ON CLICK ON RIGHT ARROW DISPLAY NEXT currentIndex = $('#image-content img').index(this) + 1; $('.right-arrow').on('click', function() { if($('#image-content img.active').index() < ($('#image-content img').length - 1)){ currentIndex++; $('#image-content img.active').animate({width: 'toggle'}).removeClass('active').next('#image-content img').addClass('active'); } else { currentIndex = 1; $("#image-content img").removeClass("active"); $('#image-content img').eq(currentIndex - 1).addClass('active'); } }); //ON CLICK LEFT ARROW DISPLAY PREVIOUS IMAGE $('.left-arrow').on('click', function() { if ($('#image-content img.active').index() > 0) { currentIndex--; $('#image-content img.active').removeClass('active').prev('#image-content img').addClass('active'); } else { currentIndex = $('#image-content img').length; $("#image-content img").removeClass("active"); $('#image-content img').eq(currentIndex - 1).addClass('active') } }); ``` ```css #image-content{ height: 400px; width: 100%; } #image-content img{ position: absolute; height: auto; width: auto; max-height: 380px; max-width: 100%; top: 50%; left: 50%; transform: translate(-50%,-50%); display: none; transition: 1s; } #image-content img.active{ display: block; } ``` ```html ![](https://d30y9cdsu7xlg0.cloudfront.net/png/136304-200.png) ![](http://letssunday.com/assets/upload/product/5aa63613ea17a107011.jpg) ![](http://letssunday.com/assets/upload/productImageGallary/5a5da97dc88ad258479.jpeg) ![](http://letssunday.com/assets/upload/productImageGallary/5a5da97d45e75220450.jpeg) ![](http://letssunday.com/assets/upload/productImageGallary/5a5da97dcf94f110046.jpeg) ![](http://letssunday.com/assets/upload/productImageGallary/5a5da97e6268c505542.jpeg) ![](https://cdn4.iconfinder.com/data/icons/ionicons/512/icon-ios7-arrow-right-128.png) ``` The best option to slide is moving content from out of page to in and the opposite for sliding-out object. Upvotes: -1 <issue_comment>username_1: There's another way for it. Use bootstrap: ```css div#myCarousel, div#myCarousel > div.carousel-inner, div#myCarousel > div.carousel-inner > div.item{ /* selector to edit divs */ } div#myCarousel > div.carousel-inner > div.item > img{ /* selector to edit images */ } ``` ```html Bootstrap Example Carousel Example ---------------- 1. 2. 3. ![Los Angeles](http://letssunday.com/assets/upload/productImageGallary/5a5da97d45e75220450.jpeg) ![Chicago](http://letssunday.com/assets/upload/productImageGallary/5a5da97dcf94f110046.jpeg) ![New york](http://letssunday.com/assets/upload/productImageGallary/5a5da97e6268c505542.jpeg) [Previous](#myCarousel) [Next](#myCarousel) ``` it's working like a charm Upvotes: 1 [selected_answer]
2018/03/14
911
2,907
<issue_start>username_0: I'm trying to build an sign in/sign up form in SweetAlert. Everything works fine but I'm trying to put an select with options that are made of ng-repeat from my array. This is my code : Part of my innerHTML: ``` var registerForm = document.createElement("div"); registerForm.innerHTML = "**Firstname:** **Lastname:** **Login:** **Password:** **Repeat Password:** **E-Mail:** **City:** **Postal Code:** **Adress:** **Country:**{{item.name}} Fields marked with **\*** are required. "; ``` My swal and how I put my html in it: ``` $scope.registerSwal = function(){ swal({ title: 'Sign Up', text: 'Create a new account.', content: registerForm, buttons: { stop: { text: "Cancel", className: "red-modal-button", }, ok: { text: "Register", value: "ok", className: "green-modal-button", }, } }); } ``` My countries array : ``` $scope.countries = {1 : {name: 'Poland', id: 1}, 1 : {name: 'Holland', id: 2}}; ``` This is the output: [![swal](https://i.stack.imgur.com/S1Bt9.png)](https://i.stack.imgur.com/S1Bt9.png) Problem is that my `ng-repeat` in select option in `innerHTML` doesn't work. Is there any way I can do it?<issue_comment>username_1: Change the duplicate keys in the object and it works. ``` $scope.countries = {1 : {name: 'Poland', id: 1}, 2 : {name: 'Holland', id: 2}}; ``` Demo: <https://plnkr.co/edit/pE6EGWS5oV4WUEzCzrCr?p=preview> Upvotes: -1 <issue_comment>username_2: Don't manipulate your DOM in AngularJS witihout using a [directive](https://docs.angularjs.org/guide/directive): > > At a high level, directives are markers on a DOM element (such as an attribute, element name, comment or CSS class) that tell AngularJS's HTML compiler ($compile) to attach a specified behavior to that DOM element (e.g. via event listeners), or even to transform the DOM element and its children. > > > **Note:** `$scope.countries` isn't an array. I transformed it into an array. ### View ``` ``` ### AngularJS application: ``` var myApp = angular.module('myApp',[]); myApp.controller('MyCtrl', function ($scope) { $scope.countries = [ {name: 'Poland', id: 1}, {name: 'Holland', id: 2} ]; }); myApp.directive('myDirective', function () { return { restrict: 'E', scope: { countries: '=' }, template: ` **Firstname:** **Lastname:** **Login:** **Password:** **Repeat Password:** **E-Mail:** **City:** **Postal Code:** **Adress:** **Country:** {{item.name}} Fields marked with **\*** are required. ` } }); ``` **> [Demo fiddle](http://jsfiddle.net/h7symv5n/)** Upvotes: 1 <issue_comment>username_3: You need to compile `registerForm` into the scope to use the `ng-repeat` on the scope. ``` $compile(registerForm)($scope); ``` Upvotes: 0
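Expanding on the `$compile` hint in the last answer: the directives inside the generated markup (`ng-repeat`, `ng-model`, …) are never processed because the element is created outside Angular's compile cycle, so compile it against the scope before handing it to swal. A rough sketch, assuming `$compile` is injected into the controller and `$scope.countries` has been turned into an array as the other answers suggest; `app`/`RegisterCtrl` are placeholder names and the markup is shortened to the select only:

```js
app.controller('RegisterCtrl', function ($scope, $compile) {
  $scope.countries = [{ id: 1, name: 'Poland' }, { id: 2, name: 'Holland' }];

  $scope.registerSwal = function () {
    var registerForm = document.createElement('div');
    registerForm.innerHTML =
      '<select ng-model="selectedCountry">' +
      '  <option ng-repeat="item in countries" value="{{item.id}}">{{item.name}}</option>' +
      '</select>';

    // Let Angular link the directives before SweetAlert shows the element.
    $compile(registerForm)($scope);

    swal({ title: 'Sign Up', text: 'Create a new account.', content: registerForm });
  };
});
```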
2018/03/14
1,079
4,105
<issue_start>username_0: I deployed a .war file through IntelliJ Idea on a Tomcat server. I noticed a character "ä" was not properly displayed while at other places the same character was displayed correctly. I found out that only the special characters that I hard-coded in my .js files were affected. I tried to set all my .js files to UTF-8 in IntelliJ, I also changed all standard encoding settings to UTF-8 but the error didn't go away. All my js files are mapped into one index.js file using webpack, but how exactly I don't know because this is a project initially set up by someone else. I recently made a new interesting observation: When I first open up a browser (tested with Firefox and Chrome) it's displayed incorrectly: [![Wrong display](https://i.stack.imgur.com/u0mVc.png)](https://i.stack.imgur.com/u0mVc.png) On regular reload (`F5`) nothing changes, but when reloading with `CTRL + F5` it's suddenly correct: [![Correct display](https://i.stack.imgur.com/We0FO.png)](https://i.stack.imgur.com/We0FO.png) This really confused me...does anyone have an idea what might be going on here? I used to have the same problems with my Java files, but after changing the encoding in my gradle build file that worked. Ultimately my question is: **What do you think should I change in order for the special characters to always be displayed correctly?**<issue_comment>username_1: > > This really confused me...does anyone have an idea what might be going on here? > > > Caching. Ctrl+F5 tells the browser to reload the resource even if it has it cached. F5 will reuse the resource from cache if it's in cache. > > What do you think should I change in order for the special characters to always be displayed correctly? > > > You may have already done it given the F5/Ctrl+F5 thing above. Basically, ensure that: 1. The files (.js, .html, etc.) are stored in the correct encoding and, when viewed with that encoding, show the characters correctly. Strongly recommend using the same encoding for each type of file, although theoretically it's possible to use UTF-8 for JavaScript files and (say) Windows-1252 for HTML files. But that's just asking for complexity and hassle. 2. Ensure that every step in the pipeline correctly identifies the encoding being used for the files. That means (for instance) that Tomcat needs to include the header `Content-Type: application/javascript; charset=utf-8` or similar for your .js files. (`text/javascript; charset=utf-8` will also work, but is obsolete.) For HTML files, though, [the W3C recommends](https://www.w3.org/International/questions/qa-html-encoding-declarations) including the `meta` header and *omitting* the `charset` from `Content-Type`. 3. Ensure that your HTML files identify the encoding in a `meta` tag near the top of `head` (within the first 1024 bytes) as well: The W3C [provide several reasons](https://www.w3.org/International/questions/qa-html-encoding-declarations) *(same link as the bullet above)* for doing this, such as saving the file locally and opening it (thus not having an HTTP header), making it clear to human and machine readers, etc. Upvotes: 2 [selected_answer]<issue_comment>username_2: I add a similar problem after a tomcat update on a Windows server: the javascripts content corrupted characters at the browser side. The http headers were corrects so I investigated a bit further. On the server, the javascript files were saved in utf-8 without BOM. With Wireshark, I saw that the character 'é' (C3-A9 in the file UTF-8 encoded ) was transmitted as (C3-83-C2-A9). 
It means that **Tomcat was reading the UTF-8 file as ANSI** and gently converted it to UTF-8 a second time! So I just added the BOM to the saved files and that fixed the bug. (REM: it is easy to add the BOM with Notepad++.) But I didn't want to update all the files on the server; I wanted Tomcat to read UTF-8 correctly. The easy fix is to define the file encoding on the default servlet in the **Tomcat web.xml**, like this: ``` <servlet> <servlet-name>default</servlet-name> <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class> <init-param> <param-name>debug</param-name> <param-value>0</param-value> </init-param> <init-param> <param-name>listings</param-name> <param-value>false</param-value> </init-param> <init-param> <param-name>fileEncoding</param-name> <param-value>utf-8</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> ``` Upvotes: 2
2018/03/14
507
1,867
<issue_start>username_0: I know it is going to be a silly question, but I've looked around and found nothing which helped me understand. I have an ionic project, on which I imported [leafletjs](http://leafletjs.com) Now everything works fine, I imported it using this code: ``` import leaflet from 'leaflet'; ``` Now I wanted to add [leaftlet.easyButton](https://github.com/CliffCloud/Leaflet.EasyButton) to my project. In my mind this library should extend the leaflet one, right? ``` import leaflet from 'leaflet'; import leaflet from 'leaflet-easybutton'; ``` This approach give a problem with the namespace, of course. Now what I want to achieve is to use leaflet by extending his methods to include the one provided by the easybutton library. In order to be able to do something like: ``` leaflet.easyButton('fa-globe', function(btn, map){ helloPopup.setLatLng(map.getCenter()).openOn(map); }).addTo( YOUR_LEAFLET_MAP ); ``` So the question is, how do I import the second library in typescript in order to be able to use it as the above example showed?<issue_comment>username_1: I've encounted a similar problem trying to import two different Keyboard plugins. A solution is to name one of the plugins like this: ``` // imports import leaflet from 'leaflet'; import * as easyButton from 'leaflet-easybutton'; // usage leaflet //first plugin easyButton.leaflet //second plugin ``` Upvotes: 1 <issue_comment>username_2: The [Leaflet.EasyButton](https://github.com/CliffCloud/Leaflet.EasyButton) plugin only performs the side effect of attaching some new methods (`easyButton`…) and classes (`Control.EasyButton`…) to the `L` Leaflet global namespace. Therefore you should just need to import it for side effect: ```ts import * as L from 'leaflet'; import 'leaflet-easybutton'; L.easyButton( /* ... */ ); ``` Upvotes: 3 [selected_answer]
2018/03/14
306
1,082
<issue_start>username_0: I got my *pip* install directory in ``` /Library/Python/2.7/site-packages ``` Somehow, after I install the wordbatch library, my *pip* install path changes to ``` /usr/local/lib/python2.7/site-packages ``` Does anyone know how to change the *pip* install path?
2018/03/14
366
1,420
<issue_start>username_0: I have the image as a byte array and want to convert the byte array into a PNG image and add it to an ImageView, as you see in the code below. ``` byte[] imageBytes = webClient.DownloadDataTaskAsync(uri); ImageView view = new ImageView(this.Context); //Here need to add the converted image into ImageView view.SetImageSource(); ``` I achieved this by converting the image bytes into a bitmap and adding the bitmap to the ImageView, but it has a memory problem: since I do this many times and frequently in my code, I run into a memory exception and can't keep adding full bitmaps to the ImageView. So please help me. Thanks.
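No answer is attached to this question here, so purely as a hedged suggestion (not from the original post): in Xamarin.Android the usual cause of this OutOfMemory pattern is decoding every downloaded image at full resolution; `BitmapFactory.Options.InSampleSize` lets you decode the byte array at a reduced size before handing it to the ImageView. The target sizes below are illustrative and must be greater than zero:

```csharp
using Android.Graphics;
using Android.Widget;

Bitmap DecodeSampledBitmap(byte[] imageBytes, int reqWidth, int reqHeight)
{
    // First pass: read only the image dimensions.
    var options = new BitmapFactory.Options { InJustDecodeBounds = true };
    BitmapFactory.DecodeByteArray(imageBytes, 0, imageBytes.Length, options);

    // Pick a power-of-two sample size that brings the bitmap close to the requested size.
    int inSampleSize = 1;
    while (options.OutWidth / (inSampleSize * 2) >= reqWidth &&
           options.OutHeight / (inSampleSize * 2) >= reqHeight)
    {
        inSampleSize *= 2;
    }

    // Second pass: decode the downsampled bitmap.
    options.InJustDecodeBounds = false;
    options.InSampleSize = inSampleSize;
    return BitmapFactory.DecodeByteArray(imageBytes, 0, imageBytes.Length, options);
}

// Usage (e.g. targeting roughly the on-screen size):
// view.SetImageBitmap(DecodeSampledBitmap(imageBytes, 400, 400));
```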
2018/03/14
1,111
4,399
<issue_start>username_0: I added an connector for AJP to my spring boot 2 project ``` @Bean public ServletWebServerFactory servletContainer() { TomcatServletWebServerFactory tomcat = new TomcatServletWebServerFactory() { @Override protected void postProcessContext(Context context) { SecurityConstraint securityConstraint = new SecurityConstraint(); securityConstraint.setUserConstraint("CONFIDENTIAL"); SecurityCollection collection = new SecurityCollection(); collection.addPattern("/*"); securityConstraint.addCollection(collection); context.addConstraint(securityConstraint); } }; tomcat.addAdditionalTomcatConnectors(redirectConnector()); return tomcat; } private Connector redirectConnector() { Connector connector = new Connector("AJP/1.3"); connector.setScheme("http"); connector.setPort(ajpPort); connector.setSecure(false); connector.setAllowTrace(false); return connector; } ``` This works fine. I can now access my spring boot application over my apache webserver. But now if i run my spring boot application i can not do access my spring boot application directly. So this url doesn't work anymore <http://localhost:13080/online/showlogin?m=test> If i disable the AJP Connector the URL works again. I have tried the following ``` private Connector redirectConnector2() { Connector connector = new Connector(TomcatServletWebServerFactory.DEFAULT_PROTOCOL); connector.setScheme("http"); connector.setPort(13080); connector.setSecure(false); connector.setAllowTrace(false); return connector; } ... tomcat.addAdditionalTomcatConnectors(redirectConnector2()); ... ``` But this does not help me.<issue_comment>username_1: This works for me: ``` @Bean public WebServerFactoryCustomizer servletContainer() { return server -> { if (server instanceof TomcatServletWebServerFactory) { ((TomcatServletWebServerFactory) server).addAdditionalTomcatConnectors(redirectConnector()); } }; } private Connector redirectConnector() { Connector connector = new Connector("AJP/1.3"); connector.setScheme("http"); connector.setPort(ajpPort); connector.setSecure(false); connector.setAllowTrace(false); return connector; } ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: We used the code of [username_1 answers](https://stackoverflow.com/a/49277513) for a longer time successfully but it stopped working after we had upgraded to a Spring Boot version > 2.2.4. We got this error message on startup: > > APPLICATION FAILED TO START > > > Description: > > > The Tomcat connector configured to listen on port 1234 failed to start. The port may already be in use or the connector may be misconfigured. > > > Action: > > > Verify the connector's configuration, identify and stop any process that's listening on port 1234, or configure this application to listen on another port. > > > But the port was not used, so what was the problem? The issue was caused by the fix for the [Ghostcat vulnerability](https://nvd.nist.gov/vuln/detail/CVE-2020-1938) of AJP in Tomcat that was included in Spring Boot 2.2.5. 
Now you have two options, either you use AJP with a secret: ``` final Connector connector = new Connector("AJP/1.3"); connector.setScheme("http"); connector.setPort(ajpPort); connector.setAllowTrace(false); final AbstractAjpProtocol protocol = (AbstractAjpProtocol) connector.getProtocolHandler(); connector.setSecure(true); protocol.setSecret(ajpSecret); ``` or without one, but for that you have to explicitly set `setSecretRequired` to `false`: ``` final Connector connector = new Connector("AJP/1.3"); connector.setScheme("http"); connector.setPort(ajpPort); connector.setAllowTrace(false); final AbstractAjpProtocol protocol = (AbstractAjpProtocol) connector.getProtocolHandler(); connector.setSecure(false); protocol.setSecretRequired(false); ``` **Note:** The later solution will make your tomcat vulnerable to Ghostcat again. For more information have a look at this thread: [Springboot -The AJP Connector is configured with secretRequired="true" but the secret attribute is either null or "" after upgrade to 2.2.5](https://stackoverflow.com/questions/60501470/) Upvotes: 3