Dataset columns: date (string, length 10) | nb_tokens (int64, 60 to 629k) | text_size (int64, 234 to 1.02M) | content (string, 234 to 1.02M characters)
date: 2018/03/15 | nb_tokens: 506 | text_size: 1,629
<issue_start>username_0: On cppreference there is this example (<http://en.cppreference.com/w/cpp/language/user_literal>):

```
void operator"" _print(const char* str)
{
    std::cout << str;
}

int main() {
    0x123ABC_print;
}
```

Output: 0x123ABC

And I fail to understand what exactly this is doing. First I thought that 0x123ABC would just be seen as a string, but `0x123ABCHello_print` doesn't compile. Then I thought that `operator<<` is overloaded so that it always prints in hexadecimal form, but `123_print` prints `123`. It's also case-sensitive: `0x123abC_print` prints `0x123abC`. Can someone explain this to me? On one hand it only takes integers as arguments, but on the other it treats them like string literals.<issue_comment>username_1: <http://en.cppreference.com/w/cpp/language/user_literal> `void operator"" _print(const char* str)` shows that your literal is taken as a `const char*` and then printed out, which is why it's case-sensitive. `0x123ABCHello_print` doesn't work because `0x123ABCHello` is not a number; for user-defined string literals you'd need `"0x123ABCHello"_print`. Upvotes: 3 [selected_answer]<issue_comment>username_2: In [the example code](http://en.cppreference.com/w/cpp/language/user_literal) you see:

```
12_w; // calls operator "" _w("12")
```

which means that an integer literal is passed to your user-defined literal as a `const char[]` holding its spelling. Since it's a `const char*`, `operator<<` will just print characters until it hits `\0`; there is no special numeric handling as you'd normally get when printing an integer literal such as `std::cout << 0xBADF00D;`. Upvotes: 2
date: 2018/03/15 | nb_tokens: 570 | text_size: 2,439
<issue_start>username_0: We are using database-per-tenant logic for our application. We currently need to build the connection string dynamically depending on the currently logged-in user. We would really like to use the tenant database context as a service (DI). However, the dbcontext initialization is done in the ConfigureServices method, and the IHttpContextAccessor is null at this stage. Is there a way to call services.AddDbContext but tell it to wait until IHttpContextAccessor is instantiated? Or do I really need to instantiate it each time? Thanks! Here is some of my code:

```
var sp = services.BuildServiceProvider();
services.AddDbContext(options =>
    options.UseSqlServer(
        new DataConnectionAPI(sp.GetService(), sp.GetService()).DataConnectionString(),
        ma => ma.MigrationsAssembly("ESData")));
```

<issue_comment>username_1: I would recommend registering a factory that modifies the connection string at runtime and returns the context instance. `IHttpContextAccessor` is only available during an HTTP request; the same goes for all information needed to access the user's details, because the authorize filter must be executed first. <https://learn.microsoft.com/en-us/aspnet/core/mvc/controllers/filters>

**Edit:** You wouldn't register the db-context in your ConfigureServices method. You would create a constructor within your DbContext that takes [DbContextOptions](https://learn.microsoft.com/en-us/ef/core/miscellaneous/configuring-dbcontext#configuring-dbcontextoptions) (base constructor). You would create a DbContextFactory that injects HttpContextAccessor and whatever else is necessary, then inject this factory at runtime to create a DbContext manually and return the value. Then register the factory:

```
services.AddScoped
```

**Attention:** Don't implement an [IDesignTimeDbContextFactory](https://learn.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.design.idesigntimedbcontextfactory-1?view=efcore-2.0) for this purpose; it should be used for creating migrations only. Upvotes: 0 <issue_comment>username_2: I can suggest not using AddDbContext, and instead registering the services yourself. Something like this:

```
services
    .AddScoped()
    .AddScoped(sp => new DbContextOptionsBuilder()
        .UseSqlServer(
            new DataConnectionAPI(
                sp.GetService(),
                sp.GetService()
            ).DataConnectionString())
        .Options)
```

After that you can build the service provider. Upvotes: 3 [selected_answer]
date: 2018/03/15 | nb_tokens: 620 | text_size: 2,490
<issue_start>username_0: I have a custom `TableViewCell` that contains a `label`. I want to check the size of the `label` when the cell is displayed, to determine whether the text is truncated or not. I found different topics that handle this subject, but nothing worked for me. I tried to check the size in `layoutSubviews`:

```
override func layoutSubviews() {
    super.layoutSubviews()
    print(label.bounds.size.width)
}
```

and also to implement the `willDisplay` delegate:

```
override func tableView(_ tableView: UITableView, willDisplay cell: UITableViewCell, forRowAt indexPath: IndexPath) {
    .....
    print(cell.label.bounds.size.width)
}
```

I always get the width the label had before the view was loaded (the width set in Interface Builder).

**Edit:** If I reload the tableView then I get the right size. Do you have an idea about the cause?
date: 2018/03/15 | nb_tokens: 283 | text_size: 928
<issue_start>username_0:

```
class employee:
    def _init__(self,name, salary):
        self.name = name
        self.salary = salary
    def printemp(self):
        print "Name:", self.name
        print "Salary:", self.salary

emp1 = employee('pavan',29)
emp1.printemp()
```

Whenever I try to execute this, I get the below error:

> Traceback (most recent call last):
>   File "new2.py", line 10, in <module>
>     emp1=employee('pavan',29)
> TypeError: this constructor takes no arguments

<issue_comment>username_1: The constructor method should be named `__init__`; yours is named `_init__`. Upvotes: 1 <issue_comment>username_2: You've missed one `_` before `init`. Here:

```
class employee:
    def __init__(self,name, salary):
        self.name = name
        self.salary = salary
    def printemp(self):
        print ("Name:", self.name)
        print ("Salary:", self.salary)

emp = employee('pavan', 12)
```

Upvotes: 1 [selected_answer]
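Why the error reads the way it does: Python treats only the exact name `__init__` (two leading and two trailing underscores) as the initializer, so the misspelled `_init__` is just an ordinary method, and construction falls back to the default constructor, which accepts no extra arguments. A minimal sketch reproducing the failure (the exact `TypeError` wording differs between Python 2 and 3):

```python
class Employee:
    def _init__(self, name):   # typo: not the special __init__ method
        self.name = name

e = Employee()        # works: the default constructor takes no arguments
# Employee('pavan')   # TypeError: Employee() takes no arguments
e._init__('pavan')    # the misspelled method is still callable by name
print(e.name)         # pavan
```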
date: 2018/03/15 | nb_tokens: 343 | text_size: 1,221
<issue_start>username_0: I have a few variables:

```
var itemCount = 0;
var pageCount = 0;
var groupCount = 0;
```

Along with those variables, I have three buttons with data attributes that match these three variables:

```
ITEM PAGE GROUP
```

What I want to do is: each time I click a specific button, it increments that variable; e.g. I click the ITEM button, itemCount adds 1. I thought I had the function correct, but when I click, the count is not incrementing *(I am displaying some other information in a modal popup as well)*:

```
$('#activeQ').on('show.bs.modal', function (event) {
    var button = $(event.relatedTarget);
    var aCat = button.data('count');
    var theCount = eval(aCat + "Count");
    theCount++;
    console.log(theCount);
});
```
date: 2018/03/15 | nb_tokens: 361 | text_size: 981
<issue_start>username_0: Suppose I have an R function:

```
x <- function(x) {
  x <- substr(x, 1, 1)
  return(x)
}
```

If I run

```
x('abc')
```

I will get

```
[1] "a"
```

In order to allow my function to run on a list, i.e. `a = list('aas', 'cvs', 'mmm')`, I could use the 'lapply' function:

```
lapply(a, x)
```

But how do I incorporate the lapply() function into my function so that my function can run directly on a list? I.e.:

```
x(a)
```

<issue_comment>username_1:

```
new_function <- function(x) {
  lapply(x, substr, 1, 1)
}
```

Upvotes: 1 <issue_comment>username_2: You can use the following trick:

```
fun <- function(x) {
  if (is.list(x)) {
    return(lapply(x, fun))
  }
  x <- substr(x, 1, 1)
  return(x)
}
```

If you want the function to return a character vector rather than a list, replace `lapply` with `sapply`. If you want it to also accept a vector as input, change the condition `is.list(x)` to `length(x) > 1`. Upvotes: 3 [selected_answer]
date: 2018/03/15 | nb_tokens: 1,564 | text_size: 5,353
<issue_start>username_0: I am trying to compare two datetime variables in a SQL Server stored procedure. In the code snippet below, `@createdDate` is taken as user input and then compared with a column of type datetime. I cannot get the `=` comparison to work.

```
set @sqlquery = 'Select v.*, vsc.vidhanSabhaConstituencyName, where 1=1 '
set @sqlquery = @sqlquery +'AND v.createdBy ='''+cast(@createdBy as nvarchar(100))+''''
if(@VoterIdNumber is not null)
    set @sqlquery = @sqlquery+'AND v.voterIDNumber= '''+@VoterIdNumber+''''
if(@createdDate is not null)
    set @sqlquery = @sqlquery+'AND v.dataIsCreated = '''+cast(@createdDate as varchar(100))+''''
else
    set @sqlquery = @sqlquery+'AND v.dataIsCreated= '''+cast(getdate() as varchar(100))+''''
Execute sp_Executesql @sqlquery
```

I've tried casting and converting the `@createdDate` variable without success. It works with other operators like `>=` or `<=` but not with `=`. Help is appreciated.<issue_comment>username_1: I'll just give you an example. Here SMSDATE is a datetime and this gives output for me; try to implement the same logic in your procedure:

```
SELECT COUNT(*) FROM tbl_1 WHERE CONVERT(varchar, SMSDATE, 110) = CONVERT(varchar, GETDATE(), 110)
```

Upvotes: 0 <issue_comment>username_2: First, you shouldn't use `cast` on a `datetime`. Use `convert` instead so you can specify an appropriate format for the result. The output of `cast` is going to depend on your region settings. For instance, when I run `select cast(getdate() as varchar(100));` on my instance of SQL Server, I get the following:

```
Mar 15 2018 9:37AM
```

The precision of this value is obviously far more limited than that of a `datetime`: any two `datetime` values that happen to occur within the same minute will be converted to the same string. So if I try to use `=` to compare this value to values stored in a `datetime` field of a table, I'll find that the comparison will fail if the stored value has a nonzero seconds or milliseconds component. If you must convert a `datetime` to a string, use [`convert`](https://learn.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql) with the style parameter that is most appropriate for how you're going to use it. For instance, if I run `select convert(varchar(100), getdate(), 126);`, I get:

```
2018-03-15T09:43:42.240
```

This is a much better choice of format for a date/time literal because it doesn't discard any data from the original value (like how my first example lost its seconds and milliseconds components) and is usable regardless of your region settings. All that said, <NAME> had a very important comment on your question: you should be passing in a string of parameters rather than trying to concatenate everything. This will work even if the number of parameters that you actually want to include in your predicate is variable. For instance, consider the following example:

```
create table dbo.Test (A int, B datetime);
insert dbo.Test values (1, getdate());
go

declare @A int;
declare @B datetime;
declare @sql nvarchar(max) = 'select * from dbo.Test where 1=1';
if @A is not null set @sql = @sql + ' and A = @A';
if @B is not null set @sql = @sql + ' and B = @B';
print @sql;
exec sp_executesql @sql, N'@A int, @B datetime', @A = @A, @B = @B;
```

If I run this query as written, with `@A` and `@B` both left at null, then the value of `@sql` will simply be:

```
select * from dbo.Test where 1=1
```

If I set `@A` to 1, then I get:

```
select * from dbo.Test where 1=1 and A = @A
```

If I also set `@B` to a non-null value, then I get:

```
select * from dbo.Test where 1=1 and A = @A and B = @B
```

This is much easier to understand and maintain, and it's much safer as well. All the string concatenation you're doing in your original query is just begging for a SQL injection attack. You can either read up on this topic at [this link](https://en.wikipedia.org/wiki/SQL_injection), or simply consider what will happen to your query if I do something like:

```
set @createdBy = 'Admin''; drop table dbo.v; --';
```

This should be enough to go on, but feel free to ask if you have any questions. Upvotes: 0 <issue_comment>username_3: I have modified my stored procedure as suggested by Marc and Aaron. Below is my code snippet.

```
set @sqlquery = N'Select v.*, vsc.vidhanSabhaConstituencyName, lsc.lokSabhaConstituencyName, wc.wardName.....''

SET @sqlquery = @sqlquery + N' AND v.createdBy = @createdBy';

If @VoterIdNumber Is Not Null
    SET @sqlquery = @sqlquery + N' AND v.voterIDNumber = @VoterIDNumber';

If @createdDate Is Null
    SET @createdDate = null

SET @sqlquery = @sqlquery + N' AND CONVERT(date, v.dataIsCreated) = @createdDate';

EXEC sp_executesql @sqlquery,
    N'@createdBy varchar(50), @VoterIDNumber varchar(50), @createdDate date',
    @createdBy = @createdBy,
    @VoterIDNumber = @VoterIdNumber,
    @createdDate = @createdDate
```

Upvotes: -1 [selected_answer]<issue_comment>username_4: You can try like this; here, @date should be your stored procedure's date parameter:

```
declare @date datetime
set @date = '2018-03-13'
select * from user_tbl where cast(createddate as date) = cast(@date as date)
```

Upvotes: 0
date: 2018/03/15 | nb_tokens: 2,041 | text_size: 7,361
<issue_start>username_0: This is my first question on Stack Overflow, so thank you in advance for any help/advice given. I am currently making a "Library Database" using only ASP.NET and C#. It is a university assignment and we are limited to this; data must be saved and retrieved using JSON. While I have been able to add a book listing and display the listing in a grid view, I need to be able to display and edit the information using a standard form, using a drop-down list and text boxes to display each section. The below code snippet shows the Book class used to keep all the variables for the JSON file:

```
public class Book
{
    public string id { get; set; }
    public string title { get; set; }
    public string author { get; set; }
    public string year { get; set; }
    public string publisher { get; set; }
    public string isbn { get; set; }

    public Book(string id, string title, string author, string year, string publisher, string isbn)
    {
        this.id = id;
        this.title = title;
        this.author = author;
        this.year = year;
        this.publisher = publisher;
        this.isbn = isbn;
    }
}
```

The below code shows the other class, used to generate a list from the above variables:

```
public class BookList
{
    public List<Book> bookList { get; set; }

    public BookList()
    {
        bookList = new List<Book>();
    }
}
```

The below code shows what I currently have for my Edit Book page:

```
public partial class EditBook : System.Web.UI.Page
{
    public const string FILENAME = @"C:\Users\User\Documents\Assessments\19383038_CSE2ICX_Assignment3\JsonFiles\BookList.Json";
    string jsonText = " ";
    BookList bookList = new BookList();

    protected void Page_Load(object sender, EventArgs e)
    {
        try
        {
            jsonText = File.ReadAllText(FILENAME);
        }
        catch (FileNotFoundException)
        {
        }

        BookList bookList = JsonConvert.DeserializeObject<BookList>(jsonText);
        JObject jObj = JObject.Parse(jsonText);

        if (!IsPostBack)
        {
            ddl.DataTextField = "id";
            ddl.DataValueField = "id";
            ddl.DataSource = bookList.bookList;
            ddl.DataBind();
        }
    }

    protected void ddl_SelectedIndexChanged(object sender, EventArgs e)
    {
        txtEnterID.Text = "";
    }
}
```

Now, I have managed to populate the drop-down list, but after hours of research and racking my brain, I cannot find a way to translate that onto the text boxes. Currently I have no way of having the textboxes a) identify which value the drop-down list is using, and b) use that value to determine which variable of the JSON file to read, and from which group. Now I'm aware that it's very likely I cannot use this method to populate the text boxes and probably need to start from scratch. I don't need the code handed to me, but I would appreciate it if someone could give me an example snippet, or push me onto the right track with how to tackle this task. Please no Ajax, jQuery or Java. Thank you again if you made it to the end.
date: 2018/03/15 | nb_tokens: 1,093 | text_size: 4,645
<issue_start>username_0: I want to track how much scrolling my users need in my app, so I would like to count the number of RecyclerView items that have been displayed on a user's screen before they take some action. Is there a way to calculate this? Maybe working with visible items & scroll position? I am using a `RecyclerView` with a `GridLayoutManager`.<issue_comment>username_1: I just figured out a solution:

```
var oldFirstPos = -1
var oldLastPos = -1
var totalItemsViewed = 0

with(rvBasic) {
    layoutManager = gridLayoutManager
    adapter = BasicRecyclerViewAdapter(layoutInflater, items)
    addOnScrollListener(object : RecyclerView.OnScrollListener() {
        override fun onScrolled(recyclerView: RecyclerView?, dx: Int, dy: Int) {
            super.onScrolled(recyclerView, dx, dy)
            val currFirstPos = gridLayoutManager.findFirstCompletelyVisibleItemPosition()
            val currLastPos = gridLayoutManager.findLastCompletelyVisibleItemPosition()
            totalItemsViewed += when (oldFirstPos) {
                -1 -> currLastPos - currFirstPos + 1
                else -> when {
                    dy > 0 -> Math.abs(currLastPos - oldLastPos) // scrolling to bottom
                    else -> Math.abs(currFirstPos - oldFirstPos) // scrolling to top
                }
            }
            oldLastPos = currLastPos
            oldFirstPos = currFirstPos
            Log.d("Items viewed", "items viewed: ${totalItemsViewed}")
        }
    })
}
```

My idea was the following: grab the current `findFirstCompletelyVisibleItemPosition` and `findLastCompletelyVisibleItemPosition`, then calculate the delta of those positions since the last scroll, depending on the scroll direction. The initial scroll is identified by setting `oldFirstPos` to `-1` (`oldLastPos` could theoretically be any value), since positions in the `GridLayoutManager` start at 0. To get the number of initially visible (and therefore viewed) items, the following simple formula is needed: `currLastPos - currFirstPos + 1` (the `+1` is there since positions start at 0). When the user scrolls to the bottom, `dy` gets greater than 0, and therefore the absolute change in the last position is what matters. The same applies when the scroll is in the opposite direction, with the key difference that the absolute change in the first position is then the change in the number of visible items since the last `onScrolled` was triggered.

**Why is it important to adapt the position-change calculation when the scroll direction changes?** Let's assume you have 19 items in your grid with a `spanCount` of 4; hence the last row will consist of 3 items. Scrolling down to it, `totalItemsViewed` will count up to 19. Now when you scroll back up you would miss an item, because if you scroll a little bit upwards, the last position will change from 19 to 16, but the first position will change from (depending on how many items are visible at a time on the display) 12 to 8. Now you have basically lost an item. If `items % spanCount == 0` then you will be fine without adapting the change to the scroll direction.

*Note: this solution only works with a vertical GridLayoutManager (horizontal is similar, though).* Upvotes: 2 [selected_answer]<issue_comment>username_2: Following @username_1's answer, I modified the handling for scrolling back to the top; you could do something like this in Java:

```
private int oldFirstPos = -1, oldLastPos = -1, totalItemsViewed = 0;

rv.addOnScrollListener(new RecyclerView.OnScrollListener() {
    @Override
    public void onScrolled(@NonNull RecyclerView recyclerView, int dx, int dy) {
        super.onScrolled(recyclerView, dx, dy);
        GridLayoutManager layoutManager = ((GridLayoutManager) recyclerView.getLayoutManager());
        int currFirstPos = layoutManager.findFirstCompletelyVisibleItemPosition();
        int currLastPos = layoutManager.findLastCompletelyVisibleItemPosition();
        if (oldFirstPos == -1) {
            totalItemsViewed += currLastPos - currFirstPos + 1;
        } else {
            if (dy > 0) {
                totalItemsViewed += Math.abs(currLastPos - oldLastPos);
            } else {
                totalItemsViewed -= Math.abs(oldLastPos - currLastPos);
            }
        }
        oldLastPos = currLastPos;
        oldFirstPos = currFirstPos;
        Log.e("totalItemsViewed", String.valueOf(totalItemsViewed));
    }
});
```

Upvotes: 0
date: 2018/03/15 | nb_tokens: 468 | text_size: 2,001
<issue_start>username_0: I have a database that's running using local storage in Kubernetes. Whenever I start up the Pod with the database, I would like to run a Job that can look at our backups and backfill any data that we have that isn't on the local disk. I was looking at the PostStart lifecycle hook, but that just lets me run a command from the main container, which would be a very hacky way to submit this Job. Are there any better approaches to this, other than writing my own controller?<issue_comment>username_1: You can use [Helm](https://github.com/kubernetes/helm) to deploy your database and manage its lifecycle. Helm has many [types of hooks](https://github.com/kubernetes/helm/blob/master/docs/charts_hooks.md) and you can use any container for a hook. I think in your case you can use post-upgrade + post-install to do everything you want. Just check the [documentation](https://docs.helm.sh/). Upvotes: 0 <issue_comment>username_2: You can use an [init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) that mounts the same volumes as the main container, populates them with any missing data, and then exits so the main container can start. If you need the job container to be running at the same time as your main container, you can instead add the container described above as a second container in the Pod. If you need an actual Kubernetes Job to be created then, as you say, I think the only options are to create a custom controller or to run an apiserver client such as `kubectl` as a sidecar container. You could use the ServiceAccount token that's automatically mounted into your Pod to authenticate with the apiserver, and then apply the necessary RBAC rules to the ServiceAccount so it can create a Job. You'd have to use some sort of shared data volume to mount the same data into both the Pods spawned by the Job and the main Pod in order to share the data (a few other options are possible as well). Upvotes: 1
date: 2018/03/15 | nb_tokens: 1,199 | text_size: 4,439
<issue_start>username_0: I have overloaded the assignment operator for a class with a 2D array, but in order to do memory management and resizing correctly I have to delete the previous matrix first, then construct a new one, and only then can I start assigning.

```cpp
Matrix& Matrix::operator = (const Matrix& m1){
    for (int i = 0; i < m_rows; ++i)
        delete[] m_matrix[i];
    delete[] m_matrix;

    m_matrix = new double*[m1.rows()];
    for (int i = 0; i < m1.rows(); ++i)
        m_matrix[i] = new double[m1.cols()]();

    for (int k = 0; k < m1.rows(); ++k)
        for (int j = 0; j < m1.cols(); ++j)
            m_matrix[k][j] = m1.m_matrix[k][j];

    m_rows = m1.rows();
    m_cols = m1.cols();
    return *this;
}
```

In fact, this part is the destructor of my class:

```cpp
for (int i = 0; i < m_rows; ++i)
    delete[] m_matrix[i];
delete[] m_matrix;
```

And this part is similar to a constructor:

```cpp
m_matrix = new double*[m1.rows()];
for (int i = 0; i < m_rows; ++i)
    m_matrix[i] = new double[m1.cols()]();
```

What annoys me is that I have to copy the constructor's and destructor's code in the assignment function (and some other functions too!) to make it work properly. Is there a better way to write it?<issue_comment>username_1: The canonical implementation of the assignment operator leverages existing functionality (copy/move ctor, dtor, and `swap()`; note that using a non-specialized `std::swap()` would be bad). It looks like this:

```
T& T::operator= (T val) {
    val.swap(*this);
    return *this;
}
```

It nicely avoids reimplementing otherwise existing logic. It also deals gracefully with self-assignment, which is a problem in your original code (it will do extra work, but self-assignment is generally rather uncommon; optimizing it with a check against self-assignment typically pessimizes code). The argument is passed by value to take advantage of copy elision. The primary caveats with this approach are below. In general I prefer the canonical implementation, as it is generally more correct and the outlined issues are often not really that relevant (e.g., when the object was just created anyway, the transferred memory is actually "hot").

1. It does not attempt to reuse already allocated and possibly "hot" memory. Instead it always uses new memory.
2. If the amount of held data is huge, there are copies temporarily held which may exceed system limits. Reusing existing memory and/or releasing memory first would both address this issue.

Upvotes: 1 <issue_comment>username_2: The ideal improvement would be `Matrix& Matrix::operator=(const Matrix&) = default;`. If you switch to using `std::vector` for matrix storage, you won't need to implement the copy/move constructors/assignments and destructor at all. If what you are doing is a programming exercise, create your own dynamic array and use that in the implementation of your matrix. I cannot recommend enough watching [Better Code: Runtime Polymorphism by <NAME>](https://youtu.be/QGcVXgEVMJg); he makes an effective demonstration of why you should strive to write classes that do not require non-default implementations of copy/move constructors/assignments and destructor. Example:

```
template <typename T>
class Matrix {
    std::vector<T> storage_;
    unsigned cols_ = 0;

public:
    Matrix(unsigned rows, unsigned cols)
        : storage_(rows * cols)
        , cols_(cols)
    {}

    // Because of the user-defined constructor above
    // the default constructor must be provided.
    // The default implementation is sufficient.
    Matrix() = default;

    unsigned columns() const { return cols_; }
    unsigned rows() const { return storage_.size() / cols_; }

    // Using operator() for indexing because [] can only take one argument.
    T& operator()(unsigned row, unsigned col) { return storage_[row * cols_ + col]; }
    T const& operator()(unsigned row, unsigned col) const { return storage_[row * cols_ + col]; }

    // Canonical swap member function.
    void swap(Matrix& b) {
        using std::swap;
        swap(storage_, b.storage_);
        swap(cols_, b.cols_);
    }

    // Canonical swap function. Friend name injection.
    friend void swap(Matrix& a, Matrix& b) { a.swap(b); }

    // This is what the compiler does for you,
    // not necessary to declare these at all.
    Matrix(Matrix const&) = default;
    Matrix(Matrix&&) = default;
    Matrix& operator=(Matrix const&) = default;
    Matrix& operator=(Matrix&&) = default;
    ~Matrix() = default;
};
```

Upvotes: 3 [selected_answer]
date: 2018/03/15 | nb_tokens: 1,205 | text_size: 4,854
<issue_start>username_0: Hey, I am creating a Java project in which I have an insert-record frame. On the insert frame there is an option to enter a father ID, and if the user does not know the father ID, I have a button to find it. When the user clicks that button, a new frame appears where the user can search for the ID; the results are shown in a table, and when the user clicks a particular record, the frame disposes and should set the respective ID on the previous frame. I have written the code for it, and it is passing the value to the previous frame, but it is not setting the value on the text field that I want it to. Where am I going wrong? Here is the code. FamilyInsert.java:

```
public class FamilyInsert extends javax.swing.JFrame {

    /**
     * Creates new form FamilyInsert
     */
    int id = DBManager.genID();
    public int fid;

    public FamilyInsert() {
        initComponents();
        txtId.setText("" + id);
        txtName.requestFocus();
    }

    public void setFid(int fid) {
        txtFid.setText("" + fid);
        System.out.println("setFID " + fid);
    }

    public void reset() {
        txtName.setText("");
        txtFather.setText("");
        txtFid.setText("");
        txtCity.setText("");
        txtState.setText("");
        txtName.requestFocus();
    }

    private void btnSubmitActionPerformed(java.awt.event.ActionEvent evt) {
        int id = Integer.parseInt(txtId.getText());
        String name = txtName.getText();
        String fname = txtFather.getText();
        int fid = Integer.parseInt(txtFid.getText());
        String city = txtCity.getText();
        String state = txtState.getText();
        Family family = new Family(id, name, fname, fid, city, state);
        boolean flag = false;
        flag = DBManager.insertMember(family);
        if (flag == true) {
            JOptionPane.showMessageDialog(this, "Successfully Saved");
            id++;
            txtId.setText("" + id);
            reset();
        } else {
            JOptionPane.showMessageDialog(this, "Error Occured");
        }
    }

    private void txtFidActionPerformed(java.awt.event.ActionEvent evt) {
        // TODO add your handling code here:
    }

    private void btnSearchActionPerformed(java.awt.event.ActionEvent evt) {
        SearchFatherFrame f = new SearchFatherFrame();
        f.setLocationRelativeTo(null);
        f.setVisible(true);
    }
}
```

and from the search frame:

```
private void jTable1MouseClicked(java.awt.event.MouseEvent evt) {
    int id;
    if (evt.getClickCount() == 2) {
        if (jTable1.getSelectedRow() != -1) {
            int index = jTable1.getSelectedRow();
            Family s = list.get(index);
            id = s.getId();
            System.out.println("ID from search frame " + id);
            FamilyInsert f = new FamilyInsert();
            f.setFid(id);
            this.dispose();
            //JOptionPane.showMessageDialog(this, s.getId()+"\n"+s.getName());
        }
    }
}
```

<issue_comment>username_1: Could you try:

```
public void setFid(int fid) {
    txtFid.setText("" + fid);
    System.out.println("setFID " + fid);
    yourJFrame.setVisible(true); // Reloads the frame
}
```

Upvotes: -1 <issue_comment>username_2: Your problem is that you're creating a ***new*** FamilyInsert object within the other class and changing its state, but this leaves the state of the original FamilyInsert object unchanged. What you need to do instead is pass a reference to the original displayed FamilyInsert into the 2nd object, and then change its state. Change this:

```
SearchFatherFrame f = new SearchFatherFrame();
```

to something more like:

```
SearchFatherFrame f = new SearchFatherFrame(this);
```

Pass the reference into the class and use it to set a field:

```
public class SearchFatherFrame {
    private FamilyInsert familyInsert;

    public SearchFatherFrame(FamilyInsert familyInsert) {
        this.familyInsert = familyInsert;
        // other code....
    }
}
```

Then use that reference passed in to change the state of the original object:

```
if (jTable1.getSelectedRow() != -1) {
    int index = jTable1.getSelectedRow();
    Family s = list.get(index);
    id = s.getId();
    System.out.println("ID from search frame " + id);
    // FamilyInsert f = new FamilyInsert();
    // f.setFid(id);
    familyInsert.setFid(id); // **** add
    this.dispose();
    //JOptionPane.showMessageDialog(this, s.getId()+"\n"+s.getName());
}
```

Also, you want the 2nd window to be a JDialog, not a JFrame. Please see: [The Use of Multiple JFrames, Good/Bad Practice?](http://stackoverflow.com/questions/9554636) Upvotes: 2 [selected_answer]
date: 2018/03/15 | nb_tokens: 853 | text_size: 2,947
<issue_start>username_0: First of all, I do know about `--keep-index`. This is not what I want, because it still stashes *all* changes and merely leaves the staged ones in the worktree. I would like to stash only the unstaged files, if possible without re-adding all changes with `git stash --patch`.<issue_comment>username_1: You cannot do it directly, but you can isolate only unstaged changes, only staged changes, or both, without using any `git stash` command at all:

---

```sh
git switch -c separated-stashes
```

Equivalent to the old `checkout -b`: creates a new branch and switches to it. It changes neither your worktree nor your index, so you'll basically have the same `git status` output as before, but this time on the new branch. Contrary to a simple `git switch` / `git checkout`, it won't warn you to stash or commit your changes before switching branches, because you are explicitly creating a brand-new branch through `-c`/`-b`.

---

```sh
git commit -m "staged"
```

Creates a first commit on the new branch, containing only the changes that were staged at the beginning.

---

```sh
git add -u && git commit -m "unstaged"
```

Creates a second commit with the changes that were unstaged at the beginning.

---

```sh
git switch - # == git checkout -
```

Goes back to the previous branch you were on. Now you have stashed everything and can `cherry-pick` what you need (staged/unstaged) wherever and whenever you want.

---

This might look a little cumbersome, but you're free to define a git alias to automate it:

```sh
git config --global alias.bratisla '!git switch -c separated-stashes; git commit -m "staged changes"; git add -u; git commit -m "unstaged changes"; git switch -'
# why this name ? : youtu.be/LpE1bJp8-4w
```

Upvotes: 0 <issue_comment>username_2: The best I can come up with is:

```
git commit -n -m temp
git stash push -u
git reset HEAD~1
```

This will commit without triggering any pre-commit hooks. Then it will stash the changes that remain (i.e. the unstaged changes from before). Finally, it will reset HEAD back to the pre-commit state (before the "temp" commit). Upvotes: 1 <issue_comment>username_3: If you want to store the diff between the index (what's staged) and the worktree (what's not staged yet), this is simply `git diff`:

```
# store it:
git diff > stash.patch

# if you additionally want to put the unstaged changes away:
git stash -k
```

To apply these changes to the worktree (not the index) at a later time, use `git apply`:

```
git apply stash.patch
```

---

You could also use what gets stored in the stash to re-create that diff:

```
# stash the changes:
git stash -k

# to reapply them on the worktree at a later time:
# the 'unstaged changes' are the diff between
# - what the index was (stash^2)
# - and what the worktree was (stash)
git diff stash^2 stash | git apply -

# again: 'git apply' applies the changes to the *worktree*, not the index
```

Upvotes: 2
date: 2018/03/15 | nb_tokens: 721 | text_size: 2,635
<issue_start>username_0: Where can I find information about which Hikari properties can be modified at runtime? I tried to modify *connectionTimeout*. I can set it, and it is modified in the `HikariDataSource` without an exception (checked by setting and then getting the property), but it takes no effect. If I initially do:

```
HikariConfig config = new HikariConfig();
config.setConnectionTimeout(12000);
HikariDataSource pool = new HikariDataSource(config);
```

and later on I do

```
config.setConnectionTimeout(5000);
```

Hikari tries to get a new connection for 12 seconds instead of 5 seconds. Is there a way to change the value so it takes effect? Are there other properties with the same behaviour?<issue_comment>username_1: You can't dynamically update the property values by resetting them on the config object; the config object is ultimately read once when instantiating the Hikari pool (have a look at the source code in [PoolBase.java](https://github.com/brettwooldridge/HikariCP/blob/dev/src/main/java/com/zaxxer/hikari/pool/PoolBase.java) to see how this works). You can, however, do what you want and update the connection timeout value at runtime via JMX. How to do this is explained in the Hikari documentation [here](https://github.com/brettwooldridge/HikariCP/wiki/MBean-(JMX)-Monitoring-and-Management). Upvotes: 2 <issue_comment>username_2: You can do this through the MX bean, but you don't need to use JMX:

```java
public void updateTimeout(final long connectionTimeoutMs, final HikariDataSource ds) {
    var poolBean = ds.getHikariPoolMXBean();
    var configBean = ds.getHikariConfigMXBean();

    poolBean.suspendPool(); // Block new connections being leased
    configBean.setConnectionTimeout(connectionTimeoutMs);
    poolBean.softEvictConnections(); // Close unused cnxns & mark open ones for disposal
    poolBean.resumePool(); // Re-enable connections
}
```

Bear in mind you will need to enable pool suspension in your initial config:

```java
var config = new HikariConfig();
...
config.setAllowPoolSuspension(true);
```

Upvotes: 2 <issue_comment>username_3: If your JVM has JMX enabled (which I recommend for every production deployment), you can:

1. SSH-tunnel the JMX port to your local machine
2. Connect to the VM in a JMX client like JConsole
3. Operate the pool MBean as needed

*Note: the JMX port must never be open to the internet; be sure that a firewall protects it.*

SSH tunnel command example:

```
ssh -i ${key_path} -N -L 9000:localhost:9000 -L 9001:localhost:9001 ${user}@${address}
```

[![Pool MBean configuring](https://i.stack.imgur.com/bKbTN.png)](https://i.stack.imgur.com/bKbTN.png) Upvotes: 1
date: 2018/03/15 | nb_tokens: 1,214 | text_size: 3,060
<issue_start>username_0: Say I have a dataframe `df`:

```
   x  y  z
0  1  2  3
1  4  5  6
2  7  8  9
```

I want to have two new columns that are x * y and x * z:

```
   x  y  z  xy  xz
0  1  2  3   2   3
1  4  5  6  20  24
2  7  8  9  56  63
```

So I define a function `func` (just for example) that takes either the string `'y'` or the string `'z'` as an argument, to indicate which column I want to multiply with column x:

```
def func(row, colName):
    return row['x'] * row[colName]
```

And apply the function to the dataframe `df`:

```
df['xz'] = df.apply(func, axis=1)
```

Apparently this is wrong, because I didn't specify `colName` as `'y'` or `'z'`. The question is: `df.apply()` takes just a function name, so how do I tell it to pass the two arguments?<issue_comment>username_1: One possible solution:

```
df['xz'] = df.apply(lambda x: func(x['x'], x[colName]), axis=1)
```

and your function would become:

```
def func(x, colName):
    return x * colName
```

Upvotes: 0 <issue_comment>username_2: You can use a lambda function with the columns specified, but it is also necessary to change `func`:

```
def func(row, colName):
    return row * colName

cols = ['y', 'z']
for c in cols:
    df['x' + c] = df.apply(lambda x: func(x['x'], x[c]), axis=1)
```

If it is not possible to change `func`:

```
def func(row, colName):
    return row['x'] * row[colName]

cols = ['y', 'z']
for c in cols:
    df['x' + c] = df.apply(lambda x: func(x, c), axis=1)
```

---

```
print (df)
   x  y  z  xy  xz
0  1  2  3   2   3
1  4  5  6  20  24
2  7  8  9  56  63
```

Upvotes: 3 [selected_answer]<issue_comment>username_3: You can do this with `assign` in a *dict comprehension*.

**Option 1** Keeping the first column fixed:

```
def func(row, j):
    return row['x'] * row[j]

cols = ['y', 'z']
df.assign(**{'x' + c : df.apply(func, args=c, axis=1) for c in cols})

   x  y  z  xy  xz
0  1  2  3   2   3
1  4  5  6  20  24
2  7  8  9  56  63
```

---

**Option 2** An alternative with neither of the columns fixed:

```
def func(row, i, j):
    return row[i] * row[j]

pairs = [('x', 'y'), ('x', 'z')]
df.assign(**{''.join(p) : df.apply(func, args=p, axis=1) for p in pairs})

   x  y  z  xy  xz
0  1  2  3   2   3
1  4  5  6  20  24
2  7  8  9  56  63
```

Upvotes: 2 <issue_comment>username_4: In `pandas 0.22.0` I was able to do the following to get your expected output:

```
df['xy'] = df.apply(func, axis=1, args='y')
df['xz'] = df.apply(func, axis=1, args='z')
```

The docstring for `pd.DataFrame.apply` shows the following:

```
pd.DataFrame.apply(self, func, axis=0, broadcast=False, raw=False, reduce=None, args=(), **kwds)
.
.
.
args : tuple; Positional arguments to pass to function in addition to the array/series
```

so you need to pass any positional arguments to your `func` using the `args` keyword argument in `df.apply()`. Upvotes: 0 <issue_comment>username_5: I think `eval` is perfect here:

```
df['x*y'], df['x*z'] = df.eval('x*y'), df.eval('x*z')

df
Out[14]:
   x  y  z  x*y  x*z
0  1  2  3    2    3
1  4  5  6   20   24
2  7  8  9   56   63
```

Upvotes: 2
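One subtlety worth noting: `args` must be an iterable that unpacks into extra positional arguments, so the robust spelling is a one-element tuple like `('y',)` (a bare string such as `args='y'` only happens to work because a one-character string unpacks to itself). A minimal runnable sketch, plus the fully vectorized alternative, assuming nothing beyond pandas:

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 4, 7], 'y': [2, 5, 8], 'z': [3, 6, 9]})

def func(row, col_name):
    # multiply this row's 'x' entry by the entry named by col_name
    return row['x'] * row[col_name]

for c in ['y', 'z']:
    # args is a tuple; ('y',) passes col_name='y' through to func
    df['x' + c] = df.apply(func, axis=1, args=(c,))

# equivalent and much faster, since it avoids a Python call per row:
# df['xy'], df['xz'] = df['x'] * df['y'], df['x'] * df['z']
print(df)
```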
date: 2018/03/15 | nb_tokens: 315 | text_size: 1,272
<issue_start>username_0: I am getting the following error: `Module '".../node_modules/moment/moment"' has no exported member 'default'` when I use

```
import * as _moment from 'moment';
import { default as _rollupMoment } from 'moment';
const moment = _rollupMoment || _moment;
```

<issue_comment>username_1: That's because `moment` does not export something called `default`. Did you mean `defaultFormat`? Also, you don't need your second import; you can just say `_moment.defaultFormat`. Upvotes: 2 <issue_comment>username_2: I had so much trouble trying to properly import Moment that I switched to the [date-fns](https://date-fns.org/) library instead. date-fns takes a somewhat different approach: as the name implies, it's a collection of date functions, rather than a huge 'god object' that does everything. The additional benefits are that a) you can import just the individual functions that you need, and b) if you import the whole thing, it tree-shakes really well. So you wind up with a much smaller impact on your module size. Upvotes: 2 <issue_comment>username_3: From Moment's docs: *Note: if you have trouble importing moment, try adding `"allowSyntheticDefaultImports": true` in `compilerOptions` in your tsconfig.json file and then use the default-import syntax.* Upvotes: 2
date: 2018/03/15 | nb_tokens: 511 | text_size: 1,434
<issue_start>username_0: I'm wondering whether a dictionary, if created inside a loop, is cleared when the loop moves on.

```
for x in list:
    dict_1 = {}
    do_some_stuff_here
    continue
```

Is the dictionary cleared when moving on to the next item in the list?<issue_comment>username_1: Why don't you just check it?

```
In [1]: for i in range(5):
   ...:     my_dict = {}
   ...:     my_dict[i] = i+1
   ...:

In [2]: my_dict
Out[2]: {4: 5}
```

If you want your dictionary to keep values, you have to declare it earlier:

```
In [5]: my_dict = {}

In [6]: for i in range(5):
   ...:     my_dict[i] = i+5
   ...:

In [7]: my_dict
Out[7]: {0: 5, 1: 6, 2: 7, 3: 8, 4: 9}
```

Upvotes: 2 <issue_comment>username_2: Yes: by the time the loop finishes, the dictionary you initialized inside the loop holds only what the last iteration put into it, because it is reset to empty on every pass. If you don't want the dictionary to be constantly reset, initialize it outside of the loop:

```
dict_1 = {}
for x in list:
    do_some_stuff_here_with_dict_1
    continue
```

Here is a comparison between the two:

```
outside = {}  # Outside of loop
for x in range(5):
    inside = {}  # Inside of loop
    outside[x] = 0
    inside[x] = 0

print(outside)
print(inside)
```

Output:

```
{0: 0, 1: 0, 2: 0, 3: 0, 4: 0}
{4: 0}
```

Essentially, each time through the loop you are setting the dictionary back to `{}`, so after the program finishes, your dictionary has only one entry. Upvotes: 0
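To be precise about the mechanism: the name is rebound to a brand-new dict object on each pass through the loop; the old dictionary is not "cleared" but simply dropped (and garbage-collected once nothing references it). A small sketch, assuming plain CPython, that makes the rebinding visible:

```python
dicts = []
for i in range(3):
    d = {}            # a brand-new dict object on every iteration
    d[i] = i
    dicts.append(d)   # keep each object alive so identities stay distinct

print([id(d) for d in dicts])  # three different object identities
print(dicts)                   # [{0: 0}, {1: 1}, {2: 2}]
```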
date: 2018/03/15 | nb_tokens: 1,399 | text_size: 3,371
<issue_start>username_0: I want to filter a dataframe according to the following conditions: firstly (d < 5), and secondly (the value of col2 not equal to its counterpart in col4 whenever the value in col1 equals its counterpart in col3). If the original dataframe `DF` is as follows:

```python
+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|  xx|   D|  vv|  4|
|   C| xxx|   D|  vv| 10|
|   A|   x|   A|  xx|  3|
|   E| xxx|   B|  vv|  3|
|   E| xxx|   F| vvv|  6|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G| xxx|  4|
|   G| xxx|   G|  xx|  4|
|   G| xxx|   G| xxx| 12|
|   B|xxxx|   B|  xx| 13|
+----+----+----+----+---+
```

The desired dataframe is:

```python
+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|  xx|   D|  vv|  4|
|   A|   x|   A|  xx|  3|
|   E| xxx|   B|  vv|  3|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G|  xx|  4|
+----+----+----+----+---+
```

Code I have tried that did not work as expected:

```python
cols = [('A','xx','D','vv',4),('C','xxx','D','vv',10),('A','x','A','xx',3),('E','xxx','B','vv',3),('E','xxx','F','vvv',6),('F','xxxx','F','vvv',4),('G','xxx','G','xxx',4),('G','xxx','G','xx',4),('G','xxx','G','xxx',12),('B','xxxx','B','xx',13)]
df = spark.createDataFrame(cols, ['col1','col2','col3','col4','d'])

df.filter((df.d<5) & (df.col2!=df.col4) & (df.col1==df.col3)).show()

+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|   x|   A|  xx|  3|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G|  xx|  4|
+----+----+----+----+---+
```

What should I do to achieve the desired result?<issue_comment>username_1: Your logic condition is wrong. IIUC, what you want is:

```python
import pyspark.sql.functions as f

df.filter((f.col('d')<5))\
    .filter(
        ((f.col('col1') != f.col('col3')) |
         (f.col('col2') != f.col('col4')) & (f.col('col1') == f.col('col3')))
    )\
    .show()
```

I broke the `filter()` step into 2 calls for readability, but you could equivalently do it in one line. Output:

```python
+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|  xx|   D|  vv|  4|
|   A|   x|   A|  xx|  3|
|   E| xxx|   B|  vv|  3|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G|  xx|  4|
+----+----+----+----+---+
```

Upvotes: 8 [selected_answer]<issue_comment>username_2: You can also write it like below (without `pyspark.sql.functions`):

```
df.filter('d<5 and (col1 <> col3 or (col1 = col3 and col2 <> col4))').show()
```

Result:

```
+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|  xx|   D|  vv|  4|
|   A|   x|   A|  xx|  3|
|   E| xxx|   B|  vv|  3|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G|  xx|  4|
+----+----+----+----+---+
```

Upvotes: 5 <issue_comment>username_3: A faster way (without `pyspark.sql.functions`):

```
df.filter((df.d<5) & ((df.col1 != df.col3) | (df.col2 != df.col4) & (df.col1 == df.col3)))\
    .show()
```

Upvotes: 4 <issue_comment>username_4: I am late to the party, but someone might find this useful. If your conditions were in a list form, e.g. `filter_values_list = ['value1', 'value2']`, and you were filtering on a single column, then you can do:

```py
df.filter(df.colName.isin(filter_values_list))   # in case of ==
df.filter(~df.colName.isin(filter_values_list))  # in case of !=
```

Upvotes: 2
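One subtlety worth noting: in PySpark column expressions, `&` binds more tightly than `|`, so the accepted answer's condition parses as `A | (B & C)`, which happens to be what is intended here; when in doubt, parenthesize every clause explicitly. A minimal self-contained sketch of the same filter (assuming only a local SparkSession; the master setting is illustrative):

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.master("local[1]").getOrCreate()
df = spark.createDataFrame(
    [('A', 'xx', 'D', 'vv', 4), ('G', 'xxx', 'G', 'xxx', 4)],
    ['col1', 'col2', 'col3', 'col4', 'd'])

# fully parenthesized: keep rows where d < 5 AND
# (the key columns differ outright, OR they match but the values differ)
cond = (f.col('d') < 5) & (
    (f.col('col1') != f.col('col3')) |
    ((f.col('col1') == f.col('col3')) & (f.col('col2') != f.col('col4'))))
df.filter(cond).show()  # keeps the ('A', ...) row, drops the ('G', ...) row
```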
date: 2018/03/15 | nb_tokens: 799 | text_size: 2,103
<issue_start>username_0:

```
--*--
-***-
--*--
```

(the bars are blanks)

```
print('', '*', ' \n', '***', ' \n', '', '*', '')
```

This is what I made, and it doesn't work... I thought '' = blank, and since there's a comma it's one more blank, so there should be 2 blanks as a result? Anyway, what should I do using only one print function call?
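One way to get this output with a single call to the built-in print, shown here as a minimal sketch: put the newlines inside one string, or pass `sep=''` so print doesn't insert its default space between arguments:

```python
# one print call; \n produces the line breaks, spaces do the padding
print('  *  \n *** \n  *  ')

# equivalent multi-argument form; sep='' suppresses the single space
# that print() normally inserts between its arguments
print('  *  ', '\n', ' *** ', '\n', '  *  ', sep='')
```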
date: 2018/03/15 | nb_tokens: 1,090 | text_size: 3,239
<issue_start>username_0: How is the bounding-box argument defined that OpenCV's tracker.init() function takes? Is it `(xcenter, ycenter, boxwidth, boxheight)`, or `(xmin, ymin, xmax, ymax)`, or `(ymin, xmin, ymax, xmax)`, or something completely different? I am using Python and OpenCV 3.3, and I basically do the following for each object I want to track, on each frame of a video:

```
tracker = cv2.TrackerKCF_create()
ok = tracker.init(previous_frame, bbox)
ok, bbox = tracker.update(current_frame)
```

<issue_comment>username_1: The answer is: `(xmin, ymin, boxwidth, boxheight)`. Upvotes: 5 [selected_answer]<issue_comment>username_2: The other post states the answer as a fact, so let's look at how to figure it out on your own. The Python version of OpenCV is a wrapper around the main C++ API, so when in doubt, it's always useful to consult either the [main documentation](https://docs.opencv.org/3.4.0/index.html), or even the [source code](https://github.com/opencv/opencv). There is a [short tutorial](https://docs.opencv.org/3.4.0/da/d49/tutorial_py_bindings_basics.html) providing some basic information about the Python bindings. First, let's look at [`cv::TrackerKCF`](https://docs.opencv.org/3.4.0/d2/dff/classcv_1_1TrackerKCF.html). The [`init`](https://docs.opencv.org/3.4.0/d0/d0a/classcv_1_1Tracker.html#a4d285747589b1bdd16d2e4f00c3255dc) member takes the bounding box as an instance of [`cv::Rect2d`](https://docs.opencv.org/3.4.0/dc/d84/group__core__basic.html#ga894fe0940d40d4d65f008a2ca4e616bd) (i.e. a variant of [`cv::Rect_`](https://docs.opencv.org/3.4.0/d2/d44/classcv_1_1Rect__.html) which represents the parameters using `double` values):

```
bool cv::Tracker::init(InputArray image, const Rect2d& boundingBox)
```

Now, the question is: how is a `cv::Rect2d` (or in general, the variants of `cv::Rect_`) represented in Python? I haven't found any part of the documentation that states this clearly (although I think it's hinted at in the tutorials), but there is some useful information in the bindings tutorial mentioned earlier:

> ...
> But there may be some basic OpenCV datatypes like Mat, Vec4i, Size. They need to be extended manually. For example, a Mat type should be extended to Numpy array, Size should be extended to a tuple of two integers etc.
> ...
> All such manual wrapper functions are placed in `modules/python/src2/cv2.cpp`.

Not much, so let's look at [the code](https://github.com/opencv/opencv/blob/master/modules/python/src2/cv2.cpp) they point us at. Lines [941](https://github.com/opencv/opencv/blob/master/modules/python/src2/cv2.cpp#L941)-[954](https://github.com/opencv/opencv/blob/master/modules/python/src2/cv2.cpp#L954) are what we're after:

```
template<>
bool pyopencv_to(PyObject* obj, Rect2d& r, const char* name)
{
    (void)name;
    if(!obj || obj == Py_None)
        return true;
    return PyArg_ParseTuple(obj, "dddd", &r.x, &r.y, &r.width, &r.height) > 0;
}

template<>
PyObject* pyopencv_from(const Rect2d& r)
{
    return Py_BuildValue("(dddd)", r.x, r.y, r.width, r.height);
}
```

The `PyArg_ParseTuple` in the first function is quite self-explanatory: a 4-tuple of double (floating-point) values, in the order x, y, width and height. Upvotes: 3
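For reference, a short usage sketch consistent with the answers above: the tuple is (x, y) of the top-left corner plus width and height, not corner-to-corner coordinates. This assumes opencv-contrib-python is installed and a readable video file exists (the filename here is hypothetical):

```python
import cv2

cap = cv2.VideoCapture('video.mp4')   # hypothetical input file
ok, frame = cap.read()

# bbox is (x_min, y_min, width, height)
bbox = (50, 80, 120, 60)

tracker = cv2.TrackerKCF_create()
tracker.init(frame, bbox)

ok, frame = cap.read()
found, bbox = tracker.update(frame)   # update returns (success, new_bbox)
if found:
    x, y, w, h = map(int, bbox)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```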
date: 2018/03/15 | nb_tokens: 1,220 | text_size: 3,523
<issue_start>username_0: I need to select only the 2018 records based on the crdd and scheduled date. This is my current query:

```
Select distinct t1.inbound_ship_appointment_id as ISA,
    t1.STANDARD_CARRIER_ALPHA_CODE as carrier_name,
    t1.current_appt_start as scheduled_date,
    t1.WAREHOUSE_ID as FC,
    t2.carrier_req_del_date_local as crdd,
    datediff(days, t1.current_appt_start, t2.carrier_req_del_date_local)
from d_fc_appointment_details t1
join o_appointment_visibility t2
    on t1.inbound_ship_appointment_id = t2.inbound_ship_appointment_id
where t1.STANDARD_CARRIER_ALPHA_CODE in (
    'AZIM','BTIU','HJBI','HUBG','PGLI','SCDS','SWIF','XAMS','XHMC','XHME',
    'XHMM','XJMN','XJMW')
    OR t1.STANDARD_CARRIER_ALPHA_CODE LIKE '%IMP%'
```

Please advise.
2018/03/15
498
1,810
<issue_start>username_0: I'm struggling to find a way to execute a function in the main controller when it loads. When the main controller is loaded the first time, I can get that function executed inside `onInit`. But the issue is that when the user logs out and logs back into the main controller, the method `onInit` does not get executed again. Is there a way to execute a function every time the controller loads?<issue_comment>username_1: What do you mean by "when the user logs out and logs back into the Main Controller"? When your app loads the view the first time, it executes the onInit, onBeforeRendering, onAfterRendering and other lifecycle events. But if you don't destroy the controller instance, you never 'log out' of it. It remains there as an object in your DOM, and its functions can be called whenever needed.

Now, if you are using the UI5 Router to navigate back and forth to other views, then I suggest you set 'patternMatched' events on your router. These events will be fired whenever the given pattern is matched, no matter if it is the 1st or the n-th time.

Check out:

* [read optional url parameters](https://stackoverflow.com/questions/48320140/read-optional-url-parameters/48330932#48330932)
* [Step 32: Routing with Parameters](https://sapui5.netweaver.ondemand.com/sdk/#/topic/2366345a94f64ec1a80f9d9ce50a59ef)

Upvotes: 0 <issue_comment>username_2: **The code below will help you achieve what you are looking for**

```
onInit: function() {
    this.getRouter().getRoute("routeName").attachPatternMatched(this._onObjectMatched, this);
},

_onObjectMatched: function() {
    //this function executes every time you navigate to this page
}
```

[Demokit link for detailed information](https://sapui5.hana.ondemand.com/#/api/sap.ui.core.routing.Router/events/Summary)

Upvotes: 3 [selected_answer]
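One detail the accepted snippet assumes is a `getRouter()` helper on the controller. If no such helper exists in a base controller, a common way to obtain the router (a sketch, not from the original thread) is:

```js
// Either via the owning component, or via the static helper on UIComponent:
var oRouter = this.getOwnerComponent().getRouter();
// var oRouter = sap.ui.core.UIComponent.getRouterFor(this);
oRouter.getRoute("routeName").attachPatternMatched(this._onObjectMatched, this);
```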
2018/03/15
492
1,729
<issue_start>username_0: I have a problem with an update in MongoDB. My schema looks like this:

```
Project: {
 _id: ObjectId(pro_id)
 // some data
 dashboard_group: [
 {
 _id: ObjectId(dgr_id)
 dgr_name: "My Dashboard"
 dgr_tasks: [
 id1,
 id2,
 ...
 ]
 },
 // other dashboards
 ]
}
```

I want to remove id2, but the $pull operator seems not to work. Mongo returns me this:

```
result: {
 lastErrorObject: { n: 1, updatedExisting: true },
 ok: 1
}
```

This is my request:

```
db.Project.findOneAndUpdate({ "dashboard_group._id": dgr_id },
 { $pull: { "dashboard_group.$.dgr_tasks": id2 } });
```

dgr\_id is already cast to ObjectId before the query, and I verified the value that I want to remove. Does anyone have an idea?<issue_comment>username_1: So, I found a solution with the `$[]` identifier. It's not its basic use, but it fits my case. A task ID cannot be in 2 locations; it belongs to 1 and only 1 dashboard. So if you make a request like:

```
db.Project.findOneAndUpdate({ "dashboard_group._id": dgr_id },
 { $pull: { "dashboard_group.$[].dgr_tasks": id2 } });
```

Mongo will remove all values that match id2. Without the `{multi: true}` option, it will make the update 1 time, and my item is indeed removed from my nested array.

Upvotes: 1 <issue_comment>username_2: You will need to select the particular array element using `"$elemMatch"`, like this:

Query : `{"dashboard_group":{"$elemMatch":{dgr_name:"My Dashboard"}}}`

Update : `{$pull:{"dashboard_group.$.dgr_tasks":"id2"}}`

Upvotes: 3 [selected_answer]
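Putting the accepted answer together with the original call, a complete sketch (collection and variable names taken from the question) might look like:

```js
db.Project.findOneAndUpdate(
  { "dashboard_group": { "$elemMatch": { "_id": dgr_id } } },
  { "$pull": { "dashboard_group.$.dgr_tasks": id2 } }
);
```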
2018/03/15
487
1,977
<issue_start>username_0: I want to hide the script field on my new scripts (the field is shown by default for every script).

[![enter image description here](https://i.stack.imgur.com/BRulS.jpg)](https://i.stack.imgur.com/BRulS.jpg)

One way is writing a `CustomEditor` for the script, but I want a faster way to do that, without writing an editor script for each new script.<issue_comment>username_1: I suggest creating a new attribute for more control.

```
using System;

[AttributeUsage(AttributeTargets.Class, Inherited = false, AllowMultiple = false)]
public sealed class HideScriptField : Attribute { }
```

And an editor script for all scripts that have this new attribute:

```
using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(MonoBehaviour), true)]
public class DefaultMonoBehaviourEditor : Editor
{
 private bool hideScriptField;

 private void OnEnable()
 {
 hideScriptField = target.GetType().GetCustomAttributes(typeof(HideScriptField), false).Length > 0;
 }

 public override void OnInspectorGUI()
 {
 if (hideScriptField)
 {
 serializedObject.Update();
 EditorGUI.BeginChangeCheck();
 DrawPropertiesExcluding(serializedObject, "m_Script");
 if (EditorGUI.EndChangeCheck())
 serializedObject.ApplyModifiedProperties();
 }
 else
 {
 base.OnInspectorGUI();
 }
 }
}
```

Now in every new `MonoBehaviour` script, if you want to hide the script field, you can simply add this attribute (`HideScriptField`) to it, and you are done.

Upvotes: 2 <issue_comment>username_2: From what I understand from your question, you need [attributes](https://docs.unity3d.com/Manual/Attributes.html)

```
[SerializeField]
private CustomClass x;
```

is a private field that is shown in the inspector.

```
[HideInInspector]
public CustomClass x;
```

is a public field that is not visible in the inspector.

Upvotes: -1
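A short usage sketch of the attribute from the accepted answer (the class name and field are arbitrary examples):

```csharp
using UnityEngine;

[HideScriptField]  // hides the "Script" field via the custom editor above
public class EnemyHealth : MonoBehaviour
{
    [SerializeField] private int maxHealth = 100;  // still drawn normally
}
```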
2018/03/15
484
1,471
<issue_start>username_0: I'm doing some comparison with dates, and when I do the following:

```
from_unixtime(unix_timestamp(a11.duedate),'dd-MM-yyyy') < from_unixtime(unix_timestamp(),'dd-MM-yyyy')
```

**returns false**

The a11.duedate is 21-02-2018 and is returning false when comparing with today's date. When I compare the unix timestamps directly, it **returns true**:

```
unix_timestamp(a11.duedate) < unix_timestamp()
```

What am I missing here? In my view, both conditions should return true.<issue_comment>username_1: Use the standard `yyyy-MM-dd` comparison. Note that the return type of `from_unixtime` is `string` and hence you get `false` (because of the string comparison) with the query you run.

```
from_unixtime(unix_timestamp(a11.duedate,'dd-MM-yyyy'),'yyyy-MM-dd') < from_unixtime(unix_timestamp(),'yyyy-MM-dd')
```

One more option is to compare the unix timestamps rather than converting them to date strings.

Upvotes: 1 <issue_comment>username_2: Why are you doing all this conversion? There is no need to go to unix timestamps for this. Assuming the value is stored as a date (which it should be):

```
a11.duedate < CURRENT_DATE()
```

If it is not stored as a date, then I believe you can do:

```
unix_timestamp(a11.duedate, 'dd-MM-yyyy') < unix_timestamp(current_date())
```

Or, if you want to practice arithmetic:

```
unix_timestamp(a11.duedate, 'dd-MM-yyyy') < floor(current_timestamp / (24*60*60)) * 24*60*60
```

Upvotes: 3 [selected_answer]
2018/03/15
616
1,955
<issue_start>username_0: edit: After my plan changed (Bryan saying I could not use the 'select' library for TkInter) I tried using multi-threading and I've typed something which doesn't seem to work:

```
def receive_data():
 try:
 (client_socket, address) = server_socket.accept()
 print client_socket.recv(1024)
 except socket.error:
 print 'error'

def server_window():
 window = Tk()
 data = ''
 activate_window(window, 666, 666)
 b1 = Button(window, text='RECEIVE DATA', fg='sienna', bg='khaki', command=lambda info=data: handle_waiter(info))
 b1.grid(sticky='W', row=2, column=0, columnspan=1)
 b1 = Button(window, text='ORDERS', fg='sienna', bg='khaki', command=orders_window)
 b1.grid(sticky='W', row=1, column=0, columnspan=1)
 thread = threading.Thread(target=receive_data())
 thread.start()
 window.mainloop()
```

Yet, what I have done seems to be blocking the GUI... I would be glad if anyone could help me
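A likely culprit in the snippet above, offered as a hedged sketch rather than a confirmed fix: `target=receive_data()` calls the function immediately on the main thread (blocking on `accept()`) and passes its return value (`None`) to the thread. Passing the function object itself avoids that:

```python
# Pass the function, don't call it: note there are no parentheses.
thread = threading.Thread(target=receive_data)
thread.daemon = True  # optional: let the thread die with the main window
thread.start()
window.mainloop()
```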
2018/03/15
460
1,462
<issue_start>username_0: I feel like this is incredibly easy to fix, but for some reason it isn't. I want to run a program in linux that opens python file filename.py by writing:

> 
> python3 filename arg
> 
> 

but it only works if i write:

> 
> python3 filename.py arg
> 
> 

Is there an easy way to run it without adding the extension? And without removing the extension completely? I wouldn't have imagined this to be a problem at all, but here we are. Thankful for help!
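A standard approach, offered as a hedged sketch rather than a confirmed fix: give the script a shebang line and execute permission, and optionally keep an extension-free alias next to the original file:

```bash
# first line of filename.py should be:  #!/usr/bin/env python3
chmod +x filename.py
ln -s filename.py filename   # optional: extension-free name alongside the original
./filename arg
```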
2018/03/15
2,477
6,588
<issue_start>username_0: How would you solve? Macros: * 1g of protein = 4cal * 1g of carbs = 4cal * 1g of fat = 9cal Calorie limit = **1000cal** initially i get 3 input fields, 1 for each macro split like this. * input for protein value = 100g (40% of cals) * input for carbs value = 100g (40% of cals) * input for fat value 22g (20% of cals) What i need to do is, if i edit any of the input gram value, the neighbour inputs change values to facilitate my change and the total grams still add up to 100% of my calorie limit. I need a solution in JS /jQuery. Here is a starter template you may find useful. <http://jsbin[--DOT--]com/gaqugiwebo/2/edit?html,js,console,output><issue_comment>username_1: You could make something like this: `a = input1 + input2 + input3` And the make this calculation: `%input1 = (input1 * 100) / a`. But I'm agree with @amflare you could try harder. The problem was basic math. ``` function calculatePorc() { // calculate total var total = 0; for(var i = 1; i < 4; i++) { total += parseInt($('input'+i).val()) || 0 } // calculate porcetage for(i = 1; i < 4; i++) { var current = parseInt($('input'+i).val()) || 0; var porc = (current * 100) / total || 0; } } ``` Here is a [JSFIDDLE](https://jsfiddle.net/khkbdLen/16/) Upvotes: 1 <issue_comment>username_2: There are other jQuery-based options for 3-way inputs that "must always equal 100%". A couple I've played with: --- jQuery UI range slider ====================== A jQuery-UI range [slider](https://jqueryui.com/slider/) with [multiple handles](https://jqueryui.com/slider/#range), like this: (from my answer [here](https://stackoverflow.com/a/68834441/8112776)) [![example](https://i.stack.imgur.com/IPRdj.png)](https://stackoverflow.com/a/68834441/8112776) **Demo snippet:** ```js const dvRed=29.9, dvGreen=58.7; //defaults var dvs=[dvRed, dvRed+dvGreen]; $(document).ready( function(){ //setup slider $('#slider').slider({ min: 0, max: 100, step: 0.1, values: dvs, range: true, slide: function(event, ui) { $.each(ui.values, function(i, v) { updateSlider(); //update vals on change }); } }); updateSlider(); //initial update of vals }); function updateSlider(){ //get slider values var R=Math.round($('#slider').slider('values')[0]*10)/10, B=Math.round((100-$('#slider').slider('values')[1])*10)/10, G=Math.round(((100-(R+B)))*10)/10; //set slider track to 3 colors $('.ui-slider').css('background','linear-gradient(90deg, red 0% '+R+'%, green '+R+'% '+(G+R)+'%, blue '+(G+R)+'% 100%'); //center labels between handles $('#val1').html(R+'%').css('left',R/2+'%'); $('#val2').html(G+'%').css('left',R+(G/2)+'%'); $('#val3').html(B+'%').css('left',R+G+(B/2)+'%'); //set body background color var bg='rgb('+R*2.55+','+G*2.55+','+B*2.55+')'; $(document.body).css('background', bg); } ``` ```css body{ font-family: Roboto, Helvetica, Arial, sans-serif; } #slider_cont{ position:relative; width:60vw; margin:5vh 20vw 10vh; } #slider{ width:100%; } #slider_vals{ width:100%; } #val1{ position:absolute; left:33%; color:red; } #val2{ position:absolute; left:50%; color:green; } #val3{ position:absolute; left:66%; color:blue; } .ui-slider-range { background:transparent !important;} .ui-slider{background-image:linear-gradient(90deg, red 0 40%, green 40% 60%, blue 60% 100%);} ``` ```html ``` --- Pie chart as control ==================== Another option would be to use a pie chart as a control, either custom-built from [SVG](https://developer.mozilla.org/docs/Web/SVG/Tutorial/Paths#arcs), or better yet, from an existing library such as 
[Chart.js](https://www.chartjs.org/docs/latest/charts/doughnut.html). ![animated screenshot](https://i.stack.imgur.com/vhrxR.gif) **Demo snippet:** ```js var r=2990, g=5870, b=1140; var step=150, lastHover=-1, adj; var chart=document.getElementById('chart-area'); $('#rval').html(r); $('#gval').html(g); $('#bval').html(b); var config = { type: 'pie', data: { datasets: [{ data: [ r,g,b ], backgroundColor: [ '#f22','#1f1','#1e90ff' ], label: 'rgb' }], labels: ['R','G','B'] }, options: { legend: {display: false}, animation: { duration: 200, easing: "easeOutQuart", onProgress : function(){ addLabels(this); } }, responsive: true, tooltips: { enabled: false }, onHover: function(e,i){ if (typeof i[0]!=="undefined"){lastHover=(i[0]._index); } }, } }; chart.onmouseout=function(){lastHover=null;}; window.onload = function(){ var ctx = chart.getContext('2d'); window.myPie = new Chart(ctx, config); }; chart.onwheel=(function(e) { ///// wheel turned e.preventDefault(); adj=(e.deltaY<0?step:-step); upd(lastHover,adj); }); $('button').click( function() { ///// button clicked adj=(this.id.substr(1,2)=='up'?step:-step); var id=(['r','g','b']).indexOf(this.id.substr(0,1)); upd(id, adj); }); function upd(id, adj){ var dd=config.data.datasets[0].data; /*subf*/function adjust(vChg, v2, v3){ var adj2=Math.round(((10000-(vChg+v2+v3))/(v2+v3))*v2); return[vChg, v2+adj2, 10000-(vChg+v2+adj2)]; } switch(id){ case 0: [dd[0],dd[1],dd[2]]=adjust(dd[0]+adj,dd[1],dd[2]); break; case 1: [dd[1],dd[2],dd[0]]=adjust(dd[1]+adj,dd[2],dd[0]); break; case 2: [dd[2],dd[0],dd[1]]=adjust(dd[2]+adj,dd[0],dd[1]); } [r,g,b]=dd; var prev_r=parseInt($('#rval').html()), //get values prev_g=parseInt($('#gval').html()), prev_b=parseInt($('#bval').html()); if(r>prev_r){ $('#rup').css('background','red');} //hilite affected buttons else{if(rprev\_g){ $('#gup').css('background','green');} else{if(gprev\_b){ $('#bup').css('background','blue');} else{if(b ``` ```css body{font-size:16px; font-family:Helvetica; padding:0; margin:0;} #cancont{position:absolute;left:75px; width:225px; height:10px; } canvas {position:relative; right:0; } button{height:25px; transition:all 500ms; } button:hover{ transform:scale(1.15); background:springgreen; } #rup,#gup,#bup{border-radius:0 50% 50% 0; } #rdn,#gdn,#bdn{border-radius:50% 0 0 50%; } #rup,#rdn{border-color:red;} #gup,#gdn{border-color:green;} #bup,#bdn{border-color:blue;} table{ border-collapse:collapse; padding:0; position:absolute; top:20px; } td{text-align:center;} tr{margin:100px 0;} h4{ position:absolute; bottom:10px; } ``` ```html | | | | | --- | --- | --- | | - R | | R + | | - G | | G + | | - B | | B + | #### Adjust with RGB± buttons, or use the mouse-wheel over chart. ``` Upvotes: 0
2018/03/15
512
1,052
<issue_start>username_0: I have the code below: ``` binance.depth("GTOBTC", (error, depth, symbol) => { console.log(depth.bids); }) ``` This outputs: ``` '0.00003061': 481, '0.00003050': 100, '0.00003047': 330, '0.00003046': 395, '0.00003044': 1000, ``` I would like to loop through this object and store the string value before the ':' into a temporary variable.<issue_comment>username_1: `the string value before the ':'` is called a key, and you access an object's keys using `Object.keys()`. In your case : ``` let keys = Object.keys(depth.bids) // ['0.00003061', '0.00003050', etc. ] for( let key of keys ){ // First pass key=='0.00003061', second pass key=='0.00003050' } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use `Object.keys()` to get the keys of an object. ``` var bids = {'0.00003061': 481, '0.00003050': 100, '0.00003047': 330, '0.00003046': 395, '0.00003044': 1000 }; var keys = Object.keys(bids ); // output: ["0.00003061", "0.00003050", "0.00003047", "0.00003046", "0.00003044"] ``` Upvotes: 1
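If both the price (the key) and the quantity are needed in the same pass, `Object.entries()` (available since ES2017) gives both at once; a small sketch with the sample data:

```js
const bids = { '0.00003061': 481, '0.00003050': 100 };
for (const [price, quantity] of Object.entries(bids)) {
  // price is the string before the ':', quantity the number after it
  console.log(price, quantity);
}
```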
2018/03/15
458
1,151
<issue_start>username_0: I have an array ``` myArray = ['A123000','A234000','A456000'] ``` I want to slice out the first 4 characters from each array item, so the output looks like this: ``` ['A123','A234','A456'] ``` I tried this but it doesn't like array - Am I missing an each or something? Help! ``` myArray.slice(0,3); ```<issue_comment>username_1: Use the same `slice` within `map` for each element ``` var output = myArray.map( s => s.slice(0,4) ) ``` or with `substring` ``` var output = myArray.map( s => s.substring(0,4) ) ``` Upvotes: 2 <issue_comment>username_2: ```js let myArray = ['A123000','A234000','A456000']; let newArray = myArray.map(a => a.slice(0,4) ); console.log( newArray ); ``` Upvotes: 2 <issue_comment>username_3: You can use `.map` and then use `.substring` on each element ```js var arr = ['A123000','A234000','A456000']; var sliced = arr.map(el => el.substring(0, 4)); console.log(sliced); ``` Upvotes: 0 <issue_comment>username_4: Try following ```js let myArray = ['A123000','A234000','A456000']; result = myArray.map(a => a.slice(0,4) ); console.log( result ); ``` Upvotes: 0
2018/03/15
557
1,716
<issue_start>username_0: I am confused by the "**'iter' was not declared in this scope**" error.

```
#include <vector>

using std::vector;

int main()
{
    vector<int> vec{1,2,3,4,5,6};
    for(std::size_t i, vector<int>::iterator iter=vec.begin(); iter!=vec.end(); ++i,++iter)
    {
        //do something
    }
}
```<issue_comment>username_1: It can, but both variables need to be the same type. The common thing to do, when you need to maintain an index as well as an iterator, is to write

```
{ // extra scoping block
    std::size_t i = 0; // indeed, you need to initialise i
    for(vector<int>::iterator iter = vec.begin(); iter != vec.end(); ++i,++iter)
    {
        // do something
    }
}
```

Upvotes: 3 <issue_comment>username_2: Just like you can do

```
int a = 10, b = 20;
```

A for loop's first section does the same thing. Since you can't do

```
int a = 10, double b = 20;
```

The same thing applies to the for loop. Technically you can have two different types, as you can declare a type and a pointer to that same type on the same line, like

```
int i = 0, *b = nullptr;
```

And that is also valid to do in a for loop.

Upvotes: 5 [selected_answer]<issue_comment>username_3: In C++17, there is a feature known as a [structured binding declaration](http://en.cppreference.com/w/cpp/language/structured_binding#Case_2:_binding_a_tuple-like_type) that allows you to do this. For example:

```
for (auto [iter, i] = std::tuple{vec.begin(), 0u}; iter != vec.end(); ++iter, ++i) { /* ... */ }
```

While this nearly matches the syntax you were hoping for, the `tuple` part is not very readable, so I would just declare one of the variables outside of the loop.

[Live Example](https://wandbox.org/permlink/OiHEqSlJPYiNoOfy)

Upvotes: 3
2018/03/15
508
1,668
<issue_start>username_0: Have any of you experienced this issue in Acumatica? Adding OrderBy to Select2 in PXProjection attribute has no effect. I checked the SQL query in Request Profiler and the data is sorted by key fields of the DAC.
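A hedged observation, not a confirmed answer: the BQL in a `PXProjection` is materialized as a SQL subquery, and an `ORDER BY` inside a subquery is generally not honored by SQL Server, so ordering usually has to be applied by the view that selects from the projection DAC. A sketch with hypothetical names:

```csharp
// View over the projection DAC; ordering applied at the outer query level.
public PXSelectOrderBy<MyProjectionDAC,
    OrderBy<Asc<MyProjectionDAC.orderDate>>> Results;
```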
2018/03/15
1,164
4,253
<issue_start>username_0: In another [thread](https://stackoverflow.com/a/42760021/9472937) I found a solution for an underline for a *segmented control*. The important line for my problem seems to be this one: ``` let underlineWidth: CGFloat = self.bounds.size.width / CGFloat(self.numberOfSegments) ``` It turns out that the calculation of the width is not entirely correct. Since I have always 3 segments I expected this value to be a third of the screen width (regardless of the device). On this screenshot you can see the application running on two different devices: [Screenshot](https://i.stack.imgur.com/k52eB.png) As you can see, on the iPhone 6S, the width of the underline is slightly too big, where as on the iPhone 8Plus it's too small. That can only mean that self.bounds.size.width doesn't return the correct width. --- The whole class for the segmented control: ``` import UIKit import Foundation extension UISegmentedControl{ func removeBorder(){ let backgroundImage = UIImage.getColoredRectImageWith(color: UIColor.clear.cgColor, andSize: self.bounds.size) let backgroundImageTest = UIImage.getColoredRectImageWith(color: UIColor.red.cgColor, andSize: self.bounds.size) self.setBackgroundImage(backgroundImage, for: .normal, barMetrics: .default) self.setBackgroundImage(backgroundImageTest, for: .selected, barMetrics: .default) self.setBackgroundImage(backgroundImage, for: .highlighted, barMetrics: .default) let deviderImage = UIImage.getColoredRectImageWith(color: UIColor.clear.cgColor, andSize: CGSize(width: 1.0, height: self.bounds.size.height)) self.setDividerImage(deviderImage, forLeftSegmentState: .selected, rightSegmentState: .normal, barMetrics: .default) self.setTitleTextAttributes([NSAttributedStringKey.foregroundColor: UIColor.gray], for: .normal) self.setTitleTextAttributes([NSAttributedStringKey.foregroundColor: UIColor(red: 67/255, green: 129/255, blue: 244/255, alpha: 1.0)], for: .selected) } func addUnderlineForSelectedSegment(){ removeBorder() let underlineWidth: CGFloat = self.bounds.size.width / CGFloat(self.numberOfSegments) let underlineHeight: CGFloat = 2.0 let underlineXPosition = CGFloat(selectedSegmentIndex * Int(underlineWidth)) let underLineYPosition = self.bounds.size.height - 1.0 let underlineFrame = CGRect(x: underlineXPosition, y: underLineYPosition, width: underlineWidth, height: underlineHeight) let underline = UIView(frame: underlineFrame) underline.backgroundColor = UIColor(red: 67/255, green: 129/255, blue: 244/255, alpha: 1.0) underline.tag = 1 self.addSubview(underline) } func changeUnderlinePosition(){ guard let underline = self.viewWithTag(1) else {return} let underlineFinalXPosition = (self.bounds.width / CGFloat(self.numberOfSegments)) * CGFloat(selectedSegmentIndex) UIView.animate(withDuration: 0.1, animations: { underline.frame.origin.x = underlineFinalXPosition }) } } extension UIImage{ class func getColoredRectImageWith(color: CGColor, andSize size: CGSize) -> UIImage{ UIGraphicsBeginImageContextWithOptions(size, false, 0.0) let graphicsContext = UIGraphicsGetCurrentContext() graphicsContext?.setFillColor(color) let rectangle = CGRect(x: 0.0, y: 0.0, width: size.width, height: size.height) graphicsContext?.fill(rectangle) let rectangleImage = UIGraphicsGetImageFromCurrentImageContext() UIGraphicsEndImageContext() return rectangleImage! } } ```<issue_comment>username_1: It is possible that you are trying to obtain dimensions in the method `func viewDidLoad()`. Try to take the exact dimensions in the method `func viewDidLayoutSubviews()`. 
[See also this answer](https://stackoverflow.com/a/41330484/8069241)

Upvotes: 2 <issue_comment>username_2: The exact size of the view is only correct after the view has been laid out. Before that, the view has its default size (e.g. the one from the xib or storyboard). In `viewDidLayoutSubviews()` or `viewDidAppear()` the size is guaranteed to be up to date (`viewWillAppear()` runs before layout, so it is not reliable for this).

Upvotes: -1
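A minimal sketch of how the extension from the question could then be driven; the method names come from the question, while the `segmentedControl` outlet and the guard against re-adding the underline are assumptions:

```swift
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Bounds are final here, so the underline width is computed correctly.
    if segmentedControl.viewWithTag(1) == nil {
        segmentedControl.addUnderlineForSelectedSegment()
    }
}
```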
2018/03/15
435
1,473
<issue_start>username_0: If anyone has done anything like below, please help. What I'm looking for is a macro that looks at my A2 value and copies it into column D based on the value in B, with "_" (underscore) and a counter after it.

[![enter image description here](https://i.stack.imgur.com/aytDN.png)](https://i.stack.imgur.com/aytDN.png)<issue_comment>username_1: Your request is a little short on particulars, but this will do what you're asking.

```
Dim i As Long

With Worksheets("sheet1")
    For i = 1 To .Cells(2, "B").Value2
        .Cells(.Rows.Count, "D").End(xlUp).Offset(1, 0) = .Cells(2, "A").Value & Format(i, "\_0")
    Next i
End With
```

Upvotes: 0 <issue_comment>username_2: You would need 2 loops for this: one looping through column A and one counting up to the value in column B.

```
Option Explicit

Public Sub WriteValues()
    With Worksheets("Sheet1")
        Dim aLastRow As Long
        aLastRow = .Cells(.Rows.Count, "A").End(xlUp).Row 'get last used row in col A

        Dim dRow As Long
        dRow = 1 'start row in col D

        Dim aRow As Long
        For aRow = 1 To aLastRow 'loop through col A
            Dim bCount As Long
            For bCount = 1 To .Cells(aRow, "B").Value 'how many times is A repeated?
                .Cells(dRow, "D").Value = .Cells(aRow, "A") & "_" & bCount 'write into column D
                dRow = dRow + 1 'count rows up in col D
            Next bCount
        Next aRow
    End With
End Sub
```

Upvotes: 3 [selected_answer]
2018/03/15
1,527
6,659
<issue_start>username_0: At regular times we want to clean up (delete) records from our production DB (DB2) and move them to an archive DB (also a DB2 database having the same schema). To complete the story, there are plenty of foreign key constraints in our DB. So if record b in table B has a foreign key to record a in table A and we are deleting record a in the production DB, then record b must also be deleted in the production DB and both records must be created in the archive DB.

Of course it is very important that no data gets lost: it must not be possible that we delete records in the production DB that are never inserted into the archive DB.

What is the best approach to do this?

FYI I have checked <https://www.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.dm.doc/doc/r0024482.html> and the proposed solutions have the following shortcomings.

* ***Load utility***, ***Ingest utility***, ***Import utility***: only address the part of inserting records into the archive DB. They don't cover the full move.
* ***Export utility***: only covers a means of exporting data (which might then be imported by the Import utility).
* ***db2move***, ***restore command***, ***db2relocatedb, ADMIN\_COPY\_SCHEMA, ADMIN\_MOVE\_TABLE*** and ***split mirror***: are not an option if you only want to move specific records meeting a certain condition to the archive DB.

So based on my research, the current best solution seems to be a kind of in-house developed script that is

1. Exporting the records to move in IXF format
2. Importing those exported records into the archive DB
3. Deleting those records in the production DB

In order to cause no transaction-log-full errors, this script should do this in batches (e.g. of 50000 records).

In order to have no foreign key constraint errors in step 3, we must also ensure that in step 1 we are also exporting all records having a foreign key constraint to the exported records, and all records having a foreign key constraint to those records ...<issue_comment>username_1: There are tools out there for this (such as Optim Archive) which may better satisfy requirements you didn't realize you had.

In the interim - look into federation and the tool `asntdiff`.

On the archive database you can define a connection to the live database (`CREATE SERVER`). Using this definition you can define nicknames to the live tables (`CREATE NICKNAME`). Using these nicknames you can load the appropriate data into your archive table. You can use your favorite data movement utility - load, import, insert, etc.

Once loaded you can verify the tables by using the asntdiff tool with appropriate selection criteria. [The `-f` option is great](https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_10.2.1/com.ibm.swg.im.iis.db.repl.utilities.doc/topics/iiyrctdfrfileopt.html?cp=SSEPGG_11.1.0).

Once you are satisfied the data exists in both locations you can delete the rows in the live database.

For your foreign key relationships - use the view SYSCAT.TABDEP to find such dependencies. You can define your foreign keys as "not enforced" (or don't define them) in the archive database to avoid errors during the previous process.

Data archiving is a big and common topic regardless of the database. You may also want to look at [range partitioned tables](https://www.ibm.com/developerworks/data/library/dmmag/DMMag_2010_Issue3/DistributedDBA/index.html) for better performance and control.
Upvotes: 0 <issue_comment>username_2: Questions that ask the "best" approach have limited use because the assessment criteria are omitted. Sometimes the assessment criteria differ between technicians and business people. Sometimes multiple policies of the client company can determine such criteria, so awareness of local policies and procedures or patterns is crucial . Often the operational-requirements and security-requirements and licensing-requirements will influence the approach, apart from the skill level and experience of the implementation-team. Occasionally corporates have specific standardised tools for archival and deletion, or specific patterns sometimes influenced by the industry-sector or even industry-specific regulatory requirements. As stackoverflow is a programming oriented website, questions like yours can be considered off-topic because you are asking for advice about which design-approach are possible while omitting lots of context that is specific to your company/industry-sector that may well influence the solution pattern. Some typical requirements or questions that influence the approach are below: * do local security requirements allow the data to leave the Db2 environment? (i.e. data stored on disk outside of Db2 tables). Sometimes this constrains use of export, or load-from-file/pipe). The data can be at risk of modification or inspection or deletion (whether accidental or deliberate) whilst outside of the RDBMS. * the restartability of the solution in the event of runtime errors. This is often a crucial requirement. When copying data between different physical databases (even if the same RDBMS) there are many possibilities of error (network errors, resource issues, concurrency issues, operational issues etc). Must the solution guarantee that any restarts after failures resume from the point of failure, or must cleanup happen and the entire job be restarted? The answer can determine the design. * if federation exists between the two databases (or if it can be added within the Db2-licence terms), then this is often the easiest practical approach to push or pull content. Local and remote tables appear to be in the same logical database which simplifies the approach. The data never needs to leave the RDBMS. This also simplifies restartability of failed jobs. It also allows the data to remain encrypted if that is a requirement. * if SQL-replication or Q-based-replication is licensed then it can be configured to intelligently sync the source and target tables and respect RI if suitably configured. This approach requires significant configuration skills. * if the production database is highly-available, and/or if the archival database is highly-available then the solution must respect the HA approach. Sometimes this prevents use of LOAD, depending on the operating-system platform of the Db2-server. * timing windows for scheduling are often crucial. If the archival+removal job must guarantee to fully complete with specific time intervals this can influence the design pattern. * if fastest rollout is a key requirement then range-partitioning is usually the best option. Upvotes: 2 [selected_answer]
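To make the federation route from the first answer concrete, a rough sketch of the statements involved; the server/nickname names, credentials and options are placeholders, a `CREATE WRAPPER` may be required first, and the exact syntax should be checked against the Db2 federation documentation for your version:

```sql
-- On the archive database: define the live database as a remote server
CREATE SERVER PRODSRV TYPE DB2/LUW VERSION '10.5' WRAPPER DRDA
    AUTHORIZATION "feduser" PASSWORD "********"
    OPTIONS (DBNAME 'PRODDB');

-- Nickname that makes the live table visible locally
CREATE NICKNAME PROD_ORDERS FOR PRODSRV.APPSCHEMA.ORDERS;

-- Copy the rows to archive, verify (e.g. with asntdiff), then delete
INSERT INTO ARCH.ORDERS SELECT * FROM PROD_ORDERS WHERE order_date < '2017-01-01';
DELETE FROM PROD_ORDERS WHERE order_date < '2017-01-01';
```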
2018/03/15
954
3,740
<issue_start>username_0: Currently I am working on a project using Unity and the Virtual Reality Toolkit (VRTK). I want to log the times of the grabbed objects (how long the user grabs an object). So far I got this:

```
using System;
using System.Linq;
using System.Text;
using System.IO;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using VRTK;

public class System_2_Script_Slider_X : MonoBehaviour
{
    float Timer;

    void Start()
    {
        Timer = 0;
        if (GetComponent<VRTK_InteractableObject>() == null)
        {
            Debug.LogError("Team3_Interactable_Object_Extension is required to be attached to an Object that has the VRTK_InteractableObject script attached to it");
            return;
        }
        GetComponent<VRTK_InteractableObject>().InteractableObjectGrabbed += new InteractableObjectEventHandler(ObjectGrabbed);
        GetComponent<VRTK_InteractableObject>().InteractableObjectUngrabbed += new InteractableObjectEventHandler(ObjectUngrabbed);
    }

    private void ObjectGrabbed(object sender, InteractableObjectEventArgs e)
    {
        //Some Code
        Timer += Time.deltaTime;
    }

    private void ObjectUngrabbed(object sender, InteractableObjectEventArgs e)
    {
        //Some Code
        File.AppendAllText("PATH/Timer.txt", Timer.ToString());
        Timer = 0;
    }
}
```

Unfortunately this doesn't work, as I get times of around 2 milliseconds even though I grabbed the object way longer. So it seems the `ObjectGrabbed()` function starts the timer but stops it immediately. How can I fix this?<issue_comment>username_1: The object grabbed function is likely only called once (when you grab). It doesn't start a timer; it just adds the time since the last frame to your variable. If it is only called once, it will only add the time of one frame to the `Timer` variable.

What you should do instead is register **when** the object was grabbed so you can compute the total length when it is released:

```
using System;
using System.Linq;
using System.Text;
using System.IO;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using VRTK;

public class System_2_Script_Slider_X : MonoBehaviour
{
    private float _timeWhenGrabbed;

    void Start()
    {
        if (GetComponent<VRTK_InteractableObject>() == null)
        {
            Debug.LogError("Team3_Interactable_Object_Extension is required to be attached to an Object that has the VRTK_InteractableObject script attached to it");
            return;
        }
        GetComponent<VRTK_InteractableObject>().InteractableObjectGrabbed += new InteractableObjectEventHandler(ObjectGrabbed);
        GetComponent<VRTK_InteractableObject>().InteractableObjectUngrabbed += new InteractableObjectEventHandler(ObjectUngrabbed);
    }

    private void ObjectGrabbed(object sender, InteractableObjectEventArgs e)
    {
        //Some Code
        _timeWhenGrabbed = Time.time;
    }

    private void ObjectUngrabbed(object sender, InteractableObjectEventArgs e)
    {
        //Some Code
        var grabDuration = Time.time - _timeWhenGrabbed;
        File.AppendAllText("PATH/Timer.txt", grabDuration.ToString());
    }
}
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Probably the `ObjectGrabbed` method gets fired only when the `Grab` action occurs, and not every frame the object is grabbed. So you should set up your code using a `Timer` or `Stopwatch` (the Stopwatch has more precision), by starting the stopwatch in the `ObjectGrabbed` method and stopping it in the `ObjectUngrabbed` method.

Something like this:

```
using System.Diagnostics;
...
private Stopwatch Watcher = new Stopwatch();
...

private void ObjectGrabbed(object sender, InteractableObjectEventArgs e)
{
    Watcher.Start();
}

private void ObjectUngrabbed(object sender, InteractableObjectEventArgs e)
{
    Watcher.Stop();
    long millis = Watcher.ElapsedMilliseconds;
    Watcher.Reset();
    File.AppendAllText("PATH/Timer.txt", millis.ToString());
}
```

Upvotes: 2
2018/03/15
631
1,864
<issue_start>username_0: I met this problem when compiling the Android 7.1.2 source code after I updated my Debian. I do not know what the real problem is; it seems to be a problem with flex. How can I solve it?

> 
> FAILED: /bin/bash -c "prebuilts/misc/linux-x86/flex/flex-2.5.39
> -oout/host/linux-x86/obj/STATIC\_LIBRARIES/libaidl-common\_intermediates/aidl\_language\_l.cpp
> system/tools/aidl/aidl\_language\_l.ll" flex-2.5.39: loadlocale.c:130:
> \_nl\_intern\_locale\_data: Assertion `cnt < (sizeof (\_nl\_value\_type\_LC\_TIME) / sizeof (\_nl\_value\_type\_LC\_TIME[0]))' failed
> .
> Aborted
> 
> 
<issue_comment>username_1: Same issue for me on Ubuntu 18.04. LC\_TIME was set to en\_GB.UTF-8.

```
export LC_ALL=C
```

Fixed it for me

Upvotes: 6 <issue_comment>username_2: I built AOSP (Android O/P) downloaded from Google on a newly set up 18.04 and it built fine. I did not have to change the locale, which was set to en\_GB.UTF-8. Then I had to build Android N, an IMX distro, on the same machine, and the build failed with the above error. After changing the locale variable the build worked fine.

Upvotes: 2 <issue_comment>username_3: I was also facing the same error; running "export LC\_ALL=C" in the terminal before make fixed the issue. How can this variable be integrated into the Android source code so that I can avoid running it before every compilation?

Upvotes: 1 <issue_comment>username_4: I had this again recently building `AOSP` with `Ubuntu 22.04`. Setting the `locale` didn't work at all. However, I found this [github thread](https://github.com/sonyxperiadev/bug_tracker/issues/136) which suggested rebuilding flex with

```
cd prebuilts/misc/linux-x86/flex
rm flex-2.5.39
tar zxf flex-2.5.39.tar.gz
cd flex-2.5.39
./configure
make
mv flex ../
cd ../
rm -rf flex-2.5.39
mv flex flex-2.5.39
```

which solved the error for me.

Upvotes: 2
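Regarding the follow-up about avoiding the manual export before every build, two common options, offered as a sketch rather than an official AOSP mechanism:

```bash
# Option 1: make the setting permanent for your shell
echo 'export LC_ALL=C' >> ~/.bashrc

# Option 2: scope it to the build invocation only
LC_ALL=C make -j"$(nproc)"
```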
2018/03/15
477
1,629
<issue_start>username_0: I want to use Angular Material in my MEAN stack app, but I get the following error:

> 
> Could not find Angular Material core theme. Most Material components
> may not work as expected. For more info refer to the theming guide:
> <https://material.angular.io/guide/theming>
> 
> 

In my Angular CLI apps I manage to work with Angular Material, but with this app I just can't seem to make it work. It is probably because of the structure of the app:

[![enter image description here](https://i.stack.imgur.com/nvIbC.png)](https://i.stack.imgur.com/nvIbC.png)

As you can see in the image, I tried the @import statement. On the homepage I put a Material checkbox item (here named jjjjjj), but as you can see it doesn't have the theme, so it doesn't look good.

[![enter image description here](https://i.stack.imgur.com/JhtW4.png)](https://i.stack.imgur.com/JhtW4.png)<issue_comment>username_1: I had the same problem; for me it worked to add this line instead:

```
@import "~@angular/material/prebuilt-themes/indigo-pink.css";
```

You can choose the theme you want to apply here: [Material theme](https://material.angular.io/guide/getting-started#step-4-include-a-theme)

Upvotes: 2 <issue_comment>username_2: I had the same problem, and it was because I was trying to import the Material theme into the css file of one of my components. Moving the import statement to the top-level styles.css file worked.

* @angular/core 6.0.0
* @angular/material 6.4.1

Add the line below to the file `src/styles.css`:

```
@import "@angular/material/prebuilt-themes/indigo-pink.css";
```

Upvotes: 2
2018/03/15
537
1,985
<issue_start>username_0: I want to pass an array to a function from component template, the following is the code for my toolbar: **toolbar.component.html** ```html *{{item.icon}}* ``` **toolbar.component.ts** ```js import { Component, OnInit } from '@angular/core'; @Component({ selector: 'app-toolbar', templateUrl: './toolbar.component.html', styleUrls: ['./toolbar.component.scss'] }) export class ToolbarComponent implements OnInit { items: ToolBarItem[] constructor() {} ngOnInit() {} } export class ToolBarItem { icon = 'border_clear'; color: string; command: () => void; commandParams: any[]; } ``` Here I want to init items of toolbar with varies commands. **main.ts** ```js ... items: [ { icon: 'mode_edit', color: 'blue', command: (name, family) => { console.log('editClick!' + name + family); }, commandParams: ['mohammad', 'farahmand'], }, { icon: 'delete', color: 'red', command: () => { console.log('deleteClick!'); }, } ], ... ``` But i get this error: > > Error: Template parse errors: Parser Error: Unexpected token . at > column 14 in [item.command(...item.commandParams)] in ... > > ><issue_comment>username_1: It's unlikely that you're going to get this syntax to work in a template (there are many valid typescript constructs that don't work in templates). You could write a helper method in the component instead, that takes the item as an argument, and then makes the appropriate call, as in, for example: ```js public doCommand(item: ToolbarItem): void { item.command(...item.commandParams); } ``` and then change your template to: Upvotes: 5 [selected_answer]<issue_comment>username_2: write getter in the component ```js public get items(): ToolBarItem[] { return ...this.item.commandParams; // or whatever You need } ``` template HTML `...` Upvotes: -1
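The closing template snippet of the accepted answer appears to have been stripped during extraction; a plausible reconstruction (the element and binding are assumptions based on the surrounding text) is simply a click binding that hands the whole item to the helper:

```html
<button *ngFor="let item of items" (click)="doCommand(item)">
  {{ item.icon }}
</button>
```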
2018/03/15
436
1,639
<issue_start>username_0: I'm writing a function for calling into JavaScript from Swift and would like to accept the name of a function and a list of arguments to call with. The list should be able to contain anything that I can convert into JSON. e.g.

```
callJS(function: "console.log", withArgs: [1, "Hello"])
```

My naive attempt at this was this:

```
func callJS(function: String, withArgs args: [Encodable]) {
 let encoder = JSONEncoder()
 let data = try! encoder.encode(list)
 let jsonArgs = String(data: data, encoding: .utf8)
 executeJS("\(function)(...\(jsonArgs))")
}
```

Unfortunately, that fails at runtime with the following fatal error:

> 
> Fatal error: Array does not conform to Encodable because Encodable does not conform to itself. You must use a concrete type to encode or decode.
> 
> 

Is there any way to get the compiler to embed the types from the call sites, so that `Encodable` knows what to encode?
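A common workaround sketch for this limitation, not taken from the original thread: wrap each argument in a type-erased box that is itself `Encodable`, so the array has a concrete element type:

```swift
struct AnyEncodable: Encodable {
    private let encodeFunc: (Encoder) throws -> Void
    init<T: Encodable>(_ value: T) { encodeFunc = value.encode(to:) }
    func encode(to encoder: Encoder) throws { try encodeFunc(encoder) }
}

func callJS(function: String, withArgs args: [AnyEncodable]) {
    let data = try! JSONEncoder().encode(args)   // concrete type, so this compiles
    let jsonArgs = String(data: data, encoding: .utf8)!
    executeJS("\(function)(...\(jsonArgs))")
}

// Usage: callers wrap each argument explicitly.
callJS(function: "console.log", withArgs: [AnyEncodable(1), AnyEncodable("Hello")])
```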
2018/03/15
1,480
4,807
<issue_start>username_0: Consider the following functions, taken from [the answers](http://www.seas.upenn.edu/~cis194/fall16/sols/08-functor-applicative.hs) to this [problem set](http://www.seas.upenn.edu/~cis194/fall16/hw/08-functor-applicative.html):

```
func6 :: Monad f => f Integer -> f (Integer,Integer)
func6 xs = do
 x <- xs
 return $ if x > 0 then (x, 0) else (0, x)

func6' :: Functor f => f Integer -> f (Integer,Integer)
-- slightly unorthodox idiom, with a partially applied fmap
func6' = fmap $ \x -> if x > 0 then (x,0) else (0,x)

-- func7 cannot be implemented without Monad if we care about the precise
-- evaluation and laziness behaviour:
-- > isJust (func7 (Just undefined))
-- *** Exception: Prelude.undefined
--
-- If we care not, then it is equivalent to func6, and there we can. Note that
-- > isJust (func6 (Just undefined))
-- True
func7 :: Monad f => f Integer -> f (Integer,Integer)
func7 xs = do
 x <- xs
 if x > 0
 then return (x, 0)
 else return (0, x)

-- func9 cannot be implemented without Monad: The structure of the computation
-- depends on the result of the first argument.
func9 :: Monad f => f Integer -> f Integer -> f Integer -> f Integer
func9 xs ys zs = xs >>= \x ->
 if even x then ys else zs
```

Although I understand the counterexample for `func7`, I don't understand the given reasoning for why `func7` and `func9` can be implemented only with monads. How do the monad/applicative/functor laws fit with the above reasoning?<issue_comment>username_1: I don't think typeclass laws are what you need to be worrying about here; in fact, I think the typeclasses unnecessarily complicate the exercise, if your purpose is to understand nonstrictness.

Here's a simpler example where everything is monomorphic, and rather than give examples using bottom, we're going to use `:sprint` in GHCi to watch the extent of the evaluation.

### func6

My `x6` example here corresponds to `func6` in the question.

```
λ> x6 = Just . bool 'a' 'b' =<< Just True
```

Initially, nothing has been evaluated.

```
λ> :sprint x6
x6 = _
```

Now we evaluate 'isJust x6'.

```
λ> isJust x6
True
```

And now we can see that `x6` has been partially evaluated. Only to its head, though.

```
λ> :sprint x6
x6 = Just _
```

Why? Because there was no need to know the result of the `bool 'a' 'b'` part just to determine whether the `Maybe` was going to be a `Just`. So it remains an unevaluated thunk.

### func7

My `x7` example here corresponds to `func7` in the question.

```
λ> x7 = bool (Just 'a') (Just 'b') =<< Just True
x7 :: Maybe Char
```

Again, initially nothing is evaluated.

```
λ> :sprint x7
x7 = _
```

And again we'll apply `isJust`.

```
λ> isJust x7
True
```

In this case, the content of the `Just` did get evaluated (so we say this definition was "more strict" or "not as lazy").

```
λ> :sprint x7
x7 = Just 'b'
```

Why? Because we had to evaluate the `bool` application before we could tell whether it was going to produce a `Just` result.

Upvotes: 2 <issue_comment>username_2: [username_1's answer](https://stackoverflow.com/a/49306344/2751851) covers `func6` versus `func7` very well. (In short, the difference is that, thanks to laziness, `func6 @Maybe` can decide whether the constructor used for the result should be `Just` or `Nothing` without actually having to look at any value within its argument.)

As for `func9`, what makes `Monad` necessary is that the function involves using values found in `xs` to decide on the functorial context of the result. (Synonyms for "functorial context" in this setting include "effects" and, as the solution you quote puts it, "structure of the computation".)

For the sake of illustration, consider:

```
func9 (fmap read getLine) (putStrLn "Even!") (putStrLn "Odd!")
```

It is useful to compare the types of `fmap`, `(<*>)` and `(>>=)`:

```
(<$>) :: Functor f => (a -> b) -> (f a -> f b) -- (<$>) = fmap
(<*>) :: Applicative f => f (a -> b) -> (f a -> f b)
(=<<) :: Monad f => (a -> f b) -> (f a -> f b) -- (=<<) = flip (>>=)
```

The `a -> b` function passed to `fmap` has no information about `f`, the involved `Functor`, and so `fmap` cannot change the effects at all. `(<*>)` can change the effects, but only by combining the effects of its two arguments -- the `a -> b` functions that might be found in the `f (a -> b)` argument have no bearing on that whatsoever. With `(>>=)`, though, the `a -> f b` function is used precisely to generate effects from values found in the `f a` argument.

I suggest [*Difference between Monad and Applicative in Haskell*](https://stackoverflow.com/q/23342184/2751851) as further reading on what you gain (and lose) when moving between `Functor`, `Applicative` and `Monad`.

Upvotes: 2 [selected_answer]
2018/03/15
1,390
4,643
<issue_start>username_0: I am creating a directive in VSCode Editor which loads an HTML page when a given path is specified. Below is the code for the same:

```
@Directive({
 selector: 'html-outlet'
})
export class HtmlOutlet {
 @Input() html: string;

 constructor(private vcRef: ViewContainerRef, private compiler: Compiler) { }

 ngOnChanges() {
 const html = this.html;
 if (!html) return;

 @Component({
 selector: 'dynamic-comp',
 templateUrl: html
 })
 class DynamicHtmlComponent { };

 @NgModule({
 imports: [CommonModule],
 declarations: [DynamicHtmlComponent]
 })
 class DynamicHtmlModule { }

 this.compiler.compileModuleAndAllComponentsAsync(DynamicHtmlModule)
 .then(factory => {
 const compFactory = factory.componentFactories.find(x => x.componentType === DynamicHtmlComponent);
 const cmpRef = this.vcRef.createComponent(compFactory, 0);
 });
 }}
```

However, I keep getting the error below:

ERROR in ./src/client/app/shared/directives/html-outlet.directive.ts Module not found: Error: Can't resolve ' html' in 'D:\ccw-dev\newnewclient\src\client\app\shared\directives'

The same code however works fine in plunker <https://plnkr.co/edit/l8BwjGIMC5tUVjIeh4u4?p=preview>

I am wondering what I am doing wrong. I am on Angular 5.2.6 in my VSCode solution.
2018/03/15
1,994
5,392
<issue_start>username_0: I'm trying to create a circle navigation button to follow mouse movement when the cursor is inside a certain box. ```js var cer = document.getElementById('cerchio'); var pro = document.getElementById('prova'); pro.addEventListener("mouseover", function() { var e = window.event; var x = e.clientX; var y = e.clientY; cer.style.top = y + "px"; cer.style.left = x + "px"; cer.style.transition = "2s"; }); pro.addEventListener("mouseout", function() { cer.style.top = "15px"; cer.style.left = "15px"; }); ``` ```css #prova { width: 200px; height: 200px; border: 1px solid black; } #cerchio { width: 90px; height: 90px; border: 1px solid red; border-radius: 90px; position: absolute; left: 15px; top: 15px; } #innercircle { width: 120px; height: 120px; position: relative; left: 40px; top: 30px; border: 1px solid red; } ``` ```html ``` so it actually follows the first position of the mouse inside the black bordered box, i want it to update the cursor positioning every time and follow it, also i don't want the red circle to go out the red box, any suggestion? please javascript only not jquery, thanks!<issue_comment>username_1: "mousemove" is the event you want to track for this as you want the position of the mouse as it moves inside the element. You should also pass the event as a callback to the event handler I also fixed the positioning using the getBoundingClientRect() method. ```js var cer = document.getElementById('cerchio'); var pro = document.getElementById('prova'); var innerC = document.getElementById('innercircle'); innerC.addEventListener("mousemove", function(e) { var square = this.getBoundingClientRect(); var squareX = square.x; var squareY = square.y; var squareWidth = square.width; var squareHeight = square.height; var mouseX = e.clientX; var mouseY= e.clientY; var x = e.clientX; var y = e.clientY; cer.style.top = (-squareY + mouseY - (squareHeight / 2 - 15)) + "px"; cer.style.left = (-squareX + mouseX - (squareWidth / 2 - 15)) + "px"; cer.style.transition = "2s"; }); innerC.addEventListener("mouseout", function() { cer.style.top = "15px"; cer.style.left = "15px"; }); ``` ```css #prova { width: 200px; height: 200px; border: 1px solid black; } #cerchio { width: 90px; height: 90px; border: 1px solid red; border-radius: 90px; position: absolute; z-index: -1; left: 15px; top: 15px; } #innercircle { width: 120px; height: 120px; position: relative; z-index: 2; left: 40px; top: 30px; border: 1px solid red; } ``` ```html ``` Upvotes: 0 <issue_comment>username_2: Your main problem is that you're using 'mouseover'. This event only activates when the mouse enters the element. This way, if works the first time you move over the rectangle, or when you move between the black and red rectangles. If you use 'mousemove', it works right. 
```js var cer = document.getElementById('cerchio'); var pro = document.getElementById('prova'); pro.addEventListener("mousemove", function() { var e = window.event; var x = e.clientX; var y = e.clientY; cer.style.top = y + "px"; cer.style.left = x + "px"; cer.style.transition = "2s"; }); pro.addEventListener("mouseout", function() { cer.style.top = "15px"; cer.style.left = "15px"; }); ``` ```css #prova { width: 200px; height: 200px; border: 1px solid black; } #cerchio { width: 90px; height: 90px; border: 1px solid red; border-radius: 90px; position: absolute; left: 15px; top: 15px; } #innercircle { width: 120px; height: 120px; position: relative; left: 40px; top: 30px; border: 1px solid red; } ``` ```html ``` Upvotes: 0 <issue_comment>username_3: The main problem is your usage of `window.event` and wrong event handlers. Here's a solution that uses standard event handling: ```js var cer = document.getElementById('cerchio'); var pro = document.getElementById('prova'); var proR = pro.getBoundingClientRect(); var cirR = cer.getBoundingClientRect(); // radii var rW = (cirR.right - cirR.left) / 2; var rH = (cirR.bottom - cirR.top) / 2; // page coords of center var oX = (proR.right + proR.left) / 2; var oY = (proR.bottom + proR.top) / 2; var x, y; // max movement var max = 15; function setPos(x, y) { cer.style.left = (x + oX - rW) + "px"; cer.style.top = (y + oY - rH) + "px"; } pro.addEventListener("mouseleave", function() { setPos(0, 0); }); pro.addEventListener("mousemove", function(e) { // 0,0 is at center x = e.clientX - oX; y = e.clientY - oY; // limit to max if (x < -max) x = -max; if (x > max) x = max; if (y < -max) y = -max; if (y > max) y = max; // set circle position setPos(x, y); }); setPos(0, 0); ``` ```css #prova { display: inline-block; border: 1px solid black; padding: 40px; } #innercircle { width: 120px; height: 120px; border: 1px solid red; } #cerchio { width: 90px; height: 90px; border: 1px solid red; border-radius: 50%; transition: .5s; position: absolute; pointer-events: none; } #prova, #innercircle, #cerchio { box-sizing: border-box; } ``` ```html ``` I've also changed the calculation to 1. determine the x,y values such that the center of the area is (0, 0) 2. limit the values to a set boundary 3. add back the offset to position the circle Upvotes: 2
2018/03/15
1,085
4,112
<issue_start>username_0: I have 2 classes:

```
import lombok.Builder;

@Builder
public class B extends A {
}
```

and

```
import lombok.Builder;

@Builder
public class A {
}
```

On the `@Builder` on `B` I get the message:

> The return type is incompatible with A.builder().

Is this a limitation of Lombok, or something I'm doing wrong?

If I leave the `@Builder` off `A`, then the builder on `B` doesn't seem to consider the fields in `A` in the constructors for `B`.<issue_comment>username_1: Without knowing the implementation details of Lombok or trying it out, I'd say no, because the pattern won't allow it.

If you implement the builder pattern, all of your methods (except `build()`) will always have the class the builder exists for as their return type. That means class A's methods will only return A, and B's will only return B. If you now let B extend A, it will not override A's methods because its return type does not match. Vice versa, it cannot implement the builder methods in B because those methods already exist in A. They cannot coexist by OOP design.

You may be able to create a generic builder, but that does not solve the problem. If you really need to extend A, your problem may come from another design decision which the builder pattern cannot solve. I'd assume that instead of extending the class, you'd have default values in your builder, which Lombok should support. Those default values then reflect what class A may support by default. In a use case where you'd rather have B doing stuff, you'd then call the builder methods and override those default values.

Edit: Oh, and maybe [have a look here](https://stackoverflow.com/questions/44948858/lombok-builder-on-a-class-that-extends-another-class)

Upvotes: -1 <issue_comment>username_2: It is only possible with a workaround ([See #78](https://github.com/peichhorn/lombok-pg/issues/78))

*From [Reinhard.codes](https://reinhard.codes/2015/09/16/lomboks-builder-annotation-and-inheritance/)*

> We have been using @Builder on the class itself, but you can also put it on a class’s constructor or on a static method. In that case, Lombok will create a setter method on the builder class for every parameter of the constructor/method. That means you can create a custom constructor with parameters for all the fields of the class including its superclass.

```
@AllArgsConstructor
public class Parent {
  private String a;
}

public class Child extends Parent {

  private String b;

  @Builder
  private Child(String a, String b){
    super(a);
    this.b = b;
  }
}
```

Upvotes: 2 <issue_comment>username_3: I cannot reproduce your exact problem anymore, but this may be because Lombok has evolved. Part of your question, however, was that the builder for B does not include the fields of A. That remains true, as it also is for `@AllArgsConstructor`. Inheritance is not Lombok’s strong suit.

That said, since you can write your constructor yourself and can put `@Builder` on the constructor, the following will generate a builder for B just as you wished:

```
@Builder
public class A {
    String a;
}
```

---

```
public class B extends A {
    @Builder
    B(String a, String b) {
        super(a);
        this.b = b;
    }

    String b;
}
```

Upvotes: 1 <issue_comment>username_4: The latest Lombok release, 1.18.2, includes [the new experimental `@SuperBuilder`](https://projectlombok.org/features/experimental/SuperBuilder). It supports inheritance and fields from superclasses (also abstract ones). The only requirement is that all superclasses must have the `@SuperBuilder` annotation.
With it, the solution is as simple as this:

```
@SuperBuilder
public class B extends A {
    private String b;
}

@SuperBuilder
public class A {
    private String a;
}

B instance = B.builder().b("b").a("a").build();
```

Upvotes: 4 <issue_comment>username_5: I faced an issue using Lombok with Java inheritance; it was resolved after using the below annotations on both the parent and child class:

```
@EqualsAndHashCode(callSuper = true)
@SuperBuilder
@Data
@AllArgsConstructor
@NoArgsConstructor
```

Upvotes: 2
2018/03/15
985
3,823
<issue_start>username_0: I would like to use the `number_format()` function without specifying the number of decimals. (If 2 decimals, display 2; if 5, display 5.) Is that possible?

Thanks a lot
2018/03/15
1,945
8,738
<issue_start>username_0: I'm migrating our application from Spring Boot 1.5.9 to version 2.0.0. In version 1.5.9 we successfully used mixed annotations on several domain classes, e.g.:

```java
...
@org.springframework.data.mongodb.core.mapping.Document(collection = "folder")
@org.springframework.data.elasticsearch.annotations.Document(indexName = "folder")
public class Folder {
...
}
```

The same approach causes problems in Spring Boot 2.0.0. When the MongoDB annotation **@DBRef** is used, Spring throws an exception during ElasticsearchRepository creation:

```java
java.lang.IllegalStateException: No association found!
```

Here come the classes and configs.

pom.xml

```xml
...
<properties>
    <java.version>1.8</java.version>
</properties>
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.RELEASE</version>
</parent>
...
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.16.18</version>
    <scope>provided</scope>
</dependency>
...
```

Application.java

```java
...
@EnableMongoRepositories("com.hydra.sbmr.repoMongo")
@EnableElasticsearchRepositories("com.hydra.sbmr.repoElastic")
@SpringBootApplication
public class Application extends SpringBootServletInitializer {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```

Folder.java (**Note: this @DBRef causes the exception**)

```java
package com.hydra.sbmr.model;

import lombok.Getter;
import lombok.Setter;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.DBRef;

@org.springframework.data.mongodb.core.mapping.Document(collection = "folder")
@org.springframework.data.elasticsearch.annotations.Document(indexName = "folder")
public class Folder {
    @Id
    @Getter
    @Setter
    private String id;

    // Why does the MongoDB core mapping @DBRef cause a
    // java.lang.IllegalStateException: No association found! exception
    // during ElasticsearchRepository creation???
    @DBRef
    @Getter
    @Setter
    private Profile profile;

    @Getter
    @Setter
    private String something;
}
```

Profile.java

```java
package com.hydra.sbmr.model;

import lombok.Getter;
import lombok.Setter;
import org.springframework.data.annotation.Id;

@org.springframework.data.mongodb.core.mapping.Document(collection = "profile")
public class Profile {
    @Id
    @Getter
    @Setter
    private String id;

    @Getter
    @Setter
    String blah;
}
```

FolderElasticRepository.java

```java
package com.hydra.sbmr.repoElastic;

import com.hydra.sbmr.model.Folder;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

public interface FolderElasticRepository extends ElasticsearchRepository<Folder, String> {
}
```

You can find the whole mini project on GitHub: <https://github.com/hydraesb/sbmr>

My question:

* Is there any solution that will work with mixed annotations on domain classes (Mongo and Elastic) in Spring Boot 2.0.0???<issue_comment>username_1: I have the same issue, and the solution that I found is to extend SimpleElasticsearchMappingContext, like this:

```
package com.mypackage;

import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.elasticsearch.client.Client;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.EntityMapper;
import org.springframework.data.elasticsearch.core.convert.MappingElasticsearchConverter;
import org.springframework.http.converter.json.Jackson2ObjectMapperBuilder;

import java.io.IOException;

@Configuration
public class ElasticsearchConfiguration {

    @Bean
    public ElasticsearchTemplate elasticsearchTemplate(Client client, Jackson2ObjectMapperBuilder jackson2ObjectMapperBuilder) {
        return new ElasticsearchTemplate(client,
                new MappingElasticsearchConverter(new CustomElasticsearchMappingContext()),
                new CustomEntityMapper(jackson2ObjectMapperBuilder.createXmlMapper(false).build()));
    }

    public class CustomEntityMapper implements EntityMapper {

        private ObjectMapper objectMapper;

        public CustomEntityMapper(ObjectMapper objectMapper) {
            this.objectMapper = objectMapper;
            objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
            objectMapper.configure(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY, true);
        }

        @Override
        public String mapToString(Object object) throws IOException {
            return objectMapper.writeValueAsString(object);
        }

        @Override
        public <T> T mapToObject(String source, Class<T> clazz) throws IOException {
            return objectMapper.readValue(source, clazz);
        }
    }
}

package com.mypackage;

import org.springframework.data.elasticsearch.core.mapping.ElasticsearchPersistentProperty;
import org.springframework.data.elasticsearch.core.mapping.SimpleElasticsearchMappingContext;
import org.springframework.data.elasticsearch.core.mapping.SimpleElasticsearchPersistentEntity;
import org.springframework.data.mapping.model.Property;
import org.springframework.data.mapping.model.SimpleTypeHolder;

public class CustomElasticsearchMappingContext extends SimpleElasticsearchMappingContext {

    @Override
    protected ElasticsearchPersistentProperty createPersistentProperty(Property property, SimpleElasticsearchPersistentEntity owner, SimpleTypeHolder simpleTypeHolder) {
        return new CustomElasticsearchPersistentProperty(property, owner, simpleTypeHolder);
    }
}

package com.mypackage;

import
org.springframework.data.elasticsearch.core.mapping.ElasticsearchPersistentProperty;
import org.springframework.data.elasticsearch.core.mapping.SimpleElasticsearchPersistentProperty;
import org.springframework.data.mapping.PersistentEntity;
import org.springframework.data.mapping.model.Property;
import org.springframework.data.mapping.model.SimpleTypeHolder;

public class CustomElasticsearchPersistentProperty extends SimpleElasticsearchPersistentProperty {

    public CustomElasticsearchPersistentProperty(Property property, PersistentEntity owner, SimpleTypeHolder simpleTypeHolder) {
        super(property, owner, simpleTypeHolder);
    }

    @Override
    public boolean isAssociation() {
        return false;
    }
}
```

Upvotes: 2 <issue_comment>username_2: I faced this problem as well, and I fixed it with username_1's solution:

```
@Bean
fun elasticsearchTemplate(client: JestClient, converter: ElasticsearchConverter, builder: Jackson2ObjectMapperBuilder): ElasticsearchOperations {
    val entityMapper = CustomEntityMapper(builder.createXmlMapper(false).build())
    val mapper = DefaultJestResultsMapper(converter.mappingContext, entityMapper)
    return JestElasticsearchTemplate(client, converter, mapper)
}

@Bean
@Primary
fun mappingContext(): SimpleElasticsearchMappingContext {
    return MappingContext()
}

@Bean
fun elasticsearchConverter(): ElasticsearchConverter {
    return MappingElasticsearchConverter(mappingContext())
}

inner class CustomEntityMapper(private val objectMapper: ObjectMapper) : EntityMapper {

    init {
        objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)
        objectMapper.configure(DeserializationFeature.ACCEPT_SINGLE_VALUE_AS_ARRAY, true)
    }

    @Throws(IOException::class)
    override fun mapToString(`object`: Any): String {
        return objectMapper.writeValueAsString(`object`)
    }

    @Throws(IOException::class)
    override fun <T> mapToObject(source: String, clazz: Class<T>): T {
        return objectMapper.readValue(source, clazz)
    }
}

inner class MappingContext : SimpleElasticsearchMappingContext() {
    override fun createPersistentProperty(property: Property, owner: SimpleElasticsearchPersistentEntity<*>, simpleTypeHolder: SimpleTypeHolder): ElasticsearchPersistentProperty {
        return PersistentProperty(property, owner, simpleTypeHolder)
    }
}

inner class PersistentProperty(property: Property, owner: SimpleElasticsearchPersistentEntity<*>, simpleTypeHolder: SimpleTypeHolder) : SimpleElasticsearchPersistentProperty(property, owner, simpleTypeHolder) {
    override fun isAssociation(): Boolean {
        return false
    }
}
```

Upvotes: 0
2018/03/15
537
1,647
<issue_start>username_0: I have 2 arrays:

```
arr1 = [1,2,3];
arr2 = [2,3,4];
```

They have 2 common values. I want to compare them to get the answer 2. Is there any `lodash` function for that?<issue_comment>username_1: Yes, you can use [_.intersection](https://lodash.com/docs/4.17.5#intersection) to get the common values:

```js
var arr1 = [1,2,3];
var arr2 = [2,3,4];
console.log(_.intersection(arr1, arr2));
```

Though that gets you the common values of the two arrays (in this case `2,3`), not just the count 2. If what you want is the number of common values, just use:

```
_.size(_.intersection(arr1, arr2));
```

Or you could create your own function composing the other two:

```
let numberOfCommonValues = _.flowRight(_.size, _.intersection);
console.log(numberOfCommonValues([1,2,3,4,5], [1,4,5]));
```

Hope it helps.

Upvotes: 1 <issue_comment>username_2: I can't exactly tell you if there's a lodash function to do this, but it seems kind of trivial. Just define a counter, then iterate through one of the arrays and compare each value to every value in the other...

```
function countCommonalities(arr1, arr2) {
    var counter = 0;
    for(var i = 0; i < arr1.length; i++) {
        for(var z = 0; z < arr2.length; z++) {
            if(arr1[i] === arr2[z]) {
                counter++;
            }
        }
    }
    return counter;
}
```

Upvotes: 0 <issue_comment>username_3: Yes, there is a lodash function named `intersection`.

**So, below is the code you can use to find the number of common values of two arrays.**

```
arr1 = [1,2,3]
arr2 = [2,3,4]
console.log(_.intersection(arr1, arr2).length);
```

Upvotes: 3 [selected_answer]
2018/03/15
575
1,678
<issue_start>username_0: I have encountered an interesting problem. Let's say I have objects in my database like this:

```
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Let's consider the numbers to be the ids of these objects.

If I want to get the two objects after `5`, I need to use this query:

```
MyObject.objects.filter(id__gt=5).order_by("id")[:2]
```

This will return this result:

```
[6, 7]
```

Which is the right result. Now I want to get the two objects **before object 5**. I do this query:

```
MyObject.objects.filter(id__lt=5).order_by("id")[:2]
```

However, this returns:

```
[1, 2] instead of [3, 4]
```

So I need to query objects going backwards from the object with **id=5**. I know that there is a range query, but it does not suit the case when working only with ids.

Is it possible to query objects before a certain id, starting from that object itself?

**======== UPDATE ========**

There is a catch: sometimes objects are also filtered by another condition, which means that ids are not correlated with their indexes:

```
[34, 45, 46, 66, 100, 105, 211]
```

How is it possible to get the two objects, for example, just before `66`?<issue_comment>username_1: Use descending order:

```
MyObject.objects.filter(id__lt=5).order_by("-id")[:2]
```

You will get the 2 numbers nearest to 5.

---

If you expect never to delete data, then you can use [range](https://docs.djangoproject.com/en/2.0/ref/models/querysets/#range) (between):

```
MyObject.objects.filter(id__range=(3, 4))
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: We can get it by using slicing and descending ordering:

```
obj_id = 5
MyObject.objects.filter(id__lt=obj_id).order_by("-id")[1:3]
```

Upvotes: 2
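A note on ordering for the accepted approach: `order_by("-id")[:2]` returns the rows nearest to the boundary, but in descending order (`[4, 3]`). If they are needed in ascending order, the evaluated slice can be reversed in Python, since negative slicing is not supported on querysets. A minimal sketch (assuming the model from the question; `boundary_id` is a hypothetical variable name):

```python
# Two objects just before boundary_id, returned in ascending id order.
# The slice is evaluated to a list first, then reversed in Python.
boundary_id = 66
before = list(MyObject.objects.filter(id__lt=boundary_id).order_by("-id")[:2])
before.reverse()  # for ids [34, 45, 46, 66, ...] this yields the objects with ids 45, 46
```

This also works when extra filters make the ids non-contiguous, because it never does arithmetic on the id values.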
2018/03/15
364
1,521
<issue_start>username_0: I have different tables in my schema with different columns, but I want to store data about when the table was modified or when the data was stored, so I added some columns to specify that.

I realized that I had to add the same "modification_date" and "modification_time" columns to all my tables, so I thought about making a new table called DATA_INFO so that I wouldn't need to. But every table has a different PRIMARY KEY, and I don't know which one to add as a FOREIGN KEY to the DATA_INFO table.

I don't know if I have to add all of them, or if there is another way to do what I need.<issue_comment>username_1: It's better to have the same "modification_datetime" column in all tables, rather than trying to keep that data in a central table. That's what we have done at every shop I've worked in.

Upvotes: 3 [selected_answer]<issue_comment>username_2: I want to emphasize that a separate table is not reasonable for this purpose. The lack of an obvious foreign key is a hint.

Unlike Tab Allerman, tables that I create are much less likely to be updated, so I have three additional columns on most tables:

* `CreatedBy` -- the user who created the row
* `CreatedAt` -- when the row was created
* `CreatedOn` -- the system where the table was created

The most important point is that this information can -- in many databases -- be implemented using `default` values rather than triggers. That is a big advantage of working within a single row. The fewer triggers, the better.

Upvotes: 0
2018/03/15
1,528
5,149
<issue_start>username_0: ```
def multipliers():
    return [lambda x : i * x for i in range(4)]

print [m(2) for m in multipliers()]
```

I partially understood (which is dangerous) the reason that `i` is the same for all functions: Python’s closures are late-binding. The output is `[6, 6, 6, 6]` (not `[0, 2, 4, 6]` as I was expecting).

---

I see that it works fine with a generator; my expected output comes from the version below.

```
def multipliers():
    return (lambda x : i * x for i in range(4))

print [m(2) for m in multipliers()]
```

Is there a simple explanation of why it works in the version below?<issue_comment>username_1: It only works because you call each function before the next one is created. The generator is lazy: it yields each function immediately, so before `i` is incremented.

Compare, if you force all of the generator to be consumed before you call the functions:

```
>>> def multipliers():
...   return (lambda x : i * x for i in range(4))
...
>>> print [m(2) for m in multipliers()]
[0, 2, 4, 6]
>>> print [m(2) for m in list(multipliers())]
[6, 6, 6, 6]
```

If you want early binding, then you can simulate it here with default arguments:

```
>>> def multipliers():
...   return (lambda x, i=i : i * x for i in range(4))
...
>>> print [m(2) for m in multipliers()]
[0, 2, 4, 6]
>>> print [m(2) for m in list(multipliers())]
[0, 2, 4, 6]
```

To clarify my comment about the generator being lazy: the generator `(lambda x : i * x for i in range(4))` will go through values of `i` from 0 to 3 inclusive, but it yields the first function while `i` is still 0; at that point it hasn't bothered to do anything about the cases for 1 to 3 (which is why we say it is lazy).

The list comprehension `[m(2) for m in multipliers()]` calls the first function `m` immediately, so `i` is still 0. Then the next iteration of the loop retrieves another function `m`, where `i` is now 1. Again, the function is called immediately, so it sees `i` as 1. And so on.

Upvotes: 4 [selected_answer]<issue_comment>username_2: You're looking for a simple explanation of a complex phenomenon, but I'll try to keep it short.

The first function returns a list of functions, each of which is a closure over the `multipliers` function. The interpreter therefore stores a reference to a "cell", referencing the `i` local variable, allowing the value to live on after the function call in which it was created has ended and its local namespace has been destroyed.

Unfortunately, the reference in the cell is to the value of the variable at the time the function terminated, not its value at the time it was used to create the lambda (since it was used four times in a loop, the interpreter would have to create a separate cell for each use, which it doesn't).

Your second function returns a generator expression, which has its own local namespace that preserves the value of the local variables (in this case, notably, `i`) while suspended during the processing of a `yield`ed result.

You will observe that you can recast this explicitly as a generator function, which might help to explain the operation of the second example:

```
def multipliers():
    for i in range(4):
        yield lambda x : i * x
```

This too gives the required result.
Upvotes: 3 <issue_comment>username_3: Some points to understand this complex example:

* The closures of the 3 created functions all point to the same `i` in the `make_fns_by_...` scope.
* The generator is "lazy", as explained in detail below; it actually changes the call sequence of the code.

```py
def make_fns_by_list():
    fns = []
    for i in list(range(3)):
        def f():
            print(i)  # ref. to "global var" `i` in closure
        print(id(f), f.__closure__, f.__closure__[0].cell_contents)
        fns.append(f)
    return fns

def make_fns_by_generator():
    for i in list(range(3)):
        def f():
            print(i)  # ref. to "global var" `i` in closure
        print(id(f), f.__closure__, f.__closure__[0].cell_contents)
        yield(f)

def call_fns():
    fns = make_fns_by_generator()  # generator is lazy, do nothing here
    # for f in fns:
    #     print(id(f), f.__closure__, f.__closure__[0].cell_contents)
    # same as below, which is easier for explanation:
    fns_iter = iter(fns)
    f = next(fns_iter)  # generator is "lazy", it makes `f` here
    print(id(f), f.__closure__, f.__closure__[0].cell_contents, '-->', f())  # and it is called at once
    f = next(fns_iter)
    print(id(f), f.__closure__, f.__closure__[0].cell_contents, '-->', f())
    f = next(fns_iter)
    print(id(f), f.__closure__, f.__closure__[0].cell_contents, '-->', f())
    print('-' * 100)

    fns = make_fns_by_list()  # list is working hard, it makes `f` here
    fns_iter = iter(fns)
    f = next(fns_iter)
    print(id(f), f.__closure__, f.__closure__[0].cell_contents, '-->', f())  # and it is called at once
    f = next(fns_iter)
    print(id(f), f.__closure__, f.__closure__[0].cell_contents, '-->', f())
    f = next(fns_iter)
    print(id(f), f.__closure__, f.__closure__[0].cell_contents, '-->', f())

def main():
    call_fns()

if __name__ == '__main__':
    main()
```

Upvotes: 0
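For completeness, one more early-binding option besides the default-argument trick shown above: binding the loop variable with `functools.partial` at creation time. This is a sketch, not from the thread; `operator.mul` is used so that no lambda is needed at all:

```python
import operator
from functools import partial

def multipliers():
    # partial(operator.mul, i) captures the *value* of i immediately,
    # so each function keeps its own multiplier (early binding).
    return [partial(operator.mul, i) for i in range(4)]

print([m(2) for m in multipliers()])  # [0, 2, 4, 6]
```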
2018/03/15
352
1,391
<issue_start>username_0: I currently have three CloudFormation stacks:

1. kms-stack
2. vpc-stack
3. sqs-stack

They all export outputs that I can see when I describe each stack, but when I run:

```
aws cloudformation list-exports
```

only the outputs from the vpc stack are printed to the CLI.<issue_comment>username_1: You should be using the `describe-stacks` CLI command to get the information. Use the following command:

```
aws cloudformation describe-stacks
```

CLI details can be found [here](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/describe-stacks.html).

Upvotes: 0 <issue_comment>username_2: The outputs of a stack and the exports from a stack are not the same thing. The outputs of a stack are specified in its `Outputs` section. Each element you include in the outputs can also be exported if so desired. Exports must be unique within a region.

```
aws cloudformation list-exports
```

will list ALL exported values from all stacks within a region. So review your CloudFormation templates for each stack and determine if you are actually exporting the values you need.

In the following VPC example, the VPC id will be exported; the IGW will not, although both will be displayed in `describe-stacks`:

```
Outputs:
  VPC:
    Value:
      Ref: VPC
    Export:
      Name: MyVpcID
  InternetGateway:
    Value:
      Ref: InternetGateway
```

Upvotes: 2 [selected_answer]
2018/03/15
785
2,653
<issue_start>username_0: I have the following simple code:

```
var S = require('string');

function matchBlacklist(inputString) {
  var blacklist = ["facebook", "wikipedia", "search.ch", "local.ch"];
  var found = false;
  for (var i = 0; i < blacklist.length; i++) {
    if (S(inputString).contains(blacklist[i]) > -1) {
      found = true;
    }
  }
  return (found);
}

matchBlacklist("www.facebook.com/this_is_a_test"); // returns true
matchBlacklist("www.example.com/this_is_a_test"); // returns true
```

But it always returns true, even though it should return false for the second case.<issue_comment>username_1: You should test for `if(string.includes(substring)){ ... }` and not `if(string.includes(substring) > -1 ){ ... }`, but here's a more elegant one-liner:

```js
const blacklist = ["facebook", "wikipedia", "search.ch", "local.ch"];

const matchBlacklist = inputString => blacklist.some(word => inputString.includes(word))

console.log( matchBlacklist("www.facebook.com/this_is_a_test") ); // returns true
console.log( matchBlacklist("www.example.com/this_is_a_test") ); // returns false
```

Upvotes: 2 <issue_comment>username_2: A better approach uses the `find` method of arrays, which returns the element if the expression evaluates to `true`, else undefined; then return a boolean based on the result of the find.

[Find method MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/find)

[Includes method MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/includes)

```js
function matchBlacklist(inputString) {
  var blacklist = ["facebook", "wikipedia", "search.ch", "local.ch"];
  return !!blacklist.find(b => inputString.includes(b))
}

console.log(matchBlacklist("www.facebook.com/this_is_a_test"));
console.log(matchBlacklist("www.example.com/this_is_a_test"));
```

Upvotes: 2 <issue_comment>username_3: You can use the "indexOf" method instead of the "contains" method. The "contains" method returns a boolean value, so you cannot compare the boolean value with a number. "indexOf" returns the index of a substring, so you can do the comparison here.

```
var S = require('string');

function matchBlacklist(inputString) {
  var blacklist = ["facebook", "wikipedia", "search.ch", "local.ch"];
  var found = false;
  for (var i = 0; i < blacklist.length; i++) {
    if (S(inputString).indexOf(blacklist[i]) > -1) {
      found = true;
    }
  }
  return (found);
}

matchBlacklist("www.facebook.com/this_is_a_test"); // returns true
matchBlacklist("www.example.com/this_is_a_test"); // returns false
```

Upvotes: 0
2018/03/15
1,134
2,730
<issue_start>username_0: I have a table with details of sold cars. Some of these cars have been resold within the last 1, 2 or 3 years. The table looks like this:

```
Car_Type || Car_Reg_No || Sold_Date        || Listing_No
Hatch    || 23789      || 2017-02-03 11:26 || X6529
Coupe    || 16723      || 2016-11-07 09:40 || N8156
Sedan    || 35216      || 2016-05-23 10:34 || M8164
Hatch    || 23789      || 2016-09-16 04:30 || O7361
```

Now, I need to query the records (cars) which were re-sold within 1 year of their latest sold date, and how many times they were sold. So, my output would be like this:

```
Car_Type || Car_Reg_No || Sold_Count || Latest_Sold_Date
Hatch    || 23789      || 2          || 2017-02-03 11:26
```

In essence, how do I check for re-sold records within a specific time frame of their latest sold date?<issue_comment>username_1: You can do this by finding the max, and joining based on your conditions.

```
declare @TableA table (Car_Type varchar(64)
                      ,Car_Reg_No int
                      ,Sold_Date datetime
                      ,Listing_No varchar(6))

insert into @TableA values
('Hatch',23789,'2017-02-03 11:26','X6529'),
('Coupe',16723,'2017-11-07 09:40','N8156'),
('Sedan',35216,'2017-05-23 10:34','M8164'),
('Hatch',23789,'2016-09-16 04:30','O7361'),
('Coupe',16723,'2014-11-07 09:40','N8156')

;with cte as(
    select Car_Type
          ,Car_Reg_No
          ,Latest_Sold_Date = max(Sold_Date)
    from @TableA
    group by Car_Type
            ,Car_Reg_No)

select a.Car_Type
      ,a.Car_Reg_No
      ,Sold_Count = count(b.Listing_No) + 1
      ,a.Latest_Sold_Date
from cte a
inner join @TableA b on
    b.Car_Reg_No = a.Car_Reg_No
    and b.Sold_Date != a.Latest_Sold_Date
    and datediff(day,b.Sold_Date,a.Latest_Sold_Date) < 366
    --if you want only cars which were sold within the last year too, uncomment this
    --and datediff(day,a.Latest_Sold_Date,getdate()) < 366
group by a.Car_Type
        ,a.Car_Reg_No
        ,a.Latest_Sold_Date
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: By my understanding:

```
select sd1.Car_Type,
       sd1.Car_Reg_No,
       count(sd1.Car_Reg_No) + 1 'no of sales in last one year',  --1 is added because of the last condition below
       sd1.Sold_Date 'Last sold date'
from(
    select *,ROW_NUMBER() over(partition by Car_Reg_No order by sold_date desc) as rn from #Table) as sd1
    join
    (select * from #Table) as sd2
    on sd1.Car_Type = sd2.Car_Type
    and DATEDIFF(dd,sd2.Sold_Date,sd1.Sold_Date) < 366
    and sd1.rn = 1
    and sd1.Sold_Date <> sd2.Sold_Date  -- here the latest sale is eliminated, so the count is increased by one
group by sd1.Car_Type,sd1.Sold_Date, sd1.Car_Reg_No
order by sd1.Car_Reg_No
```

Upvotes: 0
2018/03/15
612
2,109
<issue_start>username_0: I do not know why, but I cannot save my object to the Firebase database. Can someone help me?

My object:

```
public class ChatEntity {
    public static final String ENTITY_IDENTIFIER = "chats";

    private String id;
    private String chatTitle;
    private Map users;
    private Map administratorsUser;
    private Bitmap chatIcon;

    ... Getters and Setters ...
}
```

I'm saving this with:

```
FirebaseDatabase.getInstance().getReference(ChatEntity.ENTITY_IDENTIFIER).child(chat.getId()).setValue(chat);
```

And only this is saved:

```
{
  "1521122180142&Teste&tduWYxVHRVPVIx6Sv4p8fNwKKJi2" : {
    "chatTitle" : "Teste",
    "id" : "1521122180142&Teste&tduWYxVHRVPVIx6Sv4p8fNwKKJi2"
  }
}
```

The field `chatIcon` was `null`, so that's OK. But the two Maps have 1 and 2 entries, and they aren't getting saved. How can I save them?<issue_comment>username_1: As you might see, the Firebase database only takes the `String` objects that you try to save. Any other object is not guaranteed to be saved, especially the `Bitmap` one. If you want to save a `Bitmap`:

* Use Firebase Storage
* Get the **Base64** `String` from the `Bitmap`

To save the nested `HashMap`, you might want to call the database several times (one for each `Map`), or convert the Maps' values into Strings with a `/` each time you are editing a child (although both approaches are very messy).

Upvotes: 2 [selected_answer]<issue_comment>username_2: As per the [official documentation](https://firebase.google.com/docs/database/android/read-and-write), `Bitmap` is not a supported data type in the Firebase Realtime Database.

Remember, as a general rule, I'd say: never use base64. You don't really need to base64-encode your image. If you want to store images, don't use the Firebase Realtime Database; use [Firebase Storage](https://firebase.google.com/docs/storage/).

In order to have your database populated with `users` and `administratorsUser`, change the type of those maps to hold `ProfileEntity` objects. If you want to use maps, you can set/update them directly on a `DatabaseReference` object.

Upvotes: 0
2018/03/15
745
2,165
<issue_start>username_0: Well, I tried to find the answer online, but I couldn't, and I really need help.

* I have a **text file** (file.txt) that contains:

> ```
> C:/Users/00_file/toto.odb,
> dis,455,
> stre,54,
> stra,25,
> C:/Users/00_file/tota.odb,
> ```

* And a Tcl script that allows me to read the *values* on each line:

> ```
> set Infile [open "C:/Users/00_file/file.txt" r]
> set filelines [split $Infile ","]
> set Namepath [lindex $filelines 1 0]  ;# doesn't work
> set dis [lindex $filelines 2 0]       ;# works fine
> ...
> ```

The problem is that when I want the complete line 1 of the text file with my Tcl script, some information is missing and extra characters disappear. How can I get the complete string (line 1 of my text file)?

Thanks a lot!<issue_comment>username_1: You open the file for reading, but you don't actually read from it. $Infile is just (basically) a pointer to a file descriptor, not the contents of the file:

```
% set fh [open file.txt r]
% puts $fh
file3
```

The idiomatic way to read from a file, line by line:

```
set fh [open "C:/Users/00_file/file.txt" r]
set data [list]
while {[gets $fh line] != -1} {
    lappend data [split $line ,]
}
close $fh
```

Or, read the whole file and split it on newlines:

```
set fh [open "C:/Users/00_file/file.txt" r]
set data [lmap line [split [read -nonewline $fh] \n] {split $line ,}]
close $fh
```

Then access the data:

```
set Namepath [lindex $data 0 0]  ;# first line, first field
set dis      [lindex $data 1 1]  ;# second line, second field
```

Upvotes: 1 <issue_comment>username_2: The Tcl code would be as follows:

```tcl
set file [open c:/filename.txt]
set file_device [read $file]
set data [split $file_device "\n"]
for {set count 0} {$count < 2} {incr count} {
    # one line is printed on every iteration;
    # splitting on \n is used for getting the end of each line;
    # the open command opens the file at the given path;
    # the read command is used to read the opened file.
    puts [lindex $data $count]
}
close $file
```

Upvotes: 0
2018/03/15
814
2,763
<issue_start>username_0: For example, if I want to detect all odd numbers in an array and set them to zero, I can use: ``` def setToZeroIfOdd(n): if n % 2 == 0: pass else: return 0 numbers = range(1,1000) numbers = map(setToZeroIfOdd, numbers) ``` which works like a charm. But when I try something like ``` def setToZeroIfDivisibleBy(n, divisor): if n % divisor == 0: return 0 else: pass numbers = map(setToZeroIfDivisibleBy(divisor=3), numbers) ``` it expects two arguments. Likewise, ``` numbers = map(setToZeroIfDivisibleBy, numbers, divisor=3) ``` does not work. How can I pass that `divisor` argument from within `map()`?<issue_comment>username_1: Try using lambda function ``` numbers = map(lambda n: setToZeroIfDivisibleBy(n, divisor=3), numbers) ``` And rather than `pass` did you mean `return n`? Upvotes: 4 [selected_answer]<issue_comment>username_2: You make a function which returns a function: ``` def setToZeroIfDivisibleBy(divisor): def callback(n): if n % divisor == 0: return 0 else: pass return callback numbers = map(setToZeroIfDivisibleBy(3), numbers) ``` BTW, you can entirely omit empty branches like `else: pass`; it doesn't do anything. Since it results in a `None`, I don't think that's what you want either. You probably want `return n` there instead. Upvotes: 3 <issue_comment>username_3: You can use [`functools.partial`](https://docs.python.org/3/library/functools.html#functools.partial) to make partial functions ``` from functools import partial def setToZeroIfDivisibleBy(n, divisor): if n % divisor == 0: return 0 else: pass numbers = range(1,1000) numbers = map(partial(setToZeroIfDivisibleBy, divisor=3), numbers) ``` Upvotes: 4 <issue_comment>username_4: Another approach, instead of using `partial`, is to supply an infinite (or at least, long enough) sequence of 2nd arguments for the two-argument function: ``` from itertools import repeat numbers = map(setToZeroIfDivisibleBy, numbers, repeat(3)) ``` --- In Python 2, `map` will append `None` as necessary to the shorter of the two sequences to make them the same length. Assuming that will cause problems (either because your function cannot handle `None` as an input value or you end up with an infinite loop), you can either use `itertools.imap`, which stops after exhausting the shorter sequence: ``` from itertools import imap, repeat numbers = list(imap(setToZeroIfDivisibleBy, numbers, repeat(3))) ``` or pass the length of `numbers` as a second argument to `repeat` so that the two sequences are the same length. ``` from itertools import repeat numbers = map(setToZeroIfDivisibleBy, numbers, repeat(3, len(numbers))) ``` Upvotes: 0
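A side note on the original `setToZeroIfOdd`: the `pass` branch makes the function implicitly return `None` for even numbers, so the first snippet does not actually keep them unchanged; `return n`, as username_1 suggests, is what preserves them. A small sketch illustrating both behaviors (Python 3 syntax, where `map` returns an iterator):

```python
def set_to_zero_if_odd(n):
    if n % 2 == 0:
        pass          # falls through and implicitly returns None
    else:
        return 0

def set_to_zero_if_odd_fixed(n):
    return 0 if n % 2 else n  # keep evens, zero out the odds

print(list(map(set_to_zero_if_odd, [1, 2, 3])))        # [0, None, 0]
print(list(map(set_to_zero_if_odd_fixed, [1, 2, 3])))  # [0, 2, 0]
```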
2018/03/15
655
2,336
<issue_start>username_0: I'd like to reliably count the number of rows in a given Excel table using Excel formulas. The rough equivalent of:

```
ActiveWorkbook.Worksheets("Sheet1").Range("Table1").Rows.Count
```

Using built-in Excel formulas.
2018/03/15
1,093
3,583
<issue_start>username_0: We are using TinyMCE with the image plugin: <https://www.tinymce.com/docs/plugins/image/>

By default, this plugin adds the image file's dimensions when the width and height fields are left blank. Is there any way to prevent this using config, or do I have to hack it?<issue_comment>username_1: If you set the `image_dimensions` option to `false`, the plugin no longer includes width and height when inserting an image: <https://www.tinymce.com/docs/plugins/image/#image_dimensions>

Upvotes: 1 <issue_comment>username_2: You'll need to make your own version of the `image` plugin to do what you're describing.

Upvotes: 0 <issue_comment>username_3: It's not exactly the logic you wanted, but it could be a good workaround. You can change the size of the image when it is inserted into the textarea. The script sets the image width to a maximum of 1000px (you can adapt the algorithm to your needs):

```
selector: 'textarea',
setup: function (editor) {
    editor.on('init', function(args) {
        editor = args.target;
        editor.on('NodeChange', function(e) {
            if (e && e.element.nodeName.toLowerCase() == 'img') {
                width = e.element.width;
                height = e.element.height;
                if (width > 1000) {
                    height = height / (width / 1000);
                    width = 1000;
                }
                tinyMCE.DOM.setAttribs(e.element, {'width': width, 'height': height});
            }
        });
    });
},
......
```

Upvotes: 3 <issue_comment>username_4: Based on username_3's answer, here is the React solution. In my case I wanted a width of `100%` and a height of `370px`, using the `onNodeChange` function:

```js
onNodeChange={(e) => {
  if (e && e.element.nodeName.toLowerCase() == 'img') {
    editorRef.current.dom.setAttribs(e.element, {
      width: '100%',
      height: '370px',
    })
  }
}}
```

You can also customize the image width and height while editing:

```js
init={{
  ...
  content_style: 'body { font-family:Helvetica,Arial,sans-serif; font-size:14px } img { max-width: 100%; height: 370px; }',
}}
```

Here is the full code to create your custom editor:

```js
import { useRef } from 'react'
import PropTypes from 'prop-types'
import { Editor } from '@tinymce/tinymce-react'

export default function MyEditor({ initialValue, onChange, onFocus, ...props }) {
  const editorRef = useRef(null)

  return (
    <Editor
      initialValue={initialValue}
      onInit={(evt, editor) => {
        editorRef.current = editor
      }}
      init={{
        placeholder: 'Insérer le contenu ici...',
        language: 'fr_FR',
        branding: false,
        // height: 500,
        menubar: true,
        plugins: [
          'template', 'searchreplace', 'quickbars', 'image', 'link', 'codesample',
          'emoticons', 'insertdatetime', 'nonbreaking', 'pagebreak', 'advlist',
          'autolink', 'lists', 'link', 'image', 'charmap', 'preview', 'anchor',
          'searchreplace', 'visualblocks', 'code', 'insertdatetime', 'media',
          'table', 'help',
          // 'wordcount',
        ],
        toolbar: 'undo redo | blocks | bold italic forecolor | link image| alignleft aligncenter alignright alignjustify | bullist numlist outdent indent | removeformat searchreplace help',
        content_style: 'body { font-family:Helvetica,Arial,sans-serif; font-size:14px } img { max-width: 100%; height: 370px; }',
        ...props,
      }}
      onEditorChange={onChange}
      onNodeChange={(e) => {
        if (e && e.element.nodeName.toLowerCase() == 'img') {
          editorRef.current.dom.setAttribs(e.element, {
            width: '100%',
            height: '370px',
          })
        }
      }}
      onFocus={onFocus}
    />
  )
}

MyEditor.propTypes = {
  initialValue: PropTypes.string,
  onChange: PropTypes.func,
  onFocus: PropTypes.func,
}
```

Upvotes: 0
2018/03/15
811
2,315
<issue_start>username_0: I'm trying to reindex a dataframe's multiindex at one sublevel. The df in question looks like this:

```
test = pd.DataFrame({
    'day':[1,3,5],
    'position':['A', 'B', 'A'],
    'value':[20, 45, 3]
})

test.set_index(['day', 'position'])

>>>
              value
day position
1   A            20
3   B            45
5   A             3
```

And my goal is to reindex the `day` level to transform the dataframe into the following:

```
>>>
              value
day position
1   A          20.0
2   A          20.0
3   A          20.0
4   A          20.0
5   A           3.0
1   B           0.0
2   B           0.0
3   B          45.0
4   B          45.0
5   B          45.0
```

So essentially I need to reindex `day` to days 1 through 5 for every position group, and then forward-fill and fill the remaining NaNs with 0.<issue_comment>username_1: Use:

* first reshape by [`unstack`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html)
* add missing days by [`reindex`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html)
* forward fill by `ffill`
* replace the first `NaN`s by [`fillna`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html)
* reshape back by [`stack`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html)
* [`sort_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html) for the expected output

---

```
df = (test.set_index(['day', 'position'])
          .unstack()
          .reindex(range(1,6))
          .ffill()
          .fillna(0)
          .stack()
          .sort_index(level=[1,0]))
print (df)
              value
day position
1   A          20.0
2   A          20.0
3   A          20.0
4   A          20.0
5   A           3.0
1   B           0.0
2   B           0.0
3   B          45.0
4   B          45.0
5   B          45.0
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: I reordered your index:

```
test.set_index(['position', 'day']).reindex(pd.MultiIndex.from_product([['A','B'],list(range(1,6))])).sort_index().groupby(level=0).ffill().fillna(0)
Out[30]:
     value
A 1   20.0
  2   20.0
  3   20.0
  4   20.0
  5    3.0
B 1    0.0
  2    0.0
  3   45.0
  4   45.0
  5   45.0
```

Upvotes: 1
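For completeness, a self-contained, runnable version of the accepted answer's pipeline; this is a sketch that only assumes pandas is installed, with the frame construction copied from the question:

```python
import pandas as pd

test = pd.DataFrame({
    'day': [1, 3, 5],
    'position': ['A', 'B', 'A'],
    'value': [20, 45, 3],
})

out = (test.set_index(['day', 'position'])
           .unstack()             # positions become columns, days become the index
           .reindex(range(1, 6))  # add the missing days 2 and 4 as NaN rows
           .ffill()               # carry each position's last value forward
           .fillna(0)             # days before a position's first sale get 0
           .stack()               # back to a (day, position) MultiIndex
           .sort_index(level=[1, 0]))  # order by position, then day
print(out)
```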
2018/03/15
861
3,485
<issue_start>username_0: I need to prioritize the project's routes over the packages' routes in Laravel 5.6.12. I've read that one solution could be placing the RouteServiceProvider call before the packages' calls. All right, but by default, when I install the dependencies with Composer, all the external ServiceProviders appear before the RouteServiceProvider.

If I check my generated bootstrap/cache/services.php:

```
23 => 'Fideloper\\Proxy\\TrustedProxyServiceProvider',
24 => 'Laravel\\Tinker\\TinkerServiceProvider',
25 => 'Yajra\\DataTables\\DataTablesServiceProvider',
26 => 'Spatie\\Permission\\PermissionServiceProvider',
27 => 'Intervention\\Image\\ImageServiceProvider',
28 => 'Spatie\\MediaLibrary\\MediaLibraryServiceProvider',
29 => 'Spatie\\LaravelImageOptimizer\\ImageOptimizerServiceProvider',
30 => 'Laracasts\\Flash\\FlashServiceProvider',
31 => 'Jenssegers\\Agent\\AgentServiceProvider',
32 => 'DaveJamesMiller\\Breadcrumbs\\BreadcrumbsServiceProvider',
33 => 'JoseAragon\\MyPackage\\MyPackageServiceProvider',
34 => 'App\\Providers\\AppServiceProvider',
35 => 'App\\Providers\\AuthServiceProvider',
36 => 'App\\Providers\\EventServiceProvider',
37 => 'App\\Providers\\RouteServiceProvider',
```

RouteServiceProvider is the last item. I can't put it before the package, because in my config/app.php I don't have the ServiceProviders that appear in the generated services.php. I need to put 37 -> RouteServiceProvider before 33 -> MyPackageServiceProvider, which has a lot of routes. Can you help me? I really need to use the package routes, but if I create a new route in the Laravel project, it should override and take priority over the package routes. Do you know another solution?

Thanks a lot!!!<issue_comment>username_1: In your `config/app.php`, inside the providers array where you are registering the `ServiceProvider`:

```
$providers = [
    //other Service Providers

    MyPackageServiceProvider::class,
    RouteServiceProvider::class
];
```

If you run `php artisan optimize`, your `MyPackageServiceProvider` will load first.

Upvotes: -1 <issue_comment>username_2: `Illuminate\Foundation\Application::registerConfiguredProviders` is the issue here.

**Solution:** Create a namespace like `Illuminate\CustomServices` and place your ServiceProvider within it.

---

More background on the problem: `Illuminate\Foundation\Application::registerConfiguredProviders`

1. creates a collection from your app config's providers array;
2. splits this array into 2 chunks [everything that starts with `Illuminate\`, the rest of it];
3. adds all the Composer packages' service providers in between;

And this will give you a result array where all your ServiceProviders are ranked as you ranked them, but after everything that starts with `Illuminate\` and after third-party Composer ServiceProviders.

Upvotes: -1 <issue_comment>username_3: You have to **disable the auto-discovery feature** for the third-party library. To do that, open your *composer.json* file and add the libraries you want to disable auto-discovery for in the extra section, like this:

```
"extra": {
    "laravel": {
        "dont-discover": [
            "vendor/library-name",
            "spatie/laravel-permission"
        ]
    },
```

Then **manually register the providers** of those libraries in any order you want in the *config/app* file of your Laravel project. This will fix the problem of having an auto-discovered provider registered before some Laravel default provider. You can now arrange your own provider order as you want.

Upvotes: 2
2018/03/15
595
2,266
<issue_start>username_0: I have 4 different lines/commands (the addition is just an example):

```
one<- (1+1)
two<- (2+2)
three<-(3+3)
four<-(4+4)
```

I need to randomly run any of these four command lines (one, two, three or four); I am not focused on the addition result. I tried:

```
list=c("one", "two", "three", "four")
number <- sample(list, 1)
number
```

but unfortunately it didn't run the line/command. I expect the sampling to run any of these 4 commands in the console.

Thanks in advance
2018/03/15
1,146
3,945
<issue_start>username_0: Starting from Windows 10 Fall Creators Update (version 16299.15) and OneDrive build 17.3.7064.1005, On-Demand Files are available to users (<https://support.office.com/en-us/article/learn-about-onedrive-files-on-demand-0e6860d3-d9f3-4971-b321-7092438fb38e>).

Any OneDrive file can now have one of the following types: online-only, locally available, and always available.

Using WinAPI, how can I find out that a file (e.g. "C:\Users\Username\OneDrive\Getting started with OneDrive.pdf") is an online-only file?<issue_comment>username_1: Take a look at the `PKEY_FilePlaceholderStatus` property for the file ([at the shell level](https://blogs.msdn.microsoft.com/benkaras/2006/08/29/property-consumerism/), not the file-system level). [This blog post](https://blogs.msdn.microsoft.com/matthew_van_eerde/2013/09/24/shellproperty-exe-v2-read-all-properties-on-a-file-set-properties-of-certain-non-vt_lpwstr-types/) has an example program you can test.

[This question](https://stackoverflow.com/questions/47803994/how-to-create-an-iconlist-property-in-the-windows-property-system) also hints at some undocumented properties you might want to take a look at.

Microsoft has a UWP [example on MSDN](https://learn.microsoft.com/en-us/windows/uwp/files/quickstart-determining-availability-of-microsoft-onedrive-files).

Upvotes: 1 <issue_comment>username_2: To check for "online only", all you need is to call `GetFileAttributes()` and see if the `FILE_ATTRIBUTE_OFFLINE` attribute is set.

In fact, this isn't new for OneDrive; that attribute has existed for a long time.

There are other OneDrive attributes available via the shell (although the property you need is `PKEY_StorageProviderState` rather than `PKEY_FilePlaceholderStatus`), but "online only" is easy to check for.

Edit: Another filesystem attribute, `FILE_ATTRIBUTE_PINNED`, *is* new for Windows 10, and is used by OneDrive to indicate a file that's "always available".

Edit: As of 2019 it appears that OneDrive now uses `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS` rather than `FILE_ATTRIBUTE_OFFLINE`, as suggested below.

Edit: `PKEY_StorageProviderState` was broken in Windows 10 1903, and still not fixed in 1909. It returns 4 ("uploading") for all files in any apps other than Explorer.

Upvotes: 2 <issue_comment>username_3: After years, I'm still using the `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS` attribute described [here](https://learn.microsoft.com/en-gb/windows/win32/fileio/file-attribute-constants?redirectedfrom=MSDN) to determine if a file or a directory is completely present locally or not.

The Microsoft docs say the following for `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS`:

> *When this attribute is set, it means that the file or directory is not fully present locally. For a file that means that not all of its data is on local storage (e.g. it may be sparse with some data still in remote storage). For a directory it means that some of the directory contents are being virtualized from another location. Reading the file / enumerating the directory will be more expensive than normal, e.g. it will cause at least some of the file/directory content to be fetched from a remote store. Only kernel-mode callers can set this bit.*

There are some advantages to `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS`:

1. It can be used for both files and directories.
2. It can be set in kernel mode only, so there is no chance for anyone to set the attribute arbitrarily.
And as described in [this answer](https://superuser.com/questions/1214542/what-do-new-windows-8-10-attributes-mean-no-scrub-file-x-integrity-v-pinn/1287315), there are still some interesting undocumented attributes which can provide additional information about cloud files.

Note: I didn't accept username_2's answer because I mentioned the `FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS` attribute in the comments and started using it a year before he updated his answer.

Upvotes: 4 [selected_answer]
2018/03/15
892
2,971
<issue_start>username_0: I am attempting to create an implementation of the A\* algorithm on a 2D grid, and I am stuck at the point where I need to create a set of a node's neighbours. Below are the structs I am using:

```
// Holds values for x and y locations on the grid
struct Coord {
    int x, y;
};

// holds data for each node required for A*
struct Node {
    int type; // used for defining if this node is a blocker, empty, start or end
    Coord location;
    int g = 0;
    int h = 0;
    int f = g + h;
    Node *parent_; // pointer to this node's parent
    std::string debugmessage;
};
```

The error appears when I create this function here:

```
// finds a node's neighbours for A*
std::set<Node> neighbours(Node& n_) {
    std::set<Node> neighbours_;
    Node temp = n_;
    int x = temp.location.x;
    int y = temp.location.y;

    // start at the location belonging to 'n_'
    for (y; y < HEIGHT; y++) {
        for (x; x < WIDTH; x++) {
            // east
            if (x < WIDTH - 1) {
                neighbours_.insert(astarArray[x + 1][y]);
            }
            // west
            if (x > 0) {
                neighbours_.insert(astarArray[x - 1][y]);
            }
            // south
            if (y < HEIGHT - 1) {
                neighbours_.insert(astarArray[x][y + 1]);
            }
            // north
            if (y > 0) {
                neighbours_.insert(astarArray[x][y - 1]);
            }
        }
    }
    return neighbours_;
}
```

Thank you for your time.<issue_comment>username_1: You cannot have a std::set of something without overloading operator< or defining your own custom comparator. A std::set is typically a red-black tree with the objects being the keys, and that requires being able to compare the keys. So either you make an operator< for nodes or you make a custom comparator. Info on the custom comparator [here](https://stackoverflow.com/questions/2620862/using-custom-stdset-comparator).

Upvotes: 2 <issue_comment>username_2: > std::set is an associative container that contains a sorted set of unique objects of type Key. Sorting is done using the key comparison function Compare.

[source](http://en.cppreference.com/w/cpp/container/set)

You have to overload operator< for your node.

Upvotes: 1 <issue_comment>username_3: Many std containers need comparison operators to work. You use `std::set`, which doesn't know how to compare two `Node` objects. As said in <http://en.cppreference.com/w/cpp/container/set>:

> std::set is an associative container that contains a sorted set of unique objects of type Key. Sorting is done using the key comparison function Compare.

So you need to define a comparison operator, or give `std::set` a Compare functor as a parameter. The compiler tells you the first one missing: "<"

```
struct Node {
    friend bool operator< (const Node& _nLeft, const Node& _nRight); //friend not necessary since we use struct (full public)
    ...
};

bool operator< (const Node& _nLeft, const Node& _nRight) {
    if (_nLeft.type < _nRight.type)
        return true;
    ...
    return false;
}
```

Upvotes: -1 [selected_answer]
2018/03/15
1,427
6,264
<issue_start>username_0: As per subject, I updated the `Owin.Security.WsFederation` and dependent packages to version 4.0 and I get the error. I did not make any code changes other than changing ``` using Microsoft.IdentityModel.Protocols; ``` to ``` using Microsoft.IdentityModel.Protocols.WsFederation; ``` where is the `WsFederationConfiguration` class seems to be now. Here is my *StartupAuth*: ``` public void ConfigureAuth(IAppBuilder app) { app.UseCookieAuthentication( new CookieAuthenticationOptions { AuthenticationType = CookieAuthenticationDefaults.AuthenticationType }); // Create WsFed configuration from web.config wsfed: values var wsconfig = new WsFederationConfiguration() { Issuer = ConfigurationManager.AppSettings["wsfed:Issuer"], TokenEndpoint = ConfigurationManager.AppSettings["wsfed:TokenEndPoint"], }; /* * Add x509 certificates to configuration * */ // certificate.1 must always exist byte[] x509Certificate; x509Certificate = Convert.FromBase64String(ConfigurationManager.AppSettings["wsfed:certificate.1"]); wsconfig.SigningKeys.Add(new X509SecurityKey(new X509Certificate2(x509Certificate))); // certificate 2 may exist if (ConfigurationManager.AppSettings["wsfed:certificate.2"] != null) { x509Certificate = Convert.FromBase64String(ConfigurationManager.AppSettings["wsfed:certificate.2"]); wsconfig.SigningKeys.Add(new X509SecurityKey(new X509Certificate2(x509Certificate))); } // certificate 3 may exist if (ConfigurationManager.AppSettings["wsfed:certificate.3"] != null) { x509Certificate = Convert.FromBase64String(ConfigurationManager.AppSettings["wsfed:certificate.3"]); wsconfig.SigningKeys.Add(new X509SecurityKey(new X509Certificate2(x509Certificate))); } // Apply configuration to wsfed Auth Options var wsoptions = new WsFederationAuthenticationOptions { SignInAsAuthenticationType = CookieAuthenticationDefaults.AuthenticationType, Configuration = wsconfig, Wreply = ConfigurationManager.AppSettings["wsfed:Wreply"], Wtrealm = ConfigurationManager.AppSettings["wsfed:Wtrealm"], }; wsoptions.TokenValidationParameters.NameClaimType = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"; // Add WdFederation middleware to Owin pipeline app.UseWsFederationAuthentication(wsoptions); } ``` Is there something else 4.0 needs to validate the signature? I assume it's talking about the signature of the token from the issuer. I didn't see how to enable ShowPII to see what key it's looking at. I am using MVC5 with the full framework. Not core. **Update**: I tried to modify the code to use the metadata provided by the identity provider in a properties file to create the `WsFederationConfiguration` and I still get the same error. I'm not sure what the Signature is, or where I get it from if it's not in the idp metadata. **Update2**: Here are the changes I made to use the wsfed metadata provided by the sts in a properties file. (I have removed the actual base64 encoded metadata, but needless to say it is the same XML you get when you regest the metadata from an STS that publishes it as and endpoint. 
As I said above, I get the same error: ``` public void ConfigureAuth(IAppBuilder app) { WsFederationConfiguration wsconfig; app.UseCookieAuthentication( new CookieAuthenticationOptions { AuthenticationType = CookieAuthenticationDefaults.AuthenticationType }); var metaDataDocument = System.Text.Encoding.UTF8.GetString( Convert.FromBase64String("...c2NyaXB0b3I+")); using (var metaDataReader = XmlReader.Create(new StringReader(metaDataDocument), SafeSettings)) { wsconfig = (new WsFederationMetadataSerializer()).ReadMetadata(metaDataReader); } // Apply configuration to wsfed Auth Options var wsoptions = new WsFederationAuthenticationOptions { SignInAsAuthenticationType = CookieAuthenticationDefaults.AuthenticationType, Configuration = wsconfig, Wreply = ConfigurationManager.AppSettings["wsfed:Wreply"], Wtrealm = ConfigurationManager.AppSettings["wsfed:Wtrealm"], }; wsoptions.TokenValidationParameters.NameClaimType = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"; // Add WdFederation middleware to Owin pipeline app.UseWsFederationAuthentication(wsoptions); } ```<issue_comment>username_1: The easiest way to use WIF with owin is through **the usage of the federation meta data** (which lives at *FederationMetadata/2007-06/FederationMetadata.xml*). **Then you don't need to setup anything at all** which is explained in [Configure claims based web applications using OWIN WsFederation middleware](http://blog.baslijten.com/configure-claims-based-web-applications-using-owin-wsfederation-middleware/) . The precondition is of course that your STS publishes a meaningful `FederationMetaData` document. The nice advantage is that your public keys needed for validation are automatically picked up by your application (and renewing them is done seamlessly). This is IMHO that is much easier than the approach you are taking. You can follow [Manual configuration of OWIN WS-Federation Identity provider](http://www.vodovnik.com/2015/04/15/manual-configuration-of-owin-ws-federation-identity-provider/) as it describes a more easy way than yours. Upvotes: 1 <issue_comment>username_2: I worked with some folks on the team at MS. The issue here was that our STS is using SHA1 to sign the token and the new version of weFederation doesn't support SHA1 as it is not-secure and is deprecated. Upvotes: 3 [selected_answer]
2018/03/15
285
1,083
<issue_start>username_0: I've been experimenting with Google Colab to work on Python notebooks with team members. However, the VMs that Colab runs on appear to only have ~13GB of RAM. The datasets we're working with require more (64 GB of RAM would be sufficient). Is there a way to increase the RAM available to Colab notebooks? Like by integrating with other services in the Google Cloud Platform?<issue_comment>username_1: Not at the moment, unfortunately. Upvotes: 2 [selected_answer]<issue_comment>username_2: Unfortunately, neither a Swap file is possible to create (the jupyter notebook don't have this kind of permission). Upvotes: 0 <issue_comment>username_3: You can edit (add or remove) ram or vcpu by powering off your instance. ![google cloud edit image](https://i.stack.imgur.com/MNe6F.png) Upvotes: -1 <issue_comment>username_4: Now it is possible to have 25 GBs ram on colab. After crashing the season then colab asks you to activate high-ram. [![enter image description here](https://i.stack.imgur.com/DOKqP.png)](https://i.stack.imgur.com/DOKqP.png) Upvotes: 0
2018/03/15
573
2,294
<issue_start>username_0: I have an angular js module where all the routes are set up. I have defined a variable "maintenance". If this is set to true, I want the page to be redirected to the maintenance page. I have set up the states using $stateProvider. I am trying to redirect using the code below - ``` if(maintenance){ $state.go('maintenance'); } ``` This doesn't seem to work. However, if I do the below, the redirect is successful - ``` $urlRouterProvider.otherwise('/signup/productSelector'); ``` I assume using "otherwise" may not be the correct solution in this case. How can I redirect? **EDIT** In the below example, I would like any call to app.html to be redirected to the maintenance page irrespective of what is present after #. ``` https:///app.html#/orders/resi ```<issue_comment>username_1: You cannot use the state service within a configure method. Instead, if you'd like to redirect to a certain state after the angular module has been loaded, you could do it in a .run function: ``` angular.module().run(['$state', '$rootScope', function($state, $rootScope) { $rootScope.$on('$stateChangeStart', function(e, toState, toParams, fromState, fromParams) { if (maintenance) { // block the transition and go to the maintenance state instead e.preventDefault(); $state.go('maintenance'); } }); }]); ```
2018/03/15
829
3,937
<issue_start>username_0: We currently have a big monolithic J2EE application (weblogic / DB2). It is a typical OLTP application. We are considering to split this application into 2 applications where each application has its own database which is not directly accessible by the other application. This also means that each application need to expose an interface for the functionality that is needed by the other application. So what are potentially the major benefits of splitting such an existing application into 2 applications ?<issue_comment>username_1: We can't weigh advantages and disadvantages without looking what that application is all about, what are business rules, how you are dividing application and how two applications share business rules. Dividing an application into two application is not just diving java classes into two groups. It require depth analysis from different perspective. Hope this helps. Upvotes: 0 <issue_comment>username_2: Most of applications using 10% of code during 90% of the time. The core idea of the micro-services which is modern SOA. You are able to elastically scale the critical part of your application in the micro-service specific special cloud. Cloud is an elastic cluster, where each node is a virtual server (XEN or VMware etc.). Cloud can extend or shrink nodes count automatically according to the load factor, without manual attention. With the classic monolithic applications, you need to scale a whole monolithic application. Usually such application use a large amount of RAM, as well as require strong hardware or strong and expensive virtual server. Another disadvantage of monolithic - if you need to release a new business feature, release cycle will be really long, since you've done a modification to the huge and complicated code-base with a code entropy. It will require time/budget expensive regression testing. But you have a benefit - different application parts (subsystems and modules) can be integrated much easier comparative to the SOA approach, off cause if you have good application design. On another side - you can split your application logic to the set of small applications, such application called a micro-service. For instance you have one micro-service responsible for UI only - i.e. only Spring MVC with Angluar.js, another micro-service for business logic and persistence, and one more micro-service for obtaining data from social networks. All those services are interconnected with each other using some web-services usually RestFull, but can be SOAP or something like Google Protocol Buffers RPC etc. So now you are able to auto-scale only UI micro-service, which is expected to be performance critical, without touching business logic and social network micro-services. And you are able to update UI micro-service even once a weak, since UI only testing is not so expensive like business logic testing. But there is a disadvantage - cluster structure became more complicated, and require stronger team to maintain it (usually automate with some Chef or Docker based scripts). It is also hardly to implement subsystem integrations and you need to think about your system design more carefully. So, if you have a huge and complicated system which is hi-loaded (like Amazon.com, Ebay, Facebook or stackoverflow). SOA approach gives you an opportunity to save some money on infrastructure and hardware. But it will be more expensive in development. And if you system is very simple, i.e. internet cafe site with a few pages - monolithic approach is preferred. 
Upvotes: 2 [selected_answer]<issue_comment>username_3: If scalability isn't your concern, then I'd be pointing at the following benefits: * Increased change velocity - shorter time for a feature to get from the idea phase to production (lower complexity for developers) * Lower cost of testing (smaller scope to test) * Improved quality (again, smaller scope to test) Upvotes: 1
2018/03/15
519
1,527
<issue_start>username_0: I have code like this, where ROP is taken from excel (float numbers). After the loop val and count are float numbers and somehow Python round the result to integer. How to avoid this? ``` ROP = df['ROP fph'].values.tolist() ROP = np.array(ROP) ROPav = np.array([0]*len(ROP)) ave_width = 100 for i in range(len(ROP)): val = 0 count = 0 for j in range(ave_width): if i-j >= 0: val += ROP[i-j] count += 1 if i+j < len(ROP): val += ROP[i+j] count += 1 ROPav[i] = val/count ``` for example here the results for one line : ``` val = 16326.239 count = 200 val/count = 81.632 ROPav[i] = 81 ```<issue_comment>username_1: The documentation for [numpy.array](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.array.html) says: > > dtype : data-type, optional > > > The desired data-type for the array. If not given, then the type will > be determined as the minimum type required to hold the objects in the > sequence. This argument can only be used to ‘upcast’ the array. For > downcasting, use the .astype(t) method. > > > You initialize with: ``` ROPav = np.array([0]*len(ROP)) ``` Which makes this an int array. Try instead ``` ROPav = np.array([0.0]*len(ROP)) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You have to specify a data type or use a value with correct data type when calling `np.array`. Try: ``` ROPav = np.array([0.0]*len(ROP)) ``` Upvotes: 0
2018/03/15
1,360
4,147
<issue_start>username_0: This code throws an error when pass the value directly, but it doesn't show any error if pass the value by using parameter. --It throws an Error ``` DECLARE @sql NVARCHAR(4000) DECLARE @ID INT=1234 SET @sql = N'select [count] FROM dbo.Table_1 AS t JOIN dbo.table_2 AS t2 ON t.store_number = t2.store_number AND t2.[year] = 17 AND t2.week_of_year = 6 AND t2.day_of_week = 2 WHERE t.RC_ID = @ID' EXEC sp_executesql @sql ``` -- It throws an error ``` select [count] FROM dbo.Table_1 AS t JOIN dbo.table_2 AS t2 ON t.store_number = t2.store_number AND t2.[year] = 17 AND t2.week_of_year = 6 AND t2.day_of_week = 2 WHERE t.ID = 1234 -- IT WORKS DECLARE @sql DECLARE @ID INT SET @ID = 1234 select [count] FROM dbo.Table_1 AS t JOIN dbo.table_2 AS t2 ON t.store_number = t2.store_number AND t2.[year] = 17 AND t2.week_of_year = 6 AND t2.day_of_week = 2 WHERE t.ID = @ID ``` The Error is : > > Msg 245, Level 16, State 1, Line 1 Conversion failed when converting > the varchar value 'TEST' to data type int. > > > But there is No data like 'Test' in the table.<issue_comment>username_1: One of your values that you are comparing as integers contains a bad value: ``` select t2.* from table_2 t2 where try_convert(int, year) is null or try_convert(int, week_of_year) is null or try_convert(int, day_of_week) or try_convert(int, id) is null; ``` Whether the error occurs depends on the execution plan. Upvotes: 1 <issue_comment>username_2: One of your table columns that you are filtering is not a numeric data type. When you do ``` WHERE VarcharColumn = 1 ``` The SQL Server engine will always try to convert the most complex type to the simplest one, in this case "1" is a integer and VarcharColumn is `VARCHAR`, so the engine will try to convert all the values stored in VarcharColumn to integer before filtering by value 1. Since at least one value stored there is not an integer ("TEST") then the conversion fails and that message pops up. You have 2 solutions: * Validate and fix all your values in those columns so they are actually numbers and alter the data type to the corresponding one. * Compare against the same type. `WHERE Column = '1'` Of course always try to keep your data types in check. Also in your dynamicSQL query, the declaration of the `@ID` must be inside your script (it's also missing an initial value). ``` DECLARE @sql NVARCHAR(4000) SET @sql = N' DECLARE @ID INT = 1 select [count] FROM dbo.Table_1 AS t JOIN dbo.table_2 AS t2 ON t.store_number = t2.store_number AND t2.[year] = 17 AND t2.week_of_year = 6 AND t2.day_of_week = 2 WHERE t.RC_ID = @ID' EXEC sp_executesql @sql ``` The reason for the error poping up 'sometimes' is because the different forms of your statement are making the execution plan do things in different order. If it tries to convert the varchar value to int first, it will fail. If it tries to convert the int value to varchar (for example) then it won't fail. To find the problem, try this: ``` SELECT * FROM dbo.Table_1 AS T WHERE CONVERT(VARCHAR(200), T.store_number) = 'Test' OR CONVERT(VARCHAR(200), T.ID) = 'Test' SELECT * FROM dbo.table_2 AS T WHERE CONVERT(VARCHAR(200), T.store_number) = 'Test' OR CONVERT(VARCHAR(200), T.[year]) = 'Test' OR CONVERT(VARCHAR(200), T.week_of_year) = 'Test' OR CONVERT(VARCHAR(200), T.day_of_week) = 'Test' ``` Upvotes: 0 <issue_comment>username_3: from what i see you are trying to use a parameter(@id) into the sp\_executeSQL without never passing it. 
A quick fix would be to do something like that ``` DECLARE @sql NVARCHAR(4000) DECLARE @ID INT = 10 SET @sql = N'select [count] FROM dbo.Table_1 AS t JOIN dbo.table_2 AS t2 ON t.store_number = t2.store_number AND t2.[year] = 17 AND t2.week_of_year = 6 AND t2.day_of_week = 2 WHERE t.RC_ID = ' + cast(@ID as nvarchar(20)) EXEC sp_executesql @sql ``` Hope this helps Upvotes: 0
2018/03/15
875
2,738
<issue_start>username_0: I have the following trigger ``` SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER TRIGGER [dbo].[TG_LT_box_name_delete] ON [dbo].[lt_box_naming] AFTER DELETE AS Set NoCount On update a set used_customer_no = NULL -- select a.* , '' as '||||', b.* from lt_facility_ref a join LT_BOX_NAMING b on b.location = ((a.facility) + '/' + a.zone + '-' + a.row_box + '-' + convert(varchar,a.seat)) JOIN deleted i ON b.id_key = i.id_key ``` it's a bit of a messy trigger. I have three tables a main table `lt_box_naming` that on insert and update store data and modify fields. it has a location field that location field is the combination of several fields from another table `lt_facility_ref` the data looks like this ``` this is lt_facility_ref facility zone row_box seat seat_no used_customer_no PRK Balcony B 33 17559 8626324 PRK Balcony B 34 15889 NULL PRK Balcony B 35 17558 8626324 WZO Orchestra D 6 15890 NULL WZO Orchestra D 7 17557 3147711 this is lt_facility_ref id_key customer_no type location seat_no 1 8626324 I PRK/Balcony-A-1 17512 2 8626324 I PRK/Orchestra-B-101 8527 3 3147711 C PRK/CenterHouse-B-23 8526 4 1235878 I WZO/TopTier-EE-1 12222 ``` When a record gets inserted into `lt_box_naming` an insert trigger states using the same update code as above updates `used_customer_no` to the customer no where this account is saved and that works without issue. on a very rare occurrence we will need to do the reverse, on delete do the same logic and update the `used_customer_no` back to NULL to free it up. The reason for the join was because it was built like this -- the data comes from the `lt_facility_ref` and gets stored with a significant amount of other data in `lt_box_naming`<issue_comment>username_1: Seems like the delete trigger is on lt\_box\_naming and your UPDATE statement JOIN is also on that table. Since this trigger is AFTER DELETE, that row won't exist. Can you just remove the reference to LT\_BOX\_NAMING in the UPDATE statement and use the data from the **deleted** row to join lt\_facility\_ref? Upvotes: 1 <issue_comment>username_2: With `AFTER DELETE` the record is already gone so you can't use it in a join. You can use the already referenced `DELETED` instead. ``` update a set used_customer_no = NULL -- select a.* , '' as '||||', b.* from lt_facility_ref a JOIN deleted i ON i.location = ((a.facility) + '/' + a.zone + '-' + a.row_box + '-' + convert(varchar,a.seat)) ``` Upvotes: 3 [selected_answer]
2018/03/15
665
2,369
<issue_start>username_0: I made a calendar with the [jQuery UI plugin datepicker](http://api.jqueryui.com/datepicker/). --- What I want to do is to append an element after the `tr`, where the selected day is in. In the structure below, I marked the `tr`, which I try to select with `<---- *THIS ROW*`. The structure of the calendar, which the plugin creates, looks somehow like this: ``` 8 9 | ... ...and so on ``` So I tried to do that with the following code: ``` $(".ui-datepicker-calendar").find(".ui-datepicker-current-day").parent("tr").addClass("open"); ``` I tried several other methods like `.closest()`, but none of them worked either. Does anyone know how I can select that? Here you have a snippet: ```js $('#calendar').datepicker({ inline: true, firstDay: 1, showOtherMonths: true, onSelect: function(){ $(".ui-datepicker-calendar").find(".ui-datepicker-current-day").parent("tr").addClass("open"); } }); ``` ```css tr.open{ background:green; } ``` ```html ```<issue_comment>username_1: Try `$(".ui-datepicker-calendar tr:eq(1)")` Upvotes: 0 <issue_comment>username_2: It looks like the datepicker is redrawn each time a date gets selected. You can use `setTimeout()` function to schedule the class addition so that it gets executed after the DOM has changed and the picker has been redrawn: ```js $(function() { $('#calendar').datepicker({ inline: true, firstDay: 1, showOtherMonths: true, onSelect: function() { setTimeout(function() { $(".ui-datepicker-calendar") .find(".ui-datepicker-current-day") .parent().addClass("open"); }, 100); } }); }); ``` ```css tr.open { background: green; } ``` ```html ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: Your code is not working because there is not any element with the CSS class `ui-datepicker-current-day` at the moment of the event execution. You could postpone a bit the calling until the CSS class is set: ```js $('#calendar').datepicker({ inline: true, firstDay: 1, showOtherMonths: true, onSelect: function(){ setTimeout(function(){ $(".ui-datepicker-calendar").find(".ui-datepicker-current-day").parent("tr").addClass("open") }, 10); } }); ``` ```css tr.open{ background-color:green; } ``` ```html ``` Upvotes: 1
2018/03/15
924
3,472
<issue_start>username_0: Consider this property ``` @JsonProperty private Map myMap; ``` When a contained `java.util.Date` value is serialized as long, it will not be deserialized to `Date` again because the type information is not present in `Map`. How can I bypass the problem? I read answers about [this question](https://stackoverflow.com/questions/18796349/jackson-de-serializing-date-to-string-to-date-in-generic-maps) which would be a work around but there would be no way to distinguish strings containing dates from dates serialized as strings in the map. **Can I tell Jackson to include type information for each map value such that Jackson can deserialize them correctly?**<issue_comment>username_1: Implement a custom Deserializer and add the Annotation `@JsonDeserialize(using = DateDeserializer.class)` to your field. Take a look at this example: **Your Json-Bean**: ``` public class Foo { private String name; @JsonProperty @JsonDeserialize(using = DateDeserializer.class) private Map dates; [...] // getter, setter, equals, hashcode } ``` **Deserializer**: ``` public class DateDeserializer extends JsonDeserializer> { private TypeReference> typeRef = new TypeReference>() {}; @Override public Map deserialize(JsonParser p, DeserializationContext ctxt, Map target) throws IOException, JsonProcessingException { Map map = new ObjectMapper().readValue(p, typeRef); for(Entry e : map.entrySet()){ Long value = e.getValue(); String key = e.getKey(); if(value instanceof Long){ // or if("date".equals(key)) ... target.put(key, new Date(value)); } else { target.put(key, value); // leave as is } } return target; } @Override public Map deserialize(JsonParser paramJsonParser, DeserializationContext ctxt) throws IOException, JsonProcessingException { return this.deserialize(paramJsonParser, ctxt, new HashMap<>()); } } ``` **Simple test**: ``` public static void main(String[] args) throws Exception { Foo foo1 = new Foo(); foo1.setName("foo"); foo1.setData(new HashMap(){{ put("date", new Date()); put("bool", true); put("string", "yeah"); }}); ObjectMapper mapper = new ObjectMapper(); String jsonStr = mapper.writeValueAsString(foo1); System.out.println(jsonStr); Foo foo2 = mapper.readValue(jsonStr, Foo.class); System.out.println(foo2.equals(foo1)); } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: Finally, I came up with this solution. Deserializer: ``` private TypeReference> typeRef = new TypeReference>() { }; @Override public Map deserialize(JsonParser p, DeserializationContext ctxt, Map target) throws IOException { Map map = new ObjectMapper().readValue(p, typeRef); for (Map.Entry e : map.entrySet()) { if (e.getKey().endsWith("[date]")) { target.put(e.getKey().substring(0, e.getKey().length() - 6), new Date((Long) e.getValue())); } else { target.put(e.getKey(), e.getValue()); } } return target; } ``` Serializer: ``` @Override public void serialize(Map value, JsonGenerator gen, SerializerProvider serializers) throws IOException { Map adaptedValue = new HashMap<>(value); for (Map.Entry e : value.entrySet()) { if (e.getValue() instanceof Date) { adaptedValue.put(e.getKey() + "[date]", ((Date) e.getValue()).getTime()); adaptedValue.remove(e.getKey()); } } new ObjectMapper().writeValue(gen, adaptedValue); } ``` The map key is adapted dependent on the data type. This is easily extendable. Upvotes: 1
2018/03/15
1,806
5,758
<issue_start>username_0: I try to cipher and decipher a string in the simplest way but it does not work ... After few hours of research I try to post my problem here. I have reduce the problem at its minimum but it still doesn't work and I don't understand the error. Here is my code : ``` class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) val password = "<PASSWORD>" val plainText = "<PASSWORD>" val encryptText = encrypt(plainText, password) text.text = decrypt(encryptText, password) } private fun encrypt(plainText: String, password: String): String { val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding") val key = SecretKeySpec(password.toByteArray(charset("UTF-8")), "AES") cipher.init(Cipher.ENCRYPT_MODE, key) return String(cipher.doFinal(plainText.toByteArray(charset("UTF-8")))) } private fun decrypt(encrypted: String, password: String): String { val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding") val key = SecretKeySpec(password.toByteArray(charset("UTF-8")), "AES") val r = SecureRandom() r.setSeed(r.generateSeed(16)) val byteIV = ByteArray(16) r.nextBytes(byteIV) cipher.init(Cipher.DECRYPT_MODE, key, IvParameterSpec(byteIV)) return String(cipher.doFinal(encrypted.toByteArray(charset("UTF-8")))) } } ``` and here is my error : ``` AndroidRuntime: FATAL EXCEPTION: main Process: com.cryptapp, PID: 17710 java.lang.RuntimeException: Unable to start activity ComponentInfo{com.cryptapp/com.cryptapp.MainActivity}: javax.crypto.IllegalBlockSizeException: error:1e00007b:Cipher functions:OPENSSL_internal:WRONG_FINAL_BLOCK_LENGTH at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2778) at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2856) at android.app.ActivityThread.-wrap11(Unknown Source:0) at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1589) at android.os.Handler.dispatchMessage(Handler.java:106) at android.os.Looper.loop(Looper.java:164) at android.app.ActivityThread.main(ActivityThread.java:6494) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807) Caused by: javax.crypto.IllegalBlockSizeException: error:1e00007b:Cipher functions:OPENSSL_internal:WRONG_FINAL_BLOCK_LENGTH at com.android.org.conscrypt.NativeCrypto.EVP_CipherFinal_ex(Native Method) at com.android.org.conscrypt.OpenSSLCipher$EVP_CIPHER.doFinalInternal(OpenSSLCipher.java:570) at com.android.org.conscrypt.OpenSSLCipher.engineDoFinal(OpenSSLCipher.java:351) at javax.crypto.Cipher.doFinal(Cipher.java:1741) at com.cryptapp.MainActivity.decrypt(MainActivity.kt:47) at com.cryptapp.MainActivity.onCreate(MainActivity.kt:23) at android.app.Activity.performCreate(Activity.java:7009) at android.app.Activity.performCreate(Activity.java:7000) at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1214) at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2731) ``` Does anyone know what is going wrong with my code ? Thank you per advance.<issue_comment>username_1: You are getting this error because the input data is not a multiple of the block-size (16 bytes for AES). 
This is possible a duplicate of [Cipher: What is the reason for IllegalBlockSizeException?](https://stackoverflow.com/questions/16192140/cipher-what-is-the-reason-for-illegalblocksizeexception) Upvotes: 0 <issue_comment>username_2: The issue happens when you create a `String` out of the bytes returned by `Cipher.doFinal`, which are not meant to be used to create a `String` directly. Indeed, if you print the 2 byte arrays you get this: ``` Cipher.doFinal(..) -> [-124, -59, 116, -79, 27, 67, -75, 5, -6, -58, 9, -104, 16, 65, -9, -95] encryptText.toByteArray() -> [-17, -65, -67, -17, -65, -67, 116, -17, -65, -67, 27, 67, -17, -65, -67, 5, -17, -65, -67, -17, -65, -67, 9, -17, -65, -67, 16, 65, -17, -65, -67, -17, -65, -67] ``` You should save directly the byte array or convert it to something else, like Base64. Moreover, in the decryption `Cipher` you randomly generate an initialization vector, while in the encryption `Cipher` you don't, so you'll never be able to decrypt what you encrypted. A full example: ``` import java.security.SecureRandom import java.util.* import javax.crypto.Cipher import javax.crypto.spec.IvParameterSpec import javax.crypto.spec.SecretKeySpec fun main(args: Array) { val password = "<PASSWORD>" val plainText = "<PASSWORD>" val r = SecureRandom() r.setSeed(r.generateSeed(16)) val byteIV = ByteArray(16) r.nextBytes(byteIV) val encryptText = encrypt(plainText, password, byteIV) val bytes = Base64.getDecoder().decode(encryptText) println(decrypt(bytes, password, byteIV)) } fun encrypt(plainText: String, password: String, byteIV: ByteArray): String { val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding") val key = SecretKeySpec(password.toByteArray(charset("UTF-8")), "AES") cipher.init(Cipher.ENCRYPT\_MODE, key, IvParameterSpec(byteIV)) val bytes = cipher.doFinal(plainText.toByteArray(charset("UTF-8"))) return Base64.getEncoder().encodeToString(bytes) } fun decrypt(bytes: ByteArray, password: String, byteIV: ByteArray): String { val cipher = Cipher.getInstance("AES/CBC/PKCS5Padding") val key = SecretKeySpec(password.toByteArray(charset("UTF-8")), "AES") cipher.init(Cipher.DECRYPT\_MODE, key, IvParameterSpec(byteIV)) return String(cipher.doFinal(bytes)) } ``` Upvotes: 3 [selected_answer]
2018/03/15
1,110
3,535
<issue_start>username_0: I've started implementing an iOS game with swift and SpriteKit. I have an object called "bubble" which basically is an SKSpriteNode (with zPosition=0, with image) that have a child (which is an SKCropNode of a person image cropped to a circle, with zPozition=1). That's ok if one bubble covers another bubble as a whole, but somehow it seems like the bubbles are partially covered with the person images. a demo picture is provided: (my final output should be that bubble1 will be on top of child-of-bubble-2) Maybe the problem is that SpriteKit ignores child ordering and just set any node with a zPozition to be in it's correct place? That is - all the bubbles are drawn first because they have zPosition=0 and then all the bubble-children are drawn, as they all have zPosition=1? If this is the case, what can I do to make sure all bubble parts are drawn together (and other bubbles can cover that bubble, I don't care) knowing that I have a dynamic amount of bubbles? [![bubbles partially cover each other](https://i.stack.imgur.com/cNYQ4.jpg)](https://i.stack.imgur.com/cNYQ4.jpg)
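The reasoning in the last paragraph is essentially right: `zPosition` is resolved relative to the parent node, so with every bubble at 0 and every child at 1, all children render above all bubbles. A sketch of the usual fix — give each bubble its own layer so each child stays just above its own bubble and below the next one (`bubbles` is an assumed array of the bubble nodes):

```swift
import SpriteKit

// Bubble i renders at 2*i, its child at 2*i + 1, and bubble i+1 at
// 2*i + 2, so each bubble-plus-child group stays together.
func layerBubbles(_ bubbles: [SKSpriteNode]) {
    for (i, bubble) in bubbles.enumerated() {
        bubble.zPosition = CGFloat(i * 2)
        for child in bubble.children {
            child.zPosition = 1  // relative to the bubble, not the scene
        }
    }
}
```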
2018/03/15
1,505
4,056
<issue_start>username_0: I can plot a dataframe (2 "Y" values) and add vertical lines (2) to the plot, and I can specify custom legend text for either the Y values OR the vertical lines, but not both at the same time. ``` import pandas as pd import matplotlib.pyplot as plt d = {'x' : [1., 2., 3., 4.], 'y1' : [8., 6., 4., 2.], 'y2' : [-4., 13., 2.2, -1.1]} df = pd.DataFrame(d) ax = df.plot(x='x', y=['y1'], linestyle='-', color='b') df.plot(x='x', y=['y2'], linestyle='--', color='y', ax=ax) ax.legend(labels=['y1custom', 'y2custom']) plt.axvline(x=1.5, color='r', linestyle='--', label='vline1.5custom') plt.axvline(x=3.5, color='k', linestyle='--', label='vline3.5custom') plt.legend() # <---- comment out....or not....for different effects plt.show() ``` A key line in the code is "plt.legend()". With it in the code, I get this (note legend has dataframe column labels "y1" and "y2" instead of my desired custom labels): [![with plt.legend() call](https://i.stack.imgur.com/tdRYf.png)](https://i.stack.imgur.com/tdRYf.png) With "plt.legend()" removed, I get this (legend has my custom labels for the dataframe values only, legend for vertical lines does not even appear!): [![without plt.legend() call](https://i.stack.imgur.com/wGD2t.png)](https://i.stack.imgur.com/wGD2t.png) How can I get the best of both worlds, specifically the following (in whatever order) for my legend?: ``` y1custom y2custom vline1.5custom vline3.5custom ``` Sure I could rename the columns of the dataframe first, but...ugh! There must be a better way.<issue_comment>username_1: Each call to `legend()` overwrites the initially created legend. So you need to create one single legend with all the desired labels in. This means you can get the current labels via `ax.get_legend_handles_labels()` and replace those you do not like with something else. Then specify the new list of labels when calling `legend()`. 
``` import pandas as pd import matplotlib.pyplot as plt d = {'x' : [1., 2., 3., 4.], 'y1' : [8., 6., 4., 2.], 'y2' : [-4., 13., 2.2, -1.1]} df = pd.DataFrame(d) ax = df.plot(x='x', y=['y1'], linestyle='-', color='b') df.plot(x='x', y=['y2'], linestyle='--', color='y', ax=ax) ax.axvline(x=1.5, color='r', linestyle='--', label='vline1.5custom') ax.axvline(x=3.5, color='k', linestyle='--', label='vline3.5custom') h,labels = ax.get_legend_handles_labels() labels[:2] = ['y1custom','y2custom'] ax.legend(labels=labels) plt.show() ``` [![enter image description here](https://i.stack.imgur.com/Jvwbo.png)](https://i.stack.imgur.com/Jvwbo.png) Upvotes: 4 [selected_answer]<issue_comment>username_2: You could do this: ``` d = {'x' : [1., 2., 3., 4.], 'y1' : [8., 6., 4., 2.], 'y2' : [-4., 13., 2.2, -1.1]} df = pd.DataFrame(d) ax = df.plot(x='x', y=['y1'], linestyle='-', color='b') df.plot(x='x', y=['y2'], linestyle='--', color='y', ax=ax) l1 = plt.axvline(x=1.5, color='r', linestyle='--', label='vline1.5custom') l2 = plt.axvline(x=3.5, color='k', linestyle='--', label='vline3.5custom') #move ax.legend after axvlines and get_label ax.legend(labels=['y1custom', 'y2custom',l1.get_label(),l2.get_label()]) plt.show() ``` Output: [![enter image description here](https://i.stack.imgur.com/owxmI.png)](https://i.stack.imgur.com/owxmI.png) Upvotes: 1 <issue_comment>username_3: `label` can be passed to `plot()` as long as you specify the column being plotted: ``` import pandas as pd import matplotlib.pyplot as plt d = {'x' : [1., 2., 3., 4.], 'y1' : [8., 6., 4., 2.], 'y2' : [-4., 13., 2.2, -1.1]} df = pd.DataFrame(d) ax = df['y1'].plot(x='x', linestyle='-', color='b', label='y1custom') df['y2'].plot(x='x', linestyle='--', color='y', ax=ax, label='y2custom') plt.axvline(x=1.5, color='r', linestyle='--', label='vline1.5custom') plt.axvline(x=3.5, color='k', linestyle='--', label='vline3.5custom') plt.legend() plt.show() ``` This approach avoids having to mess around with the legend afterwards: [![matplotlib figure+custom legend](https://i.stack.imgur.com/U2UFn.png)](https://i.stack.imgur.com/U2UFn.png) Upvotes: 2
2018/03/15
481
1,964
<issue_start>username_0: PropertyInfo.GetValue() returns an object. I need to cast that object to the type returned from PropertyInfo.PropertyType. How can I do it? The only way I think is a switch on PropertyType.ToString(). Is there another way? TIA<issue_comment>username_1: var prop= PropertyInfo.getValue() as Propertytype; i use this for casting . Upvotes: -1 <issue_comment>username_2: What you are asking is not possible. You would need to choose what to do based on the type: ``` public class Foo { public void FooFunc() { ... } } public class Bar { public Foo Foo { get; set; } } ... public void Whatever() { Bar bar = new Bar(); bar.Foo = new Foo(); foreach (PropertyInfo propertyInfo in typeof(Bar).GetProperties()) { // the compiler will not know what the property is! // it would only be known at runtime so you will not be // able to access foo propertyInfo.PropertyType p = propertyInfo.GetValue(bar) as propertyInfo.PropertyType; // compiler error p.FooFunc(); // compiler error // you can do this: Foo p = propertyInfo.GetValue(bar) as Foo; if (p != null) // cast was successful p.FooFunc(); // but you would need such a check for each type you want // to include // alternatively use conditions based on the type object p = propertyInfo.GetValue(bar); if (p.GetType() == typeof(Foo)) ((Foo)p).FooFunc(); // or with pattern matching switch propertyInfo.GetValue(bar) { case Foo f: f.FooFunc; break; } } } ``` Upvotes: 0 <issue_comment>username_3: It IS possible. With the PropertyInfo Value and PropertyType you can: var typedPropertyVal = Convert.ChangeType(propertyVal, propertyType); This is the post that helped me: technico.qnownow.com/reflection-setvalue-type-conversion Upvotes: 1
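A small self-contained sketch of the `Convert.ChangeType` approach from the last answer — note it only works for `IConvertible` types and still returns a boxed `object`; a compile-time cast to a type known only at runtime is not possible, as the second answer explains (the `Config` class is made up for illustration):

```csharp
using System;
using System.Reflection;

class Config
{
    public int Retries { get; set; }
}

static class Demo
{
    static void Main()
    {
        var target = new Config();
        PropertyInfo prop = typeof(Config).GetProperty("Retries");

        object raw = "42"; // e.g. a value read from text
        // Convert the runtime value to the property's declared type.
        object typed = Convert.ChangeType(raw, prop.PropertyType);
        prop.SetValue(target, typed);

        Console.WriteLine(target.Retries); // 42
    }
}
```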
2018/03/15
550
2,157
<issue_start>username_0: Complete long shot, but is there a method, either via API or directly in the admin console to view the the last time a user accessed either an email or attachment within the account? For example: Tammy receives an email at 3:00 PM Tammy opens the email at 3:01 PM Tammy opens the email attachment at 3:02 PM --- Is is possible to know that Tammy opened the email at 3:01 PM and/or opened the attachment at 3:02 PM?
2018/03/15
1,908
6,700
<issue_start>username_0: I'm using Spring MVC 4.3.11.RELEASE and have a vanilla resource handler for static resources. It's working fine - for resources that exist. However if not, it appears to return a 404 to the DispatcherServlet which is happy with that response since it found a handler. I've got ControllerAdvice for NoHandlerFoundException which works fine for controllers but isn't meant to handle this case. So Spring MVC punts completely and I get the nasty Tomcat 404 response. I can find no way to configure handling for this case so I can return a proper response. With TRACE enabled for Spring, you see the following for such a request: ``` 2018-03-15T14:22:05,361 TRACE [] DispatcherServlet - Bound request context to thread: org.apache.catalina.connector.RequestFacade@597aa896 2018-03-15T14:22:05,361 DEBUG [] DispatcherServlet - DispatcherServlet with name 'dispatcher' processing GET request for [/creditcard/static/doh] 2018-03-15T14:22:05,361 TRACE [] DispatcherServlet - Testing handler map [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping@4b720a14] in DispatcherServlet with name 'dispatcher' 2018-03-15T14:22:05,361 DEBUG [] questMappingHandlerMapping - Looking up handler method for path /static/doh 2018-03-15T14:22:05,364 DEBUG [] questMappingHandlerMapping - Did not find handler method for [/static/doh] 2018-03-15T14:22:05,364 TRACE [] DispatcherServlet - Testing handler map [org.springframework.web.servlet.handler.BeanNameUrlHandlerMapping@67db7dde] in DispatcherServlet with name 'dispatcher' 2018-03-15T14:22:05,364 TRACE [] BeanNameUrlHandlerMapping - No handler mapping found for [/static/doh] 2018-03-15T14:22:05,364 TRACE [] DispatcherServlet - Testing handler map [org.springframework.web.servlet.handler.SimpleUrlHandlerMapping@4698270f] in DispatcherServlet with name 'dispatcher' 2018-03-15T14:22:05,364 DEBUG [] SimpleUrlHandlerMapping - Matching patterns for request [/static/doh] are [/static//**] 2018-03-15T14:22:05,364 DEBUG [] SimpleUrlHandlerMapping - URI Template variables for request [/static/doh] are {} 2018-03-15T14:22:05,364 DEBUG [] SimpleUrlHandlerMapping - Mapping [/static/doh] to HandlerExecutionChain with handler [ResourceHttpRequestHandler [locations=[ServletContext resource [/static//]], resolvers=[org.springframework.web.servlet.resource.PathResourceResolver@20537c7e]]] and 1 interceptor 2018-03-15T14:22:05,364 TRACE [] DispatcherServlet - Testing handler adapter [org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter@442b38d3] 2018-03-15T14:22:05,364 TRACE [] DispatcherServlet - Testing handler adapter [org.springframework.web.servlet.mvc.HttpRequestHandlerAdapter@3128c8a7] 2018-03-15T14:22:05,364 DEBUG [] DispatcherServlet - Last-Modified value for [/creditcard/static/doh] is: -1 2018-03-15T14:22:05,364 TRACE [] ResourceHttpRequestHandler - Applying "invalid path" checks to path: doh 2018-03-15T14:22:05,364 TRACE [] PathResourceResolver - Resolving resource for request path "doh" 2018-03-15T14:22:05,364 TRACE [] PathResourceResolver - Checking location: ServletContext resource [/static//] 2018-03-15T14:22:05,364 TRACE [] PathResourceResolver - No match for location: ServletContext resource [/static//] 2018-03-15T14:22:05,364 TRACE [] ResourceHttpRequestHandler - No matching resource found - returning 404 2018-03-15T14:22:05,364 DEBUG [] DispatcherServlet - Null ModelAndView returned to DispatcherServlet with name 'dispatcher': assuming HandlerAdapter completed request handling 
2018-03-15T14:22:05,364 TRACE [] DispatcherServlet - Cleared thread-bound request context: org.apache.catalina.connector.RequestFacade@597aa896 2018-03-15T14:22:05,364 DEBUG [] DispatcherServlet - Successfully completed request ``` Thoughts?<issue_comment>username_1: I had the same issue. My solution as follows: * spring.resources.add-mappings=false (either in yaml or application.properties) * in Config of WebMvcConfigurer 'addResourceHandlers' method add all your mappings, e.g.: ``` @Configuration public class WebMvcConfig implements WebMvcConfigurer { @Override public void addResourceHandlers(ResourceHandlerRegistry registry) { registry.addResourceHandler("/home.html**") .addResourceLocations("classpath:/static/views/home/"); } @Override public void configureDefaultServletHandling( DefaultServletHandlerConfigurer configurer) { // DO not enable this ! // configurer.enable(); // Remove this method! } } ``` * must remove default servlet handler, as it will try to handle and return with status 404, but never throws the NoHandlerFoundException * add GlobalExceptionHandler Controller advice: ``` @ControllerAdvice public class GlobalExceptionHandler { @ExceptionHandler(NoHandlerFoundException.class) public String handleNotFoundError(Exception ex) { return "redirect:/your404page"; } } ``` Hope this helps! For me it works perfectly! Upvotes: 2 <issue_comment>username_2: The username_1 answer only works for spring-boot, as `spring.resources.add-mappings=false` is a spring-boot configuration so re-writing required configuration to through NoHandlerFoundException and handle it. 1. Create a class `GlobalExceptionHandler` annotated with `@ControllerAdvice` ``` @ControllerAdvice public class GlobalExceptionHandler { @ExceptionHandler(NoHandlerFoundException.class) public String handleNotFoundError(Exception ex) { return "redirect:/yourCustom404page"; } } ``` 2. By default, when a page/resource does not exist the servlet container will throw a default 404 page. If you want a custom 404 response then you need to tell `DispatcherServlet` to throw the exception if no handler is found. We can do this by setting the `throwExceptionIfNoHandlerFound` servlet initialization parameter to `true` In spring-mvc java based configuration is ``` public class AppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer { ... @Override protected DispatcherServlet createDispatcherServlet(WebApplicationContext servletAppContext) { final DispatcherServlet servlet = (DispatcherServlet) super.createDispatcherServlet(servletAppContext); servlet.setThrowExceptionIfNoHandlerFound(true); return servlet; } } ``` if xml based configuration, initialize your dispatcher servlet like this ``` dispatcher org.springframework.web.servlet.DispatcherServlet throwExceptionIfNoHandlerFound true ``` Upvotes: 2
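For the resource-handler case in the question specifically, the trace shows that a handler *is* found (`SimpleUrlHandlerMapping` maps `/static/**` to `ResourceHttpRequestHandler`, which then writes the 404 itself), so `NoHandlerFoundException` never fires for those URLs even with `throwExceptionIfNoHandlerFound`. A container-level error page is one fallback that catches it regardless — a sketch assuming a JSP view:

```xml
<!-- web.xml: catches 404s written directly by handlers,
     including ResourceHttpRequestHandler misses -->
<error-page>
    <error-code>404</error-code>
    <location>/WEB-INF/views/notFound.jsp</location>
</error-page>
```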
2018/03/15
325
1,290
<issue_start>username_0: I am getting some variables from a JSON file using amp-list and amp-template. One of the variables I've gathered is a number that I need to round up or down. The number would normally be displayed as {{number}}. However, I need to manipulate this number to make it a whole number. I am not sure how to do the conversion on that amp var. any help is appreciated.<issue_comment>username_1: Mustache is a Logicless template so you can't do any math with it, all your data have to be prepared elsewhere before it has reached a mustache. More info here <https://mustache.github.io/mustache.5.html> Upvotes: 0 <issue_comment>username_2: If you're displaying data in an `amp-list` you can do the rounding inside the `src` attribute by using `amp-state` and `map`: ``` ... ``` and then use `roundedValue` in your template. See this page for the list of supported functions: <https://www.ampproject.org/es/docs/reference/components/amp-bind#white-listed-functions> But keep in mind that if your objects have a lot of fields you may run into the expression size limit that AMP imposes on expressions inside attributes (they can't perform more than 50 operations IIRC, including function calls, math operations, field dereferencing, etc). Upvotes: 2 [selected_answer]
2018/03/15
1,133
4,204
<issue_start>username_0: I'm using Visual Studio 2017, and I really need a menu with useful buttons in the toolbar, now when I got into the customization menu **TOOLS** -> **Customize** -> **Commands**. [![Customize -> Commands menu](https://i.stack.imgur.com/VaLJi.png)](https://i.stack.imgur.com/VaLJi.png) I made a custom menu `"MY\_MENU"` and menu item `"Close Project"`, but for some reason I just can't move the menu item into the menu (see picture). How do I do this? [![enter image description here](https://i.stack.imgur.com/gbiWe.png)](https://i.stack.imgur.com/gbiWe.png)<issue_comment>username_1: > > **Warning:** After you customize a toolbar or menu, make sure that its check box remains selected in the Customize dialog box. Otherwise, your changes won't persist after you close and reopen Visual Studio. > > > [Adding, removing, or moving a menu on the menu bar](https://msdn.microsoft.com/en-us/library/wdee4yb6.aspx#Anchor_0) ===================================================================================================================== 1. On the menu bar, choose Tools, Customize. The Customize dialog box opens. 2. On the Commands tab, leave the Menu bar option button selected, leave Menu Bar selected in the list next to that option, and then perform one of the following sets of steps: [![enter image description here](https://i.stack.imgur.com/OBa9T.png)](https://i.stack.imgur.com/OBa9T.png) [Adding, removing, or moving a toolbar](https://msdn.microsoft.com/en-us/library/wdee4yb6.aspx#Anchor_1) ======================================================================================================== 1. On the menu bar, choose Tools, Customize. The Customize dialog box opens. 2. On the Toolbar tab, perform one of the following sets of steps: To add a toolbar, choose the New button, specify a name for the toolbar that you want to add, and then choose the OK button. [![enter image description here](https://i.stack.imgur.com/LiGjD.png)](https://i.stack.imgur.com/LiGjD.png) [Customizing a menu or a toolbar](https://msdn.microsoft.com/en-us/library/wdee4yb6.aspx#Anchor_2) ================================================================================================== 1. On the menu bar, choose Tools, Customize. The Customize dialog box opens. 2. On the Commands tab, choose the option button for the type of element that you want to customize. 3. In the list for that type of element, choose the menu or toolbar that you want to customize, and then perform one of the following sets of steps: To add a command, choose the Add Command button. In the Add Command dialog box, choose an item in the Categories list, choose an item in the Commands list, and then choose the OK button. [![enter image description here](https://i.stack.imgur.com/rnjFT.png)](https://i.stack.imgur.com/rnjFT.png) Upvotes: 2 <issue_comment>username_2: I believe I had the same problem as the questioner, and managed to solve it, details below. Note: I use Visual Studio 2015 Professional, but this should apply to later versions as well. Scenario -------- I created a new Toolbar *FooToolBar*, and placed to the toolbars like this: ![BarMenu can be seen on the toolbar, next to it, a Solution Explorer icon](https://i.stack.imgur.com/hRxDf.png) Goal ---- To place the "Solution explorer" icon inside the menu. 
![BarMenu can be seen on the toolbar, next to it, a Solution Explorer icon; an arrow is pointing from the icon towards the menu](https://i.stack.imgur.com/FY6Rq.png) Problem ------- Inside the **Customize** window you can't just drag&drop the item to the menu: ![Customize window showed](https://i.stack.imgur.com/0SoVP.png) Solution -------- Inside the *Toolbar* combobox, a new item appeared, select that: ![Toolbar combobox showing FooToolBar | BarMenu](https://i.stack.imgur.com/kDYPP.png) After this, you will see a new area where you should place the icon you wanted to. Also, remove the previous icon from the previous, *FooToolBar* toolbar item. After doing this, press OK and it will work. Result ------ ![Image of the tool bar working](https://i.stack.imgur.com/7VES6.png) I hope I helped someone. :-) Upvotes: 3 [selected_answer]
2018/03/15
955
3,507
<issue_start>username_0: The aim of the template is to add subnets to an existing Vnet but when executing it using the powershell command ``` New-AzureRmResourceGroupDeployment -Name testing -ResourceGroupName rgname -TemplateFile C:\Test\deploy.json -TemplateParameterFile C:\Test\parameterfile.json ``` The following error is displayed and I really cant understand what it means.Here is the error **\* "Error: Code=InvalidRequestContent; Message=The request content was invalid and could not be deserialized: 'Cannot populate JSON array ontotype'Microsoft.WindowsAzure.ResourceStack.Frontdoor.Templates.Schema.TemplateResourceCopy'. Path 'properties.template.resources[0].copy' "\*** Following is my input file(parameter.json) ``` { "VNetSettings":{ "value":{ "name":"VNet2", "addressPrefixes":"10.0.0.0/16", "subnets":[ { "name": "sub5", "addressPrefix": "10.0.5.0/24" }, { "name":"sub6", "addressPrefix":"10.0.6.0/24" } ] } } } ``` The following is my template(deploy.json) ``` { "contentversion":"1.0.0.0", "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "parameters":{ "VNetSettings": {"type":"object"}, "noofsubnets": { "type":"int" } }, "resources": [ { "type":"Microsoft.Network/virtualNetworks/subnets", "apiVersion": "2015-06-15", "location":"[resourceGroup().location]", "copy": [ { "name":"subnets", "count":"[parameters('noofsubnets')]", "input": { "name": "[concat(parameters('VNetSettings').name, '/',parameters('VNetSettings').subnets[copyIndex('subnets')].name)]", "properties":{ "addressPrefix": "[parameters('VNetSettings').subnets[copyIndex('subnets')].addressPrefix]" } } } ] } ] } ``` I guess the error should be in and around the copy statement.<issue_comment>username_1: if you create a subnet resource, you need to structure json like a full blown resource: ``` "name": "[concat('bla/bla-', copyIndex())]", "type": xxx, "apiVersion": xxx, "location": xxx, "copy": { "name": xxx, "count": xxx }, "properties": { "addressPrefix": xxx } ``` and just use `copyIndex()` function. without `'subnets'` Upvotes: 1 <issue_comment>username_2: Here is the solution.Thanks to @username_1 for your leads. ``` { "contentversion":"1.0.0.0", "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "parameters":{ "VNetSettings":{"type":"object"}, "noofsubnets": { "type":"int" } }, "variables": { "vnetname":"[parameters('VNetSettings').name]" }, "resources": [ { "type":"Microsoft.Network/virtualNetworks/subnets", "name": "[concat(variables('vnetname'), '/',parameters('VNetSettings').subnets[copyIndex()].name)]", "apiVersion": "2015-06-15", "location":"[resourceGroup().location]", "properties":{ "addressPrefix": "[parameters('VNetSettings').subnets[copyIndex()].addressPrefix]" }, "copy":{ "name":"subnetcopy", "count":"[parameters('noofsubnets')]" } } ] } ``` Upvotes: 1 [selected_answer]
2018/03/15
669
2,045
<issue_start>username_0: Button element doesn't stretch to full parent width with the left/right zero technique. It works perfectly for the `a` tag but not for `button`. What am I missing? The question is why the left/right approach isn't working for button. I know that I can use a wrapper, `calc()` or flexbox. But it seems strange that the old way doesn't work.

[Fiddle](https://jsfiddle.net/9rtnj8rL/25/)

```css
#container {
  position: relative;
  width: 400px;
  height: 60px;
  padding: 0 20px;
  background: #ccc;
}

.button {
  position: absolute;
  left: 20px;
  right: 20px;
  height: 20px;
  background: green;
  color: white;
  border: 1px solid #000;
}

a.button {
  top: 0;
}

button.button {
  top: 20px;
  display: block;
}

button.button-full {
  width: 100%;
  top: 40px;
}
```

```html
<div id="container">
  <a class="button">Link button</a>
  <button class="button">Button button</button>
  <button class="button button-full">Button button</button>
</div>
```<issue_comment>username_1: You can set the button width to 100% so that it will be 100% of its parent container.

```
.button {
  position: absolute;
  left: 0;
  right: 0;
  width: 100%;
  background: green;
  color: white;
  border: 1px solid #000;
}
```

Upvotes: 0 <issue_comment>username_2: Add `width: calc(100% - 40px);` to the `.button`, as in

```
a.button {
  top: 0;
  width: calc(100% - 40px);
}
```

This will stretch it to the full width while retaining the parent element's 20px padding on each side.

Upvotes: 0 <issue_comment>username_3: You can set `width:inherit` on the `button`, but be aware that it won't work when `box-sizing:border-box` is set on the container; otherwise you will probably need `width:calc(100% - paddings)`, also mentioned in the other answer.

```
button {
  width: inherit;
}
```

Upvotes: 3 [selected_answer]<issue_comment>username_4: Your button is overflowing the parent since your button is `absolute` and has a `left` of `20px`. **There is no need to compensate for the parent padding.**

Change to `left: 0px;` or remove `position: absolute;`.

```
.button{
  position: absolute;
  left: 0px;
  ...
}
```

I would rather remove `position: absolute;`.

Upvotes: 0
2018/03/15
1,161
3,366
<issue_start>username_0: I'm learning Tkinter and am using the book `Tkinter by example`. I tried to test an example from the book, and something is wrong.

```
import Tkinter as tk

class Todo(tk.Frame):
    def __init__(self, tasks=None):
        tk.Frame.__init__(self, tasks)
        if not tasks:
            self.tasks = []
        else:
            self.tasks = tasks
        self.tasks.title("To-Do App v1")
        self.tasks.geometry("300x400")
        todol = tk.Label(self, text="---Ado Items Here---", bg="lightgrey", fg="black", pady=10)
        self.tasks.append(todol)
        for task in self.tasks:
            task.pack(side=tk.TOP, file=tk.X)
        self.task_creat = tk.Text(self, height=3, bg="white", fg="black")
        self.task_creat.pack(side=tk.BOTTOM, fill=tk.X)
        self.task_creat.focus_set()
        self.bind("<Return>", self.add_task)
        self.colour_schemes = [{"bg": "lightgrey", "fg": "black"}, {"bg": "grey", "fg": "white"}]

    def add_task(self, event=None):
        task_text = self.task_creat.get(1.0, tk.END).strip()
        if len(task_text) > 0:
            new_task = tk.Label(self, text=task_text, pady=10)
            _, task_style_choice = divmod(len(self.tasks), 2)
            my_scheme_choice = self.colour_schemes[task_style_choice]
            new_task.configure(bg=my_scheme_choice["bg"])
            new_task.configure(fg=my_scheme_choice["fg"])
            new_task.pack(side=tk.TOP, fill=tk.X)
            tk.append(new_task)
            self.task_create.delete(1.0, tk.END)

if __name__ == "__main__":
    todo = tk.Tk()
    app = Todo(todo)
    todo.mainloop()
```

Error raised:

```
Traceback (most recent call last):
  File "", line 1, in <module>
    runfile('E:/TKinter/tkinter_by_example/2_1.py', wdir='E:/TKinter/tkinter_by_example')
  File "D:\Anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile
    execfile(filename, namespace)
  File "D:\Anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile
    exec(compile(scripttext, filename, 'exec'), glob, loc)
  File "E:/TKinter/tkinter_by_example/2_1.py", line 39, in <module>
    app=Todo(todo)
  File "E:/TKinter/tkinter_by_example/2_1.py", line 18, in __init__
    self.tasks.append(todol)
  File "D:\Anaconda\lib\lib-tk\Tkinter.py", line 1904, in __getattr__
    return getattr(self.tk, attr)
AttributeError: append
```
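The traceback tells the story: `app = Todo(todo)` passes the Tk root window in as `tasks`; a window object is truthy, so the `else` branch assigns the window itself to `self.tasks`, and a window has no `append`, which Tkinter's `__getattr__` surfaces as `AttributeError: append`. A sketch of a corrected constructor, keeping the widget list separate from the parent window (the `master` naming is my choice, not necessarily the book's exact text):

```
import Tkinter as tk

class Todo(tk.Frame):
    def __init__(self, master=None):
        tk.Frame.__init__(self, master)
        self.master.title("To-Do App v1")
        self.master.geometry("300x400")
        self.tasks = []  # always a list; the window stays in self.master
        todol = tk.Label(self, text="---Add Items Here---",
                         bg="lightgrey", fg="black", pady=10)
        self.tasks.append(todol)
        for task in self.tasks:
            task.pack(side=tk.TOP, fill=tk.X)  # fill, not file
```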
2018/03/15
734
2,666
<issue_start>username_0: I have a function with a completion handler, returning one parameter or more. In a client, when executing a completion handler, I'd like to have an `unowned` reference to `self`, as well as having access to the parameter passed. Here is the Playground example illustrating the issue and the goal I'm trying to achieve. ``` import UIKit struct Struct { func function(completion: (String) -> ()) { completion("Boom!") } func noArgumentsFunction(completion: () -> Void) { completion() } } class Class2 { func execute() { Struct().noArgumentsFunction { [unowned self] in //... } Struct().function { (string) in // Need [unowned self] here //... } } } ```<issue_comment>username_1: Is it just the syntax for including [unowned self] in the closure parameter list you need? ``` struct Struct { func function(completion:(String)->()) { completion("Boom!") } } class Class { func execute() { Struct().function { [unowned self] string in print(string) print(self) } } } Class().execute() ``` [![enter image description here](https://i.stack.imgur.com/oKAfh.png)](https://i.stack.imgur.com/oKAfh.png) Upvotes: 1 <issue_comment>username_2: As I said in my comment ``` Struct().function { [unowned self] (string) in //your code here } ``` **Capture list** and **closure parameters** that should be the order in closures more info from [Apple Documentation](https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/AutomaticReferenceCounting.html) > > **Defining a Capture List** > > > Each item in a capture list is a pairing of > the weak or unowned keyword with a reference to a class instance (such > as self) or a variable initialized with some value (such as delegate = > self.delegate!). These pairings are written within a pair of square > braces, separated by commas. > > > Place the capture list before a closure’s parameter list and return > type if they are provided: > > > ``` lazy var someClosure: (Int, String) -> String = { [unowned self, weak delegate = self.delegate!] (index: Int, stringToProcess: String) -> String in // closure body goes here } ``` > > If a closure does not specify a parameter list or return type because > they can be inferred from > context, place the capture list at the very start of the closure, > followed by the in keyword: > > > ``` lazy var someClosure: () -> String = { [unowned self, weak delegate = self.delegate!] in // closure body goes here } ``` Upvotes: 4 [selected_answer]
2018/03/15
765
2,728
<issue_start>username_0: What is the time complexity of the following algorithm, how can I find the best and worst case in my code:

```
boolean b = true;
integer rn = 0;
for(int i=1; i<=n; i++) {
    for(int j=1; j<=m; j++) {
        rn = Math.Random() // random number between j and m
        if(j%rn==0)
            b = false;
        while(b) {
            for(int k=1; k<=o; k++) {
                for(int l=1; l<=p; l++) {
                    //some stuff
                    rn = Math.Random() // random number between l and p
                    if(l%rn==0)
                        b=false;
                }
            }
        }
        b = true;
    }
}
```

The first two for loops will always run so I guess this is the best case but how can I measure the worst case here?
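A hedged sketch of the analysis rather than a definitive answer, treating each `Math.Random()` call as an independent draw:

```
Best case:  Θ(n·m)    - if j % rn == 0 on every (i, j) iteration, b is false
                        before while(b) is reached, so each step is O(1).
Worst case: unbounded - while(b) only ends once some draw satisfies
                        l % rn == 0; an unlucky sequence of draws can defer
                        that indefinitely, so no finite worst case exists.
Each pass of the while body costs Θ(o·p), so if the loop typically ends
within a constant number of passes the expected cost is about Θ(n·m·o·p).
```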
2018/03/15
1,412
5,322
<issue_start>username_0: I am trying to parse some JSON into a class that has another class as one of its properties. I am using Newtonsoft.Json as my JSON parser.

```
private class OrderModel
{
    public string OrderId {get; set;}
    public string OrderDescription {get; set;}
    public List<OrderDetailModel> OrderItems {get; set;} // Collection of OrderDetails
}

private class OrderDetailModel
{
    public string ProductId {get; set;}
    public string ProductName {get; set;}
    public decimal UnitPrice {get; set;}
    public int Quantity {get; set;}
}
```

**Here is some sample JSON**

```
{
    ... //JSON data above here
    "transactionData": {
        "orders": [{
            "orderId": 111,
            "orderDescription": "Giant Food Mart",
            "orderItems": [{
                "productId": 65,
                "productName": "Dried Beef",
                "unitPrice": 10.00,
                "quantity": 7
            },
            {
                "productId": 23,
                "productName": "Carrots",
                "unitPrice": 1.25,
                "quantity": 100
            }
            ]
        },
        {
            "orderId": 112,
            "orderDescription": "Bob's Corner Variety",
            "orderItems": [{
                "productId": 523,
                "productName": "Red Licorice",
                "unitPrice": 0.50,
                "quantity": 27
            },
            {
                "productId": 321,
                "productName": "Gummy Worms",
                "unitPrice": 1.50,
                "quantity": 50
            }
            ]
        }
        ]
    }
    ... //JSON data below here
}
```

My C# code to populate the objects with the JSON data

```
var parsedJson = JObject.Parse(jsonResponse);
var transactionData = parsedJson["transactionData"]; // Jump to the transactionData node
var orders = transactionData
    .Select(x => new OrderModel
    {
        OrderId = (string)x["orderId"],
        OrderDescription = (string)x["orderDescription"],
        OrderItems = x["orderItems"].Select(y => new OrderDetailModel
        {
            ProductId = (string)y["productId"], // not being recognized
            ProductName = (string)y["productName"], // not being recognized
            UnitPrice = (decimal)y["unitPrice"], // not being recognized
            Quantity = (int)y["quantity"] // not being recognized
        }).ToList()
    }).ToList();
```

The problem is when I try and populate the `OrderDetailModel`. None of the properties of `OrderDetailModel` are being recognized by IntelliSense. Is there something wrong with my LINQ statement that I am missing? I wanted to use this method of populating the `OrderModel` and `OrderDetailModel` objects because the property names did not have to match the JSON property names. I was hoping to do the mapping in the LINQ Lambda statement.

**UPDATE**

To help clarify and answer some of the comments: I can't even compile the code. As soon as I try and type in one of the properties from `OrderDetailModel`, it is not recognized.

```
var parsedJson = JObject.Parse(jsonResponse);
var transactionData = parsedJson["transactionData"]; // Jump to the transactionData node
var orders = transactionData
    .Select(x => new OrderModel
    {
        OrderId = (string)x["orderId"],
        OrderDescription = (string)x["orderDescription"],
        OrderItems = x["orderItems"].Select(y => new OrderDetailModel
        {
            //properties for OrderDetailModel not recognized here
        }).ToList()
    }).ToList();
```

Is there a reason why this Lambda statement shouldn't work?
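A guess at the cause: `transactionData` is the `"transactionData"` object itself, and enumerating a `JObject` yields its properties rather than the orders, which derails type inference for the whole lambda chain (and the object initializer with it). A sketch that selects over the `"orders"` array instead (needs `using System.Linq;` and `using Newtonsoft.Json.Linq;`):

```
var parsedJson = JObject.Parse(jsonResponse);

// "orders" is a JArray, so x and y below are JTokens and the casts work
var orders = parsedJson["transactionData"]["orders"]
    .Select(x => new OrderModel
    {
        OrderId = (string)x["orderId"],
        OrderDescription = (string)x["orderDescription"],
        OrderItems = x["orderItems"].Select(y => new OrderDetailModel
        {
            ProductId = (string)y["productId"],
            ProductName = (string)y["productName"],
            UnitPrice = (decimal)y["unitPrice"],
            Quantity = (int)y["quantity"]
        }).ToList()
    }).ToList();
```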
2018/03/15
1,522
5,154
<issue_start>username_0: Applying XSL in Java works on some data samples, but produces an empty result on others, where a command line processor still produces a valid result. Below is an example where i see the difference. given an XSL and an XML listed below, the command line: ``` saxonb-xslt -s:metsmods_test3.xml -xsl:metsmods2.xsl ``` returns: ``` Warning: at xsl:stylesheet on line 5 column 61 of metsmods2.xsl: Running an XSLT 1.0 stylesheet with an XSLT 2.0 processor main_label:Wachsende Häuser aus lebenden Bäumen entstehend identifier:urn:nbn:de:kobv:83-goobi-3255500 main_label:II. Teil. Art und Verwendung der Naturbauten. identifier:urn:nbn:de:kobv:83-goobi-3255762 main_label:III. Teil. Erörterung sonstiger Punkte. identifier:urn:nbn:de:kobv:83-goobi-3255929 main_label: identifier:urn:nbn:de:kobv:83-goobi-3256094 main_label: identifier:urn:nbn:de:kobv:83-goobi-3256100 identifier.kobv:990006350260302884 title:Wachsende Häuser aus lebenden Bäumen entstehend title:Wachsende Häuser aus lebenden Bäumen entstehend ``` But the java tranformer snipplet (taken from <https://examples.javacodegeeks.com/core-java/xml/xpath/xpath-xslt-tutorial/> and listed below) produces no output on the same files saxonb-xslt does. I assume this has to do with namespaces or access to DTD's, which the command line processor and the chosen transformer handle differently. but what exactly? why no exceptions while processing in Java? code: ``` public class Main { private static Document document; public static void main(String[] args) throws Exception { DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance(); File xml = new File("/home/peter/1stax/src/metsmods_test3.xml"); File xsl = new File("/home/peter/1stax/src/metsmods2.xsl"); DocumentBuilder builder = factory.newDocumentBuilder(); document = builder.parse(xml); // Use a Transformer for output TransformerFactory transformerFactory = TransformerFactory.newInstance(); StreamSource style = new StreamSource(xsl); Transformer transformer = transformerFactory.newTransformer(style); DOMSource source = new DOMSource(document); StreamResult result = new StreamResult(System.out); transformer.transform(source, result); } ``` } xsl: ``` xml version="1.0" encoding="utf-8"? DMDLOG\_0000 main\_label: identifier: identifier.: identifier.: title: ``` xml data: ``` xml version="1.0" encoding="UTF-8"? Goobi - ugh-3.0-ugh-2.0.0-29-g3b6efe1 - 21−December−2016 Goobi Wachsende Häuser aus lebenden Bäumen entstehend Wachsende Häuser aus lebenden Bäumen entstehend 990006350260302884 Deutsche Gartenbaubibliothek#Monographien#Gartenbau Zentralbibliothek#Monographien#Technik, Architektur, Bauwesen ger [1926] https://creativecommons.org/publicdomain/mark/1.0/ 320 Seiten reformatted digital 8Af6500 Universitätsbibliothek der Technischen Universität Berlin aut Wiechula Arthur Wiechula, Arthur Berlin 2018 Universitätsbibliothek der Technischen Universität Berlin [Electronic ed.] ```<issue_comment>username_1: Make sure you explicitly use a namespace aware DocumentBuilderFactory if you want to process XML and XSLT with Java as otherwise you won't get a meaningful result, XSLT itself is XML depending on namespaces and any XML input it uses is also better processed with namespace support. Of course to simply use XML from a file as the input you don't need a DocumentBuilder and a DOMSource, you can use a StreamSource as well where then the XSLT processor takes care of processing the input in a namespace aware mode. 
Upvotes: 3 [selected_answer]<issue_comment>username_2: Following the answer by username_1, I have removed the completely unnecessary "parse to DOM" part from the snippet from "javacodegeeks" I was using (which must have been messing up the namespaces) and went the way of least resistance, just replacing it with a StreamSource. And it works now! Here is the new code:

```
import org.w3c.dom.Document;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

public class Main {

    private static Document document;

    public static void main(String[] args) throws Exception {

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();

        File xml = new File("/home/peter/1stax/src/metsmods_test3.xml");
        File xsl = new File("/home/peter/1stax/src/metsmods2.xsl");

        // Use a Transformer for output
        TransformerFactory transformerFactory = TransformerFactory.newInstance();
        StreamSource style = new StreamSource(xsl);
        Transformer transformer = transformerFactory.newTransformer(style);

        StreamSource source = new StreamSource (xml);
        StreamResult result = new StreamResult(System.out);

        transformer.transform(source, result);
    }
}
```

Upvotes: 2
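For completeness, username_1's first point (keeping the DOM route but making it namespace-aware) would look roughly like this; a sketch, not tested against the files above:

```
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
// Off by default: without this the DOM is built without namespace
// information and the stylesheet's match patterns fail silently.
factory.setNamespaceAware(true);
DocumentBuilder builder = factory.newDocumentBuilder();
Document document = builder.parse(xml);
DOMSource source = new DOMSource(document);
// ... hand source to the Transformer as in the original snippet
```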
2018/03/15
851
3,059
<issue_start>username_0: I'm implementing this [button](https://stackblitz.com/edit/angular-ypqkcf-eduhze?file=app%2Fprogress-spinner%2Fprogress-spinner.component.ts) in my app. In my button component I have: ``` @Input() callback: () => Promise; onClick() { this.spinnerService.show(); this.callback().then(() => { this.spinnerService.hide(); }, () => { this.spinnerService.hide(); }); } ``` (I don't use async / await because the "tech lead" doesn't want me too) The component template: ``` ``` Everything works fine when I pass this kind of code to the component's input: ``` doSomething = () => { return myService.doSomething() .toPromise() .then((response) => console.log('promise resolved')); } ``` But I need to break from this function earlier: ``` doSomething() { if (condition) { return; } return myService.doSomething() .toPromise() .then((response) => console.log('promise resolved')); } ``` I get > > ERROR TypeError: Cannot read property 'then' of undefined > > > I've tried forcing a promise with the same result ``` if (condition) { return Promise.resolve(null); // or return new Promise((resolve) => { resolve(null); }); } ```<issue_comment>username_1: try this ``` doSomething() { if (condition) { return new Promise((resolve,reject)=> reject('Some Error Message')); } return myService.doSomething() .toPromise() .then((response) => console.log('promise resolved')); } ``` Upvotes: 2 <issue_comment>username_2: For this input ``` @Input() callback: () => Promise; ``` there is no guarantee that it's assigned to proper value. Additional check if `callback` was specified should be added. A safer way to manage this is to use `async` functions in these places because they consistently use promises: ``` async onClick() { this.spinnerService.show(); try { if (this.callback) await this.callback() } finally { this.spinnerService.hide(); } } ``` `doSomething` is supposed to return a promise unconditionally, and this can also be addressed with `async`: ``` async doSomething() { if (!condition) return myService.doSomething().toPromise(); } ``` If `async` functions can't be used for some reason (although there are no good ones, because they are spec-compliant, first-class citizens in TypeScript), promises should be consistently processed in functions (also helpful for testing). `doSomething` isn't properly typed, and this allows improper function return. It should be: ``` onClick(): Promise { this.spinnerService.show(); return Promise.resolve(this.callback ? this.callback() : undefined) .catch(() => {}) .then(() => { this.spinnerService.hide(); }); } ``` and ``` doSomething(): Promise { if (condition) { return Promise.resolve(); } return myService.doSomething().toPromise(); } ``` And the type of `callback` will be `Promise`, not `Promise`. Regular (non-arrow) `doSomething` method should be bound to `this` in constructor. Upvotes: 4 [selected_answer]
2018/03/15
268
889
<issue_start>username_0: I have a scenario where there is a list of items and each item has a name and value selector side by side (so two inputs). The user selects the name (it's a radio button) and then selects the value. I am using `redux-form` and so far what I achieved: submitting gives the value as `{item1: 1, item2: 2}`

Now there will be lots of values for different category items and it will be a big messy object with all category data in one place, which I want to avoid. How can I get this one item's data as `{first: {item1: 1, item2: 2}}` or as a collection `[{item1: 1, item2: 2}]`?<issue_comment>username_1: Wrap items into `first` object:

```
```

On submitting you'll get `{first: {item1: 1, item2: 2}}`.

Upvotes: 5 [selected_answer]<issue_comment>username_2: You can also use `FormSection`

```
import { FormSection } from 'redux-form';
```

then..

```
```

Upvotes: 0
2018/03/15
295
1,072
<issue_start>username_0: I need a PowerShell script to check for recent backup files and list any folders which DO NOT contain a recent backup. I need to search a folder and its subfolders and only return the folder names which DO NOT contain a file `*.cfg` less than 30 days old. So far I have (c:\backups contains multiple subfolders which contain the *.cfg files) -

```
gci C:\backups\ -Recurse -Include *.cfg | Where-Object {$_.LastWriteTime -gt (Get-Date).AddDays(-30)}| Select Directory -Unique
```

This works but in reverse: it lists the folders which DO contain the file. I need to reverse the logic and list the folders which do not contain the file. I have tried several options with -not and ! in the Where-Object but nothing works. Would be great if someone could help.
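A sketch of one way to invert the logic, under the assumption that every directory under C:\backups is supposed to hold backups: first collect the directories that do have a recent .cfg, then list the rest.

```
# Directories that contain at least one recent *.cfg
$recent = Get-ChildItem C:\backups -Recurse -Include *.cfg |
    Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-30) } |
    Select-Object -ExpandProperty DirectoryName -Unique

# All candidate directories, minus the ones above
Get-ChildItem C:\backups -Directory -Recurse |
    Where-Object { $recent -notcontains $_.FullName }
```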
2018/03/15
264
900
<issue_start>username_0: I want to test an async API which accepts 2 parameters. The first is a string naming the operation to be performed and the second is a callback function. I want to test the response which I get as a parameter in the callback function.

```
someApi('getName', (response) => {
  // I want to test the response object.
  console.log(response);
})
```

I can test the API call like:

```
spyOn(window.someApi)
expect(someApi).toHaveBeenCalledWith('name_1', jasmine.any(Function))
//let mockResponse = {name:'name1'};
```

But how do I test if I am getting the correct response, i.e. response.name = 'name1'?
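A sketch of one way to do it, assuming `someApi` is a property of `window` that can be replaced: fake it so it synchronously hands a canned response to whatever callback the code under test registers, then assert on what arrived.

```
it('passes the expected response to the callback', () => {
  const mockResponse = { name: 'name1' };
  spyOn(window, 'someApi').and.callFake((operation, callback) => callback(mockResponse));

  let received;
  window.someApi('getName', (response) => { received = response; });

  expect(window.someApi).toHaveBeenCalledWith('getName', jasmine.any(Function));
  expect(received.name).toBe('name1');
});
```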
2018/03/15
394
1,418
<issue_start>username_0: It seems to list according to location but it isn't using my location, instead listing from the highest point of South Africa (Polokwane) to the lowest (Cape Town). I have tried changing my location many times and it makes no difference to the results.

```
LatLng myLoc;

public DistanceArrange(LatLng current) {
    myLoc = current;
}

@Override
public int compare(final Sites sites, final Sites sites1) {
    double lat = MapsActivity.DMStoDD(MapsActivity.removeLastChar(sites.getLatitude()));
    double lon = MapsActivity.DMStoDD(MapsActivity.removeLastChar(sites.getLongitude()));
    double lat1 = MapsActivity.DMStoDD(MapsActivity.removeLastChar(sites1.getLatitude()));
    double lon1 = MapsActivity.DMStoDD(MapsActivity.removeLastChar(sites1.getLongitude()));

    double distance = computeDistanceBetween(myLoc, new LatLng(lat, lon));
    double distance1 = computeDistanceBetween(myLoc, new LatLng(lat1, lon1));
    return (int)(distance - distance1 );
}
```

DMStoDD is a conversion function of my own which works fine, because I'm currently plotting all of these points on the map.
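Hard to diagnose remotely, but two hedged observations: the described north-to-south ordering regardless of position suggests `myLoc` may not hold the actual device location when the comparator is built, which is worth logging first; and independently, `(int)(distance - distance1)` truncates sub-metre differences to 0 and can misbehave on large values. A sketch of a safer comparator, where `distanceFor` is a helper I introduced for illustration:

```
@Override
public int compare(final Sites sites, final Sites sites1) {
    double distance = distanceFor(sites);
    double distance1 = distanceFor(sites1);
    // Double.compare avoids the (int) cast's truncation and overflow issues
    return Double.compare(distance, distance1);
}

// Hypothetical helper wrapping the existing DMStoDD + computeDistanceBetween calls
private double distanceFor(Sites s) {
    double lat = MapsActivity.DMStoDD(MapsActivity.removeLastChar(s.getLatitude()));
    double lon = MapsActivity.DMStoDD(MapsActivity.removeLastChar(s.getLongitude()));
    return computeDistanceBetween(myLoc, new LatLng(lat, lon));
}
```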
2018/03/15
552
1,511
<issue_start>username_0: I have an array like:

```
array(4) {
  [0]=>
  array(34) {
    ["id"]=>
    int(6)
    ["order_reference"]=>
    string(9) "200123130"
    ["store_reference"]=>
    ....
  [1]
  array(34) {
    ["id"]=>
    int(6)
    ["order_reference"]=>
    string(9) "20222220"
    ["store_reference"]=>
    ...
```

I would like to have this array:

```
array(4) {
  [200123130]=>
  array(34) {
    ["id"]=>
    int(6)
    ["order_reference"]=>
    string(9) "200123130"
    ["store_reference"]=>
    ....
  [20222220]
  array(34) {
    ["id"]=>
    int(6)
    ["order_reference"]=>
    string(9) "20222220"
    ["store_reference"]=>
    ...
```

This is what I tried:

```
$i = 0;
foreach ($data['data'] as &$key) {
    $data['data'][$key['order_reference']] = array_values($data['data'][$i]);
    $i++;
}
```

But I get a bigger array, starting like:

```
array(9) {
  [0]=>
  &array(34) {
    ["id"]=>
```

Any ideas how to fix this?<issue_comment>username_1: You can simply do:

```
$newarray = array_combine(array_column($orig, "order_reference"), $orig);
```

* array\_combine() -- Creates an array by using one array for keys and another for its values
* array\_column() -- Return the values from a single column in the input array

Upvotes: 3 [selected_answer]<issue_comment>username_2: You can do this with a single call to `array_column` using the third `$index_key` parameter:

```
$new = array_column($orig, null, 'order_reference');
```

See <https://eval.in/972672>

Upvotes: 1
2018/03/15
313
1,200
<issue_start>username_0: I am trying to reupload my apk after renaming the package name. I followed the directions [here](https://stackoverflow.com/questions/16804093/android-studio-rename-package) without issue. The problem comes up when I try to upload the new apk to Google Play. For some reason the package name comes up as the previous package name and stops me from uploading the apk with the conflicting package name. I am uploading this apk as a new app. I searched for the old package name throughout my project and it only comes up in the R.java file within the comments. Is there somewhere else I need to update the package name?
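A hedged guess at the cause: Google Play identifies an app by the `applicationId` in `app/build.gradle`, and Android Studio's package-rename refactor does not change that value, so the upload can still carry the old id even when the source tree is clean. A sketch of the field to check (the id shown is a placeholder):

```
android {
    defaultConfig {
        // Play uses this value, not the Java package in your source tree
        applicationId "com.example.mynewapp"
    }
}
```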
2018/03/15
815
2,807
<issue_start>username_0: I have got many cells in my file whose content is of the form

```
'14.05
```

I want to mass convert these cells, which contain text, to numbers, which I can compute with.

```
14.05
```

Is there a built-in function in LibreOffice for such mass conversion, or do I need to fumble around with a macro?<issue_comment>username_1: Use Find and Replace to edit the data again, as explained at <https://ask.libreoffice.org/en/question/1843/changing-text-to-numbers/?answer=68235#post-id-68235>.

> Put .* into 'Search For:' and & into 'Replace With:'

Then format the cells as numbers.

Related: [LibreOffice Calc: How to convert unformatted text to numbers (or other format)](https://stackoverflow.com/questions/16125504/libreoffice-calc-how-to-convert-unformatted-text-to-numbers-or-other-format)

Upvotes: 2 <issue_comment>username_2: If that apostrophe is a quote prefix rather than real cell content, like so:

[![enter image description here](https://i.stack.imgur.com/6YrtS.png)](https://i.stack.imgur.com/6YrtS.png)

(note the apostrophe is only shown in the formula bar but not in the cell)

then the following will be possible:

Select the whole column `A`. Then select `Data` - `Text to columns` from the menu bar:

[![enter image description here](https://i.stack.imgur.com/r9dZb.png)](https://i.stack.imgur.com/r9dZb.png)

Then click `OK`. Now all the content which looks like numbers will be converted to numeric.

The above works when dot is set as the decimal separator in your `Calc`. If you are using different locale settings where comma is set as the decimal separator, then `14.05` will never be treated as numeric; only `14,05` will be.

Upvotes: 6 [selected_answer]<issue_comment>username_3: Select the cells.
Right click / format cells.
Note what the format is currently set to - not what you expected, I'm sure.
Then choose the Number format you wish / enter.
Select the cells once again.
Menu / Data / Text to Columns.
Be sure to select Trim spaces / Enter.
That should do it.

Upvotes: 0 <issue_comment>username_4: "Text to Columns" works for one column only, not an array, so it may be tedious. My problem was in importing a text (e.g., CSV) file with numbers. Opening a CSV file (e.g., TAB-separated) in Libre Calc makes text-formatted numbers ('3.14). Pasting the same information into an empty ods-file converts the numbers to type Number/General. So, to convert a rectangle with 'numbers to numbers, mark your 'data, copy to a file (linux: cat > file.txt), open it in a suitable editor not destroying TABs, copy (or use xclip < file.txt if you have xclip installed), then paste into Libre Calc. An import dialog will appear - select TABs. You should have the decimal separator consistent with your locale.

Upvotes: 0
2018/03/15
920
2,577
<issue_start>username_0: Given is an generic Array T[] arr. Every Quadrupel in this Array shall be rotated left, so > > "a b c d e f" --> "b c d a e f" > > > If the Arrays End isn't a complete Quadrupel, these elements shouldn't rotate. My Idea was: ``` T[] arr2; int x; T temp0 = arr[x+0]; T temp1 = arr[x+1]; T temp2 = arr[x+2]; T temp3 = arr[x+3]; arr2[x+3] = temp0; arr2[x+2] = temp3; arr2[x+1] = temp2; arr2[x+0] = temp1; x += 1; ``` So I'm switching the Elements of the first Quadrupel and saving them to a second array. But how can I implement, that an incomplete Quadruple doesn't rotate, an it stops an the end of the Array?<issue_comment>username_1: Use `arr.length`. At every assignment, check to see if the length of the array is less than the value that you're currently at. Upvotes: 0 <issue_comment>username_2: Do the following to process each quad. Note that my code assume that you properly defined arr and arr2; **This is if you want the results in a new array `arr2`** ``` for(int i = 3; i < arr.length; i += 4) // i is pointing at the end of each quadruple, so it will never show an incomplete one { arr2[i-3] = arr[i-2]; arr2[i-2] = arr[i-1]; arr2[i-1] = arr[i]; arr2[i] = arr[i-3]; } int nbRemainingElements = arr.length % 4; for(int i = 0; i < nbRemainingElements; ++i) { int index = arr.length - i - 1; arr2[index] = arr[index]; } ``` **This is if you want to modify `arr` without instanciating another array** ``` for(int i = 3; i < arr.length; i += 4) // i is pointing at the end of each quadruple, so it will never show an incomplete one { T firstQuadElement = arr[i-3]; arr[i-3] = arr[i-2]; arr[i-2] = arr[i-1]; arr[i-1] = arr[i]; arr[i] = firstQuadElement; } ``` By the way, a solution if you want two different arrays could also be to copy arr using `array.copy` and then apply solution 2 Upvotes: 1 [selected_answer]<issue_comment>username_3: You need to put your code inside a loop (which you probably missed in your question) Please pay attention to: 1. Finish looping early enough NOT to include last unfinished Quadrupel (thus the limit `length-3` which also prevents out of bounds exception, as there is no sense for checking for Quadrupel if only 3 items left. 2. Just copy remaining part of the array as-is Sample code below: ``` int x = 0; while(x < arr.length - 3) { arr2[x+3] = arr[x+0]; arr2[x+2] = arr[x+3]; arr2[x+1] = arr[x+2]; arr2[x+0] = arr[x+1]; x +=4; } while(x < arr.length) { arr2[x] = arr[x]; x++; } ``` Upvotes: 2
2018/03/15
1,025
2,909
<issue_start>username_0: I'm trying to separate the digits of Fibonacci numbers that are 2 digits long, but the first digit is coming out as zeros.

```
int[] fib = new int[input];
int[] fib2 = new int[input];
fib[0] = 0;
fib[1] = 1;
fib2[0] = 0;
fib2[1] = 1;
for (int i = 2; i < input; i++) {
    fib[i] = fib[i - 1] + fib[i - 2]; // first fibonnaci sequence to output
    fib2[i] = fib2[i - 1] + fib2[i - 2]; // second fibonnaci sequence to output
    fib[i] = separate(fib[i]); //get the second digit of the fibonacci sequence
    fib2[i] = separate2(fib2[i]);//get the first digit of the fibonacci sequence but it output zero's
}

public static int separate(int x) {
    //int result = x;
    x = x % 10;
    return x;
}

public static int separate2(int y) {
    //int result = x;
    y = y / 10;
    return y;
}
```

The normal Fibonacci sequence is 1,1,2,3,5,8,13,21,34,55,89 and the output should be 1,1,2,3,5,8,1,3,2,1,3,4,5,5,8,9
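Two things go wrong in the snippet: `separate2(y)` returns `y / 10`, which is 0 for every single-digit term (hence the zeros), and writing the separated digit back into `fib[i]`/`fib2[i]` corrupts the values the next iteration's recurrence reads. A minimal sketch that keeps the recurrence in plain variables and collects the digits in a list (my arrangement, not an in-place fix of the original arrays):

```
import java.util.ArrayList;
import java.util.List;

public class FibDigits {
    public static void main(String[] args) {
        int input = 11; // number of Fibonacci terms to expand
        List<Integer> digits = new ArrayList<>();
        int a = 1, b = 1;
        for (int i = 0; i < input; i++) {
            // Append each digit of the current term without touching
            // the values the recurrence still needs.
            for (char c : String.valueOf(a).toCharArray()) {
                digits.add(c - '0');
            }
            int next = a + b;
            a = b;
            b = next;
        }
        System.out.println(digits); // [1, 1, 2, 3, 5, 8, 1, 3, 2, 1, 3, 4, 5, 5, 8, 9]
    }
}
```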
2018/03/15
690
2,043
<issue_start>username_0: In SAP Hybris On Premise, is it possible to create campaigns using a CSV file? If yes, then how?
2018/03/15
751
2,274
<issue_start>username_0: We have a 5-node ICP cluster in a local env, and we are getting ImagePullError and ImagePullBackOff errors while deploying resources. We found the image-manager service is terminating frequently. What should we do to get rid of this problem?

[![enter image description here](https://i.stack.imgur.com/cLLiN.png)](https://i.stack.imgur.com/cLLiN.png)
2018/03/15
1,375
4,898
<issue_start>username_0: I have 24 fields in my table, all of them of nvarchar type except one of int type, which is ID. Now I want to search all fields for any text written by the user in a textBox; for example, if the user writes "20" in the textBox, it should show all records whose fields contain "20". I've written a stored procedure but it is not working: whenever I write anything that exists in the fields of my table, it doesn't return anything in the data grid view!

This is my stored procedure:

```
create proc searchClientProfile
@search nvarchar(255)
as
select * from clientProfile
where convert(nvarchar,ID)+compName+Addres+compPhone+compEmail+compWebsite+shipAddress+shipPhone+primeContact+primePhone+primeEmail+secContact+secPhone+secEmail+sector+established+industry+anulPrch+country+region+town+stat+city+zip like '%@search%'
```

And this is the code in my textBox TextChanged event:

```
cmd = new SqlCommand("searchClientProfile", cn);
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter[] parm = new SqlParameter[1];
parm[0] = new SqlParameter("@search", SqlDbType.NVarChar, 255);
parm[0].Value = searchClientTxt.Text.Trim();
cmd.Parameters.AddRange(parm);
da = new SqlDataAdapter(cmd);
da.Fill(getSearchedClientProfiles);
searchClientsDgv.DataSource = getSearchedClientProfiles;
```<issue_comment>username_1: If I understand properly, you want to check if any one of your columns contains your search query. It should look something like this:

```
select * from clientProfile
where convert(nvarchar,ID) like '%' + @search + '%'
OR compName like '%' + @search + '%'
OR Addres like '%' + @search + '%'
OR compPhone like '%' + @search + '%'
OR compEmail like '%' + @search + '%'
OR compWebsite like '%' + @search + '%'
OR shipAddress like '%' + @search + '%'
OR shipPhone like '%' + @search + '%'
OR primeContact like '%' + @search + '%'
OR primePhone like '%' + @search + '%'
OR primeEmail like '%' + @search + '%'
OR secContact like '%' + @search + '%'
OR secPhone like '%' + @search + '%'
OR secEmail like '%' + @search + '%'
OR sector like '%' + @search + '%'
OR established like '%' + @search + '%'
OR industry like '%' + @search + '%'
OR anulPrch like '%' + @search + '%'
OR country like '%' + @search + '%'
OR region like '%' + @search + '%'
OR town like '%' + @search + '%'
OR stat like '%' + @search + '%'
OR city like '%' + @search + '%'
OR zip like '%' + @search + '%'
```

I don't know how you built your application, but the performance will not be amazing on that kind of query.

Upvotes: 1 <issue_comment>username_2: In concatenation, you should handle `NULL` values. It can be done using `ISNULL` or `CONCAT`.

Also, what if in `secContact` and `secPhone` you have `sea` and `rch` correspondingly? You will concatenate them as `search` and have a match. You need to use a separator.

If I were you, I would just write the condition check per column.
```
create proc searchClientProfile
@search nvarchar(255)
as
select * from clientProfile
where convert(nvarchar,ID) like '%' + @search + '%'
OR compName like '%' + @search + '%'
OR Addres like '%' + @search + '%'
OR compPhone like '%' + @search + '%'
OR compEmail like '%' + @search + '%'
OR compWebsite like '%' + @search + '%'
OR shipAddress like '%' + @search + '%'
OR shipPhone like '%' + @search + '%'
OR primeContact like '%' + @search + '%'
OR primePhone like '%' + @search + '%'
OR primeEmail like '%' + @search + '%'
OR secContact like '%' + @search + '%'
OR secPhone like '%' + @search + '%'
OR secEmail like '%' + @search + '%'
OR sector like '%' + @search + '%'
OR established like '%' + @search + '%'
OR industry like '%' + @search + '%'
OR anulPrch like '%' + @search + '%'
OR country like '%' + @search + '%'
OR region like '%' + @search + '%'
OR town like '%' + @search + '%'
OR stat like '%' + @search + '%'
OR city like '%' + @search + '%'
OR zip like '%' + @search + '%'
```

Upvotes: 0 <issue_comment>username_3: Another option (clearly not sargable). **Note:** an ALIAS is required.

**Example**

```
Select *
From clientProfile A
Where (Select A.* for XML Raw) like '%'+@search+'%'
```

Upvotes: 2 <issue_comment>username_4: Thanks a lot for the help, guys. From the answers given, I altered my stored procedure, especially the LIKE part, from '%@search%' to '%'+@search+'%'. Second, in my C# code I've added a line that clears the previous content of the data grid view, so now the TextChanged event looks like:

```
getSearchedClientProfiles.Clear();
cmd = new SqlCommand("searchClientProfile", cn);
cmd.CommandType = CommandType.StoredProcedure;
SqlParameter[] parm = new SqlParameter[1];
parm[0] = new SqlParameter("@search", SqlDbType.NVarChar, 255);
parm[0].Value = searchClientTxt.Text.Trim();
cmd.Parameters.AddRange(parm);
da = new SqlDataAdapter(cmd);
da.Fill(getSearchedClientProfiles);
searchClientsDgv.DataSource = getSearchedClientProfiles;
```

Upvotes: 0