2018/03/14
<issue_start>username_0: Yep, much discussed and similar questions down voted multiple times.. I still can't figure this one out.. Say I have a dataframe like this: ``` df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD')) ``` I want to end up with four separate lists (a, b, c and d) with the data from each column. Logically (to me anyway) I would do: ``` list_of_lst = df.values.T.astype(str).tolist() for column in df.columns: i = 0 while i < len(df.columns) - 1: column = list_of_lst[1] i = i + 1 ``` But assigning variable names in a loop is not doable/recommended... Any suggestions how I can get what I need?<issue_comment>username_1: ``` retList = dict() for i in df.columns: iterator = df[i].tolist() retList[i] = iterator ``` You'd get a dictionary with the keys as the column names and values as the list of values in that column. Modify it to any data structure you want. `retList.values()` will give you a list of size 4, with each inner list holding one column's values. Upvotes: 0 <issue_comment>username_2: I think the best approach is to create a `dictionary of lists` with [`DataFrame.to_dict`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html): ``` np.random.seed(456) df = pd.DataFrame(np.random.randint(0,10,size=(10, 4)), columns=list('ABCD')) print (df) A B C D 0 5 9 4 5 1 7 1 8 3 2 5 2 4 2 3 2 8 4 8 4 5 6 0 9 5 8 2 3 6 6 7 0 0 3 7 3 5 6 6 8 3 8 9 6 9 5 1 6 1 d = df.to_dict('l') print (d['A']) [5, 7, 5, 2, 5, 8, 7, 3, 3, 5] ``` If you really want `A`, `B`, `C` and `D` lists: ``` for k, v in df.to_dict('l').items(): globals()[k] = v print (A) [5, 7, 5, 2, 5, 8, 7, 3, 3, 5] ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: You can transpose your dataframe and use `df.T.values.tolist()`. But, if you are manipulating numeric arrays thereafter, it's advisable to skip the `tolist()` part.
``` df = pd.DataFrame(np.random.randint(0, 100, size=(5, 4)), columns=list('ABCD')) # A B C D # 0 17 56 57 31 # 1 3 44 15 0 # 2 94 36 87 30 # 3 44 49 56 76 # 4 29 5 35 24 list_of_lists = df.T.values.tolist() # [[17, 3, 94, 44, 29], # [56, 44, 36, 49, 5], # [57, 15, 87, 56, 35], # [31, 0, 30, 76, 24]] ``` Upvotes: 0
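For readers skimming this thread, the selected answer's idea condenses to a single call; a minimal self-contained sketch (using the spelled-out orientation name `'list'`, the long form of the `'l'` shorthand above):

```python
import numpy as np
import pandas as pd

# A reproducible frame shaped like the one in the question
np.random.seed(0)
df = pd.DataFrame(np.random.randint(0, 100, size=(5, 4)), columns=list('ABCD'))

# One plain Python list per column, keyed by column name --
# no dynamically created variable names needed.
lists = df.to_dict('list')

print(lists['A'])
print(df['B'].tolist())  # single column, if only one list is needed
```

Compared with `df.values.T.tolist()`, this keeps the column labels attached to their lists.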
2018/03/14
<issue_start>username_0: Doing this with the [date-functions.js](https://gist.github.com/xaprb/8492729) library (used e.g. in the datetimepicker jQuery plugin): ``` Date.parseDate('2018-03-10 12:12', 'Y-m-d H:i') ``` gives: ``` Sat Mar 10 2018 12:12:00 GMT+0100 (Paris, Madrid) ``` **How to get the result as a Unix timestamp or GMT / UTC time instead?**<issue_comment>username_1: Use [MomentJS](https://momentjs.com) instead. You can specify exactly what format the string you're parsing is in. [MomentJS](https://momentjs.com) can then provide you with the underlying `Date` object, the Unix timestamp, as well as conversion to UTC. ```js var d = moment('2018-03-10 12:12', 'YYYY-MM-DD HH:mm'); console.log(d.toDate()); console.log(d.unix()); console.log(d.utc().toDate()); ``` You could of course also parse the date as UTC instead of treating it as a local time. ``` moment.utc('2018-03-10 12:12', 'YYYY-MM-DD HH:mm'); ``` ***NOTE*** Bit difficult for me to test UTC as I'm in the UK and GMT and UTC are *virtually* the same. Upvotes: 0 <issue_comment>username_2: ```js var date = new Date('2018-03-10 12:12'.replace(' ', 'T')); // Unix console.log(Math.floor(date.getTime() / 1000)); // UTC console.log(date.toUTCString()); ``` As always, please have a look at the documentation at MDN: <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date> Upvotes: 0 <issue_comment>username_3: A string like '2018-03-10 12:12' will usually be parsed as local as there is no timezone offset. It's also not ISO 8601 compliant, so using the built-in parser will yield different results in different browsers.
While you can use a library, to parse it as UTC and get the time value is just 2 lines of code: ```js function toUTCTimeValue(s) { var b = s.split(/\D/); return Date.UTC(b[0],b[1]-1,b[2],b[3],b[4]); } // As time value console.log(toUTCTimeValue('2018-03-10 12:12')); // Convert to Date object and print as timestamp console.log(new Date(toUTCTimeValue('2018-03-10 12:12')).toISOString()); ``` Upvotes: 2 [selected_answer]
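Outside the thread's JavaScript, the same parse-as-UTC-then-timestamp idea can be sketched in Python as a cross-language check of the arithmetic (the helper name is mine, not from the answers; its result should equal the selected answer's millisecond time value divided by 1000):

```python
from datetime import datetime, timezone

def to_utc_timestamp(s):
    """Parse 'YYYY-MM-DD HH:MM' as UTC (not local time) and return the Unix timestamp."""
    dt = datetime.strptime(s, '%Y-%m-%d %H:%M').replace(tzinfo=timezone.utc)
    return int(dt.timestamp())

print(to_utc_timestamp('2018-03-10 12:12'))
```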
2018/03/14
<issue_start>username_0: I am trying to get values from a form into my database as boolean type. Basically, each question has many checkboxes, and if a user checks the box, I want it to put in 1 for true and whichever ones are not checked to insert 0. I have attempted to get them in, however no luck. Please see additional code/snippets below. Can someone please help? ``` Question 1 ========== Item1 Item2 ``` PHP CODE: ``` if(isset($_POST['submit'])){ try{ $Query = $db->prepare('INSERT INTO Results ( 1, 2 ) VALUES (:1, :2)'); $Query->execute(); ```<issue_comment>username_1: When a checkbox isn't checked, it's not sent. That means it is **not set to zero** if unchecked. What does that mean? It means you need to check if it's sent or not. If yes, it's 1, if not - it's 0. The code you're after is the following: ``` $stmt->execute(array( ':UserID' =>$_POST['UserID'], ':B1' => isset($_POST['B1']) ? 1 : 0, ':B2' => isset($_POST['B2']) ? 1 : 0, ':B3' => isset($_POST['B3']) ? 1 : 0, ':B4' => isset($_POST['B4']) ? 1 : 0, ':B5' => isset($_POST['B5']) ? 1 : 0, ':B6' => isset($_POST['B6']) ? 1 : 0 )); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: If a checkbox isn't checked (see "checked"-Attribute of input type checkbox), nothing is sent to the server. `$_POST['B1']` gets the value of the element with NAME 'B1' if it is sent to the server. So ``` $b1 = (isset($_POST['B1'])) ? 1 : 0; ``` should do the trick. $b1 will be 0 if the checkbox is unchecked and 1 if it is checked. **Option 1** The SQL looks like you are trying to put something like 01100 into the database field 'B2'. Usually you would store the result of every checkbox in a single database column. But if you really want to concat them, your code should look like this: ``` $stmt = $conn->prepare('INSERT INTO Results ($UserID, B1, B2 ) VALUES (:UserID, :B1, :B2, :B3, :B4, :B5, :B6);'); $stmt->execute(array( ':UserID' =>$_POST['UserID'], ':B1' =>$_POST['B1'], ':B2' =>(isset($_POST['B2'])) ?
1 : 0 + (isset($_POST['B3'])) ? 1 : 0 + (isset($_POST['B4'])) ? 1 : 0 + (isset($_POST['B5'])) ? 1 : 0 + (isset($_POST['B6'])) ? 1 : 0 )); ``` **Option 2** If you have one database field for every B1...B6-value then you've forgotten to take these columns into your SQL-INSERT-Statement: ``` INSERT INTO Results ($UserID, B1, B2, B3, B4, B5, B6) [...] ``` And your stmt execute is wrong and should be: ``` $stmt->execute(array( ':UserID' =>$_POST['UserID'], ':B1' => isset($_POST['B1']) ? 1 : 0, ':B2' => isset($_POST['B2']) ? 1 : 0, ':B3' => isset($_POST['B3']) ? 1 : 0, ':B4' => isset($_POST['B4']) ? 1 : 0, ':B5' => isset($_POST['B5']) ? 1 : 0, ':B6' => isset($_POST['B6']) ? 1 : 0 )); ``` Upvotes: 0
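Both answers rest on the same rule: an unchecked checkbox is simply absent from the request, so presence must be mapped to 1 and absence to 0. A language-neutral sketch of that mapping in Python (the form dict and field names mirror the B1..B6 of the answers and are illustrative only):

```python
def checkbox_flags(form, fields):
    # Browsers omit unchecked checkboxes from the submitted form entirely,
    # so presence is the only signal: present -> 1, absent -> 0.
    return {f: 1 if f in form else 0 for f in fields}

posted = {'UserID': '42', 'B1': 'on', 'B3': 'on'}  # B2, B4..B6 were left unchecked
flags = checkbox_flags(posted, ['B1', 'B2', 'B3', 'B4', 'B5', 'B6'])
print(flags)
```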
2018/03/14
<issue_start>username_0: I have gone through a lot of docs but it seems my problem is strange. I have configured Oauth but I am not able to get the bearer token back. whenever I hit api to get the token, I get 200 but nothing back in response(I am expecting bearer token). Below is the config: ``` public partial class Startup { public void ConfigureAuth(IAppBuilder app) { OAuthAuthorizationServerOptions oAuthOptions = new OAuthAuthorizationServerOptions { AllowInsecureHttp = true, TokenEndpointPath = new PathString("/token"), AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(20), Provider = new ApplicationOAuthProvider() }; app.UseOAuthAuthorizationServer(oAuthOptions); app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions { Provider = new OAuthBearerAuthenticationProvider() }); HttpConfiguration config = new HttpConfiguration(); //config.Filters.Add(new ); //config.MapHttpAttributeRoutes(); // There can be multiple exception loggers. (By default, no exception loggers are registered.) //config.Services.Replace(typeof(IExceptionHandler), new GlobalExceptionHandler()); WebApiConfig.Register(config); //enable cors origin requests app.UseCors(CorsOptions.AllowAll); app.UseWebApi(config); } } public static class WebApiConfig { /// /// /// /// public static void Register(HttpConfiguration config) { // Web API configuration and services // Configure Web API to use only bearer token authentication. 
config.SuppressDefaultHostAuthentication(); config.Filters.Add(new HostAuthenticationFilter(OAuthDefaults.AuthenticationType)); // Web API routes config.MapHttpAttributeRoutes(); config.Filters.Add(new HostAuthenticationAttribute("bearer")); //added this config.Filters.Add(new AuthorizeAttribute()); config.Routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}", new { id = RouteParameter.Optional } ); var jsonFormatter = config.Formatters.OfType<JsonMediaTypeFormatter>().First(); jsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver(); } public class ApplicationOAuthProvider : OAuthAuthorizationServerProvider { public override async Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context) { context.Validated(); } public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context) { var form = await context.Request.ReadFormAsync(); if (myvalidationexpression) { var identity = new ClaimsIdentity(context.Options.AuthenticationType); identity.AddClaim(new Claim(ClaimTypes.Role, "AuthorizedUser")); context.Validated(identity); } else { context.SetError("invalid_grant", "Provided username and password is incorrect"); } } } ``` Now when I launch the API and hit /token, I get this as below: [API Request](https://i.stack.imgur.com/WTtpc.png)<issue_comment>username_1: I think the code you have written in WebApiConfig.cs to suppress host authentication, along with some other code, is creating the issue. I have a working example for bearer token generation in Web API, which works properly and generates tokens.
WebApiConfig.cs file code: ``` public static class WebApiConfig { public static void Register(HttpConfiguration config) { // Web API configuration and services // Web API routes config.MapHttpAttributeRoutes(); config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); } } ``` Startup.cs Code: ``` [assembly: OwinStartup(typeof(WebAPI.Startup))] namespace WebAPI { public class Startup { public void Configuration(IAppBuilder app) { HttpConfiguration config = new HttpConfiguration(); ConfigureOAuth(app); WebApiConfig.Register(config); app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll); } public void ConfigureOAuth(IAppBuilder app) { OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions() { AllowInsecureHttp = true, TokenEndpointPath = new PathString("/token"), AccessTokenExpireTimeSpan = TimeSpan.FromMinutes(60), Provider=new ApplicationOAuthProvider(), //AuthenticationMode = AuthenticationMode.Active }; app.UseOAuthAuthorizationServer(OAuthServerOptions); app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions { Provider = new OAuthBearerAuthenticationProvider() } ); } } } ``` Controller to check authorization call after adding bearer token in the request. ``` public class TokenTestController : ApiController { [Authorize] public IHttpActionResult Authorize() { return Ok("Authorized"); } } ``` Upvotes: 2 <issue_comment>username_2: install the following package Microsoft.Owin.Host.SystemWeb Upvotes: -1
2018/03/14
<issue_start>username_0: i have a class called Feature and it contains the following methods setUser(boolean),execute(), doExecute() And according to the below stated parameters, when i call execute() method, doExecute() method should be called only once. I tried to test that doExecute() method is called only once in the below code using sinon, but I receive an error message says: doExecute() method is called zero times. please let me know how to check correctly if doExecute() is called exactly once **code**: ``` t.context.clock = sinon.useFakeTimers(); const domain = 'testDomain'; const delayInMillis = 0; const delayInSecs = delayInMillis / 1000; const feature = new Feature(domain, delayInMillis); feature.setUser(false); const p = feature.execute() .then(() => sinon.spy(feature.doExecute())) .then(() => t.pass()); sinon.assert.callCount(sinon.spy(feature.doExecute()),1); t.context.clock.restore(); return p; }); ```
2018/03/14
<issue_start>username_0: Consider below code. How can I test this without using third party libraries? The Assert line is never executed, because it is a different thread and the vm stops running. Many thanks! ``` public class FileParserTask extends AsyncTask<File, Void, ArrayList<City>> { private FileParserResult mResult; public interface FileParserResult { void onFinish(ArrayList<City> cities); } public FileParserTask(final FileParserResult result) { mResult = result; } @Override protected ArrayList<City> doInBackground(File... files) { ArrayList<City> cities = new ArrayList<>(); try { InputStream is = new FileInputStream(files[0]); JsonReader reader = new JsonReader(new InputStreamReader(is, "UTF-8")); reader.beginArray(); while (reader.hasNext()) { City city = new Gson().fromJson(reader, City.class); cities.add(city); } reader.endArray(); reader.close(); } catch (Exception e) { e.printStackTrace(); } Collections.sort(cities, (o1, o2) -> o1.getName().compareTo(o2.getName())); mResult.onFinish(cities); return cities; } } ``` Test code: ``` @RunWith(AndroidJUnit4.class) public class CityServiceTest { File file = new File(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS), "cities-medium.json"); @Test public void givenInputAbuThenIShouldGetXResults() throws InterruptedException { new FileParserTask(cities -> { Assert.assertEquals("Input should give back 200 results", 3, cities.size()); }).execute(file); } } ```<issue_comment>username_1: As you say, the problem is the AsyncTask running in a background thread, via an `ExecutorService`. Like with a `Future` though, it provides a `get()` method that will wait for, and return, the result.
``` new FileParserTask(cities -> { Assert.assertEquals("Input should give back 200 results", 3, cities.size()); }).execute(file).get(); ``` Upvotes: 0 <issue_comment>username_2: Although the code you need to test: ``` Assert.assertEquals("Input should give back 200 results", 3, cities.size()); ``` is being run in an `AsyncTask`, that's not really relevant to unit testing. `AsyncTask` has most likely been extensively tested by Google so you know that it will work as an `AsyncTask`. The real testing seems to be the functionality that needs to be run in the background, i.e. the business logic contained in `doInBackground`. Thinking about it in terms of business logic, there is a need to populate an `ArrayList` and propagate it to the app. Android prefers this to be done on a background thread and propagation can be handled by notifications etc, both of which have been tested and released as working by Google so you don't really need to include them in a unit test. How you populate `ArrayList` is the real unit test. `AsyncTask` would be relevant for an integration test but you'd most likely be testing a different aspect of the app for that, i.e. what it displays rather than what it receives from a background thread. So for a unit test I'd refactor out the code in `doInBackground` so that it can be tested independently of how Android wants it to be run. Upvotes: 2 <issue_comment>username_3: Sorry, did you override the onPostExecute method of the AsyncTask? You are keeping the result handler, but not using it anywhere. ``` @Override protected void onPostExecute(ArrayList<City> result) { mResult.onFinish(result); } ``` As for the assertion, it looks good to me as it is. Upvotes: 1
2018/03/14
<issue_start>username_0: I'm trying to convert an `SKSpriteNode` item name to an `Int`... That's the code: ``` let item = SKSpriteNode(imageNamed: "enemy") item.name = "1" ``` Then, in `touchesEnded`: ``` guard let touch = touches.first else { return } let location = touch.location(in: self) let touchedSKSpriteNode = self.atPoint(location) processItemTouched(node: touchedSKSpriteNode as! SKSpriteNode) ``` The func `processItemTouched` tries to extract the name of the touched element and convert it to an Int: ``` func processItemTouched(node: SKSpriteNode) { let num: Int = Int(node.name) // Error } ``` But there is an error: "Value of optional type 'Int?' not unwrapped; did you mean to use '!' or '?'?" After clicking on **Fix-it**, it becomes: ``` let num: Int = Int(node.name)! // Error, again ``` But another error appears: "Value of optional type 'String?' not unwrapped; did you mean to use '!' or '?'?" Finally, it's working, after fixing: ``` let num: Int = Int(node.name!)! ``` It works but there's a problem: if I try to verify if `num != nil`, Xcode says that "Comparing non-optional value of type 'Int' to nil always returns true". Is there a way to avoid this alert?<issue_comment>username_1: This is a tricky situation, since both the name of the node and the result of the conversion can be nil. I'd suggest providing a default value for the name using `??`; forcefully unwrapping the optional with `!` is very inelegant and dangerous (if a node does not have a name and you try to use this function with it, your app will crash). You should either: * declare `num` as an **optional** (either explicitly or by leaving out the type altogether): ``` let num = Int(node.name ?? "") //Int? ``` * provide a **default value** for `num`: ``` let num = Int(node.name ?? "") ??
0 //Int ``` [Learn more about optionals here.](https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/TheBasics.html#//apple_ref/doc/uid/TP40014097-CH5-ID330) Upvotes: 1 <issue_comment>username_2: You are using the `Int` type constructor `Int?(String)`, which means it takes a `String` and returns an optional `Int` (`Int?`). `node.name` is an optional property of `SKNode`, because nodes can have no name. So you were advised to force-unwrap (but this is very dangerous). You can provide a default value with `node.name ?? ""` or use if-let: ``` if let name = node.name { // do something with name in the case it exists! } ``` Then you tried to store the `Int?` produced in a variable of type `Int`, and the IDE suggested you force-unwrap it (again a bad thing). Again you can provide a default value `let i : Int = Int(node.name ?? "0") ?? 0` or use the if-let pattern: ``` if let name = node.name, let i = Int(name) { // do what you want with i and name... } else { // something bad happened } ``` Upvotes: 1 [selected_answer]
2018/03/14
<issue_start>username_0: I am developing a web service where I need to call an Oracle procedure in PHP. The Oracle procedure will take time to process and after completion it will write in a table. How do I return an error in the web service response if the procedure call is taking too much time? Note: I am stuck with PHP 5.2 and I cannot install cURL.
2018/03/14
<issue_start>username_0: My goal is to build a simple app which: * Has `UITableViewCell`s by fetching items from `Firebase`. * Each cell performs a segue to another `ViewController` when tapped. * Shows further details of the fetched items in the presented `ViewController`. With the work I've done so far: * I can successfully fetch data from the database and put it in a dictionary. I am also able to populate `UITableViewCell`s based on this data. * Cells presents a new `ViewController` as desired when tapped. The problem is, regardless of the cell I tap, my `ViewController` always presents the least recently added item from `Firebase` database. Let me provide my database structure and my Swift code: **Firebase structure:** ``` simple-app: └── upcoming: └── fourth: └── desc: "description for the fourth item" └── name: "fourth item" └── third: └── desc: "description for the third item" └── name: "third item" └── second: └── desc: "description for the second item" └── name: "second item" └── first: └── desc: "description for the first item" └── name: "first item" ``` **Swift code:** ``` import Foundation import UIKit import Firebase class Upcoming: UITableViewController{ @IBOutlet weak var upcomingItems: UITableView! var itemName: String? var itemDesc: String? var ref: DatabaseReference! var refHandle: UInt! 
var itemList = [Item]() let cellId = "cellId" override func viewDidLoad() { ref = Database.database().reference() fetchItems() } override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int { return itemList.count } override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = UITableViewCell(style: .subtitle, reuseIdentifier: cellId) itemName = itemList[indexPath.row].name itemDesc = itemList[indexPath.row].desc cell.textLabel?.text = itemName return cell } override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) { performSegue(withIdentifier: "DetailedInfo", sender: self) } override func prepare(for segue: UIStoryboardSegue, sender: Any?) { if segue.identifier == "DetailedInfo" { var vc = segue.destination as! DetailedInfo vc.name = itemName vc.desc = itemDesc } } func fetchItems(){ refHandle = ref.child("upcoming").observe(.childAdded, with: {(snapshot) in if let dictionary = snapshot.value as? [String: AnyObject]{ print(dictionary) // dictionary is as desired, no corruption let item = Item() item.setValuesForKeys(dictionary) self.itemList.append(item) DispatchQueue.main.async { self.tableView.reloadData() } } }) } } ``` I can see four different cells with four different names as desired, but no matter which cell is tapped, the next `ViewController` shows the `name` and `desc` values of the least recently added item, which is `first`, from the database. Any idea to fix this issue is appreciated.<issue_comment>username_1: Try this approach: ``` refHandle = ref.child("upcoming").observeSingleEvent(of: .value, with: { (snapshot) in ``` But this only applies if there is no need to constantly watch for database updates.
Upvotes: 0 <issue_comment>username_2: itemName and itemDesc will always hold the values of the last cell configured, because you assign them in cellForRow; set them in didSelectRow instead: ``` override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell { let cell = UITableViewCell(style: .subtitle, reuseIdentifier: cellId) cell.textLabel?.text = itemList[indexPath.row].name return cell } override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) { itemName = itemList[indexPath.row].name itemDesc = itemList[indexPath.row].desc performSegue(withIdentifier: "DetailedInfo", sender: self) } ``` Upvotes: 3 [selected_answer]
2018/03/14
<issue_start>username_0: I have a CouchDb instance running a peruser database configuration. Each user database generated (when a user is added to the `_users` database) needs to have the same design documents with view/list logic etc. What is the de facto way to add the design documents to the database upon database creation? Is it simply to add them after a successful user creation request? Or is there a more elegant way of doing this in CouchDb?<issue_comment>username_1: There is no mechanism for initializing newly created user databases; you should include it in your user creation logic. If you want to decouple user creation and db initialization, I suggest exploring the following strategy 1 - Create a template database and place in it the design documents that should be applied to every user db 2 - Continuously listen to the `_db_updates` endpoint, where db-level events are notified. [This](https://www.npmjs.com/package/follow) library can help you. 3 - When a db that matches the user db name pattern is created, you can trigger a replication from the template database to the newly created database using the `_replicate` endpoint. Upvotes: 3 [selected_answer]<issue_comment>username_2: If you plan on using the Follow npm module as @username_1 suggested, please consider using Cloudant's version. The Iriscouch version (the one @username_1 pointed to) is way out of date. For example, it doesn't support CouchDB v2.x, among other issues. I worked with the Cloudant team to improve all this over the last few days, and they just released the updated npm package here: <https://www.npmjs.com/package/cloudant-follow?activeTab=versions> The `0.17.0-SNAPSHOT.47` version contains the patches we worked on, so don't use `0.16.1` (which is officially the latest).
You can read more about the issues we fixed here: <https://github.com/cloudant-labs/cloudant-follow/issues/54> <https://github.com/cloudant-labs/cloudant-follow/issues/50> <https://github.com/cloudant-labs/cloudant-follow/issues/47> Upvotes: 1
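Step 3 of the selected answer amounts to POSTing a `{source, target}` body to CouchDB's `_replicate` endpoint whenever a matching database appears. A Python sketch of assembling that request body; the template database name is an assumption, and the name check follows `couch_peruser`'s `userdb-` plus hex-encoded-user-name convention:

```python
import json
import re

# couch_peruser names per-user databases "userdb-" + hex(user name)
USER_DB_RE = re.compile(r'^userdb-[0-9a-f]+$')

def replication_request(db_name, template='userdb-template'):
    """Build the body for POST /_replicate that seeds a fresh per-user
    database from a template database holding the shared design docs."""
    if not USER_DB_RE.match(db_name):
        raise ValueError('not a per-user database: %s' % db_name)
    return {'source': template, 'target': db_name}

# "64656d6f" is hex for the user name "demo"
body = replication_request('userdb-64656d6f')
print(json.dumps(body))
```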
2018/03/14
<issue_start>username_0: I have an input data set, which may look something like this: ``` DF=data.frame( Variable = c("Test1", "Test2", "Test3"), Distribution = c("Normal", "Exponential","Poisson"), Variable = c(2, 3, 4), SD = c(2, NA, NA)) ``` I want to use the random probability functions (e.g. `rnorm` `rexp` and `rbinom`) using the distributions given in the data frame `DF`. So, how do I turn the text input into the correct functions? I want to use the corresponding values in the `Variable` and `SD` columns as the mean values/standard deviations if appropriate.<issue_comment>username_1: Something like: ``` DF=data.frame( Variable = c("Test1", "Test2", "Test3"), Distribution = c("Normal", "Exponential","Poisson"), VariablePrm = c(2, 3, 4), SD = c(2, NA, NA), stringsAsFactors = FALSE) # functions-lookup fun_vec <- c("rnorm", "rexp", "rpois") names(fun_vec) <- c("Normal", "Exponential", "Poisson") DF$fun <- fun_vec[DF$Distribution] # create expr my_expr <- function(x) { txt <- paste0(x[1], "<-", x[5], "(", 10, ", ", x[3], ifelse(is.na(x[4]), "", paste0(", ", x[4])), ")") } want <- apply(DF, 1, function(x) eval(parse(text = my_expr(x)))) colnames(want) <- DF$Variable want ``` Upvotes: 0 <issue_comment>username_2: @username_1's solution works, but involves some expression parsing which is not needed here. We can make this a lot easier by creating a list of functions to use later.
``` # generating data: DF=data.frame( Variable = c("Test1", "Test2", "Test3"), Distribution = c("Normal", "Exponential","Poisson"), VariablePrm = c(2, 3, 4), SD = c(2, NA, NA), stringsAsFactors = FALSE) # create a function list and select the function by the Distribution column fun_vec <- c(Normal=rnorm, Exponential=rexp, Poisson=rpois) DF$fun <- fun_vec[DF$Distribution] # if SD is NA then simply call the function with VariablePrm only, # else call it with sd as well # 10 is the number of observations to generate generate <- function(x) { if(is.na(x$SD)){ x$fun(10, x$VariablePrm) }else{ x$fun(10, x$VariablePrm, x$SD) } } # applying this function to each row gives a matrix of results; # each column holds 10 rows of generated data for the selected distribution apply(DF, 1, generate) ``` Upvotes: 3 [selected_answer]
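The selected answer's core move, looking the sampling function up by name instead of parsing strings, ports directly to other languages. A numpy sketch of the same dispatch (note R's `rexp` is parameterised by rate while numpy's `exponential` takes a scale, hence the `1.0 / p`):

```python
import numpy as np

rng = np.random.default_rng(456)

# Name -> sampler lookup, mirroring fun_vec in the selected answer.
samplers = {
    'Normal':      lambda n, p, sd: rng.normal(p, sd, n),
    'Exponential': lambda n, p, sd: rng.exponential(1.0 / p, n),  # p is a rate
    'Poisson':     lambda n, p, sd: rng.poisson(p, n),
}

rows = [  # (Variable, Distribution, parameter, SD), as in DF above
    ('Test1', 'Normal', 2, 2.0),
    ('Test2', 'Exponential', 3, None),
    ('Test3', 'Poisson', 4, None),
]

draws = {name: samplers[dist](10, p, sd) for name, dist, p, sd in rows}
for name, values in draws.items():
    print(name, values[:3])
```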
2018/03/14
<issue_start>username_0: I am trying to make a static webpage using RMarkdown. I want to define a UI which has a first layer of tabs and then tabs underneath the first layer. I've already looked into a similar question at [RMarkdown: Tabbed and Untabbed headings](https://stackoverflow.com/questions/38062706/rmarkdown-tabbed-and-untabbed-headings). But that answer doesn't help my cause. Please find below the tab structure that I want to achieve. ``` | Results * Discussion of Results | Quarterly Results * This content pertains to Quarterly Results | By Product * Quarterly Performance by Products | By Region * Quarterly Performance by Region * Final Words about Quarterly Results | Yearly Results * This content pertains to Yearly Results | By Product * Yearly Performance by Products | By Region * Yearly Performance by Region * Final Words about Yearly Results ``` Here is the script in .Rmd format that I was using. But the output I was able to achieve looks like this: [Current Scenario](https://i.stack.imgur.com/hgtDz.png). I'd want to have `Final Words about Quarterly Results` and `Final Words about Yearly Results` outside the Region tab and in Quarterly Results and Yearly Results respectively. ``` --- output: html_document: theme: paper highlight: tango number_sections: false toc: false toc_float: false --- # Result Discussion {.tabset} We will discuss results here ## Quarterly Results {.tabset} This content pertains to Quarterly Results ### By Product Quarterly performance by Products ### By Region Quarterly performance by Region Final Words about Quarterly Results ## Yearly Results {.tabset} This content pertains to Yearly Results ### By Product Yearly performance by Products ### By Region Yearly performance by Region Final Words about Yearly Results ```<issue_comment>username_1: You mean like this?
:
```
---
output:
  html_document:
    theme: paper
    highlight: tango
    number_sections: false
    toc: false
    toc_float: false
---

# Result Discussion {.tabset}
We will discuss results here

## Quarterly Results {.tabset}
This content pertains to Quarterly Results

### By Product
Quarterly performance by Products

### By Region
Quarterly performance by Region

## Final Words about Quarterly Results

## Yearly Results {.tabset}
This content pertains to Yearly Results

### By Product
Yearly performance by Products

### By Region
Yearly performance by Region

## Final Words about Yearly Results
```
Upvotes: 0 <issue_comment>username_2: The problem comes from the fact that the last paragraphs (`Final Words about Quarterly Results` and `Final Words about Yearly Results`) belong to the last level 3 section and not to the parent level 2 section. You have to manually control the sectioning of the rendered `HTML` to obtain what you want. Using **`pandoc` < 2.0**, the only means is to insert raw `HTML`:
```
---
output:
  html_document:
    theme: paper
    highlight: tango
    number_sections: false
    toc: false
    toc_float: false
---

# Result Discussion {.tabset}
We will discuss results here

## Quarterly Results {.tabset}
This content pertains to Quarterly Results

<div id="quarterly-product" class="section level3">
### By Product
Quarterly performance by Products
</div>

<div id="quarterly-region" class="section level3">
### By Region
Quarterly performance by Region
</div>

Final Words about Quarterly Results

## Yearly Results {.tabset}
This content pertains to Yearly Results

<div id="yearly-product" class="section level3">
### By Product
Yearly performance by Products
</div>

<div id="yearly-region" class="section level3">
### By Region
Yearly performance by Region
</div>

Final Words about Yearly Results
```
If you use **`pandoc` 2.0 or greater**, you can use [fenced `divs`](https://pandoc.org/MANUAL.html#extension-fenced_divs):
```
---
output:
  html_document:
    theme: paper
    highlight: tango
    number_sections: false
    toc: false
    toc_float: false
---

# Result Discussion {.tabset}
We will discuss results here

## Quarterly Results {.tabset}
This content pertains to Quarterly Results

::: {#quarterly-product .section .level3}
### By Product
Quarterly performance
by Products
:::

::: {#quarterly-region .section .level3}
### By Region
Quarterly performance by Region
:::

Final Words about Quarterly Results

## Yearly Results {.tabset}
This content pertains to Yearly Results

::: {#yearly-product .section .level3}
### By Product
Yearly performance by Products
:::

::: {#yearly-region .section .level3}
### By Region
Yearly performance by Region
:::

Final Words about Yearly Results
```
Upvotes: 4 [selected_answer]
2018/03/14
931
3,291
<issue_start>username_0: I just want to know if it is possible to change the email subject when the order contains a specific category like (Preorder). I want to put PO at the beginning (PO New customer order #0000), while for all other orders the customer receives the default email subject (New Customer Order #0000).
```
add_filter('woocommerce_email_subject_new_order', 'change_admin_email_subject', 1, 2);

function change_admin_email_subject( $subject, $order ) {
    global $woocommerce;
    global $product;

    if ( has_term( 'preorder', $product->ID ) ) {
        $blogname = wp_specialchars_decode(get_option('blogname'), ENT_QUOTES);
        $subject = sprintf( '[%s]New customer order (# %s) from %s %s', $blogname, $order->id, $order->billing_first_name, $order->billing_last_name );
    }

    return $subject;
}
```
Note: I just copied this code from somewhere.<issue_comment>username_1: Use this:
```
function change_admin_email_subject( $subject, $order ) {
    // Get all order items
    $items = $order->get_items();
    $found = false;

    // Loop through the items
    foreach ( $items as $item ) {
        $product_id = $item['product_id'];
        // get the categories for the current item
        $terms = get_the_terms( $product_id, 'product_cat' );
        // Loop through the categories to find whether 'preorder' exists.
        foreach ($terms as $term) {
            if($term->slug == 'preorder'){
                $subject = 'PO ' .
$subject;
                $found = true;
                break;
            }
        }
        if($found == true){
            break;
        }
    }

    return $subject;
}
```
Upvotes: 1 <issue_comment>username_2: This can be done this way, making some small changes:
```
add_filter('woocommerce_email_subject_new_order', 'custom_admin_email_subject', 1, 2);

function custom_admin_email_subject( $subject, $order ) {
    $backordered = false;

    foreach( $order->get_items() as $item_id => $item ){
        if ( has_term( 'preorder', 'product_cat', $item->get_product_id() ) ) {
            $backordered = true;
            break;
        }
    }

    if ( $backordered ) {
        $subject = sprintf( '[PO]New customer order (# %s) from %s %s', $order->get_id(), $order->get_billing_first_name(), $order->get_billing_last_name() );
    }

    return $subject;
}
```
Code goes in the functions.php file of the active child theme (or active theme). Tested and works.

---

Or it can be done without a product category, by checking that the product is backordered:
```
add_filter('woocommerce_email_subject_new_order', 'custom_admin_email_subject', 1, 2);

function custom_admin_email_subject( $subject, $order ) {
    $backordered = false;

    foreach( $order->get_items() as $item_id => $item ){
        $product = $item->get_product();
        if( $product->get_backorders() == 'yes' && $product->get_stock_quantity() < 0 ){
            $backordered = true;
            break;
        }
    }

    if ( $backordered ) {
        $subject = sprintf( '[PO]New customer order (# %s) from %s %s', $order->get_id(), $order->get_billing_first_name(), $order->get_billing_last_name() );
    }

    return $subject;
}
```
Code goes in the functions.php file of the active child theme (or active theme). Tested and works. Upvotes: 3 [selected_answer]
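Stripped of the WooCommerce plumbing, both answers implement the same small decision rule: scan the order's items and prefix the subject as soon as one preorder item is found. A framework-free Python sketch of that rule (the item and order shapes here are invented for illustration and are not the WooCommerce API):

```python
def admin_email_subject(order_id, items):
    """Prefix the default subject with 'PO ' if any line item
    belongs to the 'preorder' product category."""
    subject = "New customer order (#%s)" % order_id
    # any() short-circuits, mirroring the `break` once a match is found.
    if any("preorder" in item["categories"] for item in items):
        subject = "PO " + subject
    return subject

# One preorder item is enough to trigger the prefix.
s1 = admin_email_subject(992, [{"categories": {"preorder", "sale"}}])
# No preorder item -> default subject.
s2 = admin_email_subject(993, [{"categories": {"sale"}}])
```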
2018/03/14
922
3,486
<issue_start>username_0: I am facing an issue with Elasticsearch. I would like use the geo distance feature to fetch all the item located N km maximum from a given localization. Here is my DB schema: ``` { "user_id": "abcde", "pin" : { "location" : { "lat" : 40.12, "lon" : -71.34 } }, "is_active": true, "action_zone": 50 } ``` I have this query which works pretty well: ``` { "query": { "bool" : { "must" : [{ "term": { "is_active": True } }], "filter" : { "geo_distance" : { "distance" : "200km", "pin.location" : { "lat" : 40, "lon" : -70 } } } } } } ``` Now, I would like to modify this query a bit to replace dynamically the distance (200km in my example) by the value "action\_zone" of each item of in the DB. That would be great if someone could help me. :)<issue_comment>username_1: Unfortunately, the `geo_distance` query doesn't allow to use scripting in order to specify a dynamic distance. What you could do, however, would be to use a `terms` aggregation on the `action_zone` field so as to bucket all your documents within a specific action zone. ``` { "query": { "bool": { "must": [ { "term": { "is_active": True } } ], "filter": { "geo_distance": { "distance": "200km", "pin.location": { "lat": 40, "lon": -70 } } } } }, "aggs": { "zones": { "terms": { "field": "action_zone" } } } } ``` Otherwise, you could also use a `range` aggregation on the `action_zone` field with a few specific distances: ``` { "query": { "bool": { "must": [ { "term": { "is_active": "True" } } ], "filter": { "geo_distance": { "distance": "200km", "pin.location": { "lat": 40, "lon": -70 } } } } }, "aggs": { "zones": { "range": { "field": "action_zone", "ranges": [ { "to": 50 }, { "from": 50, "to": 100 }, { "from": 100, "to": 150 }, { "from": 150 } ] } } } } ``` Upvotes: 1 <issue_comment>username_2: I found the solution using a script :D Thanks anyway ! 
```
{
  "query": {
    "bool": {
      "must": [
        { "term": { "is_active": true } },
        {
          "script": {
            "script": {
              "params": { "lat": 40.8, "lon": -70.1 },
              "source": "doc['location'].arcDistance(params.lat, params.lon) / 1000 < doc['action_zone'].value",
              "lang": "painless"
            }
          }
        }
      ]
    }
  }
}
```
Doc: <https://www.elastic.co/guide/en/elasticsearch/reference/6.1/query-dsl-script-query.html> Upvotes: 2
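For readers who want to sanity-check the script's logic outside Elasticsearch: `arcDistance` returns metres along a great circle, so dividing by 1000 and comparing against `action_zone` keeps the documents whose stored radius covers the query point. A standalone sketch of that comparison using the haversine formula (the sample documents and coordinates are invented):

```python
import math

def arc_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in km, roughly arcDistance / 1000."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

docs = [
    {"user_id": "far",  "lat": 40.12, "lon": -71.34, "action_zone": 50},
    {"user_id": "near", "lat": 40.01, "lon": -70.01, "action_zone": 5},
]
q_lat, q_lon = 40.0, -70.0

# Keep documents whose own action_zone covers the query point,
# like the painless script's per-document comparison.
matches = [d["user_id"] for d in docs
           if arc_distance_km(d["lat"], d["lon"], q_lat, q_lon) < d["action_zone"]]
```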
2018/03/14
1,100
4,382
<issue_start>username_0: I am an Angular 2 beginner and I was following a tutorial... but got stuck at this point. I am not able to move forward because I am not able to inject the HTTP service inside the custom service (message.service.ts). I tried to figure it out and found this is happening due to a "CIRCULAR DEPENDENCY", but I am not able to solve it. Edit: I tried with HttpClient also... but no luck! Please help me with the correct lines of code so this would make the code work again. // message.service.ts
```
import { Http, Response } from '@angular/http';
import { Injectable } from '@angular/core';
import 'rxjs/Rx'
import 'rxjs/add/operator/catch'
import 'rxjs/add/observable/throw';
import { Observable } from 'rxjs';
import { Message } from './message.model';

@Injectable // <--- If I remove this @Injectable, the code works fine
export class MessageService {
    private messages: Message[] = [];

    constructor(private http: Http) {} // <-- and I need to comment this out too... to avoid DI.

    url: string = 'http://localhost:3000/message';

    addMessage(message: Message) {
        this.messages.push(message);
        const body = JSON.stringify(message);
        const headers = new Headers({'Content-Type': 'application/json'});
        return this.http.post(this.url, body, {headers: headers})
            .map((response: Response) => response.json())
            .catch((error: Response) => Observable.throw(error.json())); // <--- On this line, catch is shown as an unresolved function.
    }

    getMessages() {
        return this.messages;
    }

    deleteMessage(message: Message) {
        this.messages.splice(this.messages.indexOf(message), 1);
    }
}
```
// app.module.ts
```
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule, ReactiveFormsModule } from "@angular/forms";
import { AppComponent } from "./app.component";
import { MessageComponent } from "./messages/message.component";
import { MessageListComponent } from "./messages/message-list.component";
import { MessageInputComponent } from "./messages/message-input.component";
import { MessagesComponent } from "./messages/messages.component";
import { AuthenticationComponent } from "./auth/authentication.component";
import { HeaderComponent } from "./header.component";
import { routing } from "./app.routing";
import { LogoutComponent } from "./auth/logout.component";
import { SignupComponent } from "./auth/signup.component";
import { SigninComponent } from "./auth/signin.component";
import { HttpModule } from '@angular/http';

@NgModule({
    declarations: [
        AppComponent,
        MessageComponent,
        MessageListComponent,
        MessageInputComponent,
        MessagesComponent,
        AuthenticationComponent,
        HeaderComponent,
        LogoutComponent,
        SignupComponent,
        SigninComponent
    ],
    imports: [ BrowserModule, HttpModule, FormsModule, routing, ReactiveFormsModule],
    bootstrap: [AppComponent]
})
export class AppModule { }
```
// Error Stacktrace:
```
compiler.js?7e34:485 Uncaught Error: Can't resolve all parameters for TypeDecorator: (?).
at syntaxError (compiler.js?7e34:485) at CompileMetadataResolver._getDependenciesMetadata (compiler.js?7e34:15700) at CompileMetadataResolver._getTypeMetadata (compiler.js?7e34:15535) at CompileMetadataResolver._getInjectableMetadata (compiler.js?7e34:15515) at CompileMetadataResolver.getProviderMetadata (compiler.js?7e34:1587 ```<issue_comment>username_1: use HttpClient to make service calls instead of http Upvotes: -1 <issue_comment>username_2: Add `()` to your `Injectable` ``` @Injectable() export class MessageService ``` Upvotes: 1 [selected_answer]<issue_comment>username_3: When you're using a service into another service you need to have @Injectable() added. In your code you've not added parenthesis after @Injectable, like this @Injectable(). Also important to note here that once you've created your custom service, you need to add it to providers array just like we have declarations and imports array. And be careful to add it to appropriate component as they're hierarchical. As for the HTTPClient, it is available starting from angular version 4.3 or later. You don't have it for Angular 2. Upvotes: 0
2018/03/14
782
3,150
<issue_start>username_0: Recently we upgraded our .NET solution from .NET Framework 4.5 to 4.6.2. The project is in Git repository and we are having multiple branches of this repository. We re-targeted the Nuget packages to 4.6.2 and with that I could see Nuget packages getting restored automatically while rebuilding the solution which is absolutely fine and expected. Now, most of the packages are having a folder named as "net462" which contains a DLL for the package targeting to .NET Framework 4.6.2. However, folder "net45" is empty now. The problem is that when a developer switches to an old branch which points to .NET Framework 4.5, s/he gets number of errors related to reference not found. I assume because there exists a folder for "net45" but there is no assembly in that. Could anyone please suggest how can I make both the branches (targeting to 4.5, and 4.6.2) building successfully on a same machine with correct Nuget dependencies? Any help on this will be much appreciated. Thanks<issue_comment>username_1: It sounds like the branches targeting .NET 4.5 are grabbing the latest version of the NuGet packages rather than restricting themselves to the versions of the NuGet packages supporting .NET 4.5. See [this answer](https://stackoverflow.com/questions/22563518/restrict-nuget-package-updates-to-current-versions-for-some-packages#answer-22563668) for an example of how to do that ([the Microsoft documentation on NuGet versioning](https://learn.microsoft.com/en-us/nuget/reference/package-versioning#version-ranges-and-wildcards) contains additional details about restricting version ranges). Ideally your change of NuGet packages to .NET 4.6.2 at least is a new major version of the NuGet packages, but restricting the version works as long as they are a different version. 
Upvotes: 0 <issue_comment>username_2: > > Could anyone please suggest how can I make both the branches (targeting to 4.5, and 4.6.2) building successfully on a same machine with correct Nuget dependencies? > > > Agree with the comment of Hans "*It is not obvious why the net45 subdirectory is empty, not standard behavior. Were they stripped by hand and checked into source control perhaps?*". Nuget would not delete `.dll` file in the .NET 4.5 folder. When you switch to an old branch which points to .NET Framework 4.5, the path of assembly in the Properties winodw should point to the `...\lib\net45\..` folder. And the default behavior of Git is not add the packages folder in to the source control. So, then you build your project from old branch, Visual Studio will restore nuget packages automatically. After restore complete, Visual Studio could find the assembly in the folder "net45". So, to resolve this issue, first, you should make sure **the nuget packages in the nuget repository contains the assembly in folder for "net45"**, then when you switches to an old branch, check if there is a `\packages` folder in the solution folder, if yes, remove it and check if there is nuget restore behavior when you build the project on an old branch(Check the log on the output window). Hope this helps. Upvotes: 2 [selected_answer]
2018/03/14
1,207
4,307
<issue_start>username_0: Below are two lists, lome1 and lome2, which actually hold the same object data but differ in the order in which fields are set on each object and the order of objects in the list. My concern is that the comparison below has to return true. Please advise.
```
List<Some> lome1 = new ArrayList<>();

Some some1 = new Some();
some1.setTime(1000);
some1.setStartTime(25);
some1.setEndTime(30);
lome1.add(some1);

Some some2 = new Some();
some2.setStartTime(125);
some2.setEndTime(130);
some2.setTime(100);
lome1.add(some2);

List<Some> lome2 = new ArrayList<>();

Some some3 = new Some();
some3.setStartTime(125);
some3.setEndTime(130);
some3.setTime(100);
lome2.add(some3);

Some some = new Some();
some.setStartTime(25);
some.setTime(1000);
some.setEndTime(30);
lome2.add(some);
```
Attempts which failed due to order. With deepEquals:
```
if(Arrays.deepEquals(lome1.toArray(), lome2.toArray())) {
    System.out.println("equal");
} else {
    System.out.println("not equal");
}
```
With HashSet (both gave different hash values though the data is the same):
```
if(new HashSet<>(lome1).equals(new HashSet<>(lome2))) {
    System.out.println("equal");
} else {
    System.out.println("not equal");
}
```
Checking whether each object is contained in the other list:
```
boolean x = true;
for(Some d : lome1) {
    if(!lome2.contains(d)) {
        x = false;
    }
}
if(x){
    System.out.println("equal");
} else {
    System.out.println("not equal");
}
```<issue_comment>username_1: Use the `containsAll()` API provided by the Java collections framework: `lome1.containsAll(lome2)` should do the trick.
Upvotes: 0 <issue_comment>username_2: First Override hashcode and equals for Some Object, It may look like this, ``` @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; Some that = (Some) o; return startTime == that.startTime && endTime == that.endTime && time == that.time } @Override public int hashCode() { return Objects.hash(startTime, endTime, time); } ``` Once equals and Hashcode is set then different object with same values will give the same hashcode thus .equals() will return true Now for the list use `list1.containsAll(list2) && list2.containsAll(list1);` Upvotes: 2 <issue_comment>username_3: Comparing the two lists as HashSets is probably the best approach, since that works irrespective of the order. However, your HashSet comparison is dependent on you implementing the equals() and hashCode() functions in your "Some" class. You've not provided the source for that, so I'm guessing you've missed that. Without overriding those methods in your class, the JRE doesn't know that two Some objects are the same or not. 
I'm thinking something like this:
```
@Override
public int hashCode() {
    return getTime() + getStartTime() + getEndTime();
}

@Override
public boolean equals(Object o) {
    if (o instanceof Some) {
        Some other = (Some) o;
        if (getTime() == other.getTime()
                && getStartTime() == other.getStartTime()
                && getEndTime() == other.getEndTime()) {
            return true;
        }
    }
    return false;
}
```
Upvotes: 2 <issue_comment>username_4: For Java 1.8+ you could check that each element of the first list is in the second and vice versa:
```
boolean equals = lome1.stream().allMatch(e -> lome2.contains(e))
        && lome2.stream().allMatch(e -> lome1.contains(e));
```
Upvotes: 0 <issue_comment>username_5: Do something like this:
```
List<User> list = new ArrayList<>();
list.add(new User("User", "20"));
list.add(new User("Some User", "20"));

List<User> list1 = new ArrayList<>();
list1.add(new User("User", "20"));
list1.add(new User("Some User", "20"));

List<User> storeList = new ArrayList<>();
for (User user : list) {
    for (User user1 : list1) {
        if (user.getName().equals(user1.getName()) && user.getAge().equals(user1.getAge()))
            storeList.add(user);
    }
}

boolean check = !storeList.isEmpty();
// OR
check = storeList.size() == list.size();
System.out.println(check);
```
Upvotes: 0
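One caveat worth noting about the `contains`/`containsAll`-style checks above: they ignore duplicate counts, so `[a, a, b]` and `[a, b, b]` compare as equal. A frequency-count (multiset) comparison avoids that. Here is a small Python sketch of the idea, with tuples standing in for `Some` objects that have value-based equality:

```python
from collections import Counter

def same_elements(a, b):
    """Order-insensitive equality that also respects duplicate counts."""
    return Counter(a) == Counter(b)

# (time, startTime, endTime) tuples play the role of Some instances.
l1 = [(1000, 25, 30), (100, 125, 130)]
l2 = [(100, 125, 130), (1000, 25, 30)]
```

The same shape works in Java by comparing two `Map<Some, Long>` frequency maps built with `Collectors.groupingBy(..., counting())`, provided `equals`/`hashCode` are overridden as shown in the accepted answer.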
2018/03/14
733
2,084
<issue_start>username_0: I have 3 columns [A, B, C] in my SQL table. I want to find table entries, where values in A is same, in B is same, but C is different. ``` A B C 1 2 3 4 5 6 *3 4 5* *3 4 6* *7 8 9* 6 1 2 *7 8 3* ``` I want to preferably get something like: ``` A B C 3 4 5 3 4 6 7 8 9 7 8 3 ``` as my result. Thanks :)<issue_comment>username_1: The core of the solution below is to aggregate over your table on both columns `A` and `B`, and then retain those groups having more than one `C` value. Then join your full table to this aggregation query to retain only the records you want. ``` SELECT t1.* FROM yourTable t1 INNER JOIN ( SELECT A, B FROM yourTable GROUP BY A, B HAVING COUNT(DISTINCT C) > 1 ) t2 ON t1.A = t2.A AND t1.B = t2.B ORDER BY t1.A, t1.B; ``` [![enter image description here](https://i.stack.imgur.com/K74Wu.png)](https://i.stack.imgur.com/K74Wu.png) Here is a demo in MySQL, though the above query should run on pretty much any other database with little modification. [Demo ----](http://rextester.com/JNQ85977) Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this: ``` select A,B,C from ( select A,B,C, avg(C * 1.0) over (partition by A,B) [avg] from MY_TABLE ) a where [avg] <> C ``` The idea behind is simple, if all numbers within a set are equal, they also are equal to the average of the set. Upvotes: 1 <issue_comment>username_3: This one should work too: ``` SELECT DISTINCT t1.* FROM test t1 INNER JOIN test t2 ON t2.a = t1.a AND t2.b = t1.b AND t2.c <> t1.c; ``` Here's a demo: [link](http://sqlfiddle.com/#!9/20cbc/2) I'm not sure about performance due to lots of duplicates being generated/truncated compared to other solutions, though. Upvotes: 1 <issue_comment>username_4: No need to count or rank; you only want to check if *at least one* qualifying row `EXISTS` --- ``` select * from thetable tt where exists( select * from thetable x where x.a = tt.a and x.b = tt.b and x.c <> tt.c ); ``` Upvotes: 0
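The accepted self-join/aggregation approach is easy to verify against the sample data with an in-memory SQLite database (a quick check of the logic, not tied to any particular RDBMS):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE yourTable (A INT, B INT, C INT);
    INSERT INTO yourTable VALUES
        (1,2,3),(4,5,6),(3,4,5),(3,4,6),(7,8,9),(6,1,2),(7,8,3);
""")

# Keep only rows whose (A, B) group has more than one distinct C value.
rows = conn.execute("""
    SELECT t1.*
    FROM yourTable t1
    INNER JOIN (
        SELECT A, B FROM yourTable
        GROUP BY A, B
        HAVING COUNT(DISTINCT C) > 1
    ) t2 ON t1.A = t2.A AND t1.B = t2.B
    ORDER BY t1.A, t1.B, t1.C
""").fetchall()
```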
2018/03/14
353
1,334
<issue_start>username_0: I'm trying to list the folders containing a certain file on Jenkins and use this array later. I read about [`findFiles`](https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/) but I can't find a way to use it in this situation. The end goal is that I need to cd into those folders in a loop and perform some actions. I have only one Jenkins instance where everything is running. Use case: I have a workspace which contains packages. I need to run some commands in some folders; I can't do it from the root of my workspace. They may be in subfolders or sub-subfolders. The way I can identify a package is that it contains a `package.xml` (on ROS). Also, I don't have any command to list their paths<issue_comment>username_1: If nothing else is working then you can try running a normal Linux command like:
```
folders = sh( script: "locate myfile", returnStdout: true )
```
Then split this to form an array and use the values like:
```
folders.split("\n")[1]
```
Upvotes: 1 <issue_comment>username_2:
```
def packageDirs = findFiles(glob: '**/package.xml')
    .findAll { f -> !f.directory }
    .collect { f -> f.path.replace('/', '\\') - ~/\\[^\\]+$/ }

packageDirs.each { d ->
    dir(d) {
        // Process each package here
    }
}
```
Upvotes: 0
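Outside of Jenkins, the "collect every directory that holds a package.xml" idea in the second answer can be exercised with plain Python (a hypothetical sketch; `find_package_dirs` is my own helper, not a Jenkins step):

```python
import pathlib
import tempfile

def find_package_dirs(root):
    """Return the relative directories that contain a package.xml, at any depth."""
    root = pathlib.Path(root)
    return sorted(p.parent.relative_to(root).as_posix()
                  for p in root.rglob("package.xml"))

# Build a throwaway workspace: two packages, one plain folder.
with tempfile.TemporaryDirectory() as ws:
    for d in ("pkg_a", "nested/pkg_b", "not_a_pkg"):
        (pathlib.Path(ws) / d).mkdir(parents=True)
    (pathlib.Path(ws) / "pkg_a" / "package.xml").write_text("<package/>")
    (pathlib.Path(ws) / "nested" / "pkg_b" / "package.xml").write_text("<package/>")
    dirs = find_package_dirs(ws)
```

Mapping from each matched file to its parent directory is the same trick the Groovy `collect` performs with its regex on `f.path`.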
2018/03/14
1,486
5,283
<issue_start>username_0: What I'm trying to do --------------------- On avito.ru (Russian real estate site), person's phone is hidden until you click on it. I want to collect the phone using Scrapy+Splash. Example URL: <https://www.avito.ru/moskva/kvartiry/2-k_kvartira_84_m_412_et._992361048> [![screenshot: Phone is hidden](https://i.stack.imgur.com/Htj5B.png)](https://i.stack.imgur.com/Htj5B.png) After you click the button, pop-up is displayed and phone is visible. [![enter image description here](https://i.stack.imgur.com/I6FHp.png)](https://i.stack.imgur.com/I6FHp.png) I'm using Splash [execute](http://splash.readthedocs.io/en/stable/api.html#execute) API with following Lua script: ``` function main(splash) splash:go(splash.args.url) splash:wait(10) splash:runjs("document.getElementsByClassName('item-phone-button')[0].click()") splash:wait(10) return splash:png() end ``` Problem ------- The button is not clicked and phone number is not displayed. It's a trivial task, and I have no explanation why it doesn't work. Click works fine for another field on the same page, if we replace `item-phone-button` with `js-show-stat`. So Javascript *in general* works, and the blue "Display phone" button must be special somehow. What I've tried --------------- To isolate the problem, I created a repo with minimal example script and a docker-compose file for Splash: <https://github.com/alexanderlukanin13/splash-avito-phone> Javascript code is valid, you can verify it using Javascript console in Chrome and Firefox ``` document.getElementsByClassName('item-phone-button')[0].click() ``` I've tried it with Splash versions 3.0, 3.1, 3.2, result is the same. 
Update ------ I've also tried: * @Lore's suggestions, including `simulateClick()` approach (see [simulate\_click](https://github.com/alexanderlukanin13/splash-avito-phone/tree/simulate_click) branch) * mouseDown/mouseUp events as described here: [Simulating a mousedown, click, mouseup sequence in Tampermonkey?](https://stackoverflow.com/questions/24025165/simulating-a-mousedown-click-mouseup-sequence-in-tampermonkey) (see [trigger\_mouse\_event](https://github.com/alexanderlukanin13/splash-avito-phone/tree/trigger_mouse_event) branch)<issue_comment>username_1: I don't know how your implementation works, but I suggest to rename `main` with `parse`, the default function called by spiders on start. If this isn't the problem, first thing to do is controlling if you have picked the right element of that class using Javascript with css selector. Maybe it exists another item with `item-phone-button` class attribute and you are clicking in the wrong place. If all above is correct, I suggest then two options that worked for me: - Using [Splash mouse\_click](https://splash.readthedocs.io/en/stable/scripting-ref.html#splash-mouse-click) and [Splash wait](https://splash.readthedocs.io/en/stable/scripting-ref.html#splash-wait) (the latter I see you have already used). If it don't work, try double click, by substituting in your code: ``` local button = splash:select('item phone-button') button:mouse_click() button:mouse_click() ``` - Using [Splash wait\_for\_resume](https://splash.readthedocs.io/en/stable/scripting-ref.html#splash-wait-for-resume), that executes javascript code until terminated and then restart LUA. 
Your code will become simpler too: ``` function main(splash) splash:go(splash.args.url) splash:wait_for_resume("document.getElementsByClassName([[ function main(splash) { document.getElementsByClassName('item-phone-button');[0].click() splash.resume(); } ]]) return splash:png() end ``` EDIT: it seems that is good to use `dispatchEvent` instead of `click()` like in [this example](https://developer.mozilla.org/en-US/docs/Web/Guide/Events/Creating_and_triggering_events): ``` function simulateClick() { var event = new MouseEvent('click', { view: window, bubbles: true, cancelable: true }); var cb = document.getElementById('checkbox'); var cancelled = !cb.dispatchEvent(event); if (cancelled) { // A handler called preventDefault. alert("cancelled"); } else { // None of the handlers called preventDefault. alert("not cancelled"); } } ``` Upvotes: 1 <issue_comment>username_2: The following script works for me: ``` function main(splash, args) splash.private_mode_enabled = false assert(splash:go(args.url)) btn = splash:select_all('.item-phone-button')[2] btn:mouse_click() btn.style.border = "5px solid black" assert(splash:wait(0.5)) return { num = #splash:select_all('.item-phone-button'), html = splash:html(), png = splash:png(), har = splash:har(), } end ``` There were 2 issues with the original solution: 1. There are 2 elements with 'item-phone-button' class, and button of interest is the second one. I've checked which element is matched by setting `btn.style.border = "5px solid black"`. 2. This website requires private mode to be disabled, likely because it uses localStorage. Check <http://splash.readthedocs.io/en/stable/faq.html#website-is-not-rendered-correctly> for other common suggestions. Upvotes: 4 [selected_answer]
2018/03/14
1,215
4,255
<issue_start>username_0: I use `multer` to parse multiple files sent as `multipart/form-data` with `axios`
```
...
const storage = multer.diskStorage({
    destination: './gallery',
    filename(req, file, cb) {
        (1) ....
    },
});
const upload = multer({ storage });

router.post('/products', upload.array('images'), (req, res, next) => {
    Product.create(...)
        .then((product) => {
            (2) ...
        })
        .catch(..)
})
...
```
at this point everything is fine and my images are saved. The problem is that I want to make a loop in **(1)** or **(2)** and name my files like this
```
files.forEach((file, index) => {
    // rename file to => product_id + '_' + index + '.jpeg'
}
```
For example, if I have 3 files they will be named
```
5a9e881c3ebb4e1bd8911126_1.jpeg
5a9e881c3ebb4e1bd8911126_2.jpeg
5a9e881c3ebb4e1bd8911126_3.jpeg
```
where `5a9e881c3ebb4e1bd8911126` is the id of the product document saved by `mongoose`.

1. how to solve this naming issue ?
2. is `multer` the best solution since I want full control over my files ?
3. Is there a better approach with another node package ?
4. is it good to send images as `multipart/form-data` or `data URL base64` ?
2018/03/14
251
1,058
<issue_start>username_0: I have a simple question. Assume that the following SQL returns over a million records:
```
Select * from Table
```
If I only need to work with 100 records, will limiting the rows significantly increase performance, and why? Example Oracle SQL:
```
Select * from Table where rownum<=100
```<issue_comment>username_1: If you are reading all the rows, then limiting the number of rows will be more efficient. If you are reading the rows through -- say -- a cursor, then you probably will not see any difference in performance. If your "table" is really a "view", then the two queries might optimize differently. Upvotes: 3 [selected_answer]<issue_comment>username_2: No. The best way to find an answer is to quickly generate an explain plan and look at the relative cost of execution; it gives you a quick view. Returning rows is the display part, but fetching the rows is done according to the condition, so it will preferably not be slower. To confirm this, generate the explain plan and it will give you a clear picture. Upvotes: 0
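The accepted point, that a SQL-side limit helps when you would otherwise read everything, while cursor-style fetching of just the first 100 rows behaves much the same, can be illustrated with SQLite (illustrative only; Oracle's optimizer and `rownum` semantics differ in detail):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(1, 10001)])

# Limiting in SQL: the database stops producing rows after the first 100.
limited = conn.execute("SELECT id FROM t ORDER BY id LIMIT 100").fetchall()

# Reading through a cursor: the full query is declared,
# but only 100 rows are actually pulled from it.
cur = conn.execute("SELECT id FROM t ORDER BY id")
first_page = cur.fetchmany(100)
```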
2018/03/14
485
2,062
<issue_start>username_0: I try to create a BaseController and this should contain a DbContext instance. ``` public abstract class BaseODataController : ODataController where T : class { protected readonly ApplicationDbContext \_db; public BaseODataController(ApplicationDbContext db) { \_db = db; } } ``` So when I create a new controller, I have to pass the DbContext through the constructor. Do I have the possibility to get an instance of the ApplicationDbContext like that: ``` public BaseODataController() { _db = //Get instance? } ``` So I don't have to pass it in every controller?<issue_comment>username_1: The ASP.NET Core dependency injection system will not resolve base class dependencies. You do need to explicitly provide that dependency to the base class from the constructors of your derived classes through the `base` statement: ``` public Foo(ApplicationDbContext db) : base(db) { } ``` Upvotes: 0 <issue_comment>username_2: You *could* store the dependency resolver in a static class and call it directly within the base class constructor, like e.g.: ``` _db = Container.Resolver.Resolve(); ``` Upvotes: -1 <issue_comment>username_3: You do not need to set the field in your derived constructor, but you still must implement the constructor on the derived class. This has nothing to do with dependency injection; it's simply how inheritance works in C#. Constructors are not inherited, so if you want your derived class to have the same constructor available (which it will need, in order to accept the context), then it has to be declared on the derived class. However, you can simply pass the construction logic to the base constructor. 
``` public abstract class BaseODataController : ODataController where T : class { protected readonly ApplicationDbContext \_db; public BaseODataController(ApplicationDbContext db) { \_db = db; } } public class FooController : BaseODataController { public FooController(ApplicationDbContext context) : base(context) { } } ``` Upvotes: 0
2018/03/14
301
934
<issue_start>username_0: I have an existing table that uses an auto-incremented `id` as its primary key. There are entries in the table with the id starting at 1: ``` id field1 == ====== 1 foo1 2 foo2 3 foo3 ``` Is there a way to update the `id` for *all existing entries* so the auto\_increment starts at another number?: ``` id field1 ==== ====== 1000 foo1 1001 foo2 1002 foo3 ``` (The order does not necessarily have to be kept if that is not possible)<issue_comment>username_1: You can use `update` to change the values and `alter table` to update the auto increment to a new value: ``` alter table t auto_increment = 1003; -- the next value update t set id = id + 999; ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: This works in SQL Server; for MySQL I'm not sure: ``` DECLARE @NewRowId int = 998; DBCC CHECKIDENT('MyTableName', RESEED, @NewRowId) ``` Upvotes: 1
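The `UPDATE` step from the accepted answer can be tried out in miniature. In this sketch SQLite stands in for MySQL (the `alter table ... auto_increment` part is MySQL-specific and is omitted here; table contents match the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, field1 TEXT)")
conn.executemany("INSERT INTO t (field1) VALUES (?)",
                 [("foo1",), ("foo2",), ("foo3",)])  # ids auto-assigned 1, 2, 3

# Shift every existing id; the new range (1000+) does not collide with the old.
conn.execute("UPDATE t SET id = id + 999")
rows = conn.execute("SELECT id, field1 FROM t ORDER BY id").fetchall()
print(rows)  # [(1000, 'foo1'), (1001, 'foo2'), (1002, 'foo3')]
```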
2018/03/14
245
783
<issue_start>username_0: I have `datetime.now()` objects, and I want to know how many hours will pass before a specific hour the next day I've tried this: ``` now = datetime.now() then = datetime.now() + timedelta(days=1) then.hour = 12 # doesn't work hours = then - now ``` But I don't know how can I specify the exact hour for `then` object
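For reference, `datetime` attributes are read-only, so `then.hour = 12` raises an `AttributeError`; the standard approach is `datetime.replace()`, which returns a copy with the chosen fields pinned. A minimal sketch (assuming the target is noon the next day):

```python
from datetime import datetime, timedelta

now = datetime.now()
# replace() returns a NEW datetime with the given fields overridden;
# attributes such as .hour cannot be assigned directly.
then = (now + timedelta(days=1)).replace(hour=12, minute=0,
                                         second=0, microsecond=0)
hours_left = (then - now).total_seconds() / 3600
print(then.hour)  # 12
```

Depending on the current time, `hours_left` lands between 12 (just before midnight) and 36 (just after midnight).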
2018/03/14
539
1,923
<issue_start>username_0: ``` #include <iostream> #include <cstring> using namespace std; int main() { char b[] = {'a','b',' ','c'}; cout << sizeof(b) << endl; cout << strlen(b) << endl; return 0; } ``` Why is the above output 4,6? Isn't 4,4 the correct answer?<issue_comment>username_1: The behaviour of your code is undefined: `strlen` requires that `b` **must** contain a `NUL` character within the bounds of the array. (Note that `' '` is a space, **not** `NUL`. A space is processed no differently by `strlen` to any other non-`NUL` character.) If you had written ``` char b[5]={'a','b',' ','c'}; ``` then C++ would have set the final element of `b` to 0 (i.e. `NUL`) for you. Upvotes: 3 <issue_comment>username_2: Your code exhibits *undefined behavior*: The behavior of `strlen` is not defined if the argument passed to it is not `NUL`-terminated. You seem to confuse the NUL character with the space character. Upvotes: 1 <issue_comment>username_3: `strlen` traverses the array **until** it encounters a `NULL` character. Your array does not contain `NULL` characters (ASCII 0) but it contains a space (ASCII 32). So `strlen` continues **past** your array. There happens to be a `NULL` character at position 6 (two characters after the end of the array in memory), so the string length is 6. Subsequent executions of this function could return different values, as memory content outside the array is undefined. It may also crash. Upvotes: 0 <issue_comment>username_4: > > what does strlen do if it enounter a space in between char array? > > > While all the other answers (according to your code sample, legitimately!!!) hint at the undefined behaviour due to the missing null character within your array, I'll be answering your original question: Nothing special, it will count the space just as any other character (especially, it *won't* stop processing the string as you seem to have assumed)... Upvotes: 1
2018/03/14
690
2,731
<issue_start>username_0: I am trying to configure sessions for an asp.net core 2.0 website, but the session cookie is never set. I call .. ``` app.UseSession(); ``` ...in Startup.Configure and ... ``` services.AddDistributedMemoryCache(); services.AddSession(options => { options.IdleTimeout = TimeSpan.FromMinutes(10); options.Cookie.HttpOnly = false; options.Cookie.Name = "WS_AUTH_ID"; }); ``` ... in the ConfigureServices method. In the Controller I can access ... ``` HttpContext.Session.Id; ``` ... but the id is always different for every request. Am I missing something? Update: I should mention that I can set cookies "manually" and the browser will "receive" them. ``` HttpContext.Response.Cookies.Append("Test_cookie", "yo"); ```<issue_comment>username_1: You have to type the following in your ConfigureServices method: ``` services.AddMvc() .AddSessionStateTempDataProvider(); services.AddDistributedMemoryCache(); services.AddSession(options => { options.IdleTimeout = TimeSpan.FromMinutes(30); options.Cookie.Name = ".MyApplication"; }); ``` In your Configure type the following ``` //enable session before MVC app.UseSession(); app.UseMvc(); ``` Upvotes: 1 <issue_comment>username_2: This was the cause for me: The extension `Microsoft.AspNetCore.CookiePolicy` (`UseCookiePolicy`) was blocking the session cookie. Removing this extension and running the app in a new browser window fixed the issue. Rationale: this extension blocks the cookies sent to the browser until the user accepts them. Since the session key is stored in a cookie and cookies are blocked by this extension... No cookies, no session. Another workaround could be to enable the application to work without session until the user accepts cookies (I didn't test this workaround). Hope that helps. Upvotes: 4 <issue_comment>username_3: If you have the cookie policy turned ON, the session cookie won't be created until the user accepts the use of cookies; this is to comply with the EU's GDPR. 
You can remove the line `app.UseCookiePolicy();` from your `Startup` and then it will work; otherwise your users will need to agree to the use of cookies before you can use the cookie for session control. Upvotes: 3 <issue_comment>username_4: For me the problem was solved by one of the comments on the question: > > The cookie isn't written unless you add something to the session. > > > So just requesting the `Session.Id` won't help, you actually have to set something. In my case it was a variable that was only set after some condition, and before that condition was met, it would create a new session ID over and over again. Upvotes: 2
2018/03/14
1,738
6,149
<issue_start>username_0: I'm running the `Test-AdfsServerHealth` ([Ref.](https://blogs.technet.microsoft.com/aadceeteam/2015/02/13/under-the-hood-tour-of-azure-ad-connect-health-ad-fs-diagnostics-module/)) The problem is, one of the output values (value name `Output`) is an array that shows up as `System.Collection.Hashtable` and I'm trying to find a way to get this in a neat Excel format. For instance, this is one of the actual values on the CSV when I export: ``` Name Result Detail Output TestServiceAccountProperties Pass "" System.Collections.Hashtable ``` But PowerShell displays: ``` Name : TestServiceAccountProperties Result : Pass Detail : Output : {AdfsServiceAccount, AdfsServiceAccountDisabled, AdfsServiceAccountLockedOut, AdfsServiceAccountPwdExpired...} ExceptionMessage : ``` The actual command I'm running is: ``` $ServerResult = Test-AdfsServerHealth ```<issue_comment>username_1: This won't be significantly difficult, just annoying to do. The reason you are getting "System.Collections.Hashtable" is because PowerShell is unable to display everything in that property in a single flat format like that; there is way too much information. You will have to create another object and put whatever information you want in there. This probably won't work exactly like you want, but with some tweaking it should get you there. ``` $ServerResult = Test-ADFSServerHealth $Object = New-Object PSObject -Property @{ 'Name' = $ServerResult.name 'Result' = $ServerResult.Result 'Detail' = $ServerResult.Detail 'Output' = ($ServerResult.Output | out-string -stream) 'ExceptionMessage' = $ServerResult.ExceptionMessage } ``` If you're interested, here are the resources I used to find this answer. 
[Converting hashtable to array of strings](https://stackoverflow.com/questions/21413483/converting-hashtable-to-array-of-strings) <https://devops-collective-inc.gitbooks.io/the-big-book-of-powershell-gotchas/content/manuscript/new-object_psobject_vs_pscustomobject.html> Upvotes: 1 <issue_comment>username_2: **tl;dr**: ``` Test-AdfsServerHealth | Select-Object Name, Result, Detail, @{ n='Output'; e={ $_.Output.GetEnumerator().ForEach({ '{0}={1}' -f $_.Key, $_.Value }) -join ' ' } } | Export-Csv out.csv ``` The above serializes each `.Output` hashtable's entries into a single-line string of space-separated `key=value` pairs (PSv4+ syntax) that should work reasonably well in CSV output.
In the following example, a single-line string composed of space-separated `key=value` pairs is created (PSv4+ syntax). ``` [pscustomobject] @{ prop1 = 1; Output = @{ name='foo'; ID=666 } } | Select-Object prop1, @{ n='Output'; e={ $_.Output.GetEnumerator().ForEach({ '{0}={1}' -f $_.Key, $_.Value }) -join ' ' } } | ConvertTo-Csv ``` For an explanation of the hashtable format that creates the calculated `Output` property, see [this answer](https://stackoverflow.com/a/39861920/45375) of mine. The above yields: ``` "prop1","Output" "1","ID=666 name=foo" ``` Note, however, that if the values in your hashtables are again complex objects that serialize to their type name only, you'd have to apply the approach *recursively*. --- ### Optional reading: Flattening a hashtable property into individual columns If the hashtable-valued properties of the objects to export to a CSV file all have the **same structure**, you can opt to **make the hashtable entries each their own output column.** Let's take the following sample input: a collection of 2 custom objects whose `.prop2` value is a hashtable with a uniform set of keys (entries): ``` $coll = [pscustomobject] @{ prop1 = 1; prop2 = @{ name='foo1'; ID=666 } }, [pscustomobject] @{ prop1 = 2; prop2 = @{ name='foo2'; ID=667 } } ``` If you know the key names (of interest) up front, you can simply **use an explicit list of calculated properties to create the individual columns**: ``` $coll | select prop1, @{ n='name'; e={ $_.prop2.name } }, @{ n='ID'; e={ $_.prop2.ID } } | ConvertTo-Csv ``` The above yields the following, showing that the hashtable entries became their own columns, `name` and `ID`: ``` "prop1","name","ID" "1","foo1","666" "2","foo2","667" ``` More advanced techniques are required **if you do *not* know the key names up front**: ``` # Create the list of calculated properties dynamically, from the 1st input # object's .prop2 hashtable. 
$propList = foreach ($key in $coll[0].prop2.Keys) { # The script block for the calculated property must be created from a # *string* in this case, so we can "bake" the key name into it. @{ n=$key; e=[scriptblock]::Create("`$_.prop2.$key") } } $coll | Select-Object (, 'prop1' + $propList) | ConvertTo-Csv ``` This yields the same output as the previous command with the fixed list of calculated properties. Upvotes: 3 [selected_answer]
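The flattening idea is language-agnostic: serialize the nested dictionary into one delimited string before writing the CSV row. A rough Python equivalent of the same approach (the field names and sample data here are illustrative):

```python
import csv
import io

row = {"prop1": 1, "Output": {"ID": 666, "name": "foo"}}

# Serialize the nested dict as space-separated key=value pairs; sorting the
# keys keeps the result stable across runs.
flat = dict(row, Output=" ".join(
    "%s=%s" % (k, v) for k, v in sorted(row["Output"].items())))

# Write a normal flat CSV row from the flattened record.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["prop1", "Output"])
writer.writeheader()
writer.writerow(flat)
print(buf.getvalue().strip())
```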
2018/03/14
942
3,405
<issue_start>username_0: I'd like to show some loading animation in the app root while a component prepares to be rendered by vue router. Already found [this question](https://stackoverflow.com/q/44868403/176140), proposing the use of navigation guards, and [another question](https://stackoverflow.com/q/44886812/176140), where the accepted answer shows how to use the `beforeEach` guard to set a variable in `app`, showing a loading animation. The problem is that **this doesn't work when deep-linking** to some route (initial url includes a route path, such as 'someurl#/foo'). The `beforeEach` guard simply doesn't get called then. So i switched to the loaded component's `beforeRouteEnter` guard, which would also allow me to show the loading animation for some components only: app: ``` var app = new Vue({ el: '#app', data: { loading: false }, router: router }); ``` component: ``` var Foo = { template: 'bar', beforeRouteEnter: function(to, from, next) { app.loading = true; // 'app' unavailable when deep-linking // do some loading here before calling next()... next(); } } ``` But then i found that when deep-linking to the component, `app` isn't available in `beforeRouteEnter`, as it gets called very early in the initialisation process. I don't want to set `loading` to `true` inside the app data declaration, as i might decide at some point to deep-link to another route, whose component doesn't need a loading animation.<issue_comment>username_1: Found a workaround using Vue.nextTick: ``` beforeRouteEnter: function(to, from, next) { Vue.nextTick(function(){ // now app is available app.loading = true; // some loading to happen here... setTimeout(function(){ app.loading = false; next(); }, 1000); }) } ``` Feels a little hacky, so would be thankful for other suggestions. Find a demo of this solution here: <https://s.codepen.io/username_1/debug/aYvXqx/GnrnbVPBXezr#/foo> Upvotes: 3 [selected_answer]<issue_comment>username_2: I believe, your solution is correct. 
However, I would suggest using the next() function instead. As written in the vue-router docs: <https://router.vuejs.org/en/advanced/navigation-guards.html> > > The beforeRouteEnter guard does NOT have access to this, because the guard is called before the navigation is confirmed, thus the new entering component has not even been created yet. > > > However, you can access the instance by passing a callback to next. The callback will be called when the navigation is confirmed, and the component instance will be passed to the callback as the argument: > > > ``` beforeRouteEnter (to, from, next) { next(vm => { vm.$root.loading = true; }) } ``` Upvotes: 3 <issue_comment>username_3: What about using `beforeRouteLeave` to trigger the loading, then have the component toggle it off in `mounted`. For the initial load of the app you could have **app**: ``` var app = new Vue({ el: '#app', data: () => ({ loading: true }), mounted() { this.loading = false; }, router: router }); ``` then for your components **component**: ``` var Foo = { template: 'bar', mounted() { app.loading = false; }, beforeRouteLeave(to, from, next) { switch(to){ case COMPONENT_TO_SHOW_LOADING_ON: case OTHER_COMPONENT: app.loading = true; default: } } } ``` Upvotes: 0
2018/03/14
806
3,067
<issue_start>username_0: I am trying to change the state of a component every 5 seconds as below inside componentDidMount() hook ``` import React, { Component } from 'react'; export default class ToTest extends Component { constructor(props) { super(props); this.state = { test: false }; } componentDidMount() { setTimeout(() => { this.setState({ test: !this.state.test }) }, 5000); } renderDiv() { if(this.state.test) { return (test is true) } else { return (test is false) } } render() { return ( { this.renderDiv() } ); } } ``` But it executes only once. It changes from false to true once and then nothing. What am I missing?<issue_comment>username_1: Well `setTimeout` will only execute once, what you are looking for is `setInterval`: <https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout> > > The setTimeout() method of the WindowOrWorkerGlobalScope mixin (and > successor to window.setTimeout) sets a timer which executes a function > or specified piece of code **once after** the timer expires. > > > Compare with > > The setInterval() method of the WindowOrWorkerGlobalScope mixin > **repeatedly** calls a function or executes a code snippet, with a fixed > time delay between each call. > > > Upvotes: 2 <issue_comment>username_2: `componentDidMount()` is only executed once when the component mounts and you only schedule it once. You have to use `setInterval()` to schedule it periodically. Also when you update the state based on the current state you should use a callback in `setState()` that takes the previous state as react may batch multiple calls to `setState()`. 
And don't forget to cancel the timer in `componentWillUnmount()`: ``` import React, { Component } from 'react'; export default class ToTest extends Component { state = { test: false, }; componentDidMount() { this.timer = setInterval( () => this.setState(prevState => ({ test: !prevState.test })), 5000, ); } componentWillUnmount() { clearInterval(this.timer); } // other methods ... } ``` Upvotes: 5 [selected_answer]<issue_comment>username_3: As said in the comments, you must use `setInterval`; the function `setTimeout` is called only once. Make sure to clear the interval when the component unmounts. <https://reactjs.org/docs/react-component.html#componentwillunmount> The code: ``` import React, { Component } from 'react'; export default class ToTest extends Component { constructor(props) { super(props); this.state = { test: false }; } componentDidMount() { this.timer = setInterval(() => { this.setState({ test: !this.state.test }) }, 5000); } componentWillUnmount() { clearInterval(this.timer) } renderDiv() { if(this.state.test) { return (test is true) } else { return (test is false) } } render() { return ( { this.renderDiv() } ); } } ``` Upvotes: 2
2018/03/14
302
1,104
<issue_start>username_0: My problem is about changes in xcode for a new version of a IOS app. Step 1 : In terminal : cordova build ios. Step 2 : I open it on xcode (.xcode file) Step 3 : I do a archive build (for dev test) Step 4 : I do a change on my controller.js for example in xcode. (simple alert('test')) Step 5 : **I RE do a archive build... But it's like nothing has changed.** Where is my problem ? Do I have to rebuild cordova build ios EACH time ?<issue_comment>username_1: For propagating changes you just have to run `cordova prepare ios`; it's going to be faster than a full build. BTW, you don't need to archive: just hit the run button (triangle) and, if you have a device connected, it will install and run the app Upvotes: 1 <issue_comment>username_2: When you open the .xcode file, in the project structure you will see the **Staging** folder, which holds the www folder of the project. Changes will only be reflected when you have made them in the **Staging** folder. Then just run the app through Xcode; no need for any CLI command. Hope this helps Upvotes: 3 [selected_answer]
2018/03/14
809
2,730
<issue_start>username_0: I'm completely new to DynamoDB, and I can't find out how querying works. So, in my case, I have a table(collection/etc) `Users` with these fields and AWS types in brackets: `id [type String], name [type String], email[type String], ref_code[type Map], register_date[type String]` And my table Indexes are ``` KeySchema: [ { AttributeName: 'id', KeyType: 'HASH' }, { AttributeName: 'register_date', KeyType: 'RANGE' }, ], AttributeDefinitions: [ { AttributeName: 'id', AttributeType: 'S' }, { AttributeName: 'register_date', AttributeType: 'S' }, ], ``` I've read documentation [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.NodeJs.04.html?shortFooter=true), [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.KeyConditions.html), [here](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.ReadData.Query.html) and a lot of other info too, but still can't understand how I can query a user by his/her name. So, in the MySQL world, if I have a primary index on field Id, I can still query user data by name, like this `SELECT * FROM users WHERE name = '';` But this doesn't work with DynamoDB. I've tried to query like this: ``` var params = { TableName: 'Users', IndexName: 'name', KeyConditionExpression: '#u_name = :name', ProjectionExpression: '#u_name, lives, fb_account', ExpressionAttributeNames: { '#u_name': 'name', }, ExpressionAttributeValues: { ':name': 'Mykhaylo', }, }; ``` And a lot of other options, but nothing worked out. So, my question is **How to make a query in AWS DynamoDB?** **EDIT** If I need to set more than 5 global secondary indexes, how should I perform my query? Is it possible at all?<issue_comment>username_1: You cannot directly query on a field that is not a hash key. 
You have to either use `scan` with `filter` on `name` like ``` var params = { TableName: 'Users', FilterExpression: '#u_name = :value', ExpressionAttributeNames: { '#u_name': 'name' }, ExpressionAttributeValues: { ':value': 'Mykhaylo' }, }; docClient.scan(params, function(err, data) { //////// }); ``` **or** you need to create a [`gsi`](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html) with `name` as `hash key`. Then you can use [`Query`](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html) to get the result according to `name` Upvotes: 2 [selected_answer]<issue_comment>username_2: You can run sql queries against dynamodb by doing the following: * open ubuntu bash * `pip install dql` * `dql -r eu-west-2` * `scan * from table_name;` Upvotes: 0
2018/03/14
1,090
4,140
<issue_start>username_0: Error: Invariant Violation: A VirtualizedList contains a cell which itself contains more than one VirtualizedList of the same orientation as the parent list. You must pass a unique listKey prop to each sibling list. So I am trying to make A FlatList which will have Multiple Nested FlatLists Like this.. ``` 1------FlaList Level 0 1.1-----FlatList Level 1 1.2-----FlatList Level 1 1.2.1------ FlatList Level 2 1.2.2------ FlatList Level 2 2------FlatList Level 0 2.1-----FlatList Level 1 2.2-----FlatList Level 1 2.2.1------ FlatList Level 2 2.2.2------ FlatList Level 2 ``` The Code Snippet For this: ``` {/* Flat List Level 0 ---------------------------------------------------- */} ( {index + 1} {item.title} {/\* Nested Item Level 1---------------------------- \*/} ( { return }}>{index + 1} {item.text} )} keyExtractor={(item, index) => index} /> {/\* Nested Item Level 1---------------------------- \*/} ( {index + 1} {item.title} {/\* Nested Item Level 2---------------------------- \*/} ( { return }}>{index + 1} {item.text} )} keyExtractor={(item, index) => index} /> {/\* Nested Item Level 2---------------------------- \*/} ( {index + 1} {item.title} ( { return }}>{index + 1} {item.text} )} keyExtractor={(item, index) => index} /> )} keyExtractor={(item, index) => index} /> {/\* Nested FlatList end Level 2---------------------\*/} )} keyExtractor={(item, index) => index} /> {/\* Nested FlatList END Level 1---------------------\*/} )} keyExtractor={(item, index) => index} /> {/\* Flat List END Level 0 ---------------------------------------------------- \*/} ``` Example of the Data The Parent FlatList is given. 
``` var meetingTopicData = [ { title: "test title frt6", docs: [ { text: "Document Name", url: "https://dummy.com" }, { text: "Document Name", url: "https://dummy.com" }, ], subItems: [ { title: "test title frt6", docs: [ { text: "Document Name", url: "https://dummy.com" }, { text: "Document Name", url: "https://dummy.com" }, ], subItems: [ { title: "test title frt6", docs: [ { text: "Document Name", url: "https://dummy.com" }, { text: "Document Name", url: "https://dummy.com" }, ] }, { title: "test title frt6", docs: [ { text: "Document Name", url: "https://dummy.com" }, { text: "Document Name", url: "https://dummy.com" }, ] }, { title: "test title frt6", docs: [ { text: "Document Name", url: "https://dummy.com" }, { text: "Document Name", url: "https://dummy.com" }, ] } ] } ] }, ]; ``` You see, there are two FlatLists At Each Level. If I comment out one of them (The upper one with no Child FlatLists.) the Code runs without any error. I think it is something related to keyExtractors of sibling FlatLists.<issue_comment>username_1: // unique listKey ``` ``` Upvotes: 2 <issue_comment>username_2: Please follow this. Instead of keyExtractor I used listKey. That works for me. ``` <FlatList listKey={(item, index) => 'D' + index.toString()} renderItem={({item}) => ( <TouchableOpacity onPress={() => this.selectProduct(item)}> <Text>{item.name}</Text> <FlatList listKey={(item, index) => 'D' + index.toString()} renderItem = {({item2}) => ( <Text>Hello</Text> )} /> </TouchableOpacity> )} /> ``` Upvotes: 4 <issue_comment>username_3: Below code works for me: ``` <FlatList listKey={(item, index) => 'A' + index.toString()}/> <FlatList listKey={(item, index) => 'B' + index.toString()}/> <FlatList listKey={(item, index) => 'C' + index.toString()}/> <FlatList listKey={(item, index) => 'D' + index.toString()}/> ``` Upvotes: 0
2018/03/14
366
1,319
<issue_start>username_0: I tried the following mapping: ``` {call pop.dbo.getRequestDetail ( #{uid, mode=IN, jdbcType=VARCHAR}, #{requestId, mode=IN, jdbcType=INTEGER}, #{resultStatus, mode=OUT, jdbcType=INTEGER}, #{resultMsg, mode=OUT, jdbcType=VARCHAR} )} ``` But I get this error: > > org.apache.ibatis.exceptions.TooManyResultsException: Expected one result (or null) to be returned by selectOne(), but found: 2 > > > My interface is: `Map getRequestDetail(RequestDetailRequest detailRequest);` Can you please help me with how to map multiple result sets when calling a procedure? My DB is Sybase.<issue_comment>username_1: sqlSession.selectOne indicates you are only expecting one row returned from the procedure. Instead you should use sqlSession.select Upvotes: 2 [selected_answer]<issue_comment>username_2: The return type of the `getRequestDetail` method must be changed to a `List` instead of `Map`. You then get a result of type `List`: it contains the `ExternalManagersMap` result set at index 0 and the `SubjectServicesMap` result set at index 1. So you can write it like this: ``` List result = getRequestDetail(detailRequest); ExternalManagersMap external = (ExternalManagersMap) result.get(0); SubjectServicesMap subject = (SubjectServicesMap) result.get(1); ``` Upvotes: 0
2018/03/14
713
2,381
<issue_start>username_0: I'm updating an existing project from a different developer in our company and am trying to set the colour of the icons to match the text (as you can see from the screenshot below). I've succeeded for the current selected item but not for the items that aren't selected. I cannot figure out why either of these do/don't work at the moment. [![my current drawerlayout icons](https://i.stack.imgur.com/wBTbk.png)](https://i.stack.imgur.com/wBTbk.png) My layout ``` ``` My code ``` @Bind(R.id.drawer_layout) DrawerLayout mDrawerLayout; private void initDrawer() { Menu m = mNavigationView.getMenu(); for (ModuleVO module : Modules.getActiveModules()) { m.add(0, module.id, 1, module.textRef).setIcon(module.drawerIconRef); } } ``` I have already done some searching and tried a couple of things, including the answers listed here: * [Changing text color of menu item in navigation drawer](https://stackoverflow.com/questions/32042794/changing-text-color-of-menu-item-in-navigation-drawer) * [How to style Menu Items in Navigation Drawer in Android?](https://stackoverflow.com/questions/38185891/how-to-style-menu-items-in-navigation-drawer-in-android) Using `app:itemIconTint` doesn't work and neither does writing a selector. **The only way I've been able to change the colours has been by changing these attributes in my `themes.xml` file.** ``` xml version="1.0" encoding="utf-8"? <item name="android:textColorPrimary">@color/text</item> <item name="android:textColorSecondary">@color/primary</item> ``` I will be happy to provide more information/code if required to solve the issue!<issue_comment>username_1: I also had that issue, but I found a solution that may work for you :- ``` ``` **and** `drawer_item.xml`:- ``` xml version="1.0" encoding="utf-8"? ``` **nav\_header\_home.xml** is:- ``` xml version="1.0" encoding="utf-8"? ``` Upvotes: 1 <issue_comment>username_2: I found the answer! 
When looking over my layout file again, I applied a selector to `navigation_view` and it worked! Below is my working code. **activity\_home.xml** ``` ``` **navigation\_view.xml** ``` ``` **drawer\_item.xml** (Kudos to [username_1](https://stackoverflow.com/users/7319704/abhinav-gupta) for this part) ``` xml version="1.0" encoding="utf-8"? ``` Upvotes: 1 [selected_answer]
2018/03/14
321
1,171
<issue_start>username_0: I have an Ubuntu server on google compute engine that i have allowed ufw on but forgot to allow port 22 for `ssh` connections and now I cannot access it * Any idea how can I reverse that? I also tried to connect using google serial console but i can't remember my instance username and password. * Where are these set?<issue_comment>username_1: I would suggest running a [startup script](https://cloud.google.com/compute/docs/startupscript), either to set a username/password so that you can access the instance through the serial console, or to modify ufw directly by writing in the script the commands you would have used. > > Note that the startup script is run as root. > > > Upvotes: 3 [selected_answer]<issue_comment>username_2: [This answer from Server Fault solved the problem for me](https://serverfault.com/a/946347/508925). Use either of the 2 methods. I used the first: 1. **Method 1:** Add a startup script on the GCP VM's instance settings page to disable ufw ``` #! /bin/bash /usr/sbin/ufw disable ``` 2. **Method 2**: Attach the boot disk to another instance and modify the file `/etc/ufw/ufw.conf` See the link for detailed instructions Upvotes: 2
2018/03/14
765
2,843
<issue_start>username_0: I am using a component to display a canvas image. The first time, it works fine. After displaying it I hide it, but when I display the component again it shows me the previous image, even though I have already changed the image source. So I want to ask: is there any way I can destroy this instance of the component so that when it is rendered again it re-initializes everything? I want to re-initialize it because there are many other member variables in the component that I want re-initialized too. Note: I already tried ngOnChanges.<issue_comment>username_1: It is possible by creating a dynamic component... You have to follow these steps:

in your parent html:

```
```

in your parent component.ts file:

```
@ViewChild("dynamic", {read: ViewContainerRef}) container;
componentRef: ComponentRef;

constructor(... private resolver: ComponentFactoryResolver ...)

createComponent() {
    if (this.componentRef) this.componentRef.destroy();
    this.container.clear();
    const factoryGleam: ComponentFactory = this.resolver.resolveComponentFactory(YourComponent);
    this.componentRef = this.container.createComponent(factoryGleam);
}

ngOnDestroy() {
    if (this.componentRef) this.componentRef.destroy();
}
```

Use your createComponent() method when you want to create the component again... And in your parent module:

```
...
providers: ...,
entryComponents: [YourComponent]
```

Or you could read this article: <https://medium.com/front-end-hacking/dynamically-add-components-to-the-dom-with-angular-71b0cb535286>

Upvotes: 0 <issue_comment>username_2: With a little hack of the `*ngFor` template you can create a new instance on change:

Template:

```
```

Component:

```
values = ['Template'];

change(event) {
    this.values.pop();
    this.values.push(event);
}
```

[You can find a running example here.](https://stackblitz.com/edit/stackoverflow-49276542) The count displays the instances count.
Upvotes: 1 <issue_comment>username_3: > So I want to ask is there any way I can destroy this instance of
> component so that when it is again rendered it re-initializes
> everything?

**[Code example](https://stackblitz.com/edit/angular-component-reinit?file=app%2Fapp.component.ts)**

In your parent component template:

```
Toggle
ReInitialize
```

**Parent Component class:**

```
class ParentComponent {
  isShown = false;

  toggleChild() {
    this.isShown = !this.isShown;
  }

  reInit() {
    this.isShown = false;
    setTimeout(() => {
      this.isShown = true;
    });
  }
}
```

**setTimeout** is needed if we want to re-initialize the component immediately. Without it, the component would not be re-initialized within a single change-detection cycle: in the first cycle the component is destroyed, and **setTimeout** forces it to be initialized again in the next cycle.

Upvotes: 0
2018/03/14
645
2,294
<issue_start>username_0: I was using `window.open('')` with `'_blank'` as the second parameter to open my link in a new tab, e.g. `window.open('http://google.com', '_blank')`. Recently I added the third parameter `'noopener'` so that `window.opener` becomes null in the new tab and the new tab does not have access to the parent tab/window, i.e. `window.opener` is `null`:

`window.open('http://google.com', '_blank', 'noopener')`

The above code solved the security problem, but instead of opening a new tab it started opening a new window, which is not what I expected. My browser settings were the same and no changes were made to them. Is there anything I can do to make this code open a new tab instead of a new window? I do not want to remove `noopener` as the third parameter.<issue_comment>username_1: > <https://mathiasbynens.github.io/rel-noopener/>

```
const anchor = document.createElement('a');
Object.assign(anchor, {
  target: '_blank',
  href: 'http://google.com',
  rel: 'noopener noreferrer'
}).click();
```

This method feels a bit cleaner. It creates an anchor tag and clicks it; we have to use this workaround as it's a user preference.
Upvotes: 3 <issue_comment>username_2: Honestly I think your code is fine, but you can try a different implementation:

```
var yourWindow = window.open();
yourWindow.opener = null;
yourWindow.location = "http://someurl.here";
yourWindow.target = "_blank";
```

Upvotes: 4 [selected_answer]<issue_comment>username_3: This is the only thing that works cross-browser (IE11, Chrome 66, FF 60, Safari 11.1) for me:

```js
function openURL(url) {
  var link = document.createElement('a');
  link.target = "_blank";
  link.href = url;
  link.rel = "noopener noreferrer";
  document.body.appendChild(link); // you need to add it to the DOM to get FF working
  link.click();
  link.parentNode.removeChild(link); // link.remove(); doesn't work on IE11
}
```

Upvotes: 3 <issue_comment>username_4: Another approach that solves this in one line is to access the opener property directly and set it to null, making use of the fact that `window.open()` returns a `Window` object. This works across all browsers to open a new tab with a null `window.opener`.

```
window.open(url, '_blank').opener = null;
```

Upvotes: 4
2018/03/14
1,172
4,240
<issue_start>username_0: I have Windows 10 Pro x64, Excel 2016 32-bit and SQL Server 2017. I want to import an Excel file into SQL Server. I need to use the 32-bit wizard because Microsoft Excel is not shown in the 64-bit version, but I face this error:

The 'Microsoft.ACE.OLEDB.16.0' provider is not registered on the local machine. (System.Data)<issue_comment>username_1: Use this link to download the 64-bit version of the Microsoft Access Database Engine 2016 Redistributable: <https://www.microsoft.com/en-us/download/details.aspx?id=54920>

Once installed you can open the 64-bit import/export wizard and you will have a data source option for Excel.

Upvotes: 2 <issue_comment>username_2: If you have the Microsoft Access Database Engine and are still facing the same issue, make sure that you are running Microsoft SQL Server Management Studio as **Administrator**.

Upvotes: 0 <issue_comment>username_3: If you are having problems installing the engine because components are already installed, do this (from Microsoft): If Office 365 is already installed, side-by-side detection will prevent the installation from proceeding. Instead perform a /quiet install of these components from the command line. To do so, download the AccessDatabaseEngine\_x64.exe to your PC, open an administrative command prompt, and provide the installation path and switch, e.g.: C:\Files\AccessDatabaseEngine\_x64.exe /quiet

Upvotes: 3 <issue_comment>username_4: I had success doing the following (I use Excel 2016 and SSMS 2017): from Excel, File -> Export -> Change File Type -> Excel 97-2003 (\*.xls)

Upvotes: 3 <issue_comment>username_5: This is a workaround solution. Ultimately, converting the Excel document to a CSV and using the Tasks/Import Data/Flat File Source option imported my data (although I was not able to successfully map my datatypes in the import, which I can fix with CAST() later). On upload, change the file type to CSV from TXT.
[![Flat File Source](https://i.stack.imgur.com/leGfN.png)](https://i.stack.imgur.com/leGfN.png)

I have Office 365. I used a CSV and **gave up** on XLSX because:

When I ran the 32-bit version, AccessDatabaseEngine.exe, I received this error:

[![32 not compatible with 64](https://i.stack.imgur.com/8McZV.png)](https://i.stack.imgur.com/8McZV.png)

When I ran the 64-bit version, AccessDatabaseEngine_x64.exe, I received this error:

[![64 not compatible with 32](https://i.stack.imgur.com/VDqj6.png)](https://i.stack.imgur.com/VDqj6.png)

Upvotes: 1 <issue_comment>username_6: If you have a 64-bit OS and 64-bit SSMS, have already installed the 64-bit **AccessDatabaseEngine**, and still receive the error, try the following:

1: Open the SQL Server Import and Export Wizard directly. If you are able to connect that way, then importing from inside SSMS is the issue — launching the import from SSMS effectively uses the 32-bit path. Instead of installing the 64-bit **AccessDatabaseEngine**, try the 32-bit **AccessDatabaseEngine**. During installation, Windows will stop you from continuing if you already have another Office app installed; if so, use the following steps. This is from **Microsoft** — the quiet installation:

**If Office 365 is already installed, side-by-side detection will prevent the installation from proceeding. Instead perform a /quiet install of these components from the command line.
To do so, download the desired AccessDatabaseEngine.exe or AccessDatabaseEngine\_x64.exe to your PC, open an administrative command prompt, and provide the installation path and switch, e.g.: C:\Files\AccessDatabaseEngine.exe /quiet**

Or check the **Additional Information** content at the **link below**: <https://www.microsoft.com/en-us/download/details.aspx?id=54920>

Upvotes: 3 <issue_comment>username_7: Instead of using the import/export tasks provided under the database, I used the "SQL Server 2016 Import and Export Data (64-bit)" application that comes with the MS SQL Server 2016 installation, as suggested [here](https://stackoverflow.com/a/42564863/5912896). In Windows 10 you can find it under the **SQL Server 2016** app. In your case, find the **SQL Server 2016 Import and Export Data (32-bit)** application in the same location.

Upvotes: 0
2018/03/14
514
1,882
<issue_start>username_0: I'm creating a cluster with `kubeadm init --with-stuff` (Kubernetes 1.8.4, for reasons). I can set up nodes, `weave`, etc. But I have a problem setting the cluster name. When I open `admin.conf` or a different config file I see:

```
name: kubernetes
```

When I run `kubectl config get-clusters`:

```
NAME
kubernetes
```

Which is the default. Is there a way to set the cluster name during `init` (there is no command line parameter)? Or is there a way to change this after the `init`? The current `name` is referenced in many files in `/etc/kubernetes/`.

Best Regards, Kamil<issue_comment>username_1: No, you cannot change the name of a running cluster, because it is used for discovery inside the cluster, and changing it would have to happen near-simultaneously across the whole cluster. Sadly, you also cannot change the name of the cluster before `init`. Here is the issue on [Github](https://github.com/kubernetes/kubeadm/issues/416).

**Update:** Since version 1.12, `kubeadm` allows you to change the cluster name before the "init" stage. To do it (verified for versions >= 1.15; for lower versions the commands can be different, as they changed at some point between 1.12 and 1.15), you need to set the `clusterName` value in a cluster configuration file, like so:

1. Save the default configuration to a file (the cluster config is optional, so we do this step first to avoid writing it from scratch) with the command `kubeadm config print init-defaults > init-config.yaml`.
2. Set the `clusterName` value in the config.
3. Run `kubeadm init` with a config argument: `kubeadm init --config init-config.yaml`

Upvotes: 2 <issue_comment>username_2: You can now do so using kubeadm's config file. PR here: <https://github.com/kubernetes/kubernetes/pull/60852>

Using the kubeadm config you just set the following at the top level:

```
clusterName: kubernetes
```

Upvotes: 3
2018/03/14
2,050
4,599
<issue_start>username_0: I have the following dataset with a **Date** column and a **Values** column for each row. It has both **+ve** and **-ve** values. In each row, I have to get a count of the positive values over the last 150 rows, so the first 150 rows will have null values. The following rows will have the count of the last 150 **+ve** rows, and similarly the **-ve** column will be filled with the count of negative values up to that row. I tried using:

```
def get_count_of_all_150_positive_rows_before_this_row(row):
    df1 = row.tail(2)
    df1 = df1.to_frame()
    print(df1.tail())
    # if df1['positive_values'] > 0:
    return (df1['positive_values'].count())

df.apply(get_count_of_all_150_positive_rows_before_this_row, axis=1)
```

Dataset:

```
Date       values      positive_values  negative_values
01/01/08    0.12344
02/01/08   -0.12344
03/01/08   -0.1234433
04/01/08   -0.12344
05/01/08   -0.1234433
06/01/08   -0.12344
07/01/08   -0.1234433
08/01/08   -0.12344
09/01/08   -0.1234433
10/01/08    0.12344
11/01/08   -0.12344
12/01/08   -0.1234433
13/01/08   -0.12344
14/01/08   -0.1234433
15/01/08   -0.12344
16/01/08   -0.1234433
17/01/08   -0.12344
18/01/08   -0.1234433
19/01/08    0.12344
```<issue_comment>username_1: This might be what you are looking for:

```
import numpy as np

tail = df.tail(5)
pos = len(tail[tail['values'] > 0])
neg = len(tail[tail['values'] < 0])
df['pos_values'], df['neg_values'] = np.nan, np.nan
df.loc[df.index.values[-5:], 'pos_values'] = pos
df.loc[df.index.values[-5:], 'neg_values'] = neg

#         Date    values  pos_values  neg_values
# 0   01/01/08  0.123440         NaN         NaN
# 1   02/01/08 -0.123440         NaN         NaN
# 2   03/01/08 -0.123443         NaN         NaN
# 3   04/01/08 -0.123440         NaN         NaN
# 4   05/01/08 -0.123443         NaN         NaN
# 5   06/01/08 -0.123440         NaN         NaN
# 6   07/01/08 -0.123443         NaN         NaN
# 7   08/01/08 -0.123440         NaN         NaN
# 8   09/01/08 -0.123443         NaN         NaN
# 9   10/01/08  0.123440         NaN         NaN
# 10  11/01/08 -0.123440         NaN         NaN
# 11  12/01/08 -0.123443         NaN         NaN
# 12  13/01/08 -0.123440         NaN         NaN
# 13  14/01/08 -0.123443         NaN         NaN
# 14  15/01/08 -0.123440         1.0         4.0
# 15  16/01/08 -0.123443         1.0         4.0
# 16  17/01/08 -0.123440         1.0         4.0
# 17  18/01/08 -0.123443         1.0         4.0
# 18  19/01/08  0.123440         1.0         4.0
```

Upvotes: 1 <issue_comment>username_2: You want to use pd.rolling() to perform a rolling count of the positives and negatives over the previous 'period' rows.

```
period = 5
df['less_than_zero'] = (df['values']
                        .rolling(window=period, min_periods=period)
                        .agg(lambda x: (x < 0).sum()))
df['greater_than_zero'] = (df['values']
                           .rolling(window=period, min_periods=period)
                           .agg(lambda x: (x > 0).sum()))
```

This should give you what you want:

```
Out[30]:
        date    values  less_than_zero  greater_than_zero
0   01/01/08  0.123440             NaN                NaN
1   02/01/08 -0.123440             NaN                NaN
2   03/01/08 -0.123443             NaN                NaN
3   04/01/08 -0.123440             NaN                NaN
4   05/01/08 -0.123443             4.0                1.0
5   06/01/08 -0.123440             5.0                0.0
6   07/01/08 -0.123443             5.0                0.0
7   08/01/08 -0.123440             5.0                0.0
8   09/01/08 -0.123443             5.0                0.0
9   10/01/08  0.123440             4.0                1.0
10  11/01/08 -0.123440             4.0                1.0
11  12/01/08 -0.123443             4.0                1.0
12  13/01/08 -0.123440             4.0                1.0
13  14/01/08 -0.123443             4.0                1.0
14  15/01/08 -0.123440             5.0                0.0
15  16/01/08 -0.123443             5.0                0.0
16  17/01/08 -0.123440             5.0                0.0
17  18/01/08 -0.123443             5.0                0.0
18  19/01/08  0.123440             4.0                1.0
```

**Note**: It's worth throwing a few 0s into the sample data set to ensure that you are not mis-attributing them in this case. (*We're not, but still.*)

Upvotes: 3 [selected_answer]
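Since both answers reduce to "count booleans over a window", the same count can also be written without a Python-level lambda: compare first, then take a rolling sum of the resulting 0/1 column. This sketch is not from the original thread; it uses a window of 3 on toy data where the question would use 150.

```python
import pandas as pd

# Toy stand-in for the question's 'values' column.
s = pd.Series([1.0, -1.0, 2.0, -2.0, 3.0, -3.0])
period = 3

# (s > 0) is a boolean Series; summing it over a rolling window counts the
# positives in that window. min_periods keeps the first rows as NaN, as asked.
pos_counts = (s > 0).astype(int).rolling(window=period, min_periods=period).sum()
neg_counts = (s < 0).astype(int).rolling(window=period, min_periods=period).sum()

print(pos_counts.tolist())  # [nan, nan, 2.0, 1.0, 2.0, 1.0]
print(neg_counts.tolist())  # [nan, nan, 1.0, 2.0, 1.0, 2.0]
```

Because the comparison and the sum are both vectorized, this avoids calling a lambda once per window.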
2018/03/14
914
2,865
<issue_start>username_0: I have a lot of C++ classes that use the same list of template parameters

```
template class A { ... };
template class B { ... };
template class C { ... };
```

You get the idea. Then I instantiate them like

```
A a;
B b;
C c;
```

Is there a way to somehow create an alias for this bundle of template parameters so that I don't have to keep re-typing the argument list? I have something like this in mind...

```
using Params = T, Index, Bool, Data, n_x, n_u, n_c, n_w;
A a;
B b;
C c;
```

I realize that I could create a separate class which just defines types, and use that. But I am wondering if there is a way of doing this without defining a new class.

EDIT: I do not want to use macros. I also do not want to use defaults, because that would require ensuring that the defaults are uniform across a bunch of files. I realize that I could define a new header of defaults and just include that in all of the files, but that just seems like bad programming.<issue_comment>username_1: Not exactly what you asked, but not so different... It does require a little work, though. You can solve this with a struct, `foo`, using a double layer of template management.

```
template struct foo {
    template class Cont>
    using type = Cont;
};
```

The first layer, the struct layer, carries the types/values you want to fix (`T, Index, Bool, Data, n_x, n_u, n_c, n_w`, in your example). The second layer, the `using` layer, carries the variable template element (`A`, `B` and `C`, in your example).
You can also add a `using` alias `foo_t` to simplify the use

```
template class Cont, typename C>
using foo_t = typename C::template type;
```

Now you can fix the types and values (the first layer) with a `using`

```
using f = foo;
```

and declare variables, activating the second layer, using `foo_t`

```
foo_t a;
foo_t b;
foo_t c;
```

The following is a full working example

```
#include
#include

template class A { };
template class B { };
template class C { };

template struct foo {
    template class Cont>
    using type = Cont;
};

template class Cont, typename C>
using foo_t = typename C::template type;

int main () {
    using T = float;
    using Index = std::size_t;
    using Bool = bool;
    using Data = std::vector;
    constexpr std::size_t n_x { 0U };
    constexpr std::size_t n_u { 1U };
    constexpr std::size_t n_c { 2U };
    constexpr std::size_t n_w { 3U };

    using f = foo;

    foo_t a;
    foo_t b;
    foo_t c;

    static_assert( std::is_same>{}, "!" );
    static_assert( std::is_same>{}, "!" );
    static_assert( std::is_same>{}, "!" );
}
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: You could replace the alias with a define; not perfect, but it is an easy solution and it works.

```
#define PARAMS T, Index, Bool, Data, n_x, n_u, n_c, n_w

A a;
B b;
C c;
```

Note: no `;` at the end of the define.

Upvotes: 1
2018/03/14
681
2,209
<issue_start>username_0: I have an array with data. The array length equals 25 elements. I would like to create a matrix (5x5) from it. How can I do this in C#? Please help.<issue_comment>username_1: You can use [Buffer.BlockCopy](https://msdn.microsoft.com/en-us/library/system.buffer.blockcopy.aspx)

```
using System;

class Test
{
    static double[,] ConvertMatrix(double[] flat, int m, int n)
    {
        if (flat.Length != m * n)
        {
            throw new ArgumentException("Invalid length");
        }
        double[,] ret = new double[m, n];
        // BlockCopy uses byte lengths: a double is 8 bytes
        Buffer.BlockCopy(flat, 0, ret, 0, flat.Length * sizeof(double));
        return ret;
    }

    static void Main()
    {
        double[] d = { 2, 5, 3, 5, 1, 6 };
        double[,] matrix = ConvertMatrix(d, 3, 2);
        for (int i = 0; i < 3; i++)
        {
            for (int j = 0; j < 2; j++)
            {
                Console.WriteLine("matrix[{0},{1}] = {2}", i, j, matrix[i, j]);
            }
        }
    }
}
```

Upvotes: 0 <issue_comment>username_2: Translating a single-dimension array into a multi-dimension array is straightforward.

```
public static T getEntry<T>(this T[] array, int column, int row, int width)
{
    return array[column + row * width];
}
```

Add wrapper classes and/or validation as desired. Usage example:

```
var array = Enumerable.Range(1, 25).ToArray();
for (int row = 0; row < 5; row++)
{
    for (int column = 0; column < 5; column++)
    {
        Console.WriteLine("Value in column {0}, row {1} is {2}", column, row, array.getEntry(column, row, 5));
    }
}
```

Upvotes: 2 <issue_comment>username_3: As @username_2 suggests, you can simply use indexing to simulate the structure of the matrix. If you need to access the element at row 2, col 3 of a 5-by-5 matrix, simply access index 2\*5+3 of your array (row \* number of cols + col).

If you want to split your array into a 2D array, you can do so using the following code:

```
public static T[,] Matrix<T>(T[] arr, int rows)
{
    var cols = arr.Length / rows;
    var m = new T[rows, cols];
    for (var i = 0; i < arr.Length; i++)
        m[i / cols, i % cols] = arr[i];
    return m;
}
```

Upvotes: 1
2018/03/14
829
2,792
<issue_start>username_0: My code just concatenates the values: if I enter 4 and 4 it gives me 44, but I wanted 4+4=8.

```html
Laskeminen

first = prompt("Enter your first number.");
last = prompt("Enter your second number.");
var y = first
var z = last
var x = y + z;
document.getElementById("first,last").innerHTML = x;
```<issue_comment>username_1: You can use `parseInt()`

```
Laskeminen

first = prompt("Enter your first number.");
last = prompt("Enter your second number.");
var y = parseInt(first)
var z = parseInt(last)
var x = y + z;
document.getElementById("first,last").innerHTML = x;
```

Upvotes: 1 <issue_comment>username_2: Use `Number()`:

```html
Laskeminen

first = prompt("Enter your first number.");
last = prompt("Enter your second number.");
var y = Number(first);
var z = Number(last);
var x = y + z;
document.getElementById("first,last").innerHTML = x;
```

Upvotes: 2 [selected_answer]<issue_comment>username_3: In your addition script you need to parse your input, because currently it is a string. See below:

```js
first = prompt("Enter your first number.");
console.log('User input is: ', typeof first)
```

Because these are strings, when you "add" them they get concatenated, e.g.
```js
var text = 'ab';
var text1 = 'c';
if (text + text1 == 'abc') {
  console.log('These have been added together and are equal')
}
```

So you need to parse. If you're dealing with ints or whole numbers you can use `parseInt(**variabletoParse**)` or `parseFloat(**variabletoParse**)`. What you need to do can be seen below:

```html
Laskeminen

first = prompt("Enter your first number.");
last = prompt("Enter your second number.");
var y = parseFloat(first);
var z = parseFloat(last);
var x = y + z;
document.getElementById("first,last").innerHTML = x;
```

Upvotes: 0 <issue_comment>username_4: You can use `parseFloat()` on the values to convert them into floats that can take part in mathematical operations:

```js
var first = prompt("Enter your first number.");
var last = prompt("Enter your second number.");
var y = parseFloat(first);
var z = parseFloat(last);
var x = y + z;
document.getElementById("first,last").innerHTML = x;
```

But if you want only `integers` then you can use `parseInt()`:

```js
var first = prompt("Enter your first number.");
var last = prompt("Enter your second number.");
var y = parseInt(first);
var z = parseInt(last);
var x = y + z;
document.getElementById("first,last").innerHTML = x;
```

Upvotes: 0 <issue_comment>username_5:

```
Laskeminen

first = prompt("Enter your first number.");
last = prompt("Enter your second number.");
var y = parseInt(first);
var z = parseInt(last);
var x = y + z;
document.getElementById("first,last").innerHTML = x;
```

Upvotes: 0
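The difference between the suggested conversions can be seen in a standalone snippet (runnable in Node, no DOM needed); the literal strings stand in for what `prompt()` returns:

```javascript
// prompt() always returns strings, so "4" + "4" concatenates to "44".
const first = "4";
const last = "4";

console.log(first + last);                              // "44"
console.log(Number(first) + Number(last));              // 8
console.log(parseInt(first, 10) + parseInt(last, 10));  // 8
console.log(parseFloat("4.5") + parseFloat("4.5"));     // 9

// Number() and parseInt() differ on trailing non-digits:
console.log(parseInt("4px", 10));  // 4   (parses the leading digits)
console.log(Number("4px"));        // NaN (the whole string must be numeric)
```

Passing the radix `10` to `parseInt` is a good habit; it avoids surprises with leading zeros or `0x` prefixes in the input.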
2018/03/14
494
1,707
<issue_start>username_0: Here is my method that returns a student from the database by name. I want to modify the method to return a list of students when there are many students with the same name. How can I do that? Thanks.

```
// GET: api/Students/name
[ResponseType(typeof(Student))]
public IHttpActionResult GetStudentByName(string Name)
{
    Student student = db.Students.FirstOrDefault(t => t.Name == Name);
    if (student == null)
    {
        return NotFound();
    }
    return Ok(student);
}
```<issue_comment>username_1:

```
List<Student> student = db.Students.Where(t => t.Name == Name).ToList()
```

Upvotes: 2 <issue_comment>username_2: **For Reference**

[Enumerable.Where Method (IEnumerable, Func)](https://msdn.microsoft.com/en-us/library/bb534803(v=vs.110).aspx)

> Filters a sequence of values based on a predicate.

[Enumerable.FirstOrDefault Method](https://msdn.microsoft.com/en-us/library/system.linq.enumerable.firstordefault(v=vs.110).aspx)

> Returns the first element of a sequence, or a default value if no
> element is found.

Also note you need to change your `ResponseType` to `List<Student>`.

**Example**

```
[ResponseType(typeof(List<Student>))]
public IHttpActionResult GetStudentByName(string Name)
{
    var students = db.Students.Where(t => t.Name == Name).ToList();
    if (!students.Any())
    {
        return NotFound();
    }
    return Ok(students);
}
```

Upvotes: 2 <issue_comment>username_3:

```
[ResponseType(typeof(IEnumerable<Student>))]
public IHttpActionResult GetStudentsByName(string Name)
{
    var students = db.Students.Where(t => t.Name == Name).ToList();
    return students.Count == 0 ? (IHttpActionResult)NotFound() : Ok(students);
}
```

Upvotes: -1
2018/03/14
700
1,932
<issue_start>username_0: Arr= ["abcd","1223"," 10829380","pqrs"] I want to print array like this- ``` Arr=["abcd","1223","10829380","pqrs"] ```<issue_comment>username_1: You could use [Array#map!](http://ruby-doc.org/core-2.5.0/Array.html#method-i-map-21) or [Array.map](http://ruby-doc.org/core-2.5.0/Array.html#method-i-map). `Array#map!` changes the original array and `Array#map` returns a new array so the original array keeps unchanged. The `map` functions iterate about the array and execute the given block for each element in the array. ``` arr = ["abcd", "1223", " 10829380", "pqrs"] arr.map!{ |el| el.strip } arr # => ["abcd", "1223", "10829380", "pqrs"] # or arr = ["abcd", "1223", " 10829380", "pqrs"] arr.map{ |el| el.strip } # => ["abcd", "1223", "10829380", "pqrs"] arr # => ["abcd", "1223", " 10829380", "pqrs"] ``` Btw: Variables in ruby begin with a lowercase letter or \_ (`arr`). Upvotes: 2 <issue_comment>username_2: You should follow naming patterns and not use `Arr` as this usually is used for class names. ``` arr = ["abcd","1223"," 10829380","pqrs"] whitespace_removed_arr = arr.map { |item| item.strip } ``` `map` iterates the array of strings (`arr`) and builds a new array containing the return values of the block. You can use the shorter version if you like: ``` arr = ["abcd","1223"," 10829380","pqrs"] whitespace_removed_arr = arr.map(&:strip) ``` Please note that the solutions proposing `strip!` and `map` (inplace version iof `strip`) will most likely not work or work in a confusing way since `strip!` (oddly enough) returns `nil` when the string was not changed. ``` "".strip => "" "".strip! => nil "".strip => "" " ".strip! => "" ``` If you want to use the inplace variant of strip and modify the original array you will need to use `each` ``` arr.each(&:strip!) ``` `each` discards the return value from the block, and `strip!` modifies the string in place. Upvotes: 2
2018/03/14
453
1,137
<issue_start>username_0: I have this array: `array = ['S2B_MSIL1C_20180310T041559_N0206_R090_T46QEK_20180310T075716.SAFE'];` and this regex: `regex = new RegExp('S2B_MSIL1C_20180310T041559_N0206_R090_T46QEK_20180310T075716' + '.SAFE','g');` When I use `array.includes(regex);` , `false` is returned. Have I missed something?<issue_comment>username_1: RgExps are not for searching on Arrays, and includes method is for finding if your required object is included on the array or not. and here you passed and Regex object to your include method so it tells you that there is no regex object included your array. you have to do one of the belows: ``` array.includes('S2B_MSIL1C_20180310T041559_N0206_R090_T46QEK_20180310T075716' + '.SAFE'); ``` or ``` var yourRegex = /pattern/g ; for(var i = 0 ; i ``` Upvotes: 2 <issue_comment>username_2: Use `Array.some` ``` var yourRegex = /pattern/g ; var atLeastOneMatches = array.some(e => yourRegex.test(e)); ``` Array.some returns true after the *first one in the array* returns true. If it goes through the whole array with no `true`, it returns false. Upvotes: 6 [selected_answer]
2018/03/14
493
1,714
<issue_start>username_0: To allow multiple payment gateways in my system I have a table of defined payment gateways (id, name, code) where `code` is the table name (for example 'paypal') containing a specific payment gateway transaction responses. In my sql server query I want to join the transaction table for each gateway based on the value of this column. Is this possible? If so, how? In my query so far I am joining the payment gateway table based on the id of the chosen payment gateway for the specific seller (where `[s]` is the seller table): ``` INNER JOIN [payment_gateway] AS [pg] ON [s].[payment_gateway_id] = [pg].[id] ``` What I want to do is something like: ``` INNER JOIN {{[pg].[code]}} AS [payment_table] ON [payment_table].[order_id] = [order].[id] ```<issue_comment>username_1: You can do what you want with a `left join` and `coalesce()`: ``` SELECT . . ., COALESCE(pg1.col pg2.col, . . .) as col FROM seller s LEFT JOIN payment_gateway pg1 ON s.payment_gateway_id = pg1.id AND s.code = '1' LEFT JOIN payment_gateway pg2 ON s.payment_gateway_id = pg2.id AND s.code = '2' LEFT JOIN . . . ``` A `LEFT JOIN` is probably the most efficient way of handling this data in a single query. A better data structure would have all the payment gateway information in a single table. Upvotes: 1 <issue_comment>username_2: As the join is dependant on the column value, I have decided to do a `LEFT OUTER JOIN` on the table: ``` LEFT OUTER JOIN [paypal] as [pp] on [pp].[quote_id] = [q].[id] and [pg].[code] = 'paypal' ``` It means I'll need to add this line for every new payment gateway that I integrate, but I'm ok with that. Upvotes: 1 [selected_answer]
2018/03/14
906
2,270
<issue_start>username_0: i'm currently trying to write a shiny app. I want to create a barchart with reactive coloring to radiobuttons. Before i try to get the code for the reactive coloring, i try to test it, so that i get an idea of how to compose the code. Right now i'm struggling to get a barchart with alternating colors. ``` prodpermonth$month <- c("2008-11-01", "2008-12-01", "2009-01-01", "2009-02-01", "2009-03-01") prodpermonth$n <- c(1769, 3248, 3257, 2923, 3260) ggplot(prodpermonth, aes(x=prodmonth, y=n))+ geom_bar(stat = "identity", aes(fill = prodpermonth$prodmonth)) + scale_fill_manual(c("red", "green")) ``` This code returns an Error "Continous value supplied to discrete scale". I tried to just give a vector c("red", "green") into the fill argument, which also results in an Error "Aesthetics must be either length 1 or the same as the data". Thus, i tried to create a vector of the length of the data set, but this also did not work as i planned. Isn't there a simpler way to get alternating colors in a barchart? Cheers!<issue_comment>username_1: By alternating colors do you mean, that you want every other bar to have a different color? 
``` library(ggplot2) prodpermonth <- data.frame( month = c("2008-11-01", "2008-12-01", "2009-01-01", "2009-02-01", "2009-03-01"), n = c(1769, 3248, 3257, 2923, 3260) ) ggplot(prodpermonth, aes(x=month, y=n)) + geom_bar(stat = "identity", aes(fill = (as.numeric(month) %% 2 == 0))) + scale_fill_discrete(guide="none") ``` Result: [![enter image description here](https://i.stack.imgur.com/GbQJ8.png)](https://i.stack.imgur.com/GbQJ8.png) Upvotes: 2 [selected_answer]<issue_comment>username_2: Alternatively, use scale\_fill\_manual with a vector of "red","green" that repeats for the length of your data frame ``` library(ggplot2) prodpermonth <- data.frame(month= c("2008-11-01", "2008-12-01", "2009-01-01", "2009-02-01", "2009-03-01"), n = c(1769, 3248, 3257, 2923, 3260)) ggplot(prodpermonth, aes(x=month, y=n, fill=month)) + geom_bar(stat = "identity") + scale_fill_manual(values=rep(c("red","green"), ceiling(length(prodpermonth$month)/2))[1:length(prodpermonth$month)]) ``` [![Result](https://i.stack.imgur.com/9XBvO.png)](https://i.stack.imgur.com/9XBvO.png) Upvotes: 2
2018/03/14
1,375
3,087
<issue_start>username_0: I have the following dataframe: ``` df = pd.DataFrame({('psl', 't1'): {'fiat': 36.389809173765507, 'mazda': 18.139242981049016, 'opel': 0.97626485600703961, 'toyota': 74.464422292108878}, ('psl', 't2'): {'fiat': 35.423004380643462, 'mazda': 24.269803148695079, 'opel': 1.0170540474994665, 'toyota': 60.389948228586832}, ('psv', 't1'): {'fiat': 35.836800462163097, 'mazda': 15.893295606055901, 'opel': 0.78744853046848606, 'toyota': 74.054850828062271}, ('psv', 't2'): {'fiat': 34.379812557124815, 'mazda': 23.202587247335682, 'opel': 0.80191294532382451, 'toyota': 58.735083244244322}}) ``` It looks like this: [![enter image description here](https://i.stack.imgur.com/gjJ9T.png)](https://i.stack.imgur.com/gjJ9T.png) I wish to reduce it from a multiindex to a normal index. I wish to do this by applying a function using t1 and t2 values and returning only a single value which will result in there being two columns: psl and psv. I have succeeded in grouping it as such and applying a function: ``` df.groupby(level=0, axis=1).agg(np.mean) ``` which is very close to what I want except that I don't want to apply np.mean, but rather a custom function. In particular, a percent change function. My end goal is to be able to do something like this: ``` df.groupby(level=0, axis=1).apply(lambda t1, t2: (t2-t1)/t1) ``` Which returns this error: ``` TypeError: () missing 1 required positional argument: 't2' ``` I have also tried this: ``` df.apply(lambda x: x[x.name].apply(lambda x: x['t1']/x['t2'])) ``` which in turn returns: ``` KeyError: (('psl', 't1'), 'occurred at index (psl, t1)') ``` Could you please include a thorough explanation of each part of your answer to the best of your abilities so I can better understand how pandas works.<issue_comment>username_1: Not easy. 
Use a custom function with [`squeeze`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.squeeze.html) for the `Series` and [`xs`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html) to select levels of the `MultiIndex` columns: ``` def f(x): t2 = x.xs('t2', axis=1, level=1) t1 = x.xs('t1', axis=1, level=1) a = (t2-t1)/t1 #print (a) return (a.squeeze()) df1 = df.groupby(level=0, axis=1).agg(f) print (df1) psl psv fiat -0.026568 -0.040656 mazda 0.337972 0.459898 opel 0.041781 0.018369 toyota -0.189009 -0.206871 ``` A lambda function is possible too, but it is awful because the `xs` calls repeat: ``` df1 = df.groupby(level=0, axis=1).agg(lambda x: ((x.xs('t2', axis=1, level=1)-x.xs('t1', axis=1, level=1))/ x.xs('t1', axis=1, level=1)).squeeze()) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Using `iloc` can solve the problem (note the operands are swapped here: this computes (t1-t2)/t1, the negative of the question's formula, which is why the signs below are flipped): ```py df.groupby(level=0, axis=1).agg(lambda x: (x.iloc[:,0]-x.iloc[:,1])/x.iloc[:,0]) ``` Outputs: ``` psl psv fiat 0.026568 0.040656 mazda -0.337972 -0.459898 opel -0.041781 -0.018369 toyota 0.189009 0.206871 ``` Upvotes: 0
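For completeness, the same percent change can also be computed without `groupby` at all: select each inner level with `xs` and let pandas align the `psl`/`psv` columns (a sketch using a trimmed-down version of the question's data):

```python
import pandas as pd

df = pd.DataFrame({('psl', 't1'): {'fiat': 36.39, 'opel': 0.98},
                   ('psl', 't2'): {'fiat': 35.42, 'opel': 1.02},
                   ('psv', 't1'): {'fiat': 35.84, 'opel': 0.79},
                   ('psv', 't2'): {'fiat': 34.38, 'opel': 0.80}})

t1 = df.xs('t1', axis=1, level=1)  # columns: psl, psv
t2 = df.xs('t2', axis=1, level=1)
pct_change = (t2 - t1) / t1        # single-level columns: psl, psv
```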
2018/03/14
689
1,601
<issue_start>username_0: I want to add a new line (CR) in front of each word, where a word is anything between spaces **that contains** letters. For example, for the input string: ``` +48 123 456 789 fax: +48 987 654 321 ``` I would like the end result to be: ``` +48 123 456 789 fax: +48 987 654 321 ``` Any ideas? Thanks in advance.
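One way to express "a token that contains at least one letter" is a regex over runs of non-whitespace characters; here is a Python sketch (the pattern itself should carry over to most PCRE-style engines):

```python
import re

def newline_before_words(s):
    """Insert a newline before every whitespace-delimited token
    that contains at least one letter."""
    return re.sub(r'(\S*[A-Za-z]\S*)', r'\n\1', s)

result = newline_before_words("+48 123 456 789 fax: +48 987 654 321")
# "+48 123 456 789 \nfax: +48 987 654 321"
```

The space before the inserted newline is preserved; match an optional leading space (`r' ?(\S*[A-Za-z]\S*)'`) if it should be dropped instead.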
2018/03/14
738
2,056
<issue_start>username_0: Hi, I have a project in Laravel 5.4. For convenience, I am calling Laravel model methods inside the view, but it shows the error that the save() method does not exist. Can anyone help me: is it possible to call a Laravel model method inside the view, and if so, how can I achieve this? Below is my Blade code: ``` $pi_amount=new App\PI_Amount; $pi_amount->invoiceNumber=$fd->invoiceNumber; $pi_amount->total_goods=$total_goods; $pi_amount->total_cst=$total_tax; $pi_amount->total_security=$security_amount; $pi_amount->freight=$freight; $pi_amount->total_value=$total_value; $pi_amount->save(); ```
2018/03/14
647
2,507
<issue_start>username_0: When trying to assign a long JSON response to a Dictionary, I get either > > nil > > > or > > Thread 1:EXC\_BAD\_INSTRUCTION (code=EXC\_I386\_INVOP, subcode=0x0) > > > Assigning a short response works fine. Here is my code ``` func getUserInfo() { let access_token : String = accessToken_json_response["access_token"] ?? "empty" if access_token != "empty" { Alamofire.request("https://api.github.com/user?access_token=\(access_token)").responseJSON { response in if let json = response.result.value { print(json) //The JSON prints, but takes more than a second to do so. self.getUser_json_response = json as? Dictionary //This produces the thread error when the response is long. print(self.getUser\_json\_response) //This either prints nil, or a thread error produces in the previous instruction } } } } ```<issue_comment>username_1: You have to serialise the response into json and then you can use it as dictionary. `eg: let json = try? JSONSerialization.jsonObject(with: data, options: []) as? [String: Any]` then print this json Or Use this link, this is latest update from apple to code and encode json response according to your class or model. [Automatic JSON serialization and deserialization of objects in Swift](https://stackoverflow.com/questions/26820720/automatic-json-serialization-and-deserialization-of-objects-in-swift) Upvotes: 1 <issue_comment>username_2: First of all you are casting to an optional dictionary so it should be conditional binding i.e: ``` if let unwrappedJson = json as? .... ``` Second, you should cast to `[String : Any]` i.e: ``` if let unwrappedJson = json as? [String : Any] ``` Upvotes: 3 [selected_answer]<issue_comment>username_3: may this help.. ``` Alamofire.request(UrlStr, method: .post, parameters: params, encoding: URLEncoding.default, headers: nil) .validate() .responseJSON { response in switch response.result { case .success: if let JSON = response.result.value { print("JSON: \(JSON)") let jsonResponse = (JSON as? 
[String:Any]) ?? [String:Any]() print("jsonResponse: \(jsonResponse)") } case .failure(let error): print(error.localizedDescription) } } ``` Upvotes: 1
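The underlying advice here, deserialize first and then check the shape before indexing into it, is language-agnostic. For illustration, the same defensive pattern in Python (not Swift/Alamofire code; the function name is mine):

```python
import json

def parse_user(payload):
    """Parse a JSON payload and return it only if it is an object,
    the analogue of `json as? [String: Any]` with conditional binding."""
    try:
        obj = json.loads(payload)
    except ValueError:
        return None
    return obj if isinstance(obj, dict) else None

user = parse_user('{"login": "octocat", "id": 1}')   # a dict
bad = parse_user('[1, 2, 3]')                        # a list, not an object: None
```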
2018/03/14
885
2,196
<issue_start>username_0: I want to import Sudoku boards and convert them to a [9][9] array. [Example of a couple of printed boards:](https://i.stack.imgur.com/W7Lrb.png) Here is the first board: 370000001000700005408061090000010000050090460086002030000000000694005203800149500 The first 9 numbers fill the first row, the next 9 numbers fill the second row, etc. ``` public static void main(String[] args) { File in = new File("board.txt"); try { Scanner input = new Scanner(in); for (int i = 0; i < 9; i++) { for (int j = 0; j < 9; j++) { grid[i][j]= input.nextInt(); } } } catch (FileNotFoundException e) { } ``` This code can import individual ints in a text file, for example: ``` 0 1 3 5 0 8 etc ``` So how do I either edit the code I have so it can "read" a line of numbers without spaces and import every individual digit from the text file into an element, or how do I create a new program that fills the same function?<issue_comment>username_1: Use `scanner.nextLine()` to read the line, then convert each `char` to an `int` (note that `Integer.parseInt` does not accept a `char`, so use `Character.getNumericValue` or subtract `'0'`): ``` Scanner input = new Scanner(in); String line = input.nextLine(); for (int i = 0; i < 9; i++) { for (int j = 0; j < 9; j++) { grid[i][j] = Character.getNumericValue(line.charAt(i * 9 + j)); } } ``` Upvotes: 1 <issue_comment>username_2: If you are using Java 8 you can read the whole line, then follow these steps: ``` int[][] result = Arrays.asList(line.split("(?<=\\G.{9})"))//cut in each 9 element .stream() .map(row -> Stream.of(row.split(""))//Split each row .mapToInt(Integer::parseInt).toArray()//convert it to array of ints ).toArray(int[][]::new);//Collect the result as a int[][] Arrays.asList(result).forEach(row -> System.out.println(Arrays.toString(row)));//Print ``` **Outputs** ``` [3, 7, 0, 0, 0, 0, 0, 0, 1] [0, 0, 0, 7, 0, 0, 0, 0, 5] [4, 0, 8, 0, 6, 1, 0, 9, 0] [0, 0, 0, 0, 1, 0, 0, 0, 0] [0, 5, 0, 0, 9, 0, 4, 6, 0] [0, 8, 6, 0, 0, 2, 0, 3, 0] [0, 0, 0, 0, 0, 0, 0, 0, 0] [6, 9, 4, 0, 0, 5, 2, 0, 3] [8, 0, 0, 1, 4, 9, 5, 0, 0] ``` Upvotes: 0
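For comparison, the same fixed-width parsing is a one-liner-per-row affair in Python (a sketch; the board string is the one from the question):

```python
def parse_board(line):
    """Turn an 81-character Sudoku string into a 9x9 grid of ints."""
    if len(line) != 81 or not line.isdigit():
        raise ValueError("expected exactly 81 digits")
    return [[int(c) for c in line[r * 9:(r + 1) * 9]] for r in range(9)]

board = parse_board("370000001000700005408061090000010000"
                    "050090460086002030000000000694005203800149500")
```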
2018/03/14
1,074
4,230
<issue_start>username_0: I've noticed that when using Kotlin's synthetic binding, the view returned is non null (Kotlin will return `View!`). But this doesn't make much sense to me, since `findCachedViewById` can actually return null results, meaning that views can actually be null. ``` public View _$_findCachedViewById(int var1) { if(this._$_findViewCache == null) { this._$_findViewCache = new HashMap(); } View var2 = (View)this._$_findViewCache.get(Integer.valueOf(var1)); if(var2 == null) { View var10000 = this.getView(); if(var10000 == null) { return null; } var2 = var10000.findViewById(var1); this._$_findViewCache.put(Integer.valueOf(var1), var2); } return var2; } ``` So why are they not optional in this case? Why doesn't Kotlin simply return `View?` when using synthetic binding, so that developers would be forced to check nullability when dealing with views? Maybe it's just because I'm new to Kotlin, but I think this is a bit counter intuitive, since the variable is not optional but we are still supposed to check if the View is in fact not null. So in this case, does it make sense to do something like the code below? ``` view?.let { // handle non null view here } ```<issue_comment>username_1: The idea is that xml layouts in Android are pretty static and in order to use synthetic views, you must create a direct import of the parsed layout: ``` import kotlinx.android.synthetic.main.activity_main.* ``` So there are no real-life, non-magic scenarios where the `View` would be null. Unless you choose the wrong synthetic layout, but then you will get the crash on first run. That said, it will of course break if you modify the view on runtime, removing `Views` etc. But again, this is not the default usage for synthetic `Views` and requires a different approach. 
Upvotes: 0 <issue_comment>username_2: I figured it out, I always find the correct SO question right after I post mine :) The single exclamation point following the `View` does not actually mean that the view can not be null like I expected. This [answer](https://stackoverflow.com/a/43826700/2454356) to another question essentially answers my exact question. The `View`, when using synthetic binding, can actually be null, but we can't know for sure, hence the single exclamation mark. So it's safe to assume that the code I posted above - using `?.let{...}` is perfectly acceptable way of dealing with views when you are not sure if they are already initialised when accessing them. The cases where views might be null are very rare, but it can happen. Upvotes: 4 [selected_answer]<issue_comment>username_3: As you pointed out already, a single exclamation mark does not mean that it's not null, but rather that it's a Java platform type and the compiler doesn't know if it's nullable or not. I think what you have suggested is fine, although it fails silently in the actual case of a null which might not actually be what you want. Let's say you tried to call your view in onCreateView and forgot that it will not be initialised yet. The fragment will not behave as expected but it won't produce a meaningful error to help you debug the issue. I'm still trying to settle on one solution or another myself but I would suggest either explicitly handling the case of a null: ``` view?.let { //... } ?: throwExceptionIfDebugElseLogToCrashlytics() ``` Or decide that this time you actually want it to throw the NullPointerException in which case I would suggest: ``` view!!.let { //... } ``` The latter doesn't bloat your code for what "should" be an impossible edge case and it doesn't fail silently, but it still makes it clear to a reader that view could be null. Obviously the !! 
is not needed by the compiler; it is just there to make the chosen strategy for dealing with platform types more explicit. Upvotes: 1 <issue_comment>username_4: Actually, a null pointer exception can happen for synthetic view bindings if you try to access a view from a listener outside the context of an activity or view, or in lambdas. The problem is in the lambda; Frantisek has a post about it here: <https://stackoverflow.com/posts/comments/115183445?noredirect=1> Upvotes: 1
2018/03/14
1,369
5,800
<issue_start>username_0: I am new to kafka. We want to monitor and manage kafka topics. We tried different open source monitoring tools like 1. [kafka-monitor](https://github.com/linkedin/kafka-monitor) 2. [kafka-manager](https://github.com/yahoo/kafka-manager) Both tools are good. But we are unable to make a decision which should be included in our deployment stack. Which one is better and why, and in which scenario? 'kafka manager' from yahoo looks the older one and 'kafka monitor' from LinkedIn is newer one Kafka Monitor- [![enter image description here](https://i.stack.imgur.com/3SZwk.png)](https://i.stack.imgur.com/3SZwk.png)<issue_comment>username_1: The kafka-monitor is (despite the name) a load generation and reporting tool. Yahoo's kafka-manager is an overall monitoring tool. Upvotes: 0 <issue_comment>username_2: If you want to pay for licensing and Kafka cluster support, then you can use [Confluent Control Center](https://www.confluent.io/product/control-center/) Alternatively, the free route would be to use JMX exporters from Datadog and/or Prometheus/Influxdb (with Grafana dashboards) to see overall system health checks (CPU, network, memory, etc)... Much more information than what you get only by monitoring Kafka processes with Kafka tools Upvotes: 3 <issue_comment>username_3: **Lenses** [Lenses](https://lenses.io/) (ex Landoop) enhances Kafka with User Interface, streaming SQL engine and cluster monitoring. It enables faster monitoring of Kafka data pipelines. They provide a free all-in-one docker ([Lenses Box](https://lenses.io/lenses-box/)) which can serve a single broker for up to 25M messages. Note that this is recommended for development environments. **Cloudera SMM** Streams Messaging Manager is the solution for monitoring and managing clusters running Cloudera or Hortonworks kafka. It also comes with replication capability. 
**Confluent** Another option is [Confluent Enterprise](https://www.confluent.io/product/confluent-enterprise/) which is a Kafka distribution for production environments. It also includes [Control Centre](https://www.confluent.io/product/control-center/), which is a management system for Apache Kafka that enables cluster monitoring and management from a User Interface. **Yahoo CMAK (Cluster Manager for Apache Kafka, previously known as Kafka Manager)** [Kafka Manager or CMAK](https://github.com/yahoo/CMAK) is a tool for monitoring Kafka offering less functionality compared to the aforementioned tools. **KafDrop** [KafDrop](https://github.com/HomeAdvisor/Kafdrop) is a UI for monitoring Apache Kafka clusters. The tool displays information such as brokers, topics, partitions, and even lets you view messages. It is a lightweight application that runs on Spring Boot and requires very little configuration. **LinkedIn Burrow** [Burrow](https://github.com/linkedin/Burrow) is a monitoring companion for Apache Kafka that provides consumer lag checking as a service without the need for specifying thresholds. It monitors committed offsets for all consumers and calculates the status of those consumers on demand. An HTTP endpoint is provided to request status on demand, as well as provide other Kafka cluster information. There are also configurable notifiers that can send status out via email or HTTP calls to another service. **Kafka Tool** [Kafka Tool](http://www.kafkatool.com/) is a GUI application for managing and using Apache Kafka clusters. It provides an intuitive UI that allows one to quickly view objects within a Kafka cluster as well as the messages stored in the topics of the cluster. It contains features geared towards both developers and administrators. --- If you cannot afford licenses, then go for Yahoo Kafka Manager, LinkedIn Burrow or KafDrop. Confluent's and Landoop's products are the best out there, but unfortunately, they require licensing. 
For more details, you can refer to my blog post [**Overview of UI Monitoring tools for Apache Kafka Clusters**](https://medium.com/@giorgosmyrianthous/overview-of-ui-monitoring-tools-for-apache-kafka-clusters-9ca516c165bd). Upvotes: 8 [selected_answer]<issue_comment>username_4: At my company, we used the Yahoo product, we investigated the LinkedIn product, and several others mentioned. My company ultimately chose to use Prometheus+Grafana. Everyone loves it and I'd highly recommend it. There are two big advantages to Prometheus+Grafana. The first is it does full featured Kafka metrics ingestion+visualization+alerting but it's not limited to Kafka. While our initial needs were just to monitor Kafka, we also wanted metrics on HTTP servers+traffic, server utilization (cpu/ram/disk), and custom application level metrics. Prometheus handles all of the above. Secondly, Prometheus + Grafana are very high quality, well designed, and easy to use. A lot of other products in this space are old and complicated to work with. Prometheus + Grafana are both excellent to work with, they are very customizable, polished, and easy to use. Grafana has a very flashy + functional JavaScript interface that lets you make exactly the customized dashboards that you want. Prometheus has a very polished metric collection engine, storage engine, query language, and alerting system. Something like Yahoo Kafka Manager has much more limited functionality in all of these categories. If you want to try Prometheus, you need to do two things: 1) install+configure the JMX->Prometheus exporter on your Kafka brokers: <https://github.com/prometheus/jmx_exporter> 2) Setup a Prometheus server to collect metrics + and setup a Grafana dashboard to display the graphs that you want. I'd also say that this is just for monitoring+dashboards+alerting. For management functions, you still need other tools. Upvotes: 2
2018/03/14
2,255
8,210
<issue_start>username_0: I'm trying to make a Java service using Spring Boot that connects to a Rabbit exchange, discover new queues (that matches with a given prefix) and connect to them. I'm using `RabbitManagementTemplate` to discover and `SimpleMessageListenerContainer` to create a bind. It works fine. The problem is that when one of these dynamic queues gets deleted (by the web interface for example), my service can't handle the exception and I didn't find a place to register some handler to fix this. For these cases I just want to ignore the deletion and move on, I'm not willing to recreate the queue. My code is something like ``` @Scheduled(fixedDelay = 3*1000) public void watchNewQueues() { for (Queue queue : rabbitManagementTemplate.getQueues()) { final String queueName = queue.getName(); String[] nameParts = queueName.split("\\."); if ("dynamic-queue".equals(nameParts[0]) && !context.containsBean(queueName)) { logger.info("New queue discovered! Binding to {}", queueName); Binding binding = BindingBuilder.bind(queue).to(exchange).with("testroute.#"); rabbitAdmin.declareBinding(binding); rabbitAdmin.declareQueue(queue); SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(); container.setConnectionFactory(connectionFactory); container.setQueueNames(queueName); container.setMessageListener(new MyMessageListener()); container.setPrefetchCount(settings.getPrefetch()); container.setAutoDeclare(false); container.setMissingQueuesFatal(true); container.setDeclarationRetries(0); container.setFailedDeclarationRetryInterval(-1); context.getBeanFactory().registerSingleton(queueName, container); container.start(); } } } @Override public void onApplicationEvent(ListenerContainerConsumerFailedEvent event) { if (event.getSource() instanceof SimpleMessageListenerContainer) { SimpleMessageListenerContainer container = (SimpleMessageListenerContainer) event.getSource(); if (context.getAutowireCapableBeanFactory() instanceof BeanDefinitionRegistry) { 
logger.info("Removing bean! {}", container.getQueueNames()[0]); ((BeanDefinitionRegistry)context.getAutowireCapableBeanFactory()).removeBeanDefinition(container.getQueueNames()[0]); } else { logger.info("Context is not able to remove bean"); } } else { logger.info("Got event but is not a SimpleMessageListenerContainer {}", event.toString()); } } ``` And when the queue gets deleted, console logs: ``` 2018-03-13 15:01:29.623 WARN 32736 [pool-1-thread-6] --- o.s.a.r.listener.BlockingQueueConsumer : Cancel received for amq.ctag-wKQUQkUNOSCtjQ9RBUNCig; Consumer: tags=[{amq.ctag-wKQUQkUNOSCtjQ9RBUNCig=dynamic-queue.some-test}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@localhost:5672/,3), conn: Proxy@23510c77 Shared Rabbit Connection: SimpleConnection@66c17803 [delegate=amqp://guest@localhost:5672/], acknowledgeMode=AUTO local queue size=0 2018-03-13 15:01:30.219 WARN 32736 [SimpleAsyncTaskExecutor-1] --- o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it. 
Exception summary: org.springframework.amqp.rabbit.support.ConsumerCancelledException 2018-03-13 15:01:30.219 INFO 32736 [SimpleAsyncTaskExecutor-1] --- o.s.a.r.l.SimpleMessageListenerContainer : Restarting Consumer: tags=[{}], channel=Cached Rabbit Channel: AMQChannel(amqp://guest@localhost:5672/,3), conn: Proxy@23510c77 Shared Rabbit Connection: SimpleConnection@66c17803 [delegate=amqp://guest@localhost:5672/], acknowledgeMode=AUTO local queue size=0 2018-03-13 15:01:30.243 WARN 32736 [SimpleAsyncTaskExecutor-2] --- o.s.a.r.listener.BlockingQueueConsumer : Failed to declare queue:dynamic-queue.some-test 2018-03-13 15:01:30.246 WARN 32736 [SimpleAsyncTaskExecutor-2] --- o.s.a.r.listener.BlockingQueueConsumer : Queue declaration failed; retries left=3 org.springframework.amqp.rabbit.listener.BlockingQueueConsumer$DeclarationException: Failed to declare queue(s):[dynamic-queue.some-test] at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.attemptPassiveDeclarations(BlockingQueueConsumer.java:571) at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.start(BlockingQueueConsumer.java:470) at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1171) at java.lang.Thread.run(Thread.java:748) Caused by: java.io.IOException: null at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:106) at com.rabbitmq.client.impl.AMQChannel.wrap(AMQChannel.java:102) at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:124) at com.rabbitmq.client.impl.ChannelN.queueDeclarePassive(ChannelN.java:885) at com.rabbitmq.client.impl.ChannelN.queueDeclarePassive(ChannelN.java:61) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) 
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory$CachedChannelInvocationHandler.invoke(CachingConnectionFactory.java:835) at com.sun.proxy.$Proxy63.queueDeclarePassive(Unknown Source) at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.attemptPassiveDeclarations(BlockingQueueConsumer.java:550) ... 3 common frames omitted Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method(reply-code=404, reply-text=NOT\_FOUND - no queue 'dynamic-queue.some-test' in vhost '/', class-id=50, method-id=10) at com.rabbitmq.utility.ValueOrException.getValue(ValueOrException.java:67) at com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(BlockingValueOrException.java:33) at com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(AMQChannel.java:361) at com.rabbitmq.client.impl.AMQChannel.privateRpc(AMQChannel.java:226) at com.rabbitmq.client.impl.AMQChannel.exnWrappingRpc(AMQChannel.java:118) ... 12 common frames omitted Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method(reply-code=404, reply-text=NOT\_FOUND - no queue 'dynamic-queue.some-test' in vhost '/', class-id=50, method-id=10) at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:484) at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:321) at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:144) at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:91) at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:554) ... 1 common frames omitted ``` Thanks for your attention **EDIT:** Thanks! I was able to avoid the recreation of the queue. 
I'm now struggling to remove the queue from the Spring Context :)<issue_comment>username_1: You'll get error logs, of course, but with `container.setMissingQueuesFatal(true);` (the default), the container will stop itself after 3 attempts to declare the queue at 5 second intervals. You can affect the time it takes to stop by setting the `declarationRetries` (default 3) and `failedDeclarationRetryInterval` (default 5000). Upvotes: 3 [selected_answer]<issue_comment>username_2: The easiest way is to use an `ApplicationListener` for `MissingQueueEvent`: ``` @Component public class MissingQueueListener implements ApplicationListener<MissingQueueEvent> { private static final Logger logger = LoggerFactory.getLogger(MissingQueueListener.class); @Override public void onApplicationEvent(MissingQueueEvent missingQueueEvent) { ((SimpleMessageListenerContainer) missingQueueEvent.getSource()).removeQueueNames(missingQueueEvent.getQueue()); logger.error("Removing missing queue {} from its container", missingQueueEvent.getQueue()); } } ``` Upvotes: 0
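The retry policy the accepted answer tunes (`declarationRetries`, `failedDeclarationRetryInterval`) boils down to a bounded retry loop. Here is a language-neutral sketch in Python, not the actual Spring AMQP implementation; `IOError` stands in for the broker's channel error:

```python
import time

def declare_with_retries(declare, retries=3, interval=5.0, sleep=time.sleep):
    """Try a passive queue declaration up to `retries` times, waiting
    `interval` seconds between attempts. Returns True on success; on
    False the caller stops the container (the missing-queue-is-fatal path)."""
    for attempt in range(retries):
        try:
            declare()
            return True
        except IOError:
            if attempt < retries - 1:
                sleep(interval)
    return False
```

With the defaults (3 attempts at 5-second intervals) this matches the roughly ten seconds the container spends before giving up.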
2018/03/14
374
1,618
<issue_start>username_0: I have a database table called `interviews`, and the interviewer and the interviewee will both have to review how the interview went. The review will have similar fields (rating on a scale) but different questions. Option 1 is to have them both in the same table with a `1..N` relation back to the interview table (storing the ID of the writer and of the one being reviewed as well), and only limiting which fields can be input at the application level. Option 2 is to have two tables (one specifically for interviewer reviews and one specifically for interviewee reviews). What is your opinion of the best way to model this?
2018/03/14
460
1,877
<issue_start>username_0: We're using Flink to monitor each event. The detailed scenario is: when an event arrives, Flink finds all events with the same userid in the last 2 hours and sums the count field. For example: ``` event1 -> real time result = n1 event2 -> real time result = n2 event3 -> real time result = n1+n3 event4 -> real time result = n3+n4 ``` How could we implement such a scenario in Flink? Intuitively, we want to use a sliding window, but there are two problems: 1. In Flink, a sliding window slides by the parameter slide\_size. However, in our scenario, the window slides for each event, which means the start/end point of the window is different for each event (expected window range: [eventtime-2h, eventtime)). Should we implement this by setting a small slide\_size (10ms?)? 2. The process function is executed by the trigger function, which means we can't get the result immediately as soon as an event arrives?
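Outside Flink, the per-event semantics being asked for (on each event, sum the `count` of all events with the same key in the trailing two hours) can be sketched with a per-key deque. This is illustrative only; it ignores state backends and event-time watermarks, and in Flink such logic is typically implemented with a keyed process function plus state rather than a sliding window with a tiny slide size:

```python
from collections import defaultdict, deque

WINDOW = 2 * 60 * 60  # two hours, in seconds

class TrailingWindowSum:
    """For each incoming (user_id, timestamp, count) event, return the
    sum of counts for that user within the last `window` seconds."""
    def __init__(self, window=WINDOW):
        self.window = window
        self.events = defaultdict(deque)   # user_id -> deque of (ts, count)
        self.sums = defaultdict(int)       # user_id -> running sum

    def on_event(self, user_id, ts, count):
        q = self.events[user_id]
        q.append((ts, count))
        self.sums[user_id] += count
        # Evict events that fell out of the [ts - window, ts] range.
        while q and q[0][0] <= ts - self.window:
            old_ts, old_count = q.popleft()
            self.sums[user_id] -= old_count
        return self.sums[user_id]
```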
2018/03/14
313
1,027
<issue_start>username_0: How do I change the color of `unusedField` or an unused variable in IntelliJ? In `Color scheme` when I click on `unusedField` nothing happens. I was expecting IDEA to show me the default color settings (or show where it is derived from) but nothing happens. The screenshot illustrates the situation when I click on `unusedField`. [![screenshot of the Color Scheme settings](https://i.stack.imgur.com/3zBjD.png)](https://i.stack.imgur.com/3zBjD.png)<issue_comment>username_1: Just edit the color after "Foreground" in the screenshot: ![image](https://user-images.githubusercontent.com/16398479/37400549-3ed36c82-27c0-11e8-93d6-e3ee099a0b31.png) Upvotes: 6 [selected_answer]<issue_comment>username_2: It's under "Settings/Editor/Color Scheme/General". You should Select "Errors and Warnings/Unused code" and change the foreground color, marked with a red border on this screenshot: [![enter image description here](https://i.stack.imgur.com/p6ht9.png)](https://i.stack.imgur.com/p6ht9.png) Upvotes: 0
2018/03/14
496
1,791
<issue_start>username_0: I'm developing a web app, and I use client-side routing (basically, we're loading the page using what's after the `#` in the URL). But I've noticed that **sometimes** when I go to one page and then to another, the old one loads instead of the new one. Let me explain: when I click a link to go somewhere, the page I was on loads again. I highly suspect that the browser cache is "overwriting" the new content. > > And I still don't know why it only happens sometimes (especially when it's a new browser window). > > > Are there any solutions to force the browser to open the new page, like opening the page in another tab and closing the old one? **EDIT** I'm currently using GitHub Pages to host my project, so in your answer, make sure that everything is client side.<issue_comment>username_1: You can set the following header from your server on all requests: ``` Cache-Control: no-cache ``` This disables caching for that file/response. If you're using an ExpressJS server, you can set it as: ``` function nocache(req, res, next) { res.header('Cache-Control', 'private, no-cache, no-store, must-revalidate'); res.header('Expires', '-1'); res.header('Pragma', 'no-cache'); next(); } ``` And use this function as a middleware for all the routes you want to disable caching for. You can read more about cache headers in the [MDN Docs](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control) Upvotes: 0 <issue_comment>username_2: I found a solution: open my link in a new tab while closing the old one. This can be done using the following code: ``` el.addEventListener("click", e => { e.preventDefault(); window.open(el.href, "_blank") window.close() }) ``` Upvotes: 2 [selected_answer]
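The header-stamping shown for Express can be written against any HTTP stack; for instance, a WSGI version in Python (a sketch using only the standard WSGI interface, not a drop-in replacement for the Express middleware):

```python
def no_cache(app):
    """WSGI middleware that stamps no-cache headers on every response."""
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            # Drop any caching headers the inner app set, then add ours.
            headers = [(k, v) for k, v in headers
                       if k.lower() not in ("cache-control", "pragma", "expires")]
            headers += [("Cache-Control",
                         "private, no-cache, no-store, must-revalidate"),
                        ("Pragma", "no-cache"),
                        ("Expires", "-1")]
            return start_response(status, headers, exc_info)
        return app(environ, start)
    return wrapped
```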
2018/03/14
399
1,395
<issue_start>username_0: I am using vue-router in my vue application. I have the following routes in my application. ``` const routes = [ {name: "home", path: "/", component: Home}, {name: "user", path: "/user", component: User}, {name: "edit-user", path: "/user/edit", component: EditUser} ]; const router = new VueRouter({ routes: routes, mode: 'history' }); ``` Here both the routes are working perfectly when accessed using `router-link` as given below. ``` Go to user Edit user ``` If I am accessing the routes directly by page refresh the route "/user" works perfectly. But the route "/user/edit" shows a black page with just the following error in the console (without any other error details). ``` Uncaught SyntaxError: Unexpected token < ``` Can anyone help me out to solve this?<issue_comment>username_1: It was caused due to a very small reason. In my main HTML page, the js file was linked as follows: ``` ``` The issue was fixed by changing the src value from `"js/app.js"` to `"/js/app.js"` as follows: ``` ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: I was facing the same issue I resolved this by adding `publicPath: '/'` in my Webpack config file suggested on <https://github.com/webpack/webpack-dev-middleware/issues/205#issuecomment-315847782> Thanks to **[<NAME>](https://github.com/hackingbeauty)** Upvotes: 4
2018/03/14
375
1,187
<issue_start>username_0: I have this table:

```
CREATE TABLE `count_traffic` (
 `tool_id` int(11) NOT NULL,
 `leads` int(11) NOT NULL DEFAULT '0',
 `date` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```

And the following SELECT:

```
$query="
 SELECT sum(`leads`) as leads , date
 FROM count_traffic
 WHERE tool_id = :tool_id
 AND date BETWEEN '2018-03-13' AND '2018-03-14' GROUP BY date";
```

This SELECT shows me good results if I use the `date` type for the date field! How can I SELECT BETWEEN two dates and GROUP BY date only, when the field is of `datetime` type?
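The usual fix for this kind of question is to group on just the date part of the `datetime` column (in MySQL, `GROUP BY DATE(date)`). Below is a rough, self-contained sketch of that idea using SQLite from Python rather than MySQL, with made-up values; SQLite's `date(...)` plays the role of MySQL's `DATE(...)` here:

```python
import sqlite3

# In-memory table mirroring the question's schema (values are made up).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE count_traffic (tool_id INTEGER, leads INTEGER, date TEXT)")
conn.executemany(
    "INSERT INTO count_traffic VALUES (?, ?, ?)",
    [
        (1, 5, "2018-03-13 09:15:00"),
        (1, 3, "2018-03-13 17:40:00"),
        (1, 7, "2018-03-14 08:05:00"),
    ],
)

# Group on the date part only, so rows from the same day collapse into one bucket.
result = conn.execute(
    """
    SELECT date(date) AS day, sum(leads) AS leads
    FROM count_traffic
    WHERE tool_id = ? AND date BETWEEN '2018-03-13' AND '2018-03-15'
    GROUP BY day
    ORDER BY day
    """,
    (1,),
).fetchall()
print(result)  # [('2018-03-13', 8), ('2018-03-14', 7)]
```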
2018/03/14
1,038
3,580
<issue_start>username_0: I am trying to use Grafana to chart the output of a query similar to: ``` SELECT count(*) FROM myschema.table1 WHERE status_id = 2 ``` Essentially I just want Grafana to run this query every X minutes and then chart the output over time, but from what I can see Grafana requires a specific column to be used as the time series. Is there some way to achieve what I'm trying to do?<issue_comment>username_1: There are two parts to this. 1. you want the data in time buckets 2. you can set Grafana to auto refresh every so often but this is not related to the time buckets You can use something like the following to achieve 1). ``` SELECT to_timestamp(trunc( extract(epoch from created_at) / 60000) * 60000) AS time, count(*) FROM measures.static_ip_addresses GROUP BY time ORDER BY time desc ``` The 60000 is for 1 minute buckets. You can change this to resize your time buckets. You can modify this query, for example, to set a range of the buckets using a where clause. Upvotes: 1 <issue_comment>username_2: I had a similar task and found a solution. > > I just want Grafana to run this query every X minutes > > > Grafana is only a visualization solution, it does not store data itself, you need some time series database as a proxy. I used this scheme - PostgreSQL -> [prometheus-sql](https://github.com/chop-dbhi/prometheus-sql) -> Prometheus -> Grafana. Below I added configuration files for prometheus-sql and Prometheus: 1. 
`docker-compose.yaml` ``` version: '3.7' volumes: prometheus_data: {} services: prometheus: image: prom/prometheus volumes: - ./prometheus/:/etc/prometheus/ - prometheus_data:/var/prometheus command: - '--config.file=/etc/prometheus/prometheus.yaml' - '--storage.tsdb.path=/var/prometheus' - '--web.console.libraries=/usr/share/prometheus/console_libraries' - '--web.console.templates=/usr/share/prometheus/consoles' ports: - 9090:9090 user: root links: - prometheus-sql:prometheus-sql depends_on: - prometheus-sql restart: always sqlagent: image: dbhi/sql-agent prometheus-sql: image: dbhi/prometheus-sql ports: - 8080:8080 links: - sqlagent:sqlagent depends_on: - sqlagent command: - -config - /etc/prometheus-sql/config.yml - -queries - /etc/prometheus-sql/queries.yml - -service - http://sqlagent:5000 volumes: - ./sql/config.yml:/etc/prometheus-sql/config.yml - ./sql/queries.yml:/etc/prometheus-sql/queries.yml ``` 2. `prometheus/prometheus.yaml`: ``` scrape_configs: - job_name: 'Prometheus SQL' scrape_interval: 1m static_configs: - targets: ['prometheus-sql:8080'] ``` 3. `prometheus-sql/config.yml` (fill your database connection settings here): ``` defaults: data-source: postgresql query-interval: 1m query-timeout: 30s query-value-on-error: -1 data-sources: postgresql: driver: postgresql properties: host: DATABASE_HOST port: 5432 user: DATABASE_USER password: <PASSWORD> database: DATABASE_NAME ``` 4. `prometheus-sql/queries.yml` (tried to adopt to your case) ``` - table1_records_count_with_status_2: help: table1 records count with status_id = 2 sql: > SELECT count(*) AS count FROM myschema.table1 WHERE status_id = 2 data-field: count ``` Then in Grafana you could get time series data with the query like this: ``` query_result_table1_records_count_with_status_2{} ``` Upvotes: 0
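The truncation arithmetic in the first answer (divide the epoch by the bucket width, truncate, then multiply back) is easy to sanity-check outside SQL. Note that PostgreSQL's `extract(epoch from ...)` returns seconds, so a divisor of 60 corresponds to one-minute buckets. A rough Python sketch (the function name is mine):

```python
from datetime import datetime, timezone

def bucket_start(ts, width_seconds=60):
    """Truncate a timestamp down to the start of its fixed-width time bucket."""
    epoch = int(ts.timestamp())
    return datetime.fromtimestamp(epoch // width_seconds * width_seconds, tz=timezone.utc)

t = datetime(2020, 1, 1, 0, 7, 42, tzinfo=timezone.utc)
print(bucket_start(t))  # 2020-01-01 00:07:00+00:00
```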
2018/03/14
1,040
3,618
<issue_start>username_0: I am a little uncertain when the copy constructor is needed. For example, given this function: ``` template T max(const T\* array, int size) { T result = array[0]; for (int i = 1; i < size; ++i) { if (result < array[i]) { result = array[i]; } } return result; } ``` What is the reason that I need a copy constructor for the type `T`? I think it must be because we return by value. Does this line `T result = array[0];` also need the copy constructor?<issue_comment>username_1: > > What is the reason that I need `copy constructor` for the type `T`? > > > ``` T result = array[0]; ``` This is known as a copy initialization and invokes the copy constructor for the type `T`. Type `T` will *require* a copy constructor for this line to succeed. > > I think that it's must be because that we return by a value, and so we need `copy constructor` for `T` type. > > > ``` return result; ``` For the most part, your assumption is correct for the return value. However, it isn't necessary for a copy constructor to be defined in this case. To implement [move semantics](https://stackoverflow.com/questions/3106110/what-are-move-semantics), you can implement a move constructor which will remove the need for the copy constructor, since the local variable `result` will be "moved" instead of "copied" from. Move semantics remove the need for unnecessary copies of large objects when returning them from a function, since those large objects will not be accessible after the function returns. Upvotes: 4 [selected_answer]<issue_comment>username_2: This was already answered here: > > [What's the difference between assignment operator and copy constructor?](https://stackoverflow.com/questions/11706040/whats-the-difference-between-assignment-operator-and-copy-constructor) > > > So the thing is: A **copy constructor** is used to initialize a **previously uninitialized** object from some other object's data. 
An **assignment operator** is used to replace the data of a **previously initialized** object with some other object's data. Here's an example: ``` #include using namespace std; class MyClass{ public: MyClass(){ cout << "Default ctor\n"; } MyClass(const MyClass& copyArg){ cout << "Copy ctor\n"; } MyClass(MyClass&& moveArg){ cout << "Move ctor\n"; } void operator=(const MyClass& assignArg){ cout << "Assignment operator\n"; } bool operator<(const MyClass& comparsionArg) const { return true; } }; template T max(const T\* array, int size) { T result = array[0]; for (int i = 0; i < size; ++i) { if (result < array[i]) { result = array[i]; } } return result; } int main(){ MyClass arr[1]; const MyClass& a = max(arr, 1); return 0; } ``` To see what's exactly happening [we need to compile with `-fno-elide-constructors`](https://en.wikipedia.org/wiki/Copy_elision). The output is: ```none Default ctor Copy ctor Assignment operator Move ctor ``` So here, the **default constructor** is called at this line for one element of array: ``` MyClass arr[1]; ``` Then we initialize a **previously uninitialized** object and **copy constructor** is called: ``` T result = array[0]; ``` Then we make an assignment to **previously initialized** object and **assignment operator** called: ``` result = array[i]; ``` After we need to create object outside of our function scope since we return by value and for that **move constructor** called: ``` return result; ``` Then bind object constructed with **move constructor** in `main` scope to const reference: ``` const MyClass& a = max(arr, 1); ``` Upvotes: 1
2018/03/14
632
2,371
<issue_start>username_0: The method I'm trying to call is the `encrypt` method from the class below, but when I try to call it from the calling class, it shows an error on the method name, as if the method is not there or not found :( Please help me.

```
package test;

public class MARS {
 public static byte[] encrypt(byte[] in,byte[] key){
 K = expandKey(key);
 int lenght=0;
 byte[] padding = new byte[1];
 int i;
 lenght = 16 - in.length % 16;
 padding = new byte[lenght];
 padding[0] = (byte) 0x80;
 for (i = 1; i < lenght; i++)
 padding[i] = 0;
 byte[] tmp = new byte[in.length + lenght];
 byte[] bloc = new byte[16];
 int count = 0;
 for (i = 0; i < in.length + lenght; i++) {
 if (i > 0 && i % 16 == 0) {
 bloc = encryptBloc(bloc);
 System.arraycopy(bloc, 0, tmp, i - 16, bloc.length);
 }
 if (i < in.length)
 bloc[i % 16] = in[i];
 else{
 bloc[i % 16] = padding[count % 16];
 count++;
 }
 }
 if(bloc.length == 16){
 bloc = encryptBloc(bloc);
 System.arraycopy(bloc, 0, tmp, i - 16, bloc.length);
 }
 return tmp;
 }
}
```

This is the calling class; the error is shown on line 3:

```
public static void main(String[] args) {
 byte[ ] array = "going to encrypt ".getByte( );
 byte[ ] arrayEnc = MARS.encrypt(array);
 System.out.println("plain text: " + array);
 System.out.println("Encrypted Text: " + arrayEnc);
}
```<issue_comment>username_1: I'm guessing the error you're referring to is a compile-time error? The encrypt(..) function is defined as taking two byte array parameters: the source data and the encryption key. In your main(..) method, you're only passing in a single byte array, the source data. You also need to pass in an encryption key. Upvotes: 0 <issue_comment>username_2: The encrypt you have defined takes in 2 parameters *public static byte[] encrypt(byte[] in,byte[] key)* But you are trying to call it with one *MARS.encrypt(array)*. Upvotes: 2 [selected_answer]
2018/03/14
1,985
5,341
<issue_start>username_0: I want to create 7 dummy variables, one for each day, using dplyr. So far, I have managed to do it using the `sjmisc` package and the `to_dummy` function, but I do it in 2 steps: 1) create a df of dummies, 2) append to the original df

```
#Sample dataframe
mydf<-data.frame(x=rep(letters[1:9]),
 day=c("Mon","Tues","Wed","Thurs","Fri","Sat","Sun","Fri","Mon"))

#1.Create the 7 dummy variables separately
daysdummy<-sjmisc::to_dummy(mydf$day,suffix="label")

#2. append to dataframe
mydf<-bind_cols(mydf,daysdummy)

> mydf
 x day day_Fri day_Mon day_Sat day_Sun day_Thurs day_Tues day_Wed
1 a Mon 0 1 0 0 0 0 0
2 b Tues 0 0 0 0 0 1 0
3 c Wed 0 0 0 0 0 0 1
4 d Thurs 0 0 0 0 1 0 0
5 e Fri 1 0 0 0 0 0 0
6 f Sat 0 0 1 0 0 0 0
7 g Sun 0 0 0 1 0 0 0
8 h Fri 1 0 0 0 0 0 0
9 i Mon 0 1 0 0 0 0 0
```

My question is whether I can do it in one single workflow using `dplyr` and add the `to_dummy` into the pipe workflow, perhaps using `mutate`?

\*`to_dummy` [documentation](https://www.rdocumentation.org/packages/sjmisc/versions/2.6.3/topics/to_dummy)<issue_comment>username_1: An alternative solution using `dummies()` which I think would be quicker would be

```
mydf = data.frame(x=rep(letters[1:9]),
 day=c("Mon","Tues","Wed","Thurs","Fri","Sat","Sun","Fri","Mon"))
library(dummies)
mydf <- cbind(mydf, dummy(mydf$day, sep = "_"))
```

That yields

```
 x day mydf_Fri mydf_Mon mydf_Sat mydf_Sun mydf_Thurs mydf_Tues mydf_Wed
1 a Mon 0 1 0 0 0 0 0
2 b Tues 0 0 0 0 0 1 0
3 c Wed 0 0 0 0 0 0 1
4 d Thurs 0 0 0 0 1 0 0
5 e Fri 1 0 0 0 0 0 0
6 f Sat 0 0 1 0 0 0 0
7 g Sun 0 0 0 1 0 0 0
8 h Fri 1 0 0 0 0 0 0
9 i Mon 0 1 0 0 0 0 0
```

Then you can use `gsub()` to have cleaner names

```
names(mydf) = gsub("mydf_", "", names(mydf))
head(mydf)
 x day Fri Mon Sat Sun Thurs Tues Wed
1 a Mon 0 1 0 0 0 0 0
2 b Tues 0 0 0 0 0 1 0
3 c Wed 0 0 0 0 0 0 1
4 d Thurs 0 0 0 0 1 0 0
5 e Fri 1 0 0 0 0 0 0
6 f Sat 0 0 1 0 0 0 0
```

Upvotes: 1 <issue_comment>username_2: If you want to do this with the pipe, 
you can do something like: ``` library(dplyr) library(sjmisc) mydf %>% to_dummy(day, suffix = "label") %>% bind_cols(mydf) %>% select(x, day, everything()) ``` Returns: > > > ``` > # A tibble: 9 x 9 > x day day_Fri day_Mon day_Sat day_Sun day_Thurs day_Tues day_Wed > > 1 a Mon 0. 1. 0. 0. 0. 0. 0. > 2 b Tues 0. 0. 0. 0. 0. 1. 0. > 3 c Wed 0. 0. 0. 0. 0. 0. 1. > 4 d Thurs 0. 0. 0. 0. 1. 0. 0. > 5 e Fri 1. 0. 0. 0. 0. 0. 0. > 6 f Sat 0. 0. 1. 0. 0. 0. 0. > 7 g Sun 0. 0. 0. 1. 0. 0. 0. > 8 h Fri 1. 0. 0. 0. 0. 0. 0. > 9 i Mon 0. 1. 0. 0. 0. 0. 0. > > ``` > > With `dplyr` and `tidyr` we could do: ``` library(dplyr) library(tidyr) mydf %>% mutate(var = 1) %>% spread(day, var, fill = 0, sep = "_") %>% left_join(mydf) %>% select(x, day, everything()) ``` And with base R we could do something like: ``` as.data.frame.matrix(table(rep(mydf$x, lengths(mydf$day)), unlist(mydf$day))) ``` Returns: > > > ``` > Fri Mon Sat Sun Thurs Tues Wed > a 0 1 0 0 0 0 0 > b 0 0 0 0 0 1 0 > c 0 0 0 0 0 0 1 > d 0 0 0 0 1 0 0 > e 1 0 0 0 0 0 0 > f 0 0 1 0 0 0 0 > g 0 0 0 1 0 0 0 > h 1 0 0 0 0 0 0 > i 0 1 0 0 0 0 0 > > ``` > > Upvotes: 5 [selected_answer]<issue_comment>username_3: Instead of `sjmisc::to_dummy` you can also use base R's `model.matrix`; a `dplyr` solution would be: ``` library(dplyr); model.matrix(~ 0 + day, mydf) %>% as.data.frame() %>% bind_cols(mydf) %>% select(x, day, everything()); # x day dayFri dayMon daySat daySun dayThurs dayTues dayWed #1 a Mon 0 1 0 0 0 0 0 #2 b Tues 0 0 0 0 0 1 0 #3 c Wed 0 0 0 0 0 0 1 #4 d Thurs 0 0 0 0 1 0 0 #5 e Fri 1 0 0 0 0 0 0 #6 f Sat 0 0 1 0 0 0 0 #7 g Sun 0 0 0 1 0 0 0 #8 h Fri 1 0 0 0 0 0 0 #9 i Mon 0 1 0 0 0 0 0 ``` Upvotes: 2
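All of the answers above build the same one-hot table; stripped of the data-frame machinery, the transformation itself is small. As a language-agnostic aside, here is a plain Python sketch of the same encoding (the function name is mine), for readers who want the mechanics rather than an R idiom:

```python
def to_dummies(values, prefix="day"):
    """One-hot encode a list of labels: one 0/1 column per distinct label."""
    levels = sorted(set(values))
    columns = [f"{prefix}_{level}" for level in levels]
    rows = [[1 if value == level else 0 for level in levels] for value in values]
    return columns, rows

cols, rows = to_dummies(["Mon", "Tues", "Mon"])
print(cols)  # ['day_Mon', 'day_Tues']
print(rows)  # [[1, 0], [0, 1], [1, 0]]
```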
2018/03/14
1,706
4,587
<issue_start>username_0: I am working on Magento 2.2.2. I have deployed the website on a subdomain in my VPS within an account. The strange thing is that the "generated" folder gets automatically regenerated after deletion. To investigate, I deleted everything inside the subdomain root folder where I had put the Magento 2.2.2 website code. Still, from nowhere, this "generated" folder and some sub-folders inside it get generated automatically. See the snapshot below.

[![enter image description here](https://i.stack.imgur.com/dQCgs.png)](https://i.stack.imgur.com/dQCgs.png)

I also checked with the command `crontab -l` and found that there are no cron jobs running. I also restarted the Apache server from my WHM panel. What might be causing this to happen?
2018/03/14
1,803
4,940
<issue_start>username_0: So I am building a ReactJS website/app and getting to grips with things. I am trying to have multiple re-usable components of code (to save me time) and to get used to it I have made this placeholder component ``` import React from 'react'; import { Image } from 'semantic-ui-react'; import '../App.css'; const PlaceholderImage = () => ( ) export const PlaceholderImage; ``` And I am trying to call it in another page like so... ``` import React, { Component } from 'react'; import { PlaceholderImage } from '../components/placeholder'; import '../App.css'; class App extends Component { render () { return ( ); } } export default App; ``` Both files are within my src folder, but my components are held in the components folder and the pages are held within my routes folder. When I try to build this with yarn, i get the unexpected token errors on the semi colon. I have tried other methods of exporting like ``` export default PlaceholderImage; export () => PlaceholderImage; ``` Any idea where I am going wrong? 
Cheers in advance!
2018/03/14
805
2,655
<issue_start>username_0: I have an object in which I have to verify different values using multiple if-else statements, like below is sample javascript snippet: ``` if(properties.values.a > 0 ){ }else if(properties.values.b > 0){ }else if(properties.values.c > 0){ }else if(properties.values.d > 0){ }else if(properties.values.e > 0){ }else{ } ``` I am willing to replace this multiple if-else statement into switch-case. Now I wonder if that is possible for objects or not? If possible, how should be going towards it?<issue_comment>username_1: You can do that with a `for in` loop: ```js o = { a:0, b:0, c:1, d:0 }; for(var p in o) { if (o[p] > 0) { //... console.log(o[p]); break; } } ``` Upvotes: 1 <issue_comment>username_2: Switch is not supposed to work like this. So, no, this is not supported. As a workaround you could do this ``` switch (true) { case (properties.values.a > 0): ... case (properties.values.b > 0): ... case (properties.values.c > 0): ... case (properties.values.d > 0): ... case (properties.values.e > 0): ... } ``` Still, is pretty ugly so i suggest you stick to if/else. Upvotes: 3 [selected_answer]<issue_comment>username_3: You could use an array with the wanted keys and use iterate with a short circuit if a value is found. If no one is found call `callOther`. ``` ['a', 'b', 'c', 'd', 'e'].some(k => { if (properties.values[k] > 0) { callThis(); return true; } }) || callOther(); ``` Upvotes: 1 <issue_comment>username_4: In JavaScript you can only use `switch` to check for equality. You can't check whether `a > b` using a switch. If you want a more elegant solution, that I particularly would only use if I needed to check a huge amount of rules, is to do something like this: ``` const checkers = []; checkers.push([p => p.values.a > 0, () => { /* do something */ }]) checkers.push([p => p.values.b > 0, () => { /* do something */ }]) ... 
checkers.filter(checker => checker[0](property)).forEach(checker => checker[1]()) ``` In the above example, checkers is an array of arrays in which the first element is a predicate function and the second is a function that should execute if the predicate is true. But again... For most cases you just do an `if/else` as you described. Also, if you want to check if a value is greater than 0, you might use just `if(a)`, because if `a` is not `0`, then it's truthy, otherwise it's falsy. **EDIT** Apparently, technically, you can use logical expressions within `switch` statements, even though it seems hacky =/ Upvotes: 1
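The predicate-table idea in the last answer is not JavaScript-specific. A short Python sketch of the same dispatch pattern (the names are mine): each rule pairs a condition with an action, and the first matching rule fires, mirroring an if/else chain without a `switch` statement:

```python
def dispatch(values, rules, default=lambda: "no match"):
    """Run the action of the first rule whose predicate accepts `values`."""
    for predicate, action in rules:
        if predicate(values):
            return action()
    return default()

rules = [
    (lambda v: v["a"] > 0, lambda: "a is positive"),
    (lambda v: v["b"] > 0, lambda: "b is positive"),
]
print(dispatch({"a": 0, "b": 2}, rules))  # b is positive
```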
2018/03/14
614
2,062
<issue_start>username_0: I’m trying to use the `replace` function, the doc specifies > > replace(string::AbstractString, pat, r[, n::Integer=0]) > > > Search for the given pattern pat, and replace each occurrence with r. If n is provided, replace at most n occurrences. > As with search, the second argument may be a single character, a vector or a set of characters, a string, or a regular > expression. If r is a function, each occurrence is replaced with r(s) where s is the matched substring. If pat is a > regular expression and r is a SubstitutionString, then capture group references in r are replaced with the > corresponding matched text. > > > I don’t understand the last sentence and couldn’t find `SubstitutionString` (there is `SubString` though, but I also couldn't directly find doc for that). I’d like to do a replace where `r` uses the captured group(s) indicated in `pat`. Something that corresponds to the following simple example in Python: ``` regex.sub(r'#(.+?)#', r"captured:\1", "hello #target# bye #target2#") ``` which returns `'hello captured:target bye captured:target2'`.<issue_comment>username_1: A `SubstitutionString` can be created via `s""`. Similarly to how you'd create regexes with `r""`. These then can be used as a pair `from => to` to tell Julia how to replace matched strings. Julia (1.8+) ``` julia> replace("hello #target# bye #target2#", r"#(.+?)#" => s"captured:\1") "hello captured:target bye captured:target2" ``` Older version: ``` julia> replace("hello #target# bye #target2#", r"#(.+?)#", s"captured:\1") "hello captured:target bye captured:target2" ``` If you search for `substitution string` in <https://docs.julialang.org/en/v1/manual/strings/> you'll find another example there. Upvotes: 5 [selected_answer]<issue_comment>username_2: It has changed since last answer. Current correct version is this one ``` replace("first second", r"(\w+) (?\w+)" => s"\g \1") replace("a", r"." 
=> s"\g<0>1") ``` See <https://docs.julialang.org/en/v1/manual/strings/> for more details. Upvotes: 3
2018/03/14
2,723
4,260
<issue_start>username_0: I'm trying to **map dates to the `viridis` colour scale** in ggplot2. The default `ggplot` colour scale works fine for dates. But I'm having trouble mapping them to the `viridis` scale, getting an error regarding 'origin' not being supplied to `as.Date.numeric()` (similar error when trying to use `ggplot2::scale_color_gradient()`). See reprex below. Any advice on the most efficient way to sort this? ```r ### data df <- structure(list(height = c(182.87, 179.12, 169.15, 175.66, 164.47, 158.27, 161.69, 165.84, 181.32, 167.37, 160.06, 166.48, 175.39, 164.7, 163.79, 181.13, 169.24, 176.22, 174.09, 180.11, 179.24, 161.92, 169.85, 160.57, 168.24, 177.75, 183.21, 167.75, 181.15, 181.56, 160.03, 165.62, 181.64, 159.67, 177.03, 163.35, 175.21, 160.8, 166.46, 157.95, 180.61, 159.52, 163.01, 165.8, 170.03, 157.16, 164.58, 163.47, 185.43, 165.34, 163.45, 163.97, 161.38, 160.09, 178.64, 159.78, 161.57, 161.83, 169.66, 166.84, 159.32, 170.51, 161.84, 171.41, 166.75, 166.19, 169.16, 157.01, 167.51, 160.47, 162.33, 175.67, 174.25, 158.94, 172.72, 159.23, 176.54, 184.34, 163.94, 160.09, 162.32, 162.59, 171.94, 158.07, 158.35, 162.18, 159.38, 171.45, 163.17, 183.1, 177.14, 171.08, 159.33, 185.43, 162.65, 159.44, 164.11, 159.13, 160.58, 164.88), weight = c(76.57, 80.43, 75.48, 94.54, 71.78, 69.9, 68.85, 70.44, 76.9, 79.06, 72.37, 67.34, 92.22, 75.69, 65.76, 72.33, 73.3, 97.67, 72.2, 75.72, 75.54, 69.92, 90.63, 63.54, 69.57, 74.84, 83.36, 82.06, 83.93, 79.54, 64.3, 76.72, 96.91, 71.88, 74.04, 70.46, 83.65, 64.77, 76.83, 67.41, 83.59, 67.99, 65.19, 71.77, 66.68, 69.64, 72.99, 72.89, 87.23, 70.84, 67.67, 66.71, 73.55, 65.93, 97.05, 68.31, 67.92, 66.03, 77.3, 88.25, 64.92, 84.35, 69.97, 81.7, 79.06, 67.46, 90.08, 66.56, 84.15, 68.2, 66.47, 88.82, 80.93, 65.14, 67.62, 69.96, 90.76, 90.41, 71.47, 68.94, 72.72, 69.76, 82.11, 69.8, 69.72, 67.81, 70.37, 84.29, 64.47, 82.47, 88.7, 72.51, 70.68, 73.63, 73.99, 66.21, 70.66, 66.96, 71.49, 68.07 ), birth = structure(c(766, 
896, 920, 959, 1258, 1277, 815, 1226, 729, 1295, 854, 682, 811, 690, 741, 1056, 690, 1199, 1133, 1233, 806, 1097, 838, 1278, 773, 1059, 1373, 1038, 1387, 859, 1343, 926, 1074, 1366, 784, 1207, 1222, 1150, 965, 862, 819, 1072, 1238, 1320, 976, 1296, 760, 833, 1295, 767, 1030, 727, 774, 1126, 1113, 849, 1285, 928, 1247, 799, 1130, 1049, 829, 1318, 790, 1067, 1013, 831, 936, 841, 781, 1378, 801, 1247, 770, 1372, 1129, 892, 1172, 720, 982, 884, 1380, 871, 889, 820, 1374, 791, 1271, 1033, 698, 1185, 1273, 1257, 952, 1048, 904, 906, 1051, 684), class = "Date")), class = "data.frame", .Names = c("height", "weight", "birth"), row.names = c(NA, -100L)) ### libraries library(ggplot2) library(viridis) #> Loading required package: viridisLite ### plot default colour scale ggplot(data=df, aes(x = height, y = weight, colour = birth)) + geom_point(size=4) ``` ![](https://i.stack.imgur.com/8jFdK.png) ```r ### plot with viridis colour scale ggplot(data=df, aes(x = height, y = weight, colour = birth)) + geom_point(size=4) + scale_colour_viridis() #> Error in as.Date.numeric(value): 'origin' must be supplied ```<issue_comment>username_1: Here is a workaround: Convert the dates to numeric values before assigning them to the colour aesthetic. In the call to `scale_colour_viridis()` you then use corresponding breaks and labels: ``` # create equidistant sequence of dates to use as labels lab_dates <- pretty(df$birth) ggplot(data=df, aes(x = height, y = weight, colour = as.numeric(birth))) + geom_point(size=4) + scale_colour_viridis(breaks = as.numeric(lab_dates), labels = lab_dates) ``` [![enter image description here](https://i.stack.imgur.com/sLYni.png)](https://i.stack.imgur.com/sLYni.png) Upvotes: 3 <issue_comment>username_2: An alternative that doesn't require any extra variables is to set `trans = "date"`. 
```r ggplot(df, aes(x = height, y = weight, colour = birth)) + geom_point(size = 4) + scale_colour_viridis_c(trans = "date") ``` *(Using {ggplot2} v3.3.2)* [![Scatter plot of weight vs height coloured by birth date](https://i.stack.imgur.com/XtChS.png)](https://i.stack.imgur.com/XtChS.png) Upvotes: 5 [selected_answer]
2018/03/14
1,915
5,523
<issue_start>username_0: ``` import emoji def emoji_lis(string): _entities = [] for pos,c in enumerate(string): if c in emoji.UNICODE_EMOJI: print("Matched!!", c ,c.encode('ascii',"backslashreplace")) _entities.append({ "location":pos, "emoji": c }) return _entities emoji_lis(" Ω…Ψ―ΫŒΨ­Ϋ asΓ­, se ds ") ``` * Matched!! \U0001f467 * Matched!! \U0001f3ff * Matched!! \U0001f60c * Matched!! \U0001f495 * Matched!! \U0001f46d My code is working of all other emoji's but how can I detect country flags ?<issue_comment>username_1: I don't think theres a library anywhere to do this. However, this can somewhat be done with a function: `\U0001F1E6\U0001F1E8` is the first unicode flag and `\U0001F1FF\U0001F1FC` is the last, so that covers almost all of them. Theres [3 more](http://unicode.org/emoji/charts/full-emoji-list.html#subdivision-flag) that cause some issues. Heres a function that would check if the input is a flag: ``` def is_flag_emoji(c): return "\U0001F1E6\U0001F1E8" <= c <= "\U0001F1FF\U0001F1FC" or c in ["\U0001F3F4\U000e0067\U000e0062\U000e0065\U000e006e\U000e0067\U000e007f", "\U0001F3F4\U000e0067\U000e0062\U000e0073\U000e0063\U000e0074\U000e007f", "\U0001F3F4\U000e0067\U000e0062\U000e0077\U000e006c\U000e0073\U000e007f"] ``` Testing: ``` >>> is_flag_emoji('a') False >>> is_flag_emoji('') False >>> is_flag_emoji("""""") True ``` So you could accordingly change your if statement to `if c in emoji.UNICODE_EMOJI or is_flag_emoji(c):`. There is an issue with this though; since a lot flags are made by joining multiple characters, you probably wont be able to identify the emoji. ``` >>> s ' here is more text and more' >>>emoji_lis(s) Matched!! b'\\U0001f1fe' Matched!! b'\\U0001f1ea' Matched!! b'\\U0001f1e9' [{'location': 0, 'emoji': ''}, {'location': 1, 'emoji': ''}, {'location': 22, 'emoji': ''}] ``` Upvotes: 2 <issue_comment>username_2: Here is an article about how [Unicode encodes country flags](https://esham.io/2014/06/unicode-flags). 
They are represented as sequences of two [regional indicator symbols](https://en.wikipedia.org/wiki/Regional_Indicator_Symbol) (code points ranging from U+1F1E6 to U+1F1FF), although obviously not every possible combination of two symbols corresponds to a country (and therefore a flag). You could just assume that no "bad" combinations will happen, or maintain (or import) a set with the (currently) 270 valid pairs of symbols. Then there are regional flags. These are represented as a black flag code point (U+1F3F4) followed by a sequence of [tags](https://en.wikipedia.org/wiki/Tags_(Unicode_block)) (code points in the range U+E0020 to U+E007E) spelling the region identifier (for example, for the [flag of Wales](https://emojipedia.org/flag-for-wales/) that would be "gbwls"), plus a "cancel tag" code point (U+E007F). And besides all that, you also have, of course, regular emojis that look like flags. The aforementioned [black flag (U+1F3F4)](https://emojipedia.org/emoji/%F0%9F%8F%B4/) is one of them, but you also have the [triangular flag (U+1F6A9)](https://emojipedia.org/emoji/%F0%9F%9A%A9/), etc. Most of these you should already be able to detect, since they are just like other emojis. *However*, we are not quite done here. You have the issue of composite emojis, which affects some flags but also many other emojis. In your example, you can see that the matched emoji for the black woman in the input string is a "base" woman emoji, and then this brown patch. This is because the [black woman emoji](https://emojipedia.org/woman-type-6/) is made up of two code points, [woman (U+1F469)](https://emojipedia.org/emoji/%F0%9F%91%A9/) and [dark skin tone (U+1F3FF)](https://emojipedia.org/emoji/%F0%9F%8F%BF/). In many other cases, you would need the two code points, plus a [zero-width joiner (U+200D)](https://en.wikipedia.org/wiki/Zero-width_joiner) in between, to specify that you want them merged.
And sometimes you also need to throw in a [variation selector (typically 16, U+FE0F)](https://en.wikipedia.org/wiki/Variation_Selectors_(Unicode_block)) to indicate that you want things to be used as emojis. You can read more about this [in this article](https://blog.emojipedia.org/emoji-zwj-sequences-three-letters-many-possibilities/). In the case of flags, you have for example the [rainbow flag (U+1F3F3, U+FE0F, U+200D, U+1F308)](https://emojipedia.org/rainbow-flag/), which would read "white flag, variation selector 16 (to use white flag emoji, not text), zero-width joiner, rainbow"; or the [pirate flag (U+1F3F4, U+200D, U+2620, U+FE0F)](https://emojipedia.org/pirate-flag/), which would read "black flag, zero-width joiner, skull and crossbones, variation selector 16 (to use skull and crossbones emoji, not text)". Now, there are different ways you can deal with all this, but in your current approach you are iterating one code point at a time, so you will not be able to detect complex emojis. You can just have a big set of all interesting sequences (flags, some composite emojis, etc.) and look for them in the input. You can check if the current character is a regional indicator symbol and, if that is the case, try to read the next code point to form a flag (and settle for individual simple emojis for the rest). I would not know for sure what the best solution is for your case (in terms of the complexity/benefits trade-off), but you should be aware of the nuances of emoji encoding and the pitfalls you may find. Upvotes: 2
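To make the pair-reading idea from the answer above concrete, here is a minimal sketch (helper names are mine, not part of the `emoji` package's API) that scans a string for two consecutive regional indicator symbols:

```python
# Sketch only: detect country flags as *pairs* of regional indicator
# symbols (U+1F1E6..U+1F1FF). Helper names are illustrative, not part
# of the `emoji` package.

RI_FIRST, RI_LAST = 0x1F1E6, 0x1F1FF

def is_regional_indicator(ch):
    return RI_FIRST <= ord(ch) <= RI_LAST

def find_flags(text):
    """Return (position, flag) pairs for each two-symbol flag sequence."""
    flags = []
    i = 0
    while i < len(text) - 1:
        if is_regional_indicator(text[i]) and is_regional_indicator(text[i + 1]):
            flags.append((i, text[i] + text[i + 1]))
            i += 2  # consume both code points of this flag
        else:
            i += 1
    return flags

# "\U0001F1FA\U0001F1F8" is the pair U+S, i.e. the US flag.
print(find_flags("hi \U0001F1FA\U0001F1F8 there"))
```

As noted above, this happily accepts pairs that do not correspond to any real country, and it does not handle the tag-sequence flags (Wales, Scotland, England) or ZWJ sequences; a fuller implementation would check candidate pairs against the set of valid combinations.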
2018/03/14
3,643
12,253
<issue_start>username_0: Situation: ---------- There are a number of blocking synchronous calls (this is a given which cannot be changed) which can potentially take a long time for which the results need to be aggregated. Goal: ----- Make the calls non-blocking, then wait for a max time (ms) and collect all the calls that have succeeded even though some might have failed because they have timed out (so we can degrade functionality on the failed calls). Current solution: ----------------- The solution below works by combining the futures, wait for that one to either finish or timeout and in the case of a NonFatal error (timeout) it uses the `completedFutureValues` method to extract the futures which completed successfully. ``` import scala.concurrent.{Await, Future} import scala.util.Random._ import scala.concurrent.duration._ import scala.concurrent.ExecutionContext.Implicits.global import scala.util.{Failure, Success} import scala.util.control.NonFatal def potentialLongBlockingHelloWorld(i: Int): String = {Thread.sleep(nextInt(500)); s"hello world $i" } // use the same method 3 times, but in reality is different methods (with different types) val futureHelloWorld1 = Future(potentialLongBlockingHelloWorld(1)) val futureHelloWorld2 = Future(potentialLongBlockingHelloWorld(2)) val futureHelloWorld3 = Future(potentialLongBlockingHelloWorld(3)) val combinedFuture: Future[(String, String, String)] = for { hw1 <- futureHelloWorld1 hw2 <- futureHelloWorld2 hw3 <- futureHelloWorld3 } yield (hw1, hw2, hw3) val res = try { Await.result(combinedFuture, 250.milliseconds) } catch { case NonFatal(_) => { ( completedFutureValue(futureHelloWorld1, "fallback hello world 1"), completedFutureValue(futureHelloWorld2, "fallback hello world 2"), completedFutureValue(futureHelloWorld3, "fallback hello world 3") ) } } def completedFutureValue[T](future: Future[T], fallback: T): T = future.value match { case Some(Success(value)) => value case Some(Failure(e)) => fallback case None => 
fallback } ``` It will return a tuple3 with either the completed future result or the fallback, for example: `(hello world,fallback hello world 2,fallback hello world 3)` Although this works, I'm not particularly happy with it. ### Question: How can we improve on this?<issue_comment>username_1: Since (as I understand it) you are going to block the current thread anyway and wait for the result synchronously, I would say the easiest solution would be: ``` import java.util.concurrent.atomic.AtomicReference import scala.concurrent.{Await, Future} import scala.util.Random._ import scala.concurrent.ExecutionContext.Implicits.global def potentialLongBlockingHelloWorld(i: Int): String = {Thread.sleep(nextInt(500)); s"hello world $i" } // init with fallback val result1 = new AtomicReference[String]("fallback hello world 1") val result2 = new AtomicReference[String]("fallback hello world 2") val result3 = new AtomicReference[String]("fallback hello world 3") // use the same method 3 times, but in reality is different methods (with different types) val f1 = Future(potentialLongBlockingHelloWorld(1)).map {res => result1.set(res) } val f2 = Future(potentialLongBlockingHelloWorld(2)).map {res => result2.set(res) } val f3 = Future(potentialLongBlockingHelloWorld(3)).map {res => result3.set(res) } for (i <- 1 to 5 if !(f1.isCompleted && f2.isCompleted && f3.isCompleted)) { Thread.sleep(50) } (result1.get(), result2.get(), result3.get()) ``` Here, you just hold the results in AtomicReferences, which are updated on future completion, and poll in 50ms ticks until either all futures have completed or at most 250ms (the timeout) has elapsed. Alternatively, you can take a `Future with timeout` implementation from [here](https://stackoverflow.com/questions/16304471/scala-futures-built-in-timeout), extend it with a fallback, and then just use `Future.sequence` with `Await`, with the guarantee that all `Futures` will be completed in time with either success or fallback.
Upvotes: 0 <issue_comment>username_2: It's probably better to use Future.sequence(), which turns a Collection[Future] into a Future[Collection]. Upvotes: 0 <issue_comment>username_2: Why not write: ``` import scala.util.{Success, Try} val futures = f1 :: f2 :: f3 :: Nil val results = futures map { f => Try(Await.result(f, yourTimeOut)) } results.collect { case Success(value) => /* your logic */ } ``` ??? Upvotes: 0 <issue_comment>username_3: If I might also suggest one approach to this: the idea would be to avoid blocking altogether and actually set a timeout on every future. Here is a blog post I found very useful when writing my example. It's kind of old, but gold: <https://nami.me/2015/01/20/scala-futures-with-timeout/> One negative point is that you might need to add Akka to the solution, but then again it's not completely ugly: ``` import akka.actor.ActorSystem import akka.pattern.after import scala.concurrent.ExecutionContext.Implicits.global import scala.concurrent.duration.{FiniteDuration, _} import scala.concurrent.{Await, Future, TimeoutException} import scala.util.Random._ implicit val system = ActorSystem("theSystem") implicit class FutureExtensions[T](f: Future[T]) { def withTimeout(timeout: => Throwable)(implicit duration: FiniteDuration, system: ActorSystem): Future[T] = { Future firstCompletedOf Seq(f, after(duration, system.scheduler)(Future.failed(timeout))) } } def potentialLongBlockingHelloWorld(i: Int): String = { Thread.sleep(nextInt(500)); s"hello world $i" } implicit val timeout: FiniteDuration = 250.milliseconds val timeoutException = new TimeoutException("Future timed out!") // use the same method 3 times, but in reality is different methods (with different types) val futureHelloWorld1 = Future(potentialLongBlockingHelloWorld(1)).withTimeout(timeoutException).recoverWith { case _ β‡’ Future.successful("fallback hello world 1") } val futureHelloWorld2 = Future(potentialLongBlockingHelloWorld(2)).withTimeout(timeoutException).recoverWith { case _ β‡’ Future.successful("fallback hello world 2") }
val futureHelloWorld3 = Future(potentialLongBlockingHelloWorld(3)).withTimeout(timeoutException).recoverWith { case _ β‡’ Future.successful("fallback hello world 3") } val results = Seq(futureHelloWorld1, futureHelloWorld2, futureHelloWorld3) val combinedFuture = Future.sequence(results) // this is just to show what you would have in your future // combinedFuture is not blocking anything val justToShow = Await.result(combinedFuture, 1.seconds) println(justToShow) // some of my runs: // List(hello world 1, hello world 2, fallback hello world 3) // List(fallback hello world 1, fallback hello world 2, hello world 3) ``` With this approach there's no blocking and you have a timeout on every stage so you can fine tune and adapt to what you really need. The await I'm using is just to show how this works. Upvotes: 2 [selected_answer]<issue_comment>username_4: Posting a solution provided by a colleague here which basically does the same as the solution provided in the question, but makes it way more clean. Using his solution one can write: ``` ( Recoverable(futureHelloWorld1, "fallback hello world 1"), Recoverable(futureHelloWorld2, "fallback hello world 1"), Recoverable(futureHelloWorld3, "fallback hello world 1") ).fallbackAfter(250.milliseconds) { case (hw1, hw2, hw3) => // Do something with the results. println(hw1.value) println(hw2.value) println(hw3.value) } ``` This works using tuples of futures with fallbacks. 
The code which makes this possible: ``` import org.slf4j.LoggerFactory import scala.concurrent.ExecutionContext.Implicits.global import scala.concurrent.duration._ import scala.concurrent.{Await, ExecutionContext, Future, TimeoutException} import scala.util.Try import scala.util.control.NonFatal sealed abstract class FallbackFuture[T] private(private val future: Future[T]) { def value: T } object FallbackFuture { final case class Recoverable[T](future: Future[T], fallback: T) extends FallbackFuture[T](future) { override def value: T = { if (future.isCompleted) future.value.flatMap(t => t.toOption).getOrElse(fallback) else fallback } } object Recoverable { def apply[T](fun: => T, fallback: T)(implicit ec: ExecutionContext): FallbackFuture[T] = { new Recoverable[T](Future(fun), fallback) } } final case class Irrecoverable[T](future: Future[T]) extends FallbackFuture[T](future) { override def value: T = { def except = throw new IllegalAccessException("Required future did not compelete before timeout") if (future.isCompleted) future.value.flatMap(_.toOption).getOrElse(except) else except } } object Irrecoverable { def apply[T](fun: => T)(implicit ec: ExecutionContext): FallbackFuture[T] = { new Irrecoverable[T](Future(fun)) } } object Implicits { private val logger = LoggerFactory.getLogger(Implicits.getClass) type FF[X] = FallbackFuture[X] implicit class Tuple2Ops[V1, V2](t: (FF[V1], FF[V2])) { def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2])) => R): R = awaitAll(timeout, t) { fn(t) } } implicit class Tuple3Ops[V1, V2, V3](t: (FF[V1], FF[V2], FF[V3])) { def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2], FF[V3])) => R): R = awaitAll(timeout, t) { fn(t) } } implicit class Tuple4Ops[V1, V2, V3, V4](t: (FF[V1], FF[V2], FF[V3], FF[V4])) { def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2], FF[V3], FF[V4])) => R): R = awaitAll(timeout, t) { fn(t) } } implicit class Tuple5Ops[V1, V2, V3, V4, V5](t: (FF[V1], FF[V2], FF[V3], FF[V4], FF[V5])) { 
def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2], FF[V3], FF[V4], FF[V5])) => R): R = awaitAll(timeout, t) { fn(t) } } implicit class Tuple6Ops[V1, V2, V3, V4, V5, V6](t: (FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6])) { def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6])) => R): R = awaitAll(timeout, t) { fn(t) } } implicit class Tuple7Ops[V1, V2, V3, V4, V5, V6, V7](t: (FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6], FF[V7])) { def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6], FF[V7])) => R): R = awaitAll(timeout, t) { fn(t) } } implicit class Tuple8Ops[V1, V2, V3, V4, V5, V6, V7, V8](t: (FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6], FF[V7], FF[V8])) { def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6], FF[V7], FF[V8])) => R): R = awaitAll(timeout, t) { fn(t) } } implicit class Tuple9Ops[V1, V2, V3, V4, V5, V6, V7, V8, V9](t: (FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6], FF[V7], FF[V8], FF[V9])) { def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6], FF[V7], FF[V8], FF[V9])) => R): R = awaitAll(timeout, t) { fn(t) } } implicit class Tuple10Ops[V1, V2, V3, V4, V5, V6, V7, V8, V9, V10](t: (FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6], FF[V7], FF[V8], FF[V9], FF[V10])) { def fallbackAfter[R](timeout: Duration)(fn: ((FF[V1], FF[V2], FF[V3], FF[V4], FF[V5], FF[V6], FF[V7], FF[V8], FF[V9], FF[V10])) => R): R = awaitAll(timeout, t) { fn(t) } } private implicit def toFutures(fallbackFuturesTuple: Product): Seq[Future[Any]] = { fallbackFuturesTuple.productIterator.toList .map(_.asInstanceOf[FallbackFuture[Any]]) .map(_.future) } private def awaitAll[R](timeout: Duration, futureSeq: Seq[Future[Any]])(fn: => R) = { Try { Await.ready(Future.sequence(futureSeq), timeout) } recover { case _: TimeoutException => logger.warn("Call timed out") case NonFatal(ex) => throw ex } fn } } } ``` 
Upvotes: 1
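As a language-agnostic illustration of the accepted pattern — race each task against a deadline and substitute a per-task fallback instead of failing the aggregate — here is a small Python asyncio sketch for comparison (the names and delays are mine, not from the question):

```python
import asyncio

# Sketch only: per-task timeout + per-task fallback, then aggregate.
async def with_fallback(coro, timeout, fallback):
    try:
        return await asyncio.wait_for(coro, timeout)
    except asyncio.TimeoutError:
        return fallback

async def slow_hello(i, delay):
    await asyncio.sleep(delay)
    return f"hello world {i}"

async def main():
    # Task 2 misses the 250ms budget, so only it degrades to a fallback.
    return await asyncio.gather(
        with_fallback(slow_hello(1, 0.01), 0.25, "fallback hello world 1"),
        with_fallback(slow_hello(2, 1.00), 0.25, "fallback hello world 2"),
        with_fallback(slow_hello(3, 0.01), 0.25, "fallback hello world 3"),
    )

print(asyncio.run(main()))  # -> ['hello world 1', 'fallback hello world 2', 'hello world 3']
```

This is a cross-language aside only; it mirrors the structure of the accepted answer (timeout per future, fallback per future, then one aggregate) rather than any Scala specifics.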
2018/03/14
2,219
7,326
<issue_start>username_0: I want the highest **even number** in the array. ``` public static void main(String[] args) { int[] a = new int[]{10, 46, 78, 32, 3, 80, 92, 11, 39, 57}; System.out.println(Arrays.toString(a)); int largest = Integer.MAX_VALUE; for(int number : a) { if(number > largest) { largest = number; } } System.out.println(largest); } ``` The output is: ``` [10, 46, 78, 32, 3, 80, 92, 11, 39, 57] 2147483647 ```<issue_comment>username_1: Don't init `largest` to max int: ``` int largest = Integer.MAX_VALUE; ``` Set it to min int instead: ``` int largest = Integer.MIN_VALUE; ``` Or, as @username_2 suggests, initialize to the first value in the array: ``` int largest = a[0]; ``` And as @Zabusa points out, you want an even number. So improve the if statement so it only triggers on even numbers: ``` if (number > largest && number % 2 == 0) { ``` Upvotes: 2 <issue_comment>username_2: The simplest way (if you are indeed using such small arrays): step 1, sort the array; step 2, print the last element. For larger arrays, the solution provided by username_1 is indeed the better option. Upvotes: -1 <issue_comment>username_3: Works for both positive and negative **even numbers**: ``` int[] a = new int[]{100, 45, 77, 33, 3, 81, 80, 8, 120, 100,-2}; int maxvalue = Integer.MIN_VALUE; for (int anA : a) { if (anA % 2 == 0) { if (anA >= maxvalue) maxvalue = anA; } } ``` Upvotes: 0 <issue_comment>username_4: An alternative and simple solution using Streams: ``` int[] a = new int[] { 10, 46, 78, 32, 3, 80, 92, 11, 39, 57 }; System.out.println(Arrays.toString(a)); int largest = Arrays.stream(a).filter((i) -> i % 2 == 0).max().getAsInt(); System.out.println(largest); ``` An efficient and quick approach. Should there be no even number in the list, the `getAsInt()` will throw a `NoSuchElementException` which can be easily caught and handled in any way you like.
Instead of `getAsInt()` you could also use a `orElse(-1)` or similar if you do not want to work with an exception but want to have a defined value that signals that your input was probably malformed. Upvotes: 1 <issue_comment>username_5: Explanation ----------- Your code has **two problems**. The first is, as others have already pointed out, that you start with `Integer.MAX_VALUE` as initial guess, which is the wrong logic. You need to use the worst possible *largest value* as an initial guess. Otherwise your elements will always be smaller, and thus your initial guess is the biggest element. That's why we start by guessing `MIN_VALUE` as largest element. Since the elements of the array can then only get larger. Just play the algorithm on paper for a small example like `{1, 2}` and you see why that makes sense. The second problem is that you are actually considering **all values**, but you only wanted to consider the **even values**. We easily fix that by **skipping** all **odd values**. --- Code ---- Here is your code with both fixes: ``` public static void main(String[] args) { int[] a = new int[]{10, 46, 78, 32, 3, 80, 92, 11, 39, 57}; System.out.println(Arrays.toString(a)); // Start with lowest value as initial guess int largest = Integer.MIN_VALUE; for (int number : a) { // Skip number if odd if (number % 2 == 1) { continue; } // Now we only consider and collect even numbers if (number > largest) { // Update the current guess largest = number; } } // We now considered all elements, the guess is // final and correct. // And also even since we skipped odd values. System.out.println(largest); } ``` --- Notes ----- If the array does not contain any even number, then the output will be `Integer.MIN_VALUE`, you might consider this special case and catch it with some `if` clause. Others suggest using an element of the array as initial guess. Since you only want even values, you may only consider using **even values** of the array for this initial guess. 
Otherwise, if the array does not contain even values, you would output an odd number again. You could use a general and compact `Stream` solution as an alternative to a custom method: ``` int maxEven = Arrays.stream(a) .filter(n -> n % 2 == 0) // Only even values .max() // OptionalInt .orElse(-1); // Gets the value or uses -1 if not present ``` Upvotes: 2 <issue_comment>username_6: If you have access to Java 8 and higher, I'd definitely check out the [Streams API](https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html). You can see your array of integers as a stream of values. With that stream, you can access high-level functions such as min, max and filter. There are many advantages to using Streams in Java, such as the one mentioned below. > > The Java 8 Streams can be seen as lazily constructed Collections, where the values are computed when user demands for it. Actual Collections behave absolutely opposite to it and they are set of eagerly computed values (no matter if the user demands for a particular value or not). 
> > > You can continue reading [here](https://dzone.com/articles/understanding-java-8-streams-1) So, to answer your initial question on how to retrieve your highest value in your array, here's how you could do it : // snippet of only calculating the max ``` int maxValue = Arrays.stream(arrayOfVals) .mapToInt(v -> v) .filter(val -> val%2 == 0) .max() .orElse(-1); ``` Upvotes: 1 <issue_comment>username_7: Try this code: ``` public static void main (String [] args) { int [] arr = {31,-2,3,-4,9,3,8,11,7}; int maxEven, firstEven = 1; for (int i = 0; i < arr.length; i++) if(arr [i]%2 == 0) { firstEven = arr [i]; break; } maxEven = firstEven; for (int j = 0; j < arr.length; j++) if (arr [j]%2 == 0 && arr [j]>maxEven) maxEven = arr [j]; if (maxEven == 1) System.out.println ("No even numbers in this array!"); else System.out.println ("The maximum even number is: "+maxEven); } ``` Upvotes: 0 <issue_comment>username_8: I tried a different approach to this problem by using ternary operator. Although I know, nested ternary operators are bad for performance. We can use simple If/Else conditions as well there. A little experiment never hurts. Just never stop learning new things in your Life. My working code: ``` public static void main(String[] args) { int[] numArray = {1, 121, 9, -1024, -30001, 5, 41, 181, 91}; System.out.println("Max Even No is :: "+getMaxEvenNum(numArray)); } public static int getMaxEvenNum(int[] numArray) { int maxNo = Integer.MIN_VALUE; for(int i=0; i< numArray.length; i++) { for(int j=i+1; j< numArray.length; j++) { int maxOf2 = numArray[i]>numArray[j] ? (numArray[i]%2 == 0 ? numArray[i] : maxNo) : (numArray[j]%2 == 0 ? numArray[j] : maxNo); maxNo = maxNo>maxOf2 ? maxNo : maxOf2; } } return maxNo; } ``` It will give the following output: ```none Max Even No is :: -1024 ``` If no even number is present, then it will give the lowest integer value. Upvotes: 0
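As a cross-language footnote to the stream-based answers, the same filter-then-max idea, with an explicit default for the "no even number" case, is a one-liner in Python:

```python
def max_even(nums, default=None):
    # max() over a generator that keeps only even values; `default`
    # covers the "no even number" case discussed in the answers above.
    return max((n for n in nums if n % 2 == 0), default=default)

print(max_even([10, 46, 78, 32, 3, 80, 92, 11, 39, 57]))  # -> 92
```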
2018/03/14
2,683
9,231
<issue_start>username_0: I have followed the official docs to integrate Firebase Crashlytics: <https://firebase.google.com/docs/crashlytics/get-started> It works fine for the debug build and I am getting crashes in the console as well, but I get the error below while generating a signed APK. ``` Warning:com.twitter.sdk.android.core.internal.scribe.ScribeFilesSender$ConfigRequestInterceptor: can't find referenced method 'java.lang.String getDeviceUUID()' in program class io.fabric.sdk.android.services.common.IdManager Information:See complete output in console Warning:there were 1 unresolved references to program class members. Information:3 warnings Error:Execution failed for task ':app:transformClassesAndResourcesWithProguardForLivedemoRelease'. > Job failed, see logs for details Information:1 error Information:BUILD FAILED in 10s Warning:Exception while processing task java.io.IOException: Please correct the above warnings first. ``` app level build.gradle ``` apply plugin: 'com.android.application' android { compileSdkVersion 27 buildToolsVersion "27.0.3" defaultConfig { ---- minSdkVersion 16 targetSdkVersion 25 versionCode 5885 testInstrumentationRunner "android.support.test.runner.AndroidJUnitRunner" vectorDrawables.useSupportLibrary = true multiDexEnabled true buildConfigField 'boolean', 'DIALER_WITH_RECENT', 'false' } dexOptions { javaMaxHeapSize "4g" } buildTypes { release { debuggable false minifyEnabled true proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro' } } flavorDimensions "prod" productFlavors { prod { dimension "prod" ..... } livedemo { dimension "prod" ..... } stage { dimension "prod" ..... } stage2 { dimension "prod" ....
} } } dependencies { implementation fileTree(include: ['*.jar'], dir: 'libs') /*androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', { exclude group: 'com.android.support', module: 'support-annotations' })*/ //bluetooth //compile 'com.estimote:sdk:0.11.1@aar'//Beacon-code /*androidTestCompile 'com.android.support.test.espresso:espresso-contrib:2.2.2', { exclude group: 'com.android.support', module: 'support-annotations' exclude group: 'com.android.support', module: 'support-v4' exclude group: 'com.android.support', module: 'design' exclude group: 'com.android.support', module: 'recyclerview-v7' }*/ //compile 'com.android.support.constraint:constraint-layout:1.0.2' /*compile 'com.google.firebase:firebase-messaging:10.0.1'*/ //facebook login //twitter login implementation('com.twitter.sdk.android:twitter-core:1.6.6@aar') { transitive = true } implementation('com.twitter.sdk.android:twitter:1.13.1@aar') { transitive = true } implementation project(':libraryHashTag') implementation files('libs/SQLiteStudioRemote.jar') implementation project(':SwipeMenuLibCustom') implementation files('libs/jtar-1.1.jar') implementation 'com.android.support:appcompat-v7:27.1.0' implementation 'com.android.support:recyclerview-v7:27.1.0' implementation 'com.android.support:cardview-v7:27.1.0' implementation 'com.android.support:design:27.1.0' implementation 'com.squareup.retrofit2:retrofit:2.3.0' implementation 'com.squareup.retrofit2:converter-gson:2.3.0' implementation 'com.squareup.okhttp3:logging-interceptor:3.9.1' implementation 'com.github.JakeWharton:ViewPagerIndicator:2.4.1' implementation 'com.github.bumptech.glide:glide:4.5.0' implementation 'com.theartofdev.edmodo:android-image-cropper:2.6.0' implementation 'com.google.android.gms:play-services-safetynet:11.8.0' implementation 'com.github.clans:fab:1.6.4' implementation 'org.ocpsoft.prettytime:prettytime:4.0.1.Final' implementation 'com.github.chrisbanes:PhotoView:2.1.3' implementation 
'com.google.android.gms:play-services-analytics:11.8.0' implementation 'com.google.android.gms:play-services-location:11.8.0' implementation 'com.google.android.gms:play-services-base:11.8.0' implementation 'com.github.kenglxn.QRGen:android:2.4.0' implementation 'com.journeyapps:zxing-android-embedded:3.5.0' implementation 'com.facebook.android:facebook-android-sdk:4.28.0' implementation 'com.android.support:multidex:1.0.3' implementation 'com.android.support.constraint:constraint-layout:1.0.2' implementation 'com.google.android.gms:play-services-places:11.8.0' implementation 'com.google.android.gms:play-services-maps:11.8.0' implementation 'com.splitwise:tokenautocomplete:2.0.8' implementation 'com.google.firebase:firebase-messaging:11.8.0' implementation 'com.google.android.gms:play-services-vision:11.8.0' implementation 'com.github.castorflex.smoothprogressbar:library:1.1.0' /*For Crashlytics*/ releaseImplementation('com.crashlytics.sdk.android:crashlytics:2.9.1@aar') { transitive = true } compile 'com.google.firebase:firebase-core:11.8.0' testCompile 'junit:junit:4.12' } apply plugin: 'com.google.gms.google-services' apply plugin: 'io.fabric' ``` Proguard rules file ``` # Add project specific ProGuard rules here. # By default, the flags in this file are appended to flags specified # in D:\Android\sdk/tools/proguard/proguard-android.txt # You can edit the include path and order by changing the proguardFiles # directive in build.gradle. 
# # For more details, see # http://developer.android.com/guide/developing/tools/proguard.html # Add any project specific keep options here: # If your project uses WebView with JS, uncomment the following # and specify the fully qualified class name to the JavaScript interface # class: #-keepclassmembers class fqcn.of.javascript.interface.for.webview { # public *; #} -dontwarn javax.annotation.** -dontwarn com.squareup.okhttp3.** -keep class com.squareup.okhttp3.** { *; } -keep interface com.squareup.okhttp3.* { *; } -dontwarn javax.annotation.Nullable -dontwarn javax.annotation.ParametersAreNonnullByDefault -dontwarn javax.annotation.GuardedBy -keep public class * implements com.bumptech.glide.module.GlideModule -keep public enum com.bumptech.glide.load.resource.bitmap.ImageHeaderParser$** { **[] $VALUES; public *; } -dontwarn okio.** -dontwarn org.apache.lang.** -dontwarn org.joda.time.** -dontwarn org.w3c.dom.** -dontwarn com.viewpagerindicator.** -keep class android.support.v4.** { *; } -dontnote android.support.v4.** -dontwarn retrofit2.Platform$Java8 -keep class android.support.v4.app.** { *; } -keep interface android.support.v4.app.** { *; } -keep class android.support.v7.app.** { *; } -keep interface android.support.v7.app.** { *; } -keep class android.support.v7.widget.SearchView { *; } -keep class org.ocpsoft.prettytime.** -keep class com.estimote.sdk.* { *; } -keep interface com.estimote.sdk.* { *; } -dontwarn com.estimote.sdk.** -dontwarn com.beloo.widget.chipslayoutmanager.Orientation -keep class com.beloo.widget.chipslayoutmanager.* { *; } -keep class com.beloo.widget.chipslayoutmanager.** { *; } -keep class com.beloo.widget.chipslayoutmanager.*$* { *; } -keep class RestrictTo.* -keep class RestrictTo.** -keep class RestrictTo.*$* -keep class org.ocpsoft.prettytime.i18n.** -keepclassmembers class android.support.design.internal.BottomNavigationMenuView { boolean mShiftingMode; } -keep class com.crashlytics.** { *; } -dontwarn com.crashlytics.** 
-keepattributes SourceFile,LineNumberTable -keep public class * extends java.lang.Exception #-renamesourcefileattribute SourceFile #-keepattributes SourceFile,LineNumberTable #-printmapping mapping.txt #-keepresourcexmlelements manifest/application/meta-data@value=GlideModule ```<issue_comment>username_1: To skip running ProGuard on Crashlytics, just add the following to your ProGuard config file. ``` -keep class com.crashlytics.** { *; } -dontwarn com.crashlytics.** ``` Next, in order to provide the most meaningful crash reports, add the following line to your configuration file: ``` -keepattributes SourceFile,LineNumberTable ``` Crashlytics will still function without this rule, but your crash reports will not include proper file names or line numbers. If you are using custom exceptions, add this line so that custom exception types are skipped during obfuscation: ``` -keep public class * extends java.lang.Exception ``` Upvotes: 1 <issue_comment>username_2: Mike from Fabric here. You're using versions of Twitter's SDK that are no longer supported on Fabric. Specifically: ``` //twitter login implementation('com.twitter.sdk.android:twitter-core:1.6.6@aar') { transitive = true } implementation('com.twitter.sdk.android:twitter:1.13.1@aar') { transitive = true } ``` You should update to [Twitter's](https://github.com/twitter/twitter-kit-android/wiki/Getting-Started) new SDK which is on version 3.x. Upvotes: 3 [selected_answer]
2018/03/14
306
1,123
<issue_start>username_0: I have a csv file with characters like `Cité`, but after making the insert into the DB, I see `Cit¿` instead. I open the file as a `BufferedReader`, but I don't know how to do it in `UTF-8`: ``` BufferedReader br = new BufferedReader(new FileReader(csvFile)); ```<issue_comment>username_1: You can use `FileInputStream`: ``` BufferedReader in = new BufferedReader( new InputStreamReader( new FileInputStream(fileDir), "UTF8")); ``` Upvotes: 2 <issue_comment>username_2: You *could* explicitly use a `FileInputStream` and an `InputStreamReader` with `StandardCharsets.UTF_8`, but it's probably simpler to use [`Files.newBufferedReader`](https://docs.oracle.com/javase/9/docs/api/java/nio/file/Files.html#newBufferedReader-java.nio.file.Path-): ``` Path path = Paths.get(csvFile); try (BufferedReader reader = Files.newBufferedReader(path)) { // Use the reader } ``` It's worth getting to know the [`Files`](https://docs.oracle.com/javase/9/docs/api/java/nio/file/Files.html) class as it has a bunch of convenience methods like this. Upvotes: 3 [selected_answer]
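The default-charset pitfall is not unique to Java; whatever the language, it is safest to state the encoding explicitly. A small Python round-trip illustrating the same point (the file name is arbitrary):

```python
import os
import tempfile

# Write and read back a file with an explicit encoding. Relying on the
# platform default (as Java's FileReader does) is what turns "Cité"
# into mojibake like "Cit¿".
path = os.path.join(tempfile.mkdtemp(), "cities.csv")

with open(path, "w", encoding="utf-8") as f:
    f.write("Cité\n")

with open(path, encoding="utf-8") as f:
    print(f.read().strip())  # -> Cité
```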
2018/03/14
229
877
<issue_start>username_0: When I visit mycompany.com in any browser except Chrome, it successfully redirects to <https://www.mycompany.com>. However, in Chrome I just get a 404, as do all Chrome users. In web.config I have this redirect rule ``` ``` which I think is what makes it work in other browsers. I've tried adding an HTTP redirect DNS record, but that doesn't seem to have helped.<issue_comment>username_1: I don't really have an explanation; I'm using this myself. ``` ``` They look very similar, but I have not seen problems in redirection to HTTPS. Maybe it's something else that's causing the 404? Upvotes: 1 <issue_comment>username_2: 1. Open an incognito tab and verify that it redirects to the URL you want. 2. Close the incognito tab. 3. Clear Google Chrome's cache. It should then redirect you without having to use incognito mode. Upvotes: 3 [selected_answer]
2018/03/14
250
937
<issue_start>username_0: How does yum work internally? Does yum shell out to use the rpm executable when actually manipulating rpm files, or does it implement its own rpm handling code? (Or does it use a static or shared rpm library for dealing with rpm files?)<issue_comment>username_1: It seems yum is a Python implementation building on rpm-python. You can deduce such things by looking at the rpm requirements:

```
rpm -q yum --requires
```

gives:

```
...
rpm-python
...
```

which is what led me to that conclusion. Also, looking at the `/usr/bin/yum` file:

```
file /usr/bin/yum
```

gives

```
/usr/bin/yum: Python script, ASCII text executable
```

Upvotes: 1 <issue_comment>username_2: After obtaining the source to yum and rpm, I found out the following: yum is implemented in Python and uses the rpm-python package for rpm access. Both rpm and yum ultimately use the librpm.so shared library for RPM package management at the low level. Upvotes: 0
2018/03/14
5,528
17,628
<issue_start>username_0: I have following code snippet: ``` public class ConditionTest { public static final ReentrantLock reentrantLock = new ReentrantLock(); public static final Condition CONDITION_PRODUCED = reentrantLock.newCondition(); public static final Condition CONDITION_RECEIVED = reentrantLock.newCondition(); public static void main(String[] args) throws InterruptedException { Thread receiverThread = new Thread(() -> { for (int i = 0; i < 10; i++) { reentrantLock.lock(); try { CONDITION_PRODUCED.await(); System.out.println("Received"); CONDITION_RECEIVED.signal(); } catch (InterruptedException e) { e.printStackTrace(); } reentrantLock.unlock(); } }); Thread senderThread = new Thread(() -> { for (int i = 0; i < 10; i++) { reentrantLock.lock(); if (i != 0) { try { CONDITION_RECEIVED.await(); } catch (InterruptedException e) { e.printStackTrace(); } } System.out.println("Produced"); CONDITION_PRODUCED.signal(); reentrantLock.unlock(); } }); receiverThread.setName("received"); senderThread.setName("Producer"); receiverThread.start(); Thread.sleep(500); senderThread.start(); } } ``` sometimes it works correctly and I see expected output. 
But sometimes it works wrong and hangs after printing: ``` Produced Received ``` thread dump: ``` 2018-03-14 14:47:58 Full thread dump Java HotSpot(TM) 64-Bit Server VM (25.111-b14 mixed mode): "JMX server connection timeout 18" #18 daemon prio=5 os_prio=0 tid=0x000000001e149800 nid=0x16ac in Object.wait() [0x000000001fb1f000] java.lang.Thread.State: TIMED_WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x000000076c76d070> (a [I) at com.sun.jmx.remote.internal.ServerCommunicatorAdmin$Timeout.run(ServerCommunicatorAdmin.java:168) - locked <0x000000076c76d070> (a [I) at java.lang.Thread.run(Thread.java:745) Locked ownable synchronizers: - None "RMI Scheduler(0)" #17 daemon prio=5 os_prio=0 tid=0x000000001e15d000 nid=0x2f3c waiting on condition [0x000000001fa1e000] java.lang.Thread.State: TIMED_WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x000000076c438500> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093) at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809) at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Locked ownable synchronizers: - None "RMI TCP Connection(1)-192.168.56.1" #16 daemon prio=5 os_prio=0 tid=0x000000001e3b8800 nid=0x1674 runnable [0x000000001f91e000] java.lang.Thread.State: RUNNABLE at java.net.SocketInputStream.socketRead0(Native Method) at 
java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at java.net.SocketInputStream.read(SocketInputStream.java:170) at java.net.SocketInputStream.read(SocketInputStream.java:141) at java.io.BufferedInputStream.fill(BufferedInputStream.java:246) at java.io.BufferedInputStream.read(BufferedInputStream.java:265) - locked <0x000000076c6f8d18> (a java.io.BufferedInputStream) at java.io.FilterInputStream.read(FilterInputStream.java:83) at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:550) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:826) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.lambda$run$0(TCPTransport.java:683) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler$$Lambda$3/342486007.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:682) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Locked ownable synchronizers: - <0x000000076c469b90> (a java.util.concurrent.ThreadPoolExecutor$Worker) "RMI TCP Accept-0" #15 daemon prio=5 os_prio=0 tid=0x000000001e065800 nid=0x20e8 runnable [0x000000001f71f000] java.lang.Thread.State: RUNNABLE at java.net.DualStackPlainSocketImpl.accept0(Native Method) at java.net.DualStackPlainSocketImpl.socketAccept(DualStackPlainSocketImpl.java:131) at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:409) at java.net.PlainSocketImpl.accept(PlainSocketImpl.java:199) - locked <0x000000076c4402e8> (a java.net.SocksSocketImpl) at java.net.ServerSocket.implAccept(ServerSocket.java:545) at java.net.ServerSocket.accept(ServerSocket.java:513) at sun.management.jmxremote.LocalRMIServerSocketFactory$1.accept(LocalRMIServerSocketFactory.java:52) at 
sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop(TCPTransport.java:400) at sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run(TCPTransport.java:372) at java.lang.Thread.run(Thread.java:745) Locked ownable synchronizers: - None "DestroyJavaVM" #13 prio=5 os_prio=0 tid=0x0000000002d1b000 nid=0x2c54 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE Locked ownable synchronizers: - None "Producer" #12 prio=5 os_prio=0 tid=0x000000001e3d3800 nid=0x24b0 waiting on condition [0x000000001ef1e000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x000000076b80c210> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at com.cryptex.fix.performance.ConditionTest.lambda$main$1(ConditionTest.java:32) at com.cryptex.fix.performance.ConditionTest$$Lambda$2/326549596.run(Unknown Source) at java.lang.Thread.run(Thread.java:745) Locked ownable synchronizers: - None "received" #11 prio=5 os_prio=0 tid=0x000000001e3d2000 nid=0x1a18 waiting on condition [0x000000001ee1e000] java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x000000076b80c1f8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at com.cryptex.fix.performance.ConditionTest.lambda$main$0(ConditionTest.java:18) at com.cryptex.fix.performance.ConditionTest$$Lambda$1/1642360923.run(Unknown Source) at java.lang.Thread.run(Thread.java:745) Locked ownable synchronizers: - None "Service Thread" #10 daemon prio=9 os_prio=0 tid=0x000000001e050800 nid=0x2a10 runnable 
[0x0000000000000000] java.lang.Thread.State: RUNNABLE Locked ownable synchronizers: - None "C1 CompilerThread2" #9 daemon prio=9 os_prio=2 tid=0x000000001df1d800 nid=0x2ea8 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE Locked ownable synchronizers: - None "C2 CompilerThread1" #8 daemon prio=9 os_prio=2 tid=0x000000001df1d000 nid=0x136c waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE Locked ownable synchronizers: - None "C2 CompilerThread0" #7 daemon prio=9 os_prio=2 tid=0x000000001df1c000 nid=0x2a98 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE Locked ownable synchronizers: - None "Monitor Ctrl-Break" #6 daemon prio=5 os_prio=0 tid=0x000000001e023800 nid=0x2f58 runnable [0x000000001e81e000] java.lang.Thread.State: RUNNABLE at java.net.SocketInputStream.socketRead0(Native Method) at java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at java.net.SocketInputStream.read(SocketInputStream.java:170) at java.net.SocketInputStream.read(SocketInputStream.java:141) at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284) at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326) at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178) - locked <0x000000076b91a460> (a java.io.InputStreamReader) at java.io.InputStreamReader.read(InputStreamReader.java:184) at java.io.BufferedReader.fill(BufferedReader.java:161) at java.io.BufferedReader.readLine(BufferedReader.java:324) - locked <0x000000076b91a460> (a java.io.InputStreamReader) at java.io.BufferedReader.readLine(BufferedReader.java:389) at com.intellij.rt.execution.application.AppMainV2$1.run(AppMainV2.java:64) Locked ownable synchronizers: - None "Attach Listener" #5 daemon prio=5 os_prio=2 tid=0x000000001c4db800 nid=0x2f48 waiting on condition [0x0000000000000000] java.lang.Thread.State: RUNNABLE Locked ownable synchronizers: - None "Signal Dispatcher" #4 daemon prio=9 os_prio=2 tid=0x000000001c4da000 nid=0x10f8 
runnable [0x0000000000000000] java.lang.Thread.State: RUNNABLE Locked ownable synchronizers: - None "Finalizer" #3 daemon prio=8 os_prio=1 tid=0x000000001c4c0000 nid=0x228c in Object.wait() [0x000000001d82f000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x000000076b508e98> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:143) - locked <0x000000076b508e98> (a java.lang.ref.ReferenceQueue$Lock) at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:164) at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:209) Locked ownable synchronizers: - None "Reference Handler" #2 daemon prio=10 os_prio=2 tid=0x0000000002e07000 nid=0x1204 in Object.wait() [0x000000001d72f000] java.lang.Thread.State: WAITING (on object monitor) at java.lang.Object.wait(Native Method) - waiting on <0x000000076b506b40> (a java.lang.ref.Reference$Lock) at java.lang.Object.wait(Object.java:502) at java.lang.ref.Reference.tryHandlePending(Reference.java:191) - locked <0x000000076b506b40> (a java.lang.ref.Reference$Lock) at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153) Locked ownable synchronizers: - None "VM Thread" os_prio=2 tid=0x000000001c497800 nid=0x2784 runnable "GC task thread#0 (ParallelGC)" os_prio=0 tid=0x0000000002d2e800 nid=0x848 runnable "GC task thread#1 (ParallelGC)" os_prio=0 tid=0x0000000002d30800 nid=0x20cc runnable "GC task thread#2 (ParallelGC)" os_prio=0 tid=0x0000000002d32000 nid=0x1c5c runnable "GC task thread#3 (ParallelGC)" os_prio=0 tid=0x0000000002d33800 nid=0x20bc runnable "VM Periodic Task Thread" os_prio=2 tid=0x000000001e13d000 nid=0x2dc waiting on condition JNI global references: 350 ``` What do I wrong?<issue_comment>username_1: The problem happens when the producer thread generates a signal for the condition `CONDITION_RECEIVED` *before* the consumer thread has started to wait for signals on this condition 
(`CONDITION_PRODUCED.await()`). In this case, the signal is "lost" and both threads end up waiting for each other. You could handle this situation with `boolean` flags shared by both threads, but I wouldn't advise it: it would be error-prone, difficult to read and debug, and hard to extend, because producers and consumers would be tightly coupled. Usual producer/consumer patterns involve *queues*, which give you an easier-to-understand, more loosely coupled and extensible design.

```
BlockingQueue<Integer> produced = new LinkedBlockingQueue<>();
BlockingQueue<Integer> requests = new LinkedBlockingQueue<>();

Thread receiverThread = new Thread(() -> {
    requests.offer(1);
    for (int received = 1; received <= 10; received++) {
        try {
            int i = produced.take();
            System.out.println("Received: " + i);
            requests.offer(i + 1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // do not swallow the interruption
            return;
        }
    }
});

Thread senderThread = new Thread(() -> {
    while (true) {
        try {
            int i = requests.take();
            System.out.println("Produced: " + i);
            produced.offer(i);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // do not swallow the interruption
            return;
        }
    }
});
```

Upvotes: 0 <issue_comment>username_2: From *Java Concurrency in Practice*:

> **Condition wait errors.** When waiting on a condition queue, Object.wait or Condition.await should be called in a loop, with the appropriate lock held, after testing some state predicate (see Chapter 14). Calling Object.wait or Condition.await without the lock held, not in a loop, or without testing some state predicate is almost certainly an error.

Since you don't do this, you're likely experiencing a missed signal or, less likely but also possible, a spurious wakeup, which throws off the syncing between the two threads.
A possible correction would be ``` public class ConditionTest { public static final ReentrantLock reentrantLock = new ReentrantLock(); public static final Condition CONDITION_PRODUCED = reentrantLock.newCondition(); public static final Condition CONDITION_RECEIVED = reentrantLock.newCondition(); private static boolean state = true; public static void main(String[] args) throws InterruptedException { Thread receiverThread = new Thread(() -> { try { for (int i = 0; i < 10; i++) { reentrantLock.lock(); while (state) { CONDITION_PRODUCED.await(); } state = true; System.out.println("Received"); CONDITION_RECEIVED.signal(); reentrantLock.unlock(); } } catch (InterruptedException e) { Thread.currentThread().interrupt(); } }); Thread senderThread = new Thread(() -> { try { for (int i = 0; i < 10; i++) { reentrantLock.lock(); while (!state) { CONDITION_RECEIVED.await(); } state = false; System.out.println("Produced"); CONDITION_PRODUCED.signal(); reentrantLock.unlock(); } } catch (InterruptedException e) { Thread.currentThread().interrupt(); } }); receiverThread.setName("received"); senderThread.setName("Producer"); receiverThread.start(); senderThread.start(); } } ``` Upvotes: 1 <issue_comment>username_3: Never call `wait()/await()` without loop! @username_2's solution doesn't follow the typical order of producer/consumer pattern. 
``` public class ConditionTest { public static final ReentrantLock reentrantLock = new ReentrantLock(); public static final Condition CONDITION_PRODUCED = reentrantLock.newCondition(); public static final Condition CONDITION_RECEIVED = reentrantLock.newCondition(); private static boolean dataAvailale = false; public static void main(String[] args) throws InterruptedException { Thread receiverThread = new Thread(() -> { for (int i = 0; i < 100; i++) { reentrantLock.lock(); try { while (!dataAvailale){ CONDITION_PRODUCED.await(); } dataAvailale = false; System.out.println("Received " + i); CONDITION_RECEIVED.signal(); } catch (InterruptedException e) { e.printStackTrace(); } reentrantLock.unlock(); } }); Thread senderThread = new Thread(() -> { for (int i = 0; i < 100; i++) { reentrantLock.lock(); while (dataAvailale) { try { CONDITION_RECEIVED.await(); } catch (InterruptedException e) { e.printStackTrace(); } } dataAvailale = true; System.out.println("Produced " + i); CONDITION_PRODUCED.signal(); reentrantLock.unlock(); } }); receiverThread.setName("received"); senderThread.setName("Producer"); receiverThread.start(); Thread.sleep(50); senderThread.start(); } } ``` Upvotes: 0
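The predicate-guarded wait pattern in the answers above is language-independent. As an illustration (not the poster's Java code), the same producer/consumer handshake can be written with Python's `threading.Condition`. Note the `while` loops around `wait()`, which make the code immune to both missed signals and spurious wakeups:

```python
import threading

cond = threading.Condition()
data_available = False   # the shared state predicate
received = []

def consumer():
    global data_available
    for i in range(10):
        with cond:                     # acquire the lock
            while not data_available:  # always wait inside a predicate loop
                cond.wait()
            data_available = False
            received.append(i)
            cond.notify()              # wake the producer

def producer():
    global data_available
    for _ in range(10):
        with cond:
            while data_available:
                cond.wait()
            data_available = True
            cond.notify()              # wake the consumer

t1 = threading.Thread(target=consumer)
t2 = threading.Thread(target=producer)
t2.start(); t1.start()
t1.join(); t2.join()
print(received)
```

Because each thread re-checks the predicate after waking, a signal delivered before the other side reaches `wait()` is never lost: the late arriver simply sees the predicate already satisfied and skips the wait. This program completes regardless of which thread starts first.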
2018/03/14
1,332
6,028
<issue_start>username_0: Techies, I have a requirement where I have to add two class names in ngClass: one with a condition and another as a normal class. Existing code:

```
|
```

O/P html should be:

```
|
```

In the above code, I have to add "col-md-1" inside ngClass. How can I do this? Thanks, Arun
2018/03/14
1,013
2,609
<issue_start>username_0:

```
data = """
abcd1 1
abcd2 2
abcd3 3
abcd4 4
abcd5 5
abcd6 6
abcd7 7
abcd8 8
abcd9 9
.
.
.
abcd256 1
abcd257 2
abcd258 3
abcd259 4
abcd260 5
abcd261 6
abcd262 7
abcd263 8
abcd264 9
"""
```

If abcd1, then get value 1; if abcd2, then get value 2, and so on. If abcd256, then get value 1; if abcd257, then get value 2. The value must be in 1 to 255. I also need to check whether a string already exists in the data variable. I have used the code below:

```
check = set()
for line in data.split("\n"):
    if len(line.split()) > 1:
        line = line.strip()
        check.add(line.split()[0])

if not "abcd264" in check:
    print "Not exist"
    value = 9  # Help required to get value here
else:
    print "Its already exist. Program exit"
    sys.exit()
```

It was suggested to use Pandas in another post, but I need to implement this without Pandas.<issue_comment>username_1: You are better off vectorising your manipulations via a library such as `pandas`. Here is an example:

```
from io import StringIO

import pandas as pd

mystr = """abcd1 1
abcd2 2
abcd3 3
abcd4 4
abcd5 5
abcd6 6
abcd7 7
abcd8 8
abcd9 9
"""

df = pd.read_csv(StringIO(mystr), delim_whitespace=True, header=None)
df['idx'] = df[0].str[4:].astype(int)
res = set(df.loc[df['idx'] <= 5, 1])  # {1, 2, 3, 4, 5}
```

Upvotes: 0 <issue_comment>username_2: If you wish to do it in pure Python, you can try doing it this way:

```
data = """
abcd1 1
abcd2 2
abcd3 3
abcd4 4
abcd5 5
abcd6 6
abcd7 7
abcd8 8
abcd9 9
abcd256 1
abcd257 2
abcd258 3
abcd259 4
abcd260 5
abcd261 6
abcd262 7
abcd263 8
abcd264 9
"""

lines = [line.strip() for line in data.split("\n") if line.strip()]
for d in lines:
    number = int(d.split()[0][4:])
    print("For number %d the result is: %d" % (number, number % 255))
```

Output:

```
For number 1 the result is: 1
For number 2 the result is: 2
For number 3 the result is: 3
For number 4 the result is: 4
For number 5 the result is: 5
For number 6 the result is: 6
For number 7 the result is: 7
For number 8 the result is: 8
For number 9 the result is: 9
For number 256 the result is: 1
For number 257 the result is: 2
For number 258 the result is: 3
For number 259 the result is: 4
For number 260 the result is: 5
For number 261 the result is: 6
For number 262 the result is: 7
For number 263 the result is: 8
For number 264 the result is: 9
```

Upvotes: 3 [selected_answer]
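An alternative to deriving the value arithmetically, still without pandas, is to parse the text into a dictionary once and then do membership tests and lookups directly. A minimal sketch; the shortened `data` literal here is illustrative, not the full input:

```python
data = """abcd1 1
abcd2 2
abcd3 3
abcd256 1
abcd257 2"""

mapping = {}
for line in data.splitlines():
    parts = line.split()
    if len(parts) == 2:              # skip blank or malformed lines
        mapping[parts[0]] = int(parts[1])

print("abcd264" in mapping)    # False: not present in this sample
print(mapping.get("abcd256"))  # 1
```

This keeps the existence check and the value retrieval in one structure, instead of a separate `set` plus arithmetic.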
2018/03/14
1,060
3,525
<issue_start>username_0: I would like to set the default date to be 3 days from today's date when the date picker opens up in the browser. How can I achieve that?

```
```<issue_comment>username_1: You need to set the defaultValue attribute of the date input as `yyyy-mm-dd`, like so:

```
const today = new Date();
const numberOfDaysToAdd = 3;
const date = today.setDate(today.getDate() + numberOfDaysToAdd);
const defaultValue = new Date(date).toISOString().split('T')[0] // yyyy-mm-dd
```

Here is a working example: <https://codesandbox.io/s/gracious-christian-22czv?file=/src/App.js:326-346>

### 2022 Answer

You can use [toLocaleDateString](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toLocaleDateString) with a locale to get the date string in `yyyy-mm-dd` format.

```js
class App extends React.Component {
  render() {
    const date = new Date();
    const futureDate = date.getDate() + 3;
    date.setDate(futureDate);
    const defaultValue = date.toLocaleDateString('en-CA');
    return ( );
  }
}

ReactDOM.render( , document.body );
```

```html
```

Upvotes: 4 <issue_comment>username_2: You need to convert the date to an ISO string and take the first 10 characters, e.g.

```js
var curr = new Date();
curr.setDate(curr.getDate() + 3);
var date = curr.toISOString().substring(0,10);
```

Reference the [toISOString](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString) method.
Also you can see the result [here](https://codesandbox.io/s/18m28o0y03) Upvotes: 5 <issue_comment>username_3: 2022 Answer =========== ### using `jsx` ``` import { useState } from 'react' const Component = () => { let defaultDate = new Date() defaultDate.setDate(defaultDate.getDate() + 3) const [date, setDate] = useState(defaultDate) const onSetDate = (event) => { setDate(new Date(event.target.value)) } return ( <> date: {date.toString()} date: {date.toLocaleDateString('en-CA')} ) } export default Component ``` ### using `tsx` ``` import { useState } from "react" const Component = (): JSX.Element => { let defaultDate = new Date() defaultDate.setDate(defaultDate.getDate() + 3) const [date, setDate] = useState(defaultDate) const onSetDate = (event: React.ChangeEvent): void => { setDate(new Date(event.target.value)) } return ( <> date: {date.toString()} date: {date.toLocaleDateString('en-CA')} ) } export default Component ``` Upvotes: 1 <issue_comment>username_4: Create a helper function that returns the correct format of the which is `yyyy-mm-dd`. Let's define a function: ```js const getCurrentDateInput = () => { const dateObj = new Date(); // get the month in this format of 04, the same for months const month = ("0" + (dateObj.getMonth() + 1)).slice(-2); const day = ("0" + dateObj.getDate()).slice(-2); const year = dateObj.getFullYear(); const shortDate = `${year}-${month}-${day}`; return shortDate; }; ``` Then go back to the input tag and type the following : ``` ``` Upvotes: 2 <issue_comment>username_5: **Declare a state :-** ``` this.state={ recentDate:new Date().toISOString().slice(0,10) } ``` **Declare a function by means of which you can also be able select different dates.** ``` selectedDate(e){ this.setState({recentDate:e.target.value}) } ``` **Call the state into value attribute and selectedDate function in onChange prop.** ``` ``` Upvotes: 0
2018/03/14
1,651
5,455
<issue_start>username_0: I have a class as following : ``` public class OrganizationRoleDTO { private Long organizationId; private String organizationTitle; private String roleId; private String roleTitle; } ``` In my DAO I have a function that will return a list of `OrganizationRoleDTO`, as following : ``` 1, "Organization 1", 1, "Role 1" 1, "Organization 1", 2, "Role 2" 2, "Organization 2", 1, "Role 1" 2, "Organization 2", 3, "Role 3" 3, "Organization 3", 3, "Role 3" ``` What I'm trying to do is to create a new list using the above informations from `OrganizationRoleDTO` list, so the new list will be as following : ``` 1, "Organization 1", [{1, "Role 1"}, {2, "Role 2"}] 1, "Organization 2", [{1, "Role 1"}, {3, "Role 3"}] 1, "Organization 3", [{3, "Role 3"}] ``` What I did here is that I grouped the list by the field `organizationTitle`, and the generated list will be of type `OrganizationDTO`, where `OrganizationDTO` is defined as following: ``` public class OrganizationDTO{ private Long id; private String title; private List rolesList; } ``` And this is the definition of `RoleDTO` : ``` public class RoleDTO { private String title; private Long id; private List profilesList; } ``` This is the code I tried: ``` List organizationRoleList = findOrganizationRoleList(); Map map = organizationRoleList.stream().collect(HashMap::new, (m, t) -> { m.computeIfAbsent(t.getOrganizationTitle(), x -> new OrganizationDTO(t.getOrganizationId(), t.getOrganizationTitle())) .getRolesList() .add(new RoleDTO(t.getRoleId(), t.getRoleTitle(), profileBP.findProfilsByRoleId(t.getRoleId()))); }, (m1, m2) -> { m2.forEach((k, v) -> { OrganizationDTO organizationDTO = m1.get(k); if (organizationDTO != null) { organizationDTO.getRolesList().addAll(v.getRolesList()); } else { m1.put(k, v); } }); }); List list = map.values().stream().collect(Collectors.toList()); ``` This code is working as expected, the only problem is that it's hard to read and to debug (obvious problem of scalability). 
Is there another way to write this?
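Stripped of the stream machinery, what the question asks for is a plain group-by: collapse rows that share an organization and collect their roles. In Java this is the shape that `Collectors.groupingBy` combined with `Collectors.mapping` expresses; the transformation itself is easiest to see in a few lines of Python (literal tuples stand in for the DAO result):

```python
from collections import defaultdict

# (organizationId, organizationTitle, roleId, roleTitle), mirroring the DTO fields
rows = [
    (1, "Organization 1", 1, "Role 1"),
    (1, "Organization 1", 2, "Role 2"),
    (2, "Organization 2", 1, "Role 1"),
    (2, "Organization 2", 3, "Role 3"),
    (3, "Organization 3", 3, "Role 3"),
]

grouped = defaultdict(list)
for org_id, org_title, role_id, role_title in rows:
    grouped[(org_id, org_title)].append((role_id, role_title))

# grouped[(1, "Organization 1")] == [(1, "Role 1"), (2, "Role 2")]
```

Expressing the grouping this way (one pass, one accumulator per organization) is exactly what the hand-rolled `HashMap::new` collector in the question does, which is why a `groupingBy`-style rewrite tends to be both shorter and easier to debug.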
2018/03/14
745
2,574
<issue_start>username_0: I have a login form that checks the password and username entered and, if they are correct, it gives a message and sets the login session to true. But the session doesn't work. When it redirects you to the dashboard you won't be able to visit it, because the value of the session's login key is unknown. So when you enter the username/password correctly, the session is only created for that moment, and when you are redirected to another page your session is gone. ```php <?php session_start(); // Starting session include 'config.php'; $pass = $_POST['pass']; $user = $_POST['user']; if (isset($_POST['user']) and isset($_POST['pass'])){ // Create connection $conn = new mysqli($servername, $username, $password, $dbname); // Check connection if ($conn->connect_error) { die("Connection failed: " . $conn->connect_error); } $result = $link->query("SELECT user FROM users2 WHERE user = '$user'"); if($result->num_rows == 0) { echo ' $(document).ready(function(){ demo.initChartist(); $.notify({ icon: "pe-7s-bell", message: "Username of password was wrong. Please try again." },{ type: "info", timer: 4000 }); }); '; } elseif ($result->num_rows == 1){ $userpass = $link->query("SELECT pass FROM users2 WHERE user = '$user'"); $row = $userpass->fetch_assoc(); $userpasss = $row["pass"]; if ($pass == $userpasss){ echo ' $(document).ready(function(){ demo.initChartist(); $.notify({ icon: "pe-7s-bell", message: "You are logged in successfully! Redirecting ..." },{ type: "info", timer: 4000 }); }); '; $_SESSION['login'] = "true"; $_SESSION['username'] = "$user"; echo " "; echo $_SESSION["login"]; echo $_SESSION["username"]; // Storing session data } else { echo ' $(document).ready(function(){ demo.initChartist(); $.notify({ icon: "pe-7s-bell", message: "Username of password was wrong. Please try again." },{ type: "info", timer: 4000 }); }); '; } } } echo $_SESSION["login"]; ?> ``` Where is the problem in the code? 
Thank you <NAME><issue_comment>username_1: Put `session_start();` at the top of this page, as you are accessing the session in the global scope; then you will get the session value. * I would also suggest creating a separate file for the authentication logic. Upvotes: 2 [selected_answer]<issue_comment>username_2: You need to call `session_start()` at the top of every page that uses the session. After a successful login, redirect to dashboard.php and add this as its first line: ``` <?php session_start(); echo $_SESSION['login']; ?> ``` Upvotes: 0
2018/03/14
851
2,715
<issue_start>username_0: Take the following #definition from the pet store example. Given a #definition section a JSON structure can be generated e.g. [![Conversion of definition to JSON](https://i.stack.imgur.com/6KRwn.png)](https://i.stack.imgur.com/6KRwn.png) Is there something that can do the reverse given a largeish complex JSON file? Given the below JSON Structure can I get the #definition section of a swagger file generated to save some typing ``` { "variable": "sample", "object1": { "obj-field1": "field 1 of object", "obj-field2": "field 2 of object", "anArray": [ "Value 1", { "anArrayObj1": "obj1fieldinarray", "anArrayObj2": "obj2fieldinarray" } ] } } ```<issue_comment>username_1: You can use this JSON-to-OpenAPI schema converter: <https://roger13.github.io/SwagDefGen/> ([GitHub project](https://github.com/Roger13/SwagDefGen)) I haven't used it personally though, so I'm not sure how good it is. Upvotes: 6 <issue_comment>username_2: This works for me: **Generate Swagger REST-client code (and POJO) from sample JSON**: 1. Go to **apistudio.io**: * Insert -> New Model. * CutNpaste your JSON. * [The Swagger YML file will be generated] * Download -> YAML. 2. Go to **editor.swagger.io**: * CutNpaste the YML saved from last step. * Generate Client -> jaxrs-cxf-client (there are many other options). Upvotes: 0 <issue_comment>username_3: 1 - Paste a response in <http://www.mocky.io> and get a link to your response 2 - Go to <https://inspector.swagger.io/> and make a call to your example response 3 - Select the call from "History" and click "Create API definition" 4 - The swagger definition will be available at <https://app.swaggerhub.com/> Upvotes: 5 <issue_comment>username_4: You can use the [mock-to-openapi](https://github.com/username_4/mock-to-openapi) cli tool that generates [OpenAPI](https://swagger.io/specification/) `YAML` files from `JSON` mocks. 
```bash npm install --global mock-to-openapi ``` then run the conversion of all `*.json` files from the folder: ```bash mock-to-openapi ./folder/*.json ``` Take, for example, this `json` object: ```json { "title": "This is title", "author": "<NAME>", "content" : "This is just an example", "date": "2020-05-12T23:50:21.817Z" } ``` The `mock-to-openapi` tool converts the `JSON` to the OpenAPI specification as follows: ```yaml type: object properties: title: type: string example: This is title author: type: string example: <NAME> content: type: string example: This is just an example date: type: string format: date-time example: 2020-05-12T23:50:21.817Z ``` Upvotes: 3
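As a rough, hypothetical illustration of what converters like the ones above do under the hood, here is a naive Python sketch that maps a JSON value to an OpenAPI-style schema fragment. All names are made up for illustration; real tools additionally infer `format` (e.g. `date-time`), nullability, and merge schemas for heterogeneous arrays.

```python
import json

def infer_schema(value):
    """Naively map a JSON value to an OpenAPI-style schema fragment."""
    if isinstance(value, bool):   # must come before int: bool is an int subclass
        return {"type": "boolean"}
    if isinstance(value, int):
        return {"type": "integer"}
    if isinstance(value, float):
        return {"type": "number"}
    if isinstance(value, list):
        # OpenAPI arrays take a single item schema; naively use the first element
        return {"type": "array",
                "items": infer_schema(value[0]) if value else {}}
    if isinstance(value, dict):
        return {"type": "object",
                "properties": {k: infer_schema(v) for k, v in value.items()}}
    return {"type": "string"}

# The sample JSON structure from the question:
sample = {
    "variable": "sample",
    "object1": {
        "obj-field1": "field 1 of object",
        "obj-field2": "field 2 of object",
        "anArray": ["Value 1",
                    {"anArrayObj1": "obj1fieldinarray",
                     "anArrayObj2": "obj2fieldinarray"}],
    },
}
schema = infer_schema(sample)
print(json.dumps(schema, indent=2))
```

This single pass already reproduces the shape of a generated #definition section; the dedicated tools listed above refine it and emit YAML.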
2018/03/14
796
3,071
<issue_start>username_0: Consider the abstract class `Element` below: it is a super class for many subclasses like `ArrayElement`, each of which has its own helper methods but a common `param` property. I need to call the helper method `printValue` on the `that` object. The `check` method receives an `ArrayElement` object at run time, so at run time I expect there won't be any problem. But this code does not compile: the `that` object looks for the `printValue` method in the abstract class `Element` at compile time, which forces me to declare `printValue` in `Element`. Do all the helper methods in `ArrayElement` need to be declared in the super abstract class `Element`? ``` object ObjectTest { def main(args: Array[String]): Unit = { val x = new ArrayElement(999).check(new ArrayElement(333)) } } abstract class Element { val param : Int def printValue : String // commenting this line throws error below } class ArrayElement(override val param : Int) extends Element { def check(that: Element) = { this.printValue println(that.param) println(that.printValue) // throws error -- **value printValue is not a member of org.mytest.challenges.Element** } def printValue = "value:" + param } ```<issue_comment>username_1: The `that` object is typed as `Element`, so if you remove `printValue` from `Element` it won't compile. Also, `x` is not a member of `Element` either. If the `check` method is going to be used by different subclasses of `Element`, you may consider moving it to `Element` as a protected method; it will be accessible from `ArrayElement` because it is inherited. 
``` object ObjectTest { def main(args: Array[String]): Unit = { val x = new ArrayElement(999).check(new ArrayElement(333)) } } abstract class Element { val param : Int def printValue : String protected def check(that: Element) = { this.printValue println(that.param) println(that.printValue) } } class ArrayElement(override val param : Int) extends Element { def printValue = "value:" + param } ``` On the other hand, if check is going to be just an ArrayElement method, you should change the type of `that` to `ArrayElement`. Upvotes: 0 <issue_comment>username_2: **All the helper methods in ArrayElement need to be declared in super abstract class Element?** Yes, you have to have them in case the argument to your `check` method is of the super class type. OR -- You can cast the incoming object to the target type and call the method you want. ``` object ObjectTest { def main(args: Array[String]): Unit = { val x = new ArrayElement(999).check(new ArrayElement(333)) } } abstract class Element { val param : Int // def printValue : String // commenting this line throws error below } class ArrayElement(override val param : Int) extends Element { def check(that: Element) = { this.printValue println(that.param) println(that.asInstanceOf[ArrayElement].printValue) } def printValue = "value:" + param } ``` OR -- Give the `printValue` method a default implementation in the parent class. Upvotes: 2 [selected_answer]
2018/03/14
544
1,832
<issue_start>username_0: I can't understand why my code below raises an error. I'm trying to build a priority list based on the heapq module of Python. The only difference from a basic example of the module is that I want to put custom objects in it, instead of simple (int,int) or (int,str) tuples. ``` import heapq class MyObject(): def __init__(self,a=0,name='toto'): self.a = a self.name = name if __name__ == '__main__': priority_list = [] heapq.heappush(priority_list,(1,MyObject())) heapq.heappush(priority_list,(1,MyObject())) ``` This is the error I have: ``` heapq.heappush(priority_list,(1,MyObject())) TypeError: '<' not supported between instances of 'MyObject' and 'MyObject' ``` The error is not raised if I use a different key to insert in the heap, but isn't heapq supposed to deal with equal keys? I don't understand this behaviour very well. Thanks a lot<issue_comment>username_1: A heap has the property that the smallest object is always on top. In order for Python to preserve that invariant, it must have some way of determining which object is smaller. Your MyObject class does not provide this. You can define `__gt__` or `__lt__` to enable this. Upvotes: 0 <issue_comment>username_2: The operator `<` is not defined for your class, so `heapq` can't determine priority. ``` ob1 = MyObject() ob1 < ob1 ``` raises ``` TypeError: unorderable types: MyObject() < MyObject() ``` You must then define the comparison operators. See [this](https://docs.python.org/3.6/library/operator.html#mapping-operators-to-functions) for more info. ``` class MyObject(): def __init__(self,a=0,name='toto'): self.a = a self.name = name def __lt__(ob1, ob2): return ob1.a < ob2.a ob1 = MyObject() ob1 < ob1 # returns False ``` Upvotes: 3 [selected_answer]
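To see the accepted fix in action, here is a small self-contained sketch (the `a` values 2 and 1 are illustrative): once `__lt__` is defined, pushing two entries with the same priority works, because tuple comparison falls back to comparing the `MyObject` instances when the first elements are equal.

```python
import heapq

class MyObject:
    def __init__(self, a=0, name='toto'):
        self.a = a
        self.name = name

    # Defining __lt__ lets heapq compare two MyObject instances
    # when the priorities (first tuple elements) tie.
    def __lt__(self, other):
        return self.a < other.a

priority_list = []
heapq.heappush(priority_list, (1, MyObject(2)))
heapq.heappush(priority_list, (1, MyObject(1)))  # no TypeError anymore

priority, obj = heapq.heappop(priority_list)
print(priority, obj.a)  # the tie is broken by the objects' own ordering
```

Here the pop returns the object with the smaller `a`, since equal priorities defer to `MyObject.__lt__`.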
2018/03/14
337
1,067
<issue_start>username_0: I want to convert second in datetime format. Example : 1521028270 -> 2018-03-14 11:52:02.393 Can someone help me?
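The question doesn't name a language; as one hedged option, here is a Python sketch that treats the number as a Unix epoch timestamp in seconds. (The example string in the question appears to have been rendered in a local timezone, so the exact clock time may differ from the UTC result.)

```python
from datetime import datetime, timezone

seconds = 1521028270  # the value from the question

# Interpreted as seconds since 1970-01-01 00:00:00 UTC:
dt_utc = datetime.fromtimestamp(seconds, tz=timezone.utc)
print(dt_utc.strftime('%Y-%m-%d %H:%M:%S'))  # 2018-03-14 11:51:10

# Without a tz argument, fromtimestamp() uses the machine's local timezone,
# which is likely how the question's example string was produced:
dt_local = datetime.fromtimestamp(seconds)
print(dt_local.strftime('%Y-%m-%d %H:%M:%S'))
```

If the input were milliseconds rather than seconds, you would divide by 1000 first.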
2018/03/14
585
2,095
<issue_start>username_0: I've searched over the net, not sure how to set X-FRAME-OPTIONS in my react app. The web.config.js looks like this; it's using the inline option. When I load index.html it gives the response X-FRAME-OPTIONS:DENY, and I need to change it to X-FRAME-OPTIONS:SAMEORIGIN, as I need to open an iframe within my app. Right now I'm getting a chrome error and a firefox error. Not sure how I can update my web.config.js in development, I'm super confused. ``` module.exports = { devtool: 'eval', entry: { app: [ 'react-hot-loader/patch', 'webpack-dev-server/client?http://0.0.0.0' + web_port, 'webpack/hot/only-dev-server', './src/index' ], vendor: [ 'react', 'react-dom', 'react-router', 'react-router-dom', 'react-forms-ui', 'mobx', 'mobx-react', 'sockjs-client', 'react-table', 'react-bootstrap-table', ], fonts: glob.sync("./src/webfonts/*") }, output: { path: path.join(__dirname, 'dist'), filename: '[name].bundle.js', publicPath: '/static/' }, ```<issue_comment>username_1: `X-Frame-Options` is an HTTP header and setting it depends on the application you use as HTTP server, not on the files being served. In this case, if you want to set a header for `webpack-dev-server`, you can do it like this ([setting in `webpack.config.js`](https://webpack.js.org/configuration/dev-server/#devserver-headers-)): ``` devServer: { ... headers: { 'X-Frame-Options': 'sameorigin' } } ``` Upvotes: 3 <issue_comment>username_2: For Next.js, put the code below in next.config.js: ```js module.exports = { async headers() { return [ { source: '/((?!embed).*)', headers: [ { key: 'X-Frame-Options', value: 'SAMEORIGIN', } ] } ]; } } ``` Upvotes: 0 <issue_comment>username_3: You can set raw http headers in public/index.html inside the **head** tag: ``` ``` Upvotes: -1
2018/03/14
582
1,813
<issue_start>username_0: ``` #include template class X { public: using I = int; void f(I i) { std::cout << "i: " << i << std::endl; } }; template void fppm(void (X::\*p)(typename X::I)) { p(0); } int main() { fppm(&X<33>::f); return 0; } ``` I just don't understand the compile error message of the code. ``` error: called object type 'void (X<33>::*)(typename X<33>::I)' is not a function or function pointer p(0); ``` I think p is a function which returns void and takes `int` as its argument. But apparently, it's not. Could somebody give me clue?<issue_comment>username_1: As denoted in the comments already, `p` is a pointer to member function, but you call it like a static function (`p(0);`). You need a concrete object to call `p` on: ``` X x; (x.\*p)(0); // or: X\* xx = new X(); (xx->\*p)(0); delete xx; ``` Be aware that the `.*`/`->*` operators have lower precedence than the function call operator, thus you *need* the parentheses. Side note: Above is for better illustration, modern C++ might use `auto` keyword and smart pointers instead, which could look like this: ``` auto x = std::make_unique>(); (x.get()->\*p)(0); ``` Upvotes: 2 <issue_comment>username_2: Since `p` is a pointer to a nonstatic member function, you need an instance to call it with. Thus, first instantiate an object of `X<33>` in main: ``` int main() { X<33> x; fppm(x, &X<33>::f); // <-- Signature changed below to accept an instance ``` Then in your function, change the code to accept an instance of `X` and call the member function for it: ``` template void fppm(X instance, void (X::\*p)(typename X::I)) { (instance.\*p)(0); } ``` The syntax may look ugly but the low precedence of the pointer to member operator requires the need for the parentheses. Upvotes: 3 [selected_answer]
2018/03/14
660
2,236
<issue_start>username_0: I am creating a code in which data can pass from USB to serial port. Problem is that I am unable to write data to serial port on windows system in php. Device has been connect and opened successfully and baud rate, parity, length, stop bits, flow control are set accordingly with no error. When I am sending data to serial port no error or output will there. Anyone can help with? I am using php serial port class to do this. ``` include 'src/PhpSerial.php'; $serial = new PhpSerial; $serial->deviceSet('COM3'); $serial->confBaudRate(9600); $serial->confParity("none"); $serial->confCharacterLength(8); $serial->confStopBits(1); $serial->confFlowControl("none"); $serial->deviceOpen(); $serial->sendMessage(49); /* 49 is Ascii value of 1 (Light will ON on press 2 and will off on press 1) */ $serial->deviceClose(); ``` I used my device with a software coolterm on which its working fine. But when I am doing this task from php code, I unable to do that. Anyone?
2018/03/14
1,190
4,212
<issue_start>username_0: i need to clone a box div element using its child as the trigger; each box should work the same as the first one. My code is not working properly, since it only works for the first element and fails for the second div element (even in the same box); it only works once. Here is my code below. ```js var container = document.querySelector(".container"); var box = document.getElementsByClassName("box"); for(var i = 0; i < box.length; i++){ var clone = box[i].cloneNode(true); var y = box[i].children[0]; y.addEventListener("click", function(){ container.appendChild(clone); }, false) } ``` ```css .container { border: 1px solid black; padding: 10px; } .box { width: 100px; height: 100px; background: red; } ``` ```html JS Bin Clone ```<issue_comment>username_1: Try this, ``` $(document).on("click", ".clone", function () { $('div.box:first').clone().insertAfter($('div.box:last')); }); ``` Upvotes: 0 <issue_comment>username_2: You need to attach the event listener on the parent element and then use the `event` object to check the target property; if it's the button, run your code. ```js var container = document.querySelector(".container"); container.addEventListener('click', function({target}) { if (target.nodeName === 'BUTTON' && target.classList.contains('clone')) { const clone = target.parentNode.cloneNode(true); container.appendChild(clone) } }) ``` ```css .container { border: 1px solid black; padding: 10px; } .box { width: 100px; height: 100px; background: red; } ``` ```html Clone 1 Clone 2 ``` Upvotes: 2 <issue_comment>username_3: There are two approaches. ### Delegation Delegate the event to the children of the container. ```js // Attach event to container and delegate to the children var $container = $('.container'); $container.on('click', '.clone', function(e) { $container.append($(e.target).parent().clone()); }); ``` ### Cloning events and data Copy all data and events for the child. 
```js // Attach event to the child and enable copying of events var $container = $('.container'); $('.clone').on('click', function(e) { $container.append($(e.target).parent().clone(true)); }); ``` Upvotes: 0 <issue_comment>username_4: `cloneNode()` doesn't carry over events bound to the button. Use **[Event Delegation](https://www.kirupa.com/html5/handling_events_for_many_elements.htm)** by adding the eventListener to an ancestor node (`window` and `document` objects are acceptable but I chose `.container` being more practical because of its proximity). When ancestor node detects a button is clicked, we use the [**Event.target**](https://developer.mozilla.org/en-US/docs/Web/API/Event/target) and [**Event.currentTarget**](https://developer.mozilla.org/en-US/docs/Web/API/Event/currentTarget) Event Object properties to determine exactly which button was clicked (`e.target`) and the listener (`e.currentTarget`). For good measure I added another condition that permits only a button to be `e.target`. So whenever you have multiple `e.target`s like buttons that have an ancestor node in common, add the event listener to the ancestor node instead of adding an event listener to each button. **Details commented in Demo** Demo ---- ```js // Reference the ancestor node var con = document.querySelector(".container"); // Register click event on div.con--callback dupeParent() con.addEventListener('click', dupeParent); // Pass the Event Object through function dupeParent(e) { /* if the clicked node (e.target) is not the node registered on || click event (e.currentTarget / div.con)... || if the clicked node (e.target) is a button... 
|| clone the button's parent and add it to div.con */ if (e.target !== e.currentTarget) { if (e.target.tagName === 'BUTTON') { var clone = e.target.parentElement.cloneNode(true); this.appendChild(clone); } // Otherwise quit } else { return; } } ``` ```css .container { display: flex; flex-wrap: wrap; border: 1px solid black; padding: 10px; } .box { width: 100px; height: 100px; background: red; } ``` ```html JS Bin Clone ``` Upvotes: 2 [selected_answer]
2018/03/14
1,663
5,854
<issue_start>username_0: I am using Knex and Postgres, NodeJS, Express & React. I have a USERS table, a USERLIKES table and a FILTERS table. Where I am stuck is the gender query. Users can define in their filter that they are looking for male, female or both. If I use '.orWhereExists', other filters are ignored, such as the 2nd one, which stops you from being returned users you've already liked/rejected. My gut says I should nest the gender query lines somehow and then change them to '.orWhereExists' but I am not sure how. THANK YOU for all help. Just started coding this year and loving it but this problem has been a mind bender ``` Filters is organized like so table.increments('id') <-----primary table.integer('userid') <-----foreign table.integer('min_age'); table.integer('max_age'); table.string('female'); table.string('male'); ``` ```js app.get('/api/potentials', (req, res) => { const cookieid = req.session.id console.log("potentials get for id ", cookieid) knex('users') .select('*') .whereNot('users.id', cookieid ) .whereNotExists(knex.select('*').from('userlikes').whereRaw('userlikes.userid1 = ?', [cookieid]).andWhereRaw('users.id = userlikes.userid2')) .whereExists(knex.select('*').from('filters').whereRaw('users.gender = filters.female')) .whereExists(knex.select('*').from('filters').whereRaw('users.gender = filters.male')) .whereExists(knex.select('*').from('filters').whereRaw('users.age >= filters.min_age')) .whereExists(knex.select('*').from('filters').whereRaw('users.age < filters.max_age')) .then((result) => { console.log("filter result", result) res.send(result) }) .catch((err) => { console.log("error", err) }) ```<issue_comment>username_1: There is a Knex function `.buildermodify()` that will work for your situation. See also: [documentation link](http://knexjs.org/#Builder-modify). 
To use it, you create a function whose purpose is to conditionally add `.where` or other similar clauses to your knex query, and in the actual knex query, you call that function using `.buildermodify()` **The sample from the documentation:** ``` var withUserName = function(queryBuilder, foreignKey) { queryBuilder.leftJoin('users', foreignKey, 'users.id') .select('users.user_name'); }; knex.table('articles') .select('title', 'body') .modify(withUserName, 'articles_user.id') .then(function(article) { console.log(article.user_name); }); ``` **Applying it to your need to get gender (and ages):** ``` /* NEW FUNCTION TO DO THE SPECIAL FILTERING */ function customFiltering(queryBuilder, inputGender, minAge, maxAge) { if (inputGender === gender_is_female) { /* you need to fix this */ /* THIS LINE IS THE SECRET SAUCE TO CONDITIONALLY UPDATE YOUR QUERY */ /* queryBuilder. \*/ queryBuilder.whereExists( knex.select('\*').from('filters') .whereRaw('users.gender = filters.female')); } else if (inputGender === gender\_is\_male) { queryBuilder.whereExists( knex.select('\*').from('filters') .whereRaw('users.gender = filters.male')); } /\* ADD MORE CODE HERE for the ages filter - minAge, maxAge \*/ }; knex('users') .select('\*') .whereNot('users.id', cookieid ) .whereNotExists( knex.select('\*').from('userlikes').whereRaw('userlikes.userid1 = ?', [cookieid]).andWhereRaw('users.id = userlikes.userid2')) .modify(customFiltering, inputGender, minAge, maxAge) .then((result) => { console.log("filter result", result) res.send(result) }) .catch((err) => { console.log("error", err) }) ``` Good luck! Upvotes: 2 [selected_answer]<issue_comment>username_2: What I usually do with complex query building is to write it in plain SQL and then trying to convert it into `knex` constructions. As far as I understand you are looking for something like this ``` select * from users as u where ... 
and ( (exists select * from filters as f where f.male = u.gender) or (exists select * from filters as f where f.female = u.gender) ) ``` In knex words it can be written as ``` knex('users as u') .where('other conditions') .where((b) => { b .whereExists(knex.select('*').from('filters as f').whereRaw('u.gender = f.female')) .orWhereExists(knex.select('*').from('filters as f').whereRaw('u.gender = f.male')) }) ``` There is a possibility in `knex` to group your where clauses in parentheses. Search [here](http://knexjs.org/) for "Grouped Chain" Upvotes: 0 <issue_comment>username_3: ``` A solution someone offline gave me that worked was to use a "promise" but username_1's solution above is more inline with what I was looking for. In the event it is helpful to people with similar questions this also worked. > Blockquote app.get('/api/potentials', (req, res) => { const cookieid = 1//req.session.id console.log("potentials get for id ", cookieid) Promise.all([ knex('users') .select('filters.min_age','filters.max_age', 'filters.female','filters.male') .innerJoin('filters', 'users.id', 'filters.userid') .where('users.id',cookieid ), knex('users') .whereNotExists(knex.select('*').from('userlikes').whereRaw('userlikes.userid1 = ?', [cookieid]).andWhereRaw('users.id = userlikes.userid2')) ]) .then((result) => { const[filterCriteria, users] = result const [min_age, max_age, female, male] = Object.values(filterCriteria[0]) res.send(users.filter(user => { if((user.age >= min_age) && (user.age <= max_age) && ( (user.gender = female) || (user.gender = male) || (user.gender = female) && (user.gender = male) ) ) { return user } })) }) .catch((err) => { console.log("error", err) }) }) ``` Upvotes: 0
2018/03/14
2,218
8,345
<issue_start>username_0: I have a question about Room in Android and its POST and GET mechanic. I have made an app with a recycle view with the help of this site: <https://codelabs.developers.google.com/codelabs/android-room-with-a-view/#0> as a tutorial but the difference between this guy's code and my code is that he uses a class with one string and I use a class with 4 strings. These strings values should be the values of a couple of Edit text views text. Though they should get the data live from room as you can see in this tutorial. I have finished the tutorial until the last two sliders and have not understood what I should change in the code below to make it possible for me to fill my Room database class. So I can post from my Create\_Customer Activity to room and then in my main activity get the database and fill the recycleview with data. Below follows the code that I have trouble with. Create\_Customer: ``` Customer customer = new Customer(data.getStringExtra(NewWordActivity.EXTRA_REPLY)); public void onClick(View view) { Intent replyIntent = new Intent(); if (TextUtils.isEmpty(mEditWordView.getText())) { setResult(RESULT_CANCELED, replyIntent); } else { String word = mEditWordView.getText().toString(); replyIntent.putExtra(EXTRA_REPLY, word); setResult(RESULT_OK, replyIntent); } finish(); } ``` Main Activity: ``` public void onActivityResult(int requestCode, int resultCode, Intent data) { super.onActivityResult(requestCode, resultCode, data); if (requestCode == NEW_WORD_ACTIVITY_REQUEST_CODE && resultCode == RESULT_OK) { Customer customer = new Customer(data.getStringExtra(NewWordActivity.EXTRA_REPLY)); mWordViewModel.insert(word); } else { Toast.makeText( getApplicationContext(), R.string.empty_not_saved, Toast.LENGTH_LONG).show(); } } ``` I need help with the code above and here is my Adapter: ``` package com.example.jenso.paperseller; import android.content.Context; import android.support.annotation.NonNull; import 
android.support.v7.widget.RecyclerView; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.TextView; import java.util.List; public class PapperRecyclerAdapter extends RecyclerView.Adapter { class CustomerViewHolder extends RecyclerView.ViewHolder{ private TextView textViewName; private TextView textViewAddress; private TextView textViewPhoneNumber; private TextView textViewEmail; private CustomerViewHolder(View itemView){ super(itemView); textViewName = itemView.findViewById(R.id.nameTxt); textViewAddress = itemView.findViewById(R.id.addressTxt); textViewPhoneNumber = itemView.findViewById(R.id.PhoneNumberTxt); textViewEmail = itemView.findViewById(R.id.emailTxt); } } private List mCustomers; private Context context; private final LayoutInflater mInflater; public PapperRecyclerAdapter(Context context) { mInflater = LayoutInflater.from(context); } @Override public CustomerViewHolder onCreateViewHolder(ViewGroup parent, int viewType) { View itemView = mInflater.inflate(R.layout.list\_item, parent, false); return new CustomerViewHolder(itemView); } @Override public void onBindViewHolder(CustomerViewHolder holder, int position) { if(mCustomers != null){ Customer current = mCustomers.get(position); holder.textViewName.setText(current.getFullName()); holder.textViewAddress.setText(current.getAddress()); holder.textViewPhoneNumber.setText(current.getPhonenumber()); holder.textViewEmail.setText(current.getEmail()); }else{ holder.textViewName.setText("Full name"); holder.textViewAddress.setText("Address"); holder.textViewPhoneNumber.setText("PhoneNumber"); holder.textViewEmail.setText("Email"); } } void setCustomer(List customers){ mCustomers = customers; notifyDataSetChanged(); } @Override public int getItemCount() { if(mCustomers != null){ return mCustomers.size(); }else{ return 0; } } public class ViewHolder extends RecyclerView.ViewHolder{ public ViewHolder(View itemView) { super(itemView); } } } ``` Where 
does he get the data from and how am I supposed to use it so I can fill all my strings with the data I get?
2018/03/14
847
3,189
<issue_start>username_0: Im trying to upgrade to angular 4, but when running the code I get an error: `ERROR Error: Uncaught (in promise): Error: StaticInjectorError(AppModule)[AuthenticatedGuard -> AuthService]: StaticInjectorError(Platform: core)[AuthenticatedGuard -> AuthService]: NullInjectorError: No provider for AuthService!` The error says "No provider for AuthService", but in the very component that I'm navigating from I inject and successfully make use of my AuthService. Here are the relevant source files: app.module.ts ``` import { AuthService } from '../../services/auth.service'; import { AuthenticatedGuard } from '../../utility/authenticated.gaurd' @NgModule({ imports:[ ... ], declarations: [ ... ], providers: [ AuthService, AuthenticatedGuard ]}) export class AppModule { } ``` authenticated.gaurd.ts ``` import { Injectable } from '@angular/core'; import { Router, CanActivate, ActivatedRouteSnapshot, RouterStateSnapshot } from '@angular/router'; import { AuthService } from '../services/auth.service.js'; @Injectable() export class AuthenticatedGuard implements CanActivate { constructor(private authService: AuthService, private router: Router) { } canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): boolean { return true; } } ``` app-routing.module.ts ``` import { AdminComponent } from '../../components/admin/admin.component'; import { NgModule } from '@angular/core'; import { RouterModule, Routes } from '@angular/router'; const routes: Routes = [ { path: 'admin', component: DashboardComponent, canActivate: [AuthenticatedGuard]}, ]; @NgModule({ imports: [ RouterModule.forRoot(routes) ], exports: [ RouterModule ] }) export class AppRoutingModule {} ``` Any ideas where this error could mysteriously come from? 
I assume something has changed from angular 4 - 5 but I'm not sure what?<issue_comment>username_1: Change this:

```
import { AuthService } from '../services/auth.service.js';
```

To this:

```
import { AuthService } from '../services/auth.service';
```

Remove the *.js* from the end of the import in the *authenticated.gaurd.ts*. The file extension is not needed for imports.

Upvotes: 3 [selected_answer]<issue_comment>username_2: First step:

```
import { AuthService } from '../services/auth.service';
```

Second step:

```
providers: [
  AuthService
],
```

Upvotes: 3 <issue_comment>username_3: if you get this error then

[![enter image description here](https://i.stack.imgur.com/wzcBf.png)](https://i.stack.imgur.com/wzcBf.png)

Please add it in providers in app.module.ts

```
providers: [
  AuthService
],
```

Upvotes: 1 <issue_comment>username_4:

```
@Injectable({
  providedIn: 'root'
})
export class YourService {}
```

My problem was this: I created the service without terminal shortcuts and added the injectable decorator, but forgot to provide it in the root.

Upvotes: 0 <issue_comment>username_5: in your app.module.ts

```
providers: [AuthService, { provide: FIREBASE_OPTIONS, useValue: environment.firebase}, AuthGaurdService],
bootstrap: [AppComponent]
})
```

Write this, works for me

Upvotes: 0
2018/03/14
532
1,712
<issue_start>username_0: I want to plot but I face some errors

```
import numpy as np
import matplotlib as plt

x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
plt.plot(x, y)
plt.show()
```

what is its problem?

> 
> cannot find reference 'arange' in \_\_ init\_\_.py 
> I'm using pycharm on windows 10 
> 
> 

is there any difference between `matplotlib.py` and `matplotlib.pyplot`? I can not find the second one

solved: use version 2.1.2
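Beyond the PyCharm "cannot find reference" warning (which the asker reports solving with a 2.1.2 version pin), the script as posted would also fail at runtime: `import matplotlib as plt` binds the package root, which has no `plot` function — the plotting API lives in the `matplotlib.pyplot` submodule. A corrected sketch (the `Agg` backend and `savefig` are used here only so it also runs headless; in an interactive session `plt.show()` works as intended):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend: no display needed
import matplotlib.pyplot as plt  # pyplot is a submodule, not the package root

x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
plt.plot(x, y)
plt.savefig("sine.png")          # or plt.show() with a display attached
```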
2018/03/14
1,768
6,042
<issue_start>username_0: For a project, I have a large dataset of 1.5m entries. I am looking to aggregate some car loan data by some constraint variables such as: Country, Currency, ID, Fixed or Floating, Performing, Initial Loan Value, Car Type, Car Make.

I am wondering if it is possible to aggregate the data by summing the initial loan value for the numeric variables and condensing the matching character variables into one row per combination, such that I turn the first dataset into the second:

```
Country Currency ID Fixed_or_Floating Performing Initial_Value Current_Value

data have;
input country $ currency $ ID Fixed $ performing $ initial current;
datalines;
UK GBP 1 Fixed Performing 100 50
UK GBP 1 Fixed Performing 150 30
UK GBP 1 Fixed Performing 160 70
UK GBP 1 Floating Performing 150 30
UK GBP 1 Floating Performing 115 80
UK GBP 1 Floating Performing 110 60
UK GBP 1 Fixed Non-Performing 100 50
UK GBP 1 Fixed Non-Performing 120 30
;
run;

data want;
input country $ currency $ ID Fixed $ performing $ initial current;
datalines;
UK GBP 1 Fixed Performing 410 150
UK GBP 1 Floating Performing 275 170
UK GBP 1 Fixed Non-performing 220 80
;
run;
```

Essentially I am looking for a way to sum the numeric values while concatenating the character variables. I've tried this code:

```
proc means data=have sum;
var initial current;
by country currency id fixed performing;
run;
```

Unsure if I'll have to use proc sql (which would be too slow for such a large dataset) or possibly a data step. Any help in concatenating would be appreciated.<issue_comment>username_1: 1.5m entries is not a very big dataset. Sort the dataset first.
``` proc sort data=have; by country currency id fixed performing; run; proc means data=have sum; var initial current; by country currency id fixed performing; output out=sum(drop=_:) sum(initial)=Initial sum(current)=Current; run; ``` Upvotes: 0 <issue_comment>username_2: Create an output data set from `Proc MEANS` and concatenate the variables in the result. MEANS with a BY statement requires sorted data. Your `have` does not. Concatenation of the aggregations key (those lovely categorical variables) into a single space separated key (not sure why you need to do that) can be done with `CATX` function. ``` data have_unsorted; length country $2 currency $3 id 8 type $8 evaluation $20 initial current 8; input country currency ID type evaluation initial current; datalines; UK GBP 1 Fixed Performing 100 50 UK GBP 1 Fixed Performing 150 30 UK GBP 1 Fixed Performing 160 70 UK GBP 1 Floating Performing 150 30 UK GBP 1 Floating Performing 115 80 UK GBP 1 Floating Performing 110 60 UK GBP 1 Fixed Non-Performing 100 50 UK GBP 1 Fixed Non-Performing 120 30 ; run; ``` **Way 1 - MEANS with CLASS/WAYS/OUTPUT, post process with data step** The cardinality of the class variables *may* cause problems. ``` proc means data=have_unsorted noprint; class country currency ID type evaluation ; ways 5; output out=sums sum(initial current)= / autoname; run; data want; set sums; key = catx(' ',country,currency,ID,type,evaluation); keep key initial_sum current_sum; run; ``` **Way 2 - SORT followed by MEANS with BY/OUTPUT, post process with data step** BY statement requires sorted data. 
``` proc sort data=have_unsorted out=have; by country currency ID type evaluation ; proc means data=have noprint; by country currency ID type evaluation ; output out=sums sum(initial current)= / autoname; run; data want; set sums; key = catx(' ',country,currency,ID,type,evaluation); keep key initial_sum current_sum; run; ``` **Way 3 - MEANS, given data that is grouped but unsorted, with BY NOTSORTED/OUTPUT, post process with data step** The `have` rows will be processed in *clumps* of the `BY` variables. A clump is a sequence of contiguous rows that have the same by group. ``` proc means data=have_unsorted noprint; by country currency ID type evaluation NOTSORTED; output out=sums sum(initial current)= / autoname; run; data want; set sums; key = catx(' ',country,currency,ID,type,evaluation); keep key initial_sum current_sum; run; ``` **Way 4 - DATA Step, DOW loop, BY NOTSORTED and key construction** The `have` rows will be processed in *clumps* of the `BY` variables. A clump is a sequence of contiguous rows that have the same by group. ``` data want_way4; do until (last.evaluation); set have; by country currency ID type evaluation NOTSORTED; initial_sum = SUM(initial_sum, initial); current_sum = SUM(current_sum, current); end; key = catx(' ',country,currency,ID,type,evaluation); keep key initial_sum current_sum; run; ``` **Way 5 - Data Step hash** data can be processed with out a presort or clumping. In other words, data can be totally disordered. 
``` data _null_; length key $50 initial_sum current_sum 8; if _n_ = 1 then do; call missing (key, initial_sum, current_sum); declare hash sums(); sums.defineKey('key'); sums.defineData('key','initial_sum','current_sum'); sums.defineDone(); end; set have_unsorted end=end; key = catx(' ',country,currency,ID,type,evaluation); rc = sums.find(); initial_sum = SUM(initial_sum, initial); current_sum = SUM(current_sum, current); sums.replace(); if end then sums.output(dataset:'have_way5'); run; ``` Upvotes: 1 <issue_comment>username_3: Props to paige miller ``` proc summary data=testa nway; var net_balance; class ID fixed_or_floating performing_status initial country currency ; output out=sumtest sum=sum_initial; run; ``` Upvotes: -1
2018/03/14
2,265
8,198
<issue_start>username_0: I've now gone through my code with a fine tooth comb and I just cannot seem to see where the "recipe" property is not defined. I'm hoping some more experienced eyes would help me out and spot where I've made the mistake. Any help will be appreciated. Thank you. Ps. Please find my code below... it's the Recipe Box project from FreeCodeCamp and I followed the walk through from <NAME> from CodingTutorials360. As far as I can tell my code is identical to his except for some changes to React-Bootstrap as stipulated by the Documentation. ``` import React, { Component } from 'react'; import './App.css'; import Panel from 'react-bootstrap/lib/Panel' import Button from 'react-bootstrap/lib/Button' import ButtonToolbar from 'react-bootstrap/lib/ButtonToolbar' import Modal from 'react-bootstrap/lib/Modal' import FormGroup from 'react-bootstrap/lib/FormGroup' import ControlLabel from 'react-bootstrap/lib/ControlLabel' import FormControl from 'react-bootstrap/lib/FormControl' import PanelGroup from 'react-bootstrap/lib/PanelGroup' class App extends Component { state = { showAdd: false, showEdit: false, currentIndex: 0, recipes: [ ], newestRecipe: {recipeName:"", ingredients: []} } deleteRecipe(index){ let recipes = this.state.recipes.slice(); recipes.splice(index, 1); localStorage.setItem('recipes', JSON.stringify(recipes)); this.setState({recipes}); } updateNewRecipe(value, ingredients){ this.setState({newestRecipe:{recipeName: value, ingredients: ingredients}}); } close = () => { if(this.state.showAdd){ this.setState({showAdd: false}); } else if(this.state.showEdit){ this.setState({showEdit: false}); } } open = (state, currentIndex) => { this.setState({[state]: true}); this.setState({currentIndex}); } saveNewRecipe = () => { let recipes = this.state.recipes.slice(); recipes.push({recipeName: this.state.newestRecipe.recipeName, ingredients: this.state.newestRecipe.ingredients}); localStorage.setItem('recipes', JSON.stringify(recipes)); this.setState({ 
recipes
});
this.setState({newestRecipe: {recipeName: '', ingredients:[]}});
this.close();
}

updateRecipeName(recipeName, currentIndex){
let recipes = this.state.recipes.slice();
recipes[currentIndex] = {recipeName: recipeName, ingredients: recipes[currentIndex].ingredients};
this.setState({recipes});
localStorage.setItem('recipes', JSON.stringify(recipes));
this.close();
}

updateIngredients(ingredients, currentIndex){
let recipes = this.state.recipes.slice();
recipes[currentIndex] = {recipeName: recipes[currentIndex].recipeName, ingredients: ingredients};
localStorage.setItem('recipes', JSON.stringify(recipes));
this.setState({recipes});
}

componentDidMount(){
let recipes = JSON.parse(localStorage.getItem("recipes")) || [];
this.setState({recipes});
}

render() {
const {recipes, newestRecipe, currentIndex} = this.state;
return (
{recipes.length > 0 && (
{recipes.map((recipe, index)=>(
{recipe.recipeName}
{recipe.ingredients.map((item)=>(
2. {item}
))}
this.deleteRecipe(index)}>Delete Recipe
this.open("showEdit", index)}>Edit Recipe
))}
)}
Edit Recipe
Recipe Name
this.updateRecipeName(event.target.value, currentIndex)} />
Ingredients
this.updateIngredients(event.target.value.split(","), currentIndex)} placeholder="Enter Ingredients [Seperate by Commas]" value={recipes[currentIndex].ingredients}>
this.saveNewRecipe()}>Save Changes
Add Recipe
Recipe Name
this.updateNewRecipe(event.target.value, newestRecipe.ingredients)} >
Ingredients
this.updateNewRecipe(newestRecipe.recipeName, event.target.value.split(','))} value={newestRecipe.ingredients} >
{this.saveNewRecipe()}}>Save
this.open("showAdd", currentIndex)}>Add Recipe
);
}
}

export default App;
```
2018/03/14
3,226
9,366
<issue_start>username_0: Hi I'm having an issue continuously recording data to a .csv file using the following script ``` int ddm(void) { // 96 Temp MSB, 97 Temp LSB, 98 Vcc MSB, 99 Vcc LSB // 100 TX_BIA MSB, 101 TX_BIA LSB, // 102 TX MSB, 103 TX LSB, 104 RX MSB, 105 RX LSB FILE *focat; float temperature, vcc, tx_bias, optical_tx, optical_rx, RAW_tx, RAW_rx; char temp[10], vccc[10], txbi[10], optx[10], oprx[10], rwtx[30], rwrx[30]; int i; //Open (or create) the csv file and write the heading row focat=fopen("fcatdata.csv", "w"); if(focat == NULL) { printf("error openining file\n"); exit(1); } fprintf(focat,"Temp, Vcc, Tx_Bias, Tx, Rx, RAWTx, RAWRx\n"); fclose(focat); focat=fopen("fcatdata.csv", "a+"); i=0; //start infinite loop for(;;) { if(!read_eeprom(0x51)); else exit(EXIT_FAILURE); i=i+1; //Taking MSB and LSB data and converting temperature = (A51[96]+(float) A51[97]/256); vcc = (float)(A51[98]<<8 | A51[99]) * 0.0001; tx_bias = (float)(A51[100]<<8 | A51[101]) * 0.002; optical_tx = 10 * log10((float)(A51[102]<<8 | A51[103]) * 0.0001); optical_rx = 10 * log10((float)(A51[104]<<8 | A51[105]) * 0.0001); RAW_tx = ((float)(A51[102]<<8 | A51[103]) * 0.0001); RAW_rx = ((float)(A51[104]<<8 | A51[105]) * 0.0001); //Display Diagnostics Monitoring Data in Terminal printf ("SFP Temperature = %4.4fC\n", temperature); printf ("Vcc, Internal supply = %4.4fV\n", vcc); printf ("TX bias current = %4.4fmA\n", tx_bias); printf ("Tx, Optical Power = %4.4f dBm", optical_tx); printf (", %6.6f mW\n", RAW_tx); printf ("Rx, Optical Power = %4.4f dBm", optical_rx); printf (", %6.6f mW\n", RAW_rx); printf ("iteration %d \n", i); //Change the integers into strings for appending to file sprintf(temp, "%4.4f", temperature); sprintf(vccc, "%4.4f", vcc); sprintf(txbi, "%4.4f", tx_bias); sprintf(optx, "%4.4f", optical_tx); sprintf(oprx, "%4.4f", optical_rx); sprintf(rwtx, "%6.6f", RAW_tx); sprintf(rwrx, "%6.6f", RAW_rx); //Appends DDM Data into a new row of a csv file 
//focat=fopen("fcatdata.csv", "a");
fprintf(focat, "%s,%s,%s,%s,%s,%s,%s\n",temp,vccc,txbi,optx,oprx,rwtx,rwrx);
//fclose(focat);
}
fclose(focat);
return 0;
}
```

When I have the code set up to open the .csv file prior to entering the loop I get the following error on the 1020th iteration:

> 
> SFP Temperature = 31.9258C 
> 
> Vcc, Internal supply = 3.1374V 
> 
> TX bias current = 8.0540mA 
> 
> Tx, Optical Power = -1.8006 dBm, 0.660600 mW 
> 
> Rx, Optical Power = -40.0000 dBm, 0.000100 mW 
> 
> **Unable to open I2C device: Too many open files** 
> 
> 

When I change the comments towards the bottom of the code so it reads as follows:

```
//Appends DDM Data into a new row of a csv file
focat=fopen("fcatdata.csv", "a");
fprintf(focat, "%s,%s,%s,%s,%s,%s,%s\n",temp,vccc,txbi,optx,oprx,rwtx,rwrx);
fclose(focat);
```

and then also comment out the file open prior to the loop, I am subsequently presented with the following fault on the 1021st loop iteration:

> 
> SFP Temperature = 31.8906C 
> 
> Vcc, Internal supply = 3.1372V 
> 
> TX bias current = 8.0620mA 
> 
> Tx, Optical Power = -1.8006 dBm, 0.660600 mW 
> 
> Rx, Optical Power = -40.0000 dBm, 0.000100 mW 
> 
> **Segmentation fault** 
> 
> 

I think this is related somehow to `ulimit -n` showing a result of `1024`, but I need to be able to run this script continuously for a week and therefore changing ulimit isn't a real solution to the problem. I tested this theory by making a script which loops endlessly and appends the integer i to a csv file, and that reached far beyond 1021 rows of data.

This has been bothering me for a week now. Any help is appreciated. 
Criticism on formatting etc it welcome, I don't often post here (or anywhere for that matter)

---

```
int read_eeprom(unsigned char address)
{
    int xio,i,fd1;
    xio = wiringPiI2CSetup (address);
    if (xio < 0)
    {
        fprintf (stderr, "xio: Can't initialise I2C: %s\n", strerror (errno));
        return 1;
    }
    for(i=0; i <128; i++)
    {
        fd1 = wiringPiI2CReadReg8 (xio,i);
        if (address == 0x50)
        {
            A50[i] = fd1;
        }
        else
        {
            A51[i] = fd1;
        }
        if (fd1 <0)
        {
            fprintf (stderr, "xio: Can't read i2c address 0x%x: %s\n", address, strerror (errno));
            return 1;
        }
    }
    return 0;
}
```

**Edit 1:** clarified the two scenarios where the file is opened and closed

**Edit 2:** added info on what is in `read_eeprom`

**Edit 3:** solved by adding `close(fp);` at the end of `read_eeprom`

**Edit 4:** solved *properly* by adding `close(xio);` at the end of `read_eeprom` - Credits to @JohnH
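The 1020/1021-iteration ceiling in this question is exactly the per-process descriptor limit (`ulimit -n` = 1024): each call to `wiringPiI2CSetup()` opens a device file and the returned descriptor is never closed, which the asker's Edit 4 (`close(xio);`) confirms. The mechanism is easy to reproduce in a few lines of Python (illustrative only, no I2C involved):

```python
import os

# Leaky pattern: each iteration opens a descriptor and never closes it,
# so the fd numbers climb toward the `ulimit -n` ceiling.
leaky = [os.open(os.devnull, os.O_RDONLY) for _ in range(5)]
assert leaky[-1] > leaky[0]          # numbers keep growing
for fd in leaky:
    os.close(fd)                     # clean up after the demonstration

# Fixed pattern: closing each descriptor lets the kernel hand the same
# (lowest free) number back on the next open -- no growth, no ceiling.
seen = set()
for _ in range(5):
    fd = os.open(os.devnull, os.O_RDONLY)
    os.close(fd)
    seen.add(fd)
```

With the leak removed, a loop like this can run for a week without exhausting descriptors, which is why closing inside `read_eeprom` (rather than raising the ulimit) is the right fix.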