date: stringlengths (10 to 10)
nb_tokens: int64 (60 to 629k)
text_size: int64 (234 to 1.02M)
content: stringlengths (234 to 1.02M)
2018/03/15
1,086
3,582
<issue_start>username_0: **TL;DR**: How do I choose every bit of information in the response of a WP REST API custom endpoint? **LONG VERSION** If I want to build a custom endpoint with the WP REST API - sending specific post data from different post types - following the example in the [Handbook](https://developer.wordpress.org/rest-api/extending-the-rest-api/adding-custom-endpoints/), I get this: ```php function custom_endpoint ( $data ) { $posts = get_posts( array( 'numberposts' => -1, 'post_type' => array('event', 'post'), ) ); if ( empty( $posts ) ) { return null; } return $posts; } add_action( 'rest_api_init', function () { register_rest_route( 'wp/v1', '/custom-endpoint/', array( 'methods' => 'GET', 'callback' => 'custom_endpoint', ) ); } ); ``` But the get\_posts() function doesn't return some pieces of data that are very useful if you wish to display posts on your page (category ID, featured image, for instance). So how can I build a custom endpoint that returns: * Post Title * Post Date * Post Author * Post Excerpt * Post Content * Post Featured Image (like the [Better Featured Images plugin](https://wordpress.org/plugins/better-rest-api-featured-images/)) * Post Category * Post Type * Post Link * Other useful information<issue_comment>username_1: As the [WP Codex states](https://codex.wordpress.org/Template_Tags/get_posts) under "Access all post data", some post-related data **is not available** to get\_posts. You can get it by: ``` $posts = get_posts( array( 'numberposts' => -1, 'post_type' => array('event', 'post'), ) ); $response = []; foreach ( $posts as $post ) { $response[] = [ 'content' => $post->post_content, 'title' => $post->post_title, ..... ]; } return $response; // (in a WP way of constructing JSON responses) ``` Upvotes: 0 <issue_comment>username_2: Based on @username_1's answer, I got the following idea: take the object that get\_posts() returns and add new properties to it, using other WordPress functions. ``` function custom_endpoint ( $data ) { $posts = get_posts( array( 'numberposts' => -1, // Here we can get more than one post type. Useful for a home page. 'post_type' => array('event', 'post'), ) ); if ( empty( $posts ) ) { return null; } $args = array(); foreach ( $posts as $post ) { // Get information that is not available from get_posts() and store it in variables. $category = get_the_category( $post->ID ); $img_thumb = get_the_post_thumbnail_url( $post->ID, 'thumbnail' ); // Thumbnail (default 150px x 150px max) $img_medium = get_the_post_thumbnail_url( $post->ID, 'medium' ); // Medium resolution (default 300px x 300px max) $img_large = get_the_post_thumbnail_url( $post->ID, 'large' ); // Large resolution (default 640px x 640px max) $img_full = get_the_post_thumbnail_url( $post->ID, 'full' ); // Full resolution (original size uploaded) // Add the information to the post object. $post->category = $category; $post->img_thumb = $img_thumb; $post->img_medium = $img_medium; $post->img_large = $img_large; $post->img_full = $img_full; array_push($args, $post); } return $args; } ``` ```php add_action( 'rest_api_init', function () { register_rest_route( 'wp/v1', '/custom-endpoint/', array( 'methods' => 'GET', 'callback' => 'custom_endpoint', ) ); } ); ``` It works fine! Thanks, @username_1 for the contribution. Upvotes: 2 [selected_answer]
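A quick way to exercise the route registered above, assuming the default `wp-json` REST prefix and a placeholder site URL (both assumptions, not from the thread):

```
curl https://example.com/wp-json/wp/v1/custom-endpoint/
```

The response should be the JSON-encoded array of enriched post objects returned by `custom_endpoint()`.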
2018/03/15
670
2,140
<issue_start>username_0: I am looking to combine a single XML sheet with two "sections" of information, "Notes" and "Encounters". Each note has an *Extension ID* and each encounter also has an *Extension ID*. The data is set up like this: ``` Text Paragraph 1 Text Paragraph 2 123456 Text Paragraph 3 123456 Text Paragraph 4 Text Paragraph 5 789012 123456 2017-01-02 Dr. <NAME> 798012 2015-10-20 <NAME> ``` I would like to have the following output: ``` Text Paragraph 1 Text Paragraph 2 Text Paragraph 3 Dr. <NAME> 2015-10-20 Text Paragraph 4 Text Paragraph 5 <NAME> 2015-10-20 ``` My current XSLT is below: ``` <?xml version="1.0" encoding="utf-8"?> ``` The first part works, mostly, where it's grouping by Extension ID, but it's the associated Encounter info that I'm struggling with. What I'm noticing is that it's not even going into the for-each loop with "../../Encounters/Encounter[ExtensionID=key('ExtID',ExtensionID)]". I'm nearly positive that I can't do what I want to do: use the key like some form of "not-variable", but I'm not sure what to try next. Please note: I'm constrained by the software we use in my office to XSLT 1.0. I do not have access to 2.0 functions, nor am I able to install/upgrade or use a different engine than the one in Visual Studio.<issue_comment>username_1: Define a second key for the cross-reference: ``` ``` then you can select ``` ``` instead of ``` ``` Upvotes: 1 <issue_comment>username_2: There is indeed a problem with this line... ``` ``` Ignoring the fact you don't have `Encounters` in your XML, the "key" will return a `Note` node, when you just want the value of the `ExtensionID` for the current `Note`. So it should be this.... ``` ``` Or, if you define another key like this... ``` ``` You can then do this... ``` ``` Try this XSLT.... ``` ``` Maybe you should reverse the logic of the XSLT though? Instead of doing grouping on the `Note` elements, start off by getting the `Encounter` elements, and then use the key to get the notes for them.... ``` ``` Upvotes: 3 [selected_answer]
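Since the code blocks in the answers above were lost, here is a minimal XSLT 1.0 sketch of the second-key approach both answers describe, assuming the `Note`/`Encounter`/`ExtensionID` element names from the question (`Provider` is a hypothetical child element used only for illustration):

```xml
<!-- Index Encounter elements by their ExtensionID for cross-reference -->
<xsl:key name="EncByExt" match="Encounter" use="ExtensionID"/>

<!-- Inside the template handling a Note: pull in the matching Encounter(s) -->
<xsl:for-each select="key('EncByExt', ExtensionID)">
  <xsl:value-of select="Provider"/>
</xsl:for-each>
```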
2018/03/15
771
3,221
<issue_start>username_0: The suggestion to encrypt log files as a means of protecting the personal data that might be contained in them is widespread. What I've not seen is a good reference implementation, which is surprising given how many companies will need this. In our particular case, we want to use public key encryption so that the files cannot be read on the (weakly protected) system that generates them, and must be sent back to head office where we can look at them. The best suggestion I've seen so far is "use log4net but write your own appender using the RFC 3852 streaming implementation from BouncyCastle". Does anyone have an advance on that?<issue_comment>username_1: Technically, encrypting your log messages should be pretty easy. Using something like Serilog you could simply create a [custom sink](https://github.com/serilog/serilog/wiki/Developing-a-sink). Just blindly encrypting the whole log is probably going to limit the usefulness of the logs, though. If you're centralizing your logging using something like [ELK](https://www.elastic.co/elk-stack) then you won't be able to search based on any field/part of your logs that you encrypt (for example, if you encrypt the machine name then you don't even know where the logs come from!). If the kind of information that you're dealing with genuinely is personally identifiable information covered by GDPR then maybe you just have to suck that up - but I'd make an effort to encrypt only the sensitive information in your logs rather than just blanket encrypting everything... that would require a more sophisticated sink, but it will make your log data way less crippled. Upvotes: 2 <issue_comment>username_2: I agree with some of the commentators; personal data should not be a part of the log files. GDPR is not about encryption - if you just encrypt personal data, that does not mean that you're GDPR compliant. What will happen with the personal data in your log files when you receive a "forget me" (right to erasure) request from the individual? Or "change my data" (right to rectification)? However, if you need to log personal data, an option may be to hash the information and store the hashed version in the logs. In that case, you'll be able to find specific data in the logs by calculating the hash from the search string. Related to the public key encryption part of your question, take a look: <https://aws.amazon.com/kms> or <https://azure.microsoft.com/en-us/services/key-vault/> Upvotes: 2 <issue_comment>username_3: If you're on a hosted server, follow James's answer: make a custom sink and have it write the log in a way you control. I prefer to use a system-wide type of solution and let the operating system take care of access control; segregation of responsibility can simplify things. Using a group policy, you can instruct Windows to protect data so that only authorized users can read it, consistently across your organization. Have a look at the following: <https://techcommunity.microsoft.com/t5/windows-server-essentials-and/help-secure-your-business-information-using-encrypting-file/ba-p/397386> Contact your IT support and ask them to protect the GDPR folders for you. Upvotes: 0
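A minimal sketch of the custom-sink idea from the first answer, assuming Serilog; the `encrypt` delegate (e.g. hybrid RSA+AES with the head-office public key) and the length-prefixed file format are placeholder choices, not a vetted design:

```csharp
using System;
using System.IO;
using Serilog.Core;
using Serilog.Events;

// Renders each log event, encrypts it, and appends it as a length-prefixed blob.
class EncryptingSink : ILogEventSink
{
    private readonly string _path;
    private readonly Func<string, byte[]> _encrypt; // placeholder: plug in public-key encryption here

    public EncryptingSink(string path, Func<string, byte[]> encrypt)
    {
        _path = path;
        _encrypt = encrypt;
    }

    public void Emit(LogEvent logEvent)
    {
        byte[] blob = _encrypt(logEvent.RenderMessage());
        using (var stream = new FileStream(_path, FileMode.Append, FileAccess.Write))
        {
            byte[] length = BitConverter.GetBytes(blob.Length);
            stream.Write(length, 0, length.Length); // length prefix
            stream.Write(blob, 0, blob.Length);     // ciphertext
        }
    }
}
```

It would be wired up with `new LoggerConfiguration().WriteTo.Sink(new EncryptingSink(path, encrypt))`.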
2018/03/15
791
3,294
<issue_start>username_0: Angular's [i18n](https://angular.io/guide/i18n) is great, and tools like [ng-packagr](https://www.npmjs.com/package/ng-packagr) make component library packaging extremely easy, but can they be combined? What if I want to package and distribute a component library having translatable components? Is it possible? How do I package such a library? Will translation files be shipped together with the package, or should they be defined in the main app? It'd be great if someone could point me at some doc. Thanks<issue_comment>username_1: When you generate a translation file for the main app with the CLI (with `ng xi18n`), elements with the attribute i18n in the library are imported into the translation file. You can then define the translations in the main app. Upvotes: 3 <issue_comment>username_2: There are two ways of doing so - statically providing the assets and bundling at build time, or configuring the translation path at runtime. 1. In order to statically include files at build time, you just use `setTranslations` in the code, as mentioned in the <https://github.com/ngx-translate/core> docs. Then, you can just bundle your translations with the code. 2. Better would be to let the consumer decide what to use. In order to properly be able to provide a path to translation files (assuming a standard structure, where every translation resides in a separate file with the language in its name), you can do something like the following: ``` interface TranslationsConfig { prefix: string; suffix: string; } export const TRANSLATIONS_CONFIG = new InjectionToken<TranslationsConfig>('TRANSLATIONS_CONFIG'); @NgModule({ declarations: [], imports: [ NgxTranslateModule, ], exports: [ NgxTranslateModule, ] }) export class TranslateModule { public static forRoot(config: TranslationsConfig): ModuleWithProviders { return { ngModule: TranslateModule, providers: [ { provide: TRANSLATIONS_CONFIG, useValue: config }, ...NgxTranslateModule.forRoot({ loader: { provide: TranslateLoader, useFactory: HttpLoaderFactory, deps: [HttpClient, TRANSLATIONS_CONFIG] } }).providers ], }; } } ``` This code makes sure that when building the library, AOT will be able to resolve types (hence `InjectionToken` etc.) and allows you to create a custom translations loader. Now it's only up to you to implement the loader factory or class that will use the config! This is mine (I'm using POs for my translations): ``` export function HttpLoaderFactory(http: HttpClient, config: TranslationsConfig) { return new TranslatePoHttpLoader(http, config.prefix, config.suffix); } ``` Please do remember to export every class and function that you're using in the module, as that's a prerequisite for AOT (and libraries are built with AOT by default). To use this whole solution, wherever you use your main library module or this translation module, you can just call `TranslateModule.forRoot(/* Your config here */)`. If this is not the main module exported, more on using hierarchical modules with `forRoot` here: [How to use .forRoot() within feature modules hierarchy](https://stackoverflow.com/questions/39653072/how-to-use-forroot-within-feature-modules-hierarchy) Upvotes: 0
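For completeness, a hypothetical consumer-side call matching the loader above; the `./assets/i18n/` prefix and `.po` suffix are assumptions, not part of the answer:

```typescript
@NgModule({
  imports: [
    // The library resolves its TranslateLoader from this config at runtime.
    TranslateModule.forRoot({ prefix: './assets/i18n/', suffix: '.po' }),
  ],
})
export class AppModule {}
```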
2018/03/15
1,420
6,085
<issue_start>username_0: Sorry for my English. I have three projects: IdentityServer, Ensino.Mvc, Ensino.Api. The IdentityServer project provides the main identity information and claims - the Profile, Address, Sid claims, etc. - from the IdentityServer4 library. The Ensino.Mvc project gets this information in a token and sends it to the API, so that the MVC app is granted access to the resources. The token contains all the claims provided by IdentityServer. But in the API, I need to generate other claims that are API specific, like a claim EnrollmentId that corresponds to the claim Sid from the token. And I also want to add this claim to HttpContext for future purposes. Can somebody tell me how to achieve this? I have this code in Startup.ConfigureServices: ``` // Add identity services services.AddAuthentication("Bearer") .AddIdentityServerAuthentication(options => { options.Authority = "http://localhost:5100"; options.RequireHttpsMetadata = false; options.ApiName = "beehouse.scope.ensino-api"; }); // Add mvc services services.AddMvc(); ``` In another project, without an API, just MVC, I have inherited `UserClaimsPrincipalFactory` and overridden `CreateAsync` to add additional claims. I'd like to do something like this but in the API project. Is it possible? What is the best approach to do this? EDIT: After some research, what I want to do is: authentication by IdentityServer, and authorization set in the API, based on claims and specific API database data.<issue_comment>username_1: OK so step by step: 1. You need to create an API Resource (`beehouse.scope.ensino-api` in your case, but I'll recommend you to hide such info when posting code here) in Identity Server. It should have the same name as your `options.ApiName` 2. You need to add this scope to the allowed scopes of your MVC client. Both steps are described [here](https://identityserver4.readthedocs.io/en/release/quickstarts/1_client_credentials.html#defining-the-api), but the main thing is when adding the resource you can do something like: ``` new ApiResource("beehouse.scope.ensino-api", "My test resource", new List<string>() { "claim1", "claim2" }); ``` and then in your client configuration: ``` new Client { ClientId = "client", . . // scopes that client has access to AllowedScopes = { "beehouse.scope.ensino-api" } . . } ``` This will add the claims that are associated with this resource to the token. Of course you will have to set these claims on the Identity Server level, but from what you said, you already know how to do this. Upvotes: 0 <issue_comment>username_2: In your API project you can add your own event handler to `options.JwtBearerEvents.OnTokenValidated`. This is the point where the `ClaimsPrincipal` has been set and you can add claims to the identity or add a new identity to the principal. ``` services.AddAuthentication("Bearer") .AddIdentityServerAuthentication(options => { options.Authority = "http://localhost:5100"; options.RequireHttpsMetadata = false; options.ApiName = "beehouse.scope.ensino-api"; options.JwtBearerEvents.OnTokenValidated = async (context) => { var identity = context.Principal.Identity as ClaimsIdentity; // load user specific data from database ... // add claims to the identity identity.AddClaim(new Claim("Type", "Value")); }; }); ``` Note that this will run on every request to the API, so it's best to cache the claims if you're loading info from the database. Also, Identity Server should only be responsible for identifying users, not what they do. What they do is application specific (roles, permissions etc.), so you're correct in recognising this and avoiding the logic crossover with Identity Server. Upvotes: 4 [selected_answer]<issue_comment>username_3: Making your own `AuthenticationHandler` that uses the `IdentityServerAuthenticationHandler` would be the best option. This would allow you to use DI, reject authentication, and skip the custom authentication handler when it is not needed. Example `AuthenticationHandler` that first authenticates the token and then adds more claims: ``` public class MyApiAuthenticationHandler : AuthenticationHandler<AuthenticationSchemeOptions> { protected override async Task<AuthenticateResult> HandleAuthenticateAsync() { // Pass authentication to IdentityServerAuthenticationHandler var authenticateResult = await Context.AuthenticateAsync("Bearer"); // If token authentication fails, return immediately if (!authenticateResult.Succeeded) { return authenticateResult; } // Get user ID from token var userId = authenticateResult.Principal.Claims .FirstOrDefault(c => c.Type == JwtClaimTypes.Subject)?.Value; // Do additional checks for authentication // e.g. look up the user ID in the database if (userId == null) { return AuthenticateResult.NoResult(); } // Add additional claims var identity = authenticateResult.Principal.Identity as ClaimsIdentity; identity.AddClaim(new Claim("MyClaim", "MyValue")); return authenticateResult; } } ``` Add the handler to services: ``` services.AddAuthentication() .AddIdentityServerAuthentication(options => { // ... }) .AddScheme<AuthenticationSchemeOptions, MyApiAuthenticationHandler>("MyApiScheme", null); ``` Now you can use either scheme: ``` // Authenticate token and get extra API claims [Authorize(AuthenticationSchemes = "MyApiScheme")] // Authenticate just the token [Authorize(AuthenticationSchemes = "Bearer")] ``` --- Note that `IdentityServerAuthenticationHandler` does the same thing, [using the dotnet JWT handler:](https://github.com/IdentityServer/IdentityServer4.AccessTokenValidation/blob/2.5.0/src/IdentityServer4.AccessTokenValidation/IdentityServerAuthenticationHandler.cs#L61) ``` public class IdentityServerAuthenticationHandler : AuthenticationHandler<IdentityServerAuthenticationOptions> { protected override async Task<AuthenticateResult> HandleAuthenticateAsync() { ... return await Context.AuthenticateAsync(jwtScheme); ... } } ``` Upvotes: 2
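The caching advice in the accepted answer could look roughly like the following sketch, assuming `IMemoryCache` is registered and `LoadClaimsFromDbAsync` is a hypothetical helper returning the user's extra claims:

```csharp
options.JwtBearerEvents.OnTokenValidated = async context =>
{
    // Requires Microsoft.Extensions.Caching.Memory and Microsoft.Extensions.DependencyInjection.
    var cache = context.HttpContext.RequestServices.GetRequiredService<IMemoryCache>();
    var sub = context.Principal.FindFirst("sub")?.Value;

    // Cache per-user claims briefly to avoid one database hit per request.
    var claims = await cache.GetOrCreateAsync("api-claims:" + sub, entry =>
    {
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
        return LoadClaimsFromDbAsync(sub); // hypothetical helper
    });

    (context.Principal.Identity as ClaimsIdentity)?.AddClaims(claims);
};
```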
2018/03/15
974
3,282
<issue_start>username_0: I have a JavaScript app I'm bundling with webpack. Per the docs, I'm using this command to start bundling: ``` npx webpack ``` Each time I get this output: ``` npx: installed 1 in 2.775s ``` I've verified that the webpack command exists in my `./node_modules/.bin` directory where npx is looking. Can anyone think of why it's downloading webpack every time? It can take up to 7 seconds to complete this step, which is slowing down my builds.<issue_comment>username_1: **Old answer**: `npx` doesn't reuse previously installed packages; instead it pulls down that package's dependencies every time you run it. **Update** on 06 May 2022 for newer versions of `npx`, e.g. ver. 8.3.0: Now `npx` does use previously installed packages without needing to reinstall anything! It looks like the npm team fixed this old issue some time ago; I'm not sure which version was the first to receive the fix. > > npx allows you to run an arbitrary command from an npm package > (either one installed locally, or fetched remotely), in a similar > context as running it via npm run. > > > <https://docs.npmjs.com/cli/v8/commands/npx> Upvotes: 2 <issue_comment>username_2: I agree with [laggingreflex](https://stackoverflow.com/questions/49302438/why-does-npx-install-webpack-every-time/68421887#comment85607199_49302438). It was probably not `webpack` that was being installed in your case each time. See [this issue](https://github.com/zkat/npx/issues/148); it is from around the same time. I don't think it is applicable to newer versions. --- The [other answer](https://stackoverflow.com/a/58642519/11613622) is misleading, or at least I don't quite understand its context. Quoting [the readme](https://github.com/npm/npx#description): > > Executes `<command>` either from a local `node_modules/.bin`, or from a central cache, installing any packages needed in order for `<command>` to run. > > > > > By default, `npx` will check whether `<command>` exists in `$PATH`, or in the local project binaries, and execute that. If `<command>` is not found, it will be installed prior to execution. > > > Quoting the [release blog](https://blog.npmjs.org/post/162869356040/introducing-npx-an-npm-package-runner): > > `npx` has basically no overhead if invoking an already-installed binary  —  it's clever enough to load the code for the tool directly into the current running node process! > > > > > Calling `npx <command>` when `<command>` isn't already in your `$PATH` will automatically install a package with that name from the `npm` registry for you, and invoke it. When it’s done, the installed package won't be anywhere in your globals, so you won’t have to worry about pollution in the long-term. > > > Although the above references `npx` as a separate package, which is now deprecated (`npx` is now part of the `npm` CLI), the essence is still the same, and can also be verified in the [official docs](https://docs.npmjs.com/cli/v7/commands/npx): > > This command allows you to run an arbitrary command from an `npm` package (either one installed locally, or fetched remotely). > > > > > If any requested packages are not present in the local project dependencies, then they are installed to a folder in the `npm` cache, which is added to the `PATH` environment variable in the executed process. > > > Upvotes: 1
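One plausible explanation that the thread itself never confirms: with webpack 4 the command-line interface lives in the separate `webpack-cli` package, so `npx webpack` may be fetching that one missing package on every run ("installed 1"). If that is the cause, installing both locally lets `npx` resolve the binary from `node_modules/.bin` without any download:

```
npm install --save-dev webpack webpack-cli
npx webpack   # now runs the local binary with no install step
```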
2018/03/15
642
2,620
<issue_start>username_0: I read that an abstract class can still have a vtable. But I'm confused about how many entries it would have in its vtable. For example, if my abstract class was: ``` class Circle { virtual void draw() = 0; }; ``` then how many entries would be in its vtable? Also, am I correct in saying that this abstract class has 1 entry in its vtable? Thanks for any help. ``` class Circle { virtual double a() { return 0.0; } virtual void draw() = 0; }; ```<issue_comment>username_1: Every virtual function can be overridden. The compiler has to build in some mechanism to dynamically dispatch calls to each virtual function so that the code calls the right overriding version, which depends on the actual type of the object. That mechanism is typically a vtable, and there has to be one entry for each virtual function. So the first example would have one entry, and the second would have two. Note that marking a function as pure virtual does not affect this; it still has to be dynamically dispatched. Upvotes: 4 [selected_answer]<issue_comment>username_2: First, vtables are an implementation detail. As long as you're not doing really weird things, you should ignore their existence. Second, even though all compilers use vtables to implement virtual dispatch, there are differences in how they do it. For some, a vtable entry is just a function pointer; for others, it's a pointer and an offset. Some compilers have one entry for a virtual destructor, some have two, or even more. And some add a new entry for a covariantly overridden function, while others might not. The bottom line is that you shouldn't, in general, worry about this issue. If you are interested in implementation details, you could for example read the [Itanium C++ ABI](https://itanium-cxx-abi.github.io/cxx-abi/), which is what Linux compilers generally follow. Upvotes: 2 <issue_comment>username_3: Well, vtables are an implementation detail, though a ubiquitous one. As neither `Circle`'s ctor-body nor dtor-body calls any of its functions, especially none calling virtual ones, and it is abstract due to the pure virtual function, `Circle`'s vtable is never used, if it is in fact even created. Anyway, the theoretical vtable needs at least one entry for the [`std::type_info`](http://en.cppreference.com/w/cpp/types/type_info) and any other support for `dynamic_cast`, one for each virtual function, and if the dtor is virtual, two for that (one just for the dtor, one for dtor + deallocation with `delete`). Which comes to at least 3 entries (2 virtual functions + RTTI) for your second example. Upvotes: 2
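A minimal sketch of the dispatch the answers describe; with a typical single-inheritance vtable layout (an implementation detail, as the answers stress), `Circle` below needs two virtual-function entries, and both calls in `main` go through the table:

```cpp
#include <iostream>

struct Circle {
    virtual double a() { return 0.0; }  // entry 1
    virtual void draw() = 0;            // entry 2: pure, but still dispatched
};

struct FilledCircle : Circle {
    void draw() override { std::cout << "drawing\n"; }
};

int main() {
    FilledCircle f;
    Circle* c = &f;      // calls resolve through FilledCircle's vtable
    c->draw();
    std::cout << c->a() << '\n';
}
```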
2018/03/15
516
1,310
<issue_start>username_0: Let's say I have the following data. ``` df = pd.DataFrame({'group':list('aaaabbbb'), 'val':[1,3,3,np.NaN,5,6,6,2], 'id':[1,np.NaN,np.NaN,np.NaN,np.NaN,3,np.NaN,3]}) df ``` I want to drop columns where the percentage of NaN values is over 50%. I could do it manually by running the following and then using drop. ``` df.isnull().sum()/len(df)*100 ``` However, I was wondering if there is an elegant and quick way to do this?<issue_comment>username_1: Use `mean` with [`boolean indexing`](http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing) to remove columns: ``` print (df.isnull().mean() * 100) group 0.0 id 62.5 val 12.5 dtype: float64 ``` --- ``` df1 = df.loc[:, df.isnull().mean() <= .5] print (df1) group val 0 a 1.0 1 a 3.0 2 a 3.0 3 a NaN 4 b 5.0 5 b 6.0 6 b 6.0 7 b 2.0 ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: ``` df.dropna(thresh=len(df)//2,axis=1) Out[57]: group val 0 a 1.0 1 a 3.0 2 a 3.0 3 a NaN 4 b 5.0 5 b 6.0 6 b 6.0 7 b 2.0 ``` Upvotes: 3 <issue_comment>username_3: Could use the `thresh` param of dropna. ``` df.dropna(axis=1, thresh=int(0.5*len(df))) ``` Upvotes: 3
2018/03/15
585
1,953
<issue_start>username_0: I have a database with thousands of customers. They all have unique customer IDs. From another, different system I have received a list of customer IDs and I would like to know which of these IDs cannot be found in my database. My query currently is something like this ``` select ID from table where ID in ('A1', 'A2', 'A3', ... 'A6499', 'A6500') ``` From the search condition, how do I get the *unmatched* search values into the result set? I know how I would do this if the search criteria data already existed in the same database. Are there any other options than making a temporary table and joining with it? My database will contain more values than A6500, for example A7500. I do not want to get this value in my result set.<issue_comment>username_1: Using an explicit `VALUES`, you can write more than 1000 values, which is the `IN` limit. ``` ;WITH OtherDatabaseIDs AS ( SELECT V.* FROM (VALUES (1000), /* ... as many records as you want */ (2500)) V(OtherDatabaseID) ) SELECT * FROM OtherDatabaseIDs AS T WHERE NOT EXISTS (SELECT 1 FROM table AS O WHERE T.OtherDatabaseID = O.ID) ``` Upvotes: 1 <issue_comment>username_2: You can use a [Table Value Constructor](https://learn.microsoft.com/en-us/sql/t-sql/queries/table-value-constructor-transact-sql): ``` SELECT t.ID FROM (VALUES ('A1'), ('A2'), ('A3')) AS t(ID) LEFT JOIN table ON t.ID = table.ID WHERE table.ID IS NULL ``` Or, if you get the IDs as a comma-delimited string, use [`string_split`](https://learn.microsoft.com/en-us/sql/t-sql/functions/string-split-transact-sql) (2016+) or a string-splitting UDF for lower versions - I recommend <NAME>'s [DelimitedSplit8K](http://www.sqlservercentral.com/articles/Tally+Table/72993/): ``` SELECT value FROM STRING_SPLIT('A1,A2,A3', ',') AS t LEFT JOIN table ON t.value = table.ID WHERE table.ID IS NULL ``` Upvotes: 4 [selected_answer]
2018/03/15
409
2,045
<issue_start>username_0: What is the `bearerOption.SaveToken` property used for in the configuration of JWT authentication in ASP.NET Core 2? ``` services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme) .AddJwtBearer(bearer => { bearer.TokenValidationParameters.IssuerSigningKey = signingKey as SecurityKey; bearer.TokenValidationParameters.ValidIssuer = Configuration["Jwt:Issuer"]; bearer.TokenValidationParameters.ValidAudience = Configuration["Jwt:Audience"]; bearer.TokenValidationParameters.ClockSkew = TimeSpan.Zero; bearer.TokenValidationParameters.ValidateLifetime = true; bearer.TokenValidationParameters.ValidateAudience = true; bearer.TokenValidationParameters.ValidateIssuer = true; bearer.TokenValidationParameters.ValidateIssuerSigningKey = true; bearer.TokenValidationParameters.RequireExpirationTime = true; bearer.TokenValidationParameters.RequireSignedTokens = true; // ****** bearer.SaveToken = true; // ****** }); ```<issue_comment>username_1: It is a property that defines whether the bearer token should be stored in the [AuthenticationProperties](https://learn.microsoft.com/en-us/dotnet/api/microsoft.aspnetcore.http.authentication.authenticationproperties?view=aspnetcore-2.0) after a successful authorization. Upvotes: 3 <issue_comment>username_2: You have two options: 1. Save the token as a claim. Then you can get the '**access\_token**' with the code below: `string accessToken = User.FindFirst("access_token")?.Value;` 2. Save the token in the AuthenticationProperties. Then you have to set the '**SaveToken**' option to **true** and get the access token with the code below: `string accessToken = await HttpContext.GetTokenAsync("access_token");` Upvotes: 1
2018/03/15
470
1,708
<issue_start>username_0: I am trying to create a program where one can input words, which are added to an array, until the same word is entered twice. Then the program breaks. Something like this: ``` public static void main(String[] args) { ArrayList words = new ArrayList(); Scanner reader = new Scanner(System.in); while (true) { System.out.println("Type a word: "); String word = reader.nextLine(); words.add(word); if (words.contains(word)) { System.out.println("You typed the word: " + word + " twice."); break; } } } ``` Every time I enter a single word, the program says "You have typed the word twice." I need to find a way to distinguish the items in the array from one another. Is it possible to use a for block? Thank you.<issue_comment>username_1: You're adding `word` to `words` before doing the `contains` check. ``` if (words.contains(word)) { System.out.println("You typed the word: " + word + " twice."); break; } else { words.add(word); } ``` will resolve this. You should also consider making `words` a `Set`, which has faster lookups and doesn't allow duplicates. Upvotes: 3 [selected_answer]<issue_comment>username_2: A slightly improved version would be to use a `Set`: its `add` method returns `false` when the element is already present (and it's more efficient than a list for "finding" an element - although in your case, because there is only a small number of words, it won't make any noticeable difference). ``` Set<String> words = new HashSet<>(); while (true) { System.out.println("Type a word: "); String word = reader.nextLine(); if (!words.add(word)) { System.out.println("You typed the word: " + word + " twice."); break; } } ``` Upvotes: 2
2018/03/15
1,543
5,132
<issue_start>username_0: Let's say I have a list of strings with the following values: > > ["a","a","b","a","a","a","c","c"] > > > I want to execute a LINQ query that will group them into 4 groups: > > Group 1: ["a","a"] Group 2: ["b"] Group 3: ["a","a","a"] Group 4: > ["c","c"] > > > Basically I want to create 2 different groups for the value "a" because they are not coming from the same "index sequence". Does anyone have a LINQ solution for this?<issue_comment>username_1: Calculate the "index sequence" first, then do your group. ``` private class IndexedData { public int Sequence; public string Text; } string[] data = { "a", "a", "b" /* ... */ }; // Calculate "index sequence" for each data element. List<IndexedData> indexes = new List<IndexedData>(); foreach (string s in data) { IndexedData last = indexes.LastOrDefault() ?? new IndexedData(); indexes.Add(new IndexedData { Text = s, Sequence = (last.Text == s ? last.Sequence : last.Sequence + 1) }); } // Group by "index sequence" var grouped = indexes.GroupBy(i => i.Sequence) .Select(g => g.Select(i => i.Text)); ``` Upvotes: 0 <issue_comment>username_2: This is a naive `foreach` implementation where the whole dataset ends up in memory (probably not an issue for you since you do `GroupBy`): ``` public static IEnumerable<List<string>> Split(IEnumerable<string> values) { var result = new List<List<string>>(); foreach (var value in values) { var currentGroup = result.LastOrDefault(); if (currentGroup?.FirstOrDefault()?.Equals(value) == true) { currentGroup.Add(value); } else { result.Add(new List<string> { value }); } } return result; } ``` Here comes a slightly more complicated implementation with a `foreach` and `yield return` enumerator state machine which keeps only the current group in memory - this is probably how this would be implemented on framework level: EDIT: This is apparently also the way MoreLINQ does it. ``` public static IEnumerable<List<string>> Split(IEnumerable<string> values) { var currentValue = default(string); var group = (List<string>)null; foreach (var value in values) { if (group == null) { currentValue = value; group = new List<string> { value }; } else if (currentValue.Equals(value)) { group.Add(value); } else { yield return group; currentValue = value; group = new List<string> { value }; } } if (group != null) { yield return group; } } ``` And this is a joke version using LINQ only; it is basically the same as the first one but is slightly harder to understand (especially since `Aggregate` is not the most frequently used LINQ method): ``` public static IEnumerable<List<string>> Split(IEnumerable<string> values) { return values.Aggregate( new List<List<string>>(), (lists, str) => { var currentGroup = lists.LastOrDefault(); if (currentGroup?.FirstOrDefault()?.Equals(str) == true) { currentGroup.Add(str); } else { lists.Add(new List<string> { str }); } return lists; }, lists => lists); } ``` Upvotes: 0 <issue_comment>username_3: Using an extension method based on the APL scan operator, which is like `Aggregate` but returns intermediate results paired with source values: ``` public static IEnumerable<KeyValuePair<TKey, T>> ScanPair<TKey, T>(this IEnumerable<T> src, TKey seedKey, Func<KeyValuePair<TKey, T>, T, TKey> combine) { using (var srce = src.GetEnumerator()) { if (srce.MoveNext()) { var prevkv = new KeyValuePair<TKey, T>(seedKey, srce.Current); while (srce.MoveNext()) { yield return prevkv; prevkv = new KeyValuePair<TKey, T>(combine(prevkv, srce.Current), srce.Current); } yield return prevkv; } } } ``` You can create extension methods for grouping by consistent runs: ``` public static IEnumerable<IGrouping<int, TRes>> GroupByRuns<T, TKey, TRes>(this IEnumerable<T> src, Func<T, TKey> key, Func<T, TRes> result, IEqualityComparer<TKey> cmp = null) { cmp = cmp ?? EqualityComparer<TKey>.Default; return src.ScanPair(0, (kvp, cur) => cmp.Equals(key(kvp.Value), key(cur)) ? kvp.Key : kvp.Key + 1) .GroupBy(kvp => kvp.Key, kvp => result(kvp.Value)); } public static IEnumerable<IGrouping<int, T>> GroupByRuns<T, TKey>(this IEnumerable<T> src, Func<T, TKey> key) => src.GroupByRuns(key, e => e); public static IEnumerable<IGrouping<int, T>> GroupByRuns<T>(this IEnumerable<T> src) => src.GroupByRuns(e => e, e => e); public static IEnumerable<IEnumerable<TRes>> Runs<T, TKey, TRes>(this IEnumerable<T> src, Func<T, TKey> key, Func<T, TRes> result, IEqualityComparer<TKey> cmp = null) => src.GroupByRuns(key, result).Select(g => g.Select(s => s)); public static IEnumerable<IEnumerable<T>> Runs<T, TKey>(this IEnumerable<T> src, Func<T, TKey> key) => src.Runs(key, e => e); public static IEnumerable<IEnumerable<T>> Runs<T>(this IEnumerable<T> src) => src.Runs(e => e, e => e); ``` And using the simplest version, you can get either an `IEnumerable<IGrouping<int, string>>`: ``` var ans1 = src.GroupByRuns(); ``` Or a version that dumps the `IGrouping` (and its `Key`) for an `IEnumerable<IEnumerable<string>>`: ``` var ans2 = src.Runs(); ``` Upvotes: 0 <issue_comment>username_4: You just need a key other than the items of the array ``` var x = new string[] { "a", "a", "a", "b", "a", "a", "c" }; int groupId = -1; var result = x.Select((s, i) => new { value = s, groupId = (i > 0 && x[i - 1] == s) ? groupId : ++groupId }).GroupBy(u => u.groupId); foreach (var item in result) { Console.WriteLine(item.Key); foreach (var inner in item) { Console.WriteLine(" => " + inner.value); } } ``` Here is the result: [Link](https://dotnetfiddle.net/EHyXCh) Upvotes: 3 [selected_answer]
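As a usage note: the behaviour the second answer sketches ships in the MoreLINQ package as `GroupAdjacent` (the answer itself points out the similarity), so a consumer could simply write:

```csharp
using MoreLinq;

var values = new[] { "a", "a", "b", "a", "a", "a", "c", "c" };
// Four groups: [a,a], [b], [a,a,a], [c,c]
var groups = values.GroupAdjacent(v => v);
```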
2018/03/15
413
1,585
<issue_start>username_0: I'm trying to access a class from code-behind in JavaScript but I am getting an error saying it doesn't exist in this context. This approach has worked for me before. Here is my code: Code-behind: ``` public class ReviewData { public int NumberOfReviews { get; set; } public double AvgReviewScore { get; set; } } ``` This variable has been populated further down. Here is my JavaScript: ``` var reviewData = "<%=ReviewData%>" ```<issue_comment>username_1: You'll have to use JSON.NET to serialize your class into a JSON string. Inside your class, create a method called "Serialize()" that returns a string and serializes itself. In the JavaScript, you could then write something like: `var reviewData = '<%= ReviewData.Serialize() %>';` From there, you may have to use JavaScript to parse it back into an object from a string... But you can't pass an actual C# class to JavaScript. The best you can do is JSON. Also, this won't work unless you have an object reference of ReviewData in your code-behind. Upvotes: 2 <issue_comment>username_2: username_1's answer is correct if you need to use a list of objects as an array. But if you want to access a single property of a class, you can do this. Declare it as a public variable ``` public ReviewData reviewData; protected void Page_Load(object sender, EventArgs e) { reviewData = new ReviewData(); reviewData.NumberOfReviews = 5; } ``` Now you can access the properties of the variable `reviewData` in the .aspx. ``` var reviewData = '<%=reviewData.NumberOfReviews%>'; ``` Upvotes: 1
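A minimal sketch of the first answer's suggestion, assuming Json.NET (Newtonsoft.Json) is available and the page exposes a populated field as in the second answer:

```csharp
using Newtonsoft.Json;

public class ReviewData
{
    public int NumberOfReviews { get; set; }
    public double AvgReviewScore { get; set; }

    // Serialize this instance so the markup can hand it to JavaScript as JSON.
    public string Serialize() => JsonConvert.SerializeObject(this);
}
```

The markup would then read something like `var reviewData = JSON.parse('<%= reviewData.Serialize() %>');`.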
2018/03/15
695
2,834
<issue_start>username_0: I have the following architecture: ``` public interface IStatus { string StatusName {get;} } public class A: IStatus { public string StatusName {get {return "A Status";}} } public class B: IStatus { public string StatusName {get {return "B Status";}} } public class C: IStatus { public string StatusName {get {return "C Status";}} } ``` I tried the following, and I can see all the relevant classes in debug, but I am not sure how to get the StatusName property for each one of the classes. I presume I need to create an instance of each class somehow and ask for the StatusName property. ``` public class Test { public void Run() { var type = typeof(IStatus); var types = AppDomain.CurrentDomain.GetAssemblies() .SelectMany(s => s.GetTypes()) .Where(p => type.IsAssignableFrom(p)); foreach (Type mytype in types) { var i = mytype.GetMember("StatusName"); if (i.Count() != 0) { var n = i.GetValue(0); } } } } ``` The expected result I need is a list of strings with the values: "A Status", "B Status", "C Status"<issue_comment>username_1: You can create a default instance of each of your types and then get the value of each property; since they just have `return "C Status"`, that should not be a problem ``` var instance = Activator.CreateInstance(myType); var value = instance.GetType().GetProperty("StatusName").GetValue(instance); ``` Upvotes: 0 <issue_comment>username_2: Based on the comments it sounds like the statuses are per type and unchanging, and may be required independently of any instances. This makes them a poor choice for interfaces and a good fit for "attributes": ``` using System; using System.Linq; [AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = false)] sealed class StatusAttribute : Attribute { public StatusAttribute(string name) { Name = name; } public string Name { get; } } [Status("A Status")] public class A { /* ... */ } [Status("B Status")] public class B { /* ... */ } [Status("C Status")] public class C { /* ... */ } static class P { static void Main() { var attrib = typeof(StatusAttribute); var pairs = from s in AppDomain.CurrentDomain.GetAssemblies() from p in s.GetTypes() where p.IsDefined(attrib, false) select new { Type = p, Status = ((StatusAttribute)Attribute.GetCustomAttribute( p, attrib)).Name }; foreach(var pair in pairs) { Console.WriteLine($"{pair.Type.Name}: {pair.Status}"); } } } ``` Upvotes: 2 [selected_answer]
2018/03/15
4,390
14,572
<issue_start>username_0: Note: This is not an issue with Visual Studio, but rather with incompatible dll versions. The steps below replicate the problem since running in Visual Studio in debug mode breaks on exceptions being thrown. If you just run, the thrown exception is handled elsewhere and the program works fine. But since I spend a lot of time in debug mode, I would prefer to fix this problem. When debugging, I want to be able to step into modules I have added to my Virtual Environment in Visual Studio. I get a 'library not found' error that I am not able to fix. Here are the steps: 1. In Visual Studio create a new Python Application. 2. Create a virtual environment for that application (Python 3.6 64 bit). 3. `pip install twilio` into your virtual environment. You get the following output. ... ``` ----- Installing 'twilio' ----- Collecting twilio Using cached twilio-6.10.5-py2.py3-none-any.whl Collecting pytz (from twilio) Using cached pytz-2018.3-py2.py3-none-any.whl Collecting six (from twilio) Using cached six-1.11.0-py2.py3-none-any.whl Collecting PyJWT>=1.4.2 (from twilio) Using cached PyJWT-1.6.0-py2.py3-none-any.whl Collecting requests>=2.0.0; python_version >= "3.0" (from twilio) Using cached requests-2.18.4-py2.py3-none-any.whl Collecting pysocks; python_version >= "3.0" (from twilio) Using cached PySocks-1.6.8.tar.gz Collecting certifi>=2017.4.17 (from requests>=2.0.0; python_version >= "3.0"->twilio) Using cached certifi-2018.1.18-py2.py3-none-any.whl Collecting chardet<3.1.0,>=3.0.2 (from requests>=2.0.0; python_version >= "3.0"->twilio) Using cached chardet-3.0.4-py2.py3-none-any.whl Collecting urllib3<1.23,>=1.21.1 (from requests>=2.0.0; python_version >= "3.0"->twilio) Using cached urllib3-1.22-py2.py3-none-any.whl Collecting idna<2.7,>=2.5 (from requests>=2.0.0; python_version >= "3.0"->twilio) Using cached idna-2.6-py2.py3-none-any.whl Installing collected packages: pytz, six, PyJWT, certifi, chardet, urllib3, idna, requests, pysocks, twilio Running setup.py install for pysocks: started Running setup.py install for pysocks: finished with status 'done' Successfully installed PyJWT-1.6.0 certifi-2018.1.18 chardet-3.0.4 idna-2.6 pysocks-1.6.8 pytz-2018.3 requests-2.18.4 six-1.11.0 twilio-6.10.5 urllib3-1.22 ----- Successfully installed 'twilio' ----- ``` 4. Add the following line to the top of your .py file: `from twilio.rest import Client` 5. In Visual Studio go to tools > options > python > debugging. Make sure 'Enable debugging of Python standard library' is checked 6. Run the application. You get the following error: > > ModuleNotFoundError: No module named 'OpenSSL' > > > 7. `pip install pyopenssl` You get the following output: ... 
``` ----- Installing 'pyopenssl' ----- Collecting pyopenssl Using cached pyOpenSSL-17.5.0-py2.py3-none-any.whl Requirement already satisfied: six>=1.5.2 in c:\users\x\source\repos\pythonapplication9\pythonapplication9\env\lib\site-packages (from pyopenssl) Collecting cryptography>=2.1.4 (from pyopenssl) Using cached cryptography-2.1.4-cp36-cp36m-win_amd64.whl Requirement already satisfied: idna>=2.1 in c:\users\x\source\repos\pythonapplication9\pythonapplication9\env\lib\site-packages (from cryptography>=2.1.4->pyopenssl) Collecting cffi>=1.7; platform_python_implementation != "PyPy" (from cryptography>=2.1.4->pyopenssl) Using cached cffi-1.11.5-cp36-cp36m-win_amd64.whl Collecting asn1crypto>=0.21.0 (from cryptography>=2.1.4->pyopenssl) Using cached asn1crypto-0.24.0-py2.py3-none-any.whl Collecting pycparser (from cffi>=1.7; platform_python_implementation != "PyPy"->cryptography>=2.1.4->pyopenssl) Using cached pycparser-2.18.tar.gz Installing collected packages: pycparser, cffi, asn1crypto, cryptography, pyopenssl Running setup.py install for pycparser: started Running setup.py install for pycparser: finished with status 'done' Successfully installed asn1crypto-0.24.0 cffi-1.11.5 cryptography-2.1.4 pycparser-2.18 pyopenssl-17.5.0 ----- Successfully installed 'pyopenssl' ----- ``` 8. Run the application. You get the following error: `asn1crypto._ffi.LibraryNotFoundError: The library libcrypto could not be found` The error is thrown in the file named `_big_num_ctypes.py` in `asn1crypto`. The code line where this is thrown is: ``` libcrypto_path = find_library(b'crypto' if sys.version_info < (3,) else 'crypto') if not libcrypto_path: raise LibraryNotFoundError('The library libcrypto could not be found') ``` **Update:** I was asked to provide the full backtrace. I modified the code this way to print it: ``` import unittest import traceback class Test_test1(unittest.TestCase): def test_A(self): try: from twilio.rest import Client except Exception as e: print('foo') foo = traceback.extract_stack() traceback.print_exc() if __name__ == '__main__': unittest.main() ``` As before, the `from twilio.rest import Client` line throws the exception, but the exception is not caught and the lines in the 'except' clause are never executed. **Update 2:** I somehow had gotten this to work following @Prateek and @user8212173. But now it is not working again. As both suggested, the problem is that crypto.dll is not there. So I went through the steps below to add it, with no success: 1. I installed Win64 OpenSSL v1.1.0j from <https://slproweb.com/products/Win32OpenSSL.html> (pointed to from <https://wiki.openssl.org/index.php/Binaries>). It does not contain crypto.dll. 2. I then installed crypto.dll from <http://www.dlldownloader.com/crypto-dll/> (as @user8212173 suggested) (there is only a 32-bit version) and followed the instructions. I then got a new error message "ImportError: DLL load failed: %1 is not a valid Win32 application", which means that the crypto.dll I installed has a version conflict (I am running 64-bit Python on a 64-bit computer). I remember installing it from [Unofficial Windows Binaries for Python Extension Packages](https://www.lfd.uci.edu/~gohlke/pythonlibs/) but I can't find it there. So where do I get a working 64-bit version of crypto.dll?<issue_comment>username_1: I searched a lot and found that you are missing the `crypto.dll` file. Your code is looking for this DLL file and is unable to find it. 
Please note this won't be installed by `pip install crypto`, as that is a Python library and the code is looking for a DLL file. `ctypes.util.find_library` searches for the DLL file using the Windows PATH environment variable. [Reference : find\_library() in ctypes](https://stackoverflow.com/questions/23804438/find-library-in-ctypes#23805306) To verify, I checked. ``` find_library('l2gpstore') >>'C:\\WINDOWS\\system32\\l2gpstore.dll' find_library('java') >>'C:\\Program Files\\Java\\jdk-9.0.4\\bin\\java.dll' ``` Furthermore, you should install `OpenSSL` with the `libcrypto` module from here: [OpenSSL](https://github.com/openssl/openssl) OpenSSL installation [instructions](https://www.openssl.org/source/) -------------------------------------------------------------------- > > The master sources are maintained in our git repository, which is > accessible over the network and cloned on GitHub, at > <https://github.com/openssl/openssl>. Bugs and pull patches (issues and > pull requests) should be filed on the GitHub repo. Please familiarize > yourself with the license. > > > libcrypto with respect to OpenSSL --------------------------------- [reference : GitHub](https://github.com/openssl/openssl) > > libcrypto (with platform specific naming): > Provides general cryptographic and X.509 support needed by SSL/TLS but > not logically part of it. > > > Once you install the binaries and check that `crypto.dll` is available in one of the paths in your environment variables, this issue should be resolved. If not, add it to the PATH variable and check. Update: ======= Update since the question has been updated and the issue has recurred. There are architectural changes in OpenSSL 1.1.0 as compared to 1.0.2 > > September 13, 2018 - OpenSSL 1.1.0 and later are quite different from previous releases. Users should install BOTH the 1.0.2 series (LTS) and the 1.1.1 (LTS) series for maximum application compatibility. Developers need to recompile their software to support 1.1.1. See the official OpenSSL release strategy document for more details. – Prateek yesterday > > > If you open 1.0.2 on GitHub you can see the `crypto.h` file; the same file is missing in the latest version. Also, there is a change in OpenSSL's DLL names: they [renamed `libeay32` to `libcrypto`](https://github.com/arvidn/libtorrent/issues/1931) You need to post the code which makes use of the `asn1crypto` library. There is no code that explicitly uses asn1crypto in your post. So, I was not able to reproduce your issue using `pipenv`. Make sure you are using updated libraries too. I *would not recommend* downloading DLL files from an unreliable source like [DLLdownloader](http://www.dlldownloader.com/) --- If you are having issues with the latest versions of `OpenSSL` and `asn1crypto`, it's better to downgrade `OpenSSL` to `1.0.2`; I think that would work, considering it ships with the `crypto.h` file. Good luck! Upvotes: 4 [selected_answer]<issue_comment>username_2: I tried to reproduce the error on my computer and was successful when I ran the "error-producing" file `_big_num_ctypes.py`. Although I do not have Visual Studio, the error stems from the missing `crypto.dll` file. We will deduce this step-by-step. Let's first examine the error-causing code snippet in the file `_big_num_ctypes.py`: ``` #imports from ctypes.util import find_library . . 
from .._ffi import LibraryNotFoundError, FFIEngineError try: # On Python 2, the unicode string here may raise a UnicodeDecodeError as it # tries to join a bytestring path to the unicode name "crypto" libcrypto_path = find_library(b'crypto' if sys.version_info < (3,) else 'crypto') if not libcrypto_path: raise LibraryNotFoundError('The library libcrypto could not be found') . . except (AttributeError): raise FFIEngineError('Error initializing ctypes') ``` I ran the file: ``` C:\>cd "C:\ProgramData\Anaconda3\Lib\site-packages\asn1crypto\_perf" C:\ProgramData\Anaconda3\Lib\site-packages\asn1crypto\_perf>python _big_num_ctypes.py ``` and got a `Traceback` for the library import: ``` Traceback (most recent call last): File "_big_num_ctypes.py", line 27, in from .._ffi import LibraryNotFoundError, FFIEngineError ValueError: attempted relative import beyond top-level package ``` So, I changed the import path for `._ffi` to: ``` from asn1crypto._ffi import LibraryNotFoundError, FFIEngineError ``` On the second run, the missing libcrypto library error appeared: ``` asn1crypto._ffi.LibraryNotFoundError: The library libcrypto could not be found ``` The exception is raised when the *DLL* library named **crypto** cannot be found at C:\Windows\System32 and/or SysWOW64 (for 64-bit) ``` libcrypto_path = find_library(b'crypto' if sys.version_info < (3,) else 'crypto') ``` The purpose of `find_library` is to find a specified library and return a pathname. The behavior of this method varies with OS, as described in the [docs](https://docs.python.org/3/library/ctypes.html). If this method cannot find any packages, it returns `None`. ``` >>> from ctypes.util import find_library >>> print(find_library("c")) None ``` In our case, the search is for `crypto.dll`, and I couldn't find this file on my computer. So, I downloaded and installed it exactly according to the instructions [here](http://www.dlldownloader.com/crypto-dll/). When I checked again: ``` >>> find_library('crypto') 'C:\\windows\\system32\\crypto.dll' ``` Now I ran `python _big_num_ctypes.py` again and got a different `Traceback`: ``` Traceback (most recent call last): File "_big_num_ctypes.py", line 37, in libcrypto = CDLL(libcrypto_path) File "C:\ProgramData\Anaconda3\lib\ctypes\__init__.py", line 348, in __init__ self._handle = _dlopen(self._name, mode) OSError: [WinError 193] %1 is not a valid Win32 application ``` A further investigation into the above error revealed that if I'm using a 32-bit DLL with 64-bit Python, or vice versa, then I may get such errors, as explained [here](https://stackoverflow.com/questions/19849077/error-loading-dll-in-python-not-a-valid-win32-application). So, I installed Python 3.6 32-bit and tried again with `py -3.6-32 _big_num_ctypes.py`. I also installed all the required packages, but this error persisted. Could it be possible that we may require 32-bit binaries for the `Crypto` package? This [answer](https://www.pycryptodome.org/en/latest/src/installation.html#windows-from-sources-python-3-5-and-newer) and [this](https://stackoverflow.com/questions/19019720/importerror-dll-load-failed-1-is-not-a-valid-win32-application-but-the-dlls) give more information. I realized that [Pycryptodome](https://www.pycryptodome.org/en/latest/src/installation.html#windows-from-sources-python-3-5-and-newer) is a regularly maintained package and is preferred over the old `Crypto` package, but can still be installed under `Crypto`. 
Another point to note is that one of the requirements for this package is MS Visual Studio 2015 (Community Edition) with the C/C++ compilers and the redistributable only. It could be that some C++ compiler files or MS Visual Studio files are missing at present and causing these issues. If you install all the above prerequisites, the `crypto.dll` file and the `Pycryptodome` package, I believe this error will be resolved. You have already installed the other required packages, `OpenSSL` & `Twilio`. Unfortunately, I am restricted from installing MS Visual Studio on my computer, so I couldn't test this further. I also ran the `unittest` code and it ran successfully for me: ``` #Output . ---------------------------------------------------------------------- Ran 1 test in 0.771s OK ``` Upvotes: 2 <issue_comment>username_3: Here is my solution for Windows 10 and Visual Studio Community 2017: open the file C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36\_64\Lib\site-packages\asn1crypto\_perf\_big\_num\_ctypes.py and then change the code from: ``` libcrypto_path = find_library(b'crypto' if sys.version_info < (3,) else 'crypto') ``` to: ``` libcrypto_path = find_library(b'crypto' if sys.version_info < (3,) else 'libcrypto') ``` and the error message is gone. The libcrypto.dll is already under C:\Windows\System32\ Upvotes: 0
2018/03/15
629
2,214
<issue_start>username_0: I'm trying to run/load an SQL file into a MySQL database using this Go statement, but it is not working: ``` exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}", "<", file abs path ) ``` But when I use the following command in the Windows command prompt, it works perfectly. ``` mysql -u {username} -p{db password} {db name} < {file abs path} ``` So what is the problem?<issue_comment>username_1: Go's exec.Command runs the first argument as a program with the rest of the arguments as parameters. The '<' is interpreted as a literal argument. e.g. `exec.Command("cat", "<", "abc")` is the following command in bash: `cat \< abc`. To do what you want you have two options. * Run (ba)sh and the command as an argument: `exec.Command("bash", "-c", "mysql ... < full/path")` * Pipe the content of the file in manually. See <https://stackoverflow.com/a/36383984/8751302> for details. The problem with the bash version is that it is not portable between different operating systems. It won't work on Windows. Upvotes: 1 <issue_comment>username_2: Go's [`os.exec` package](https://golang.org/pkg/os/exec/) does not use the shell and does not support redirection: > > Unlike the "system" library call from C and other languages, the os/exec package intentionally does not invoke the system shell and does not expand any glob patterns or handle other expansions, pipelines, or redirections typically done by shells. > > > You can call the shell explicitly to pass arguments to it: ``` cmd := exec.Command("/bin/sh", "-c", yourBashCommand) ``` Depending on what you're doing, it may be helpful to write a short bash script and call it from Go. Upvotes: 1 <issue_comment>username_3: As others have answered, you can't use the `<` redirection operator because `exec` doesn't use the shell. But you don't have to redirect input to read an SQL file. You can pass arguments to the MySQL client to use its `source` command. ``` exec.Command("mysql", "-u", "{username}", "-p{db password}", "{db name}", "-e", "source {file abs path}" ) ``` The `source` command is a builtin of the MySQL client. See <https://dev.mysql.com/doc/refman/5.7/en/mysql-commands.html> Upvotes: 3
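A portable alternative hinted at by the second answer: wire the file to the command's stdin instead of using shell redirection (credentials and path below are placeholders):

```go
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open(`C:\path\to\dump.sql`) // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cmd := exec.Command("mysql", "-u", "user", "-psecret", "dbname")
	cmd.Stdin = f // plays the role of "< file", no shell needed
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("%v: %s", err, out)
	}
}
```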
2018/03/15
684
2,653
<issue_start>username_0: My Android Studio started working with Kotlin and not Java after the last update. I have a university project that I need to deliver soon, and we only learned coding with Java. Can anyone please help me find a way to return it to Java? Thanks a lot<issue_comment>username_1: If I understand your problem correctly, when you create a new project you have to uncheck the "Include Kotlin support" checkbox at the bottom of the "Create new project" window. And remember that you can create Java classes in a Kotlin project. Upvotes: 3 [selected_answer]<issue_comment>username_2: If you would like to disable Kotlin in your project you should delete `MainActivity.kt` and remove the following lines build.gradle(Project) ``` //classpath "org.jetbrains.kotlin:kotlin-gradle-plugin:" ``` build.gradle(App) ``` //apply plugin: 'kotlin-android' //apply plugin: 'kotlin-android-extensions' dependencies { ... //implementation"org.jetbrains.kotlin:kotlin-stdlib-jre7:" } ``` Upvotes: 2 <issue_comment>username_3: I understand the problem; it's simple: we are just missing the option to change the language while creating a new app. Just change the language from Kotlin to Java. I am sharing the steps for the same (currently using Android Studio Chipmunk) **Step 1:** Select New project from the menu [![New App](https://i.stack.imgur.com/auBEg.jpg)](https://i.stack.imgur.com/auBEg.jpg) **Step 2:** Select the main Activity type [![Select activity type](https://i.stack.imgur.com/Wm32Y.jpg)](https://i.stack.imgur.com/Wm32Y.jpg) **Step 3:** Find the option to change the language [![update lang to JAVA](https://i.stack.imgur.com/dC5RO.jpg)](https://i.stack.imgur.com/dC5RO.jpg) Upvotes: -1 <issue_comment>username_4: In the new version of Android Studio, instead of choosing "Empty Activity" as usual, choose "Empty Views Activity". Android Studio removed Java support in "Empty Activity". Other answers are irrelevant because they explain how to choose Java for the previous versions, where it was obvious anyway. [![enter image description here](https://i.stack.imgur.com/t7gpy.png)](https://i.stack.imgur.com/t7gpy.png) side personal notes: This behavior is not typical of Google and is very surprising and aggressive. It becomes dangerous to upgrade versions these days. People love Java and are used to it. It is great that Kotlin support was added, but Java is a powerful, amazing programming language that will stay forever. I also use Java for server-side and desktop applications, so it is convenient to use the same programming language on multiple platforms. Android Studio doesn't even explain the change clearly. Upvotes: 4
2018/03/15
2,255
6,571
<issue_start>username_0: I have a simple view created in VS 2017. Here it is: ``` CREATE VIEW [dbo].[ApplicantStat] AS SELECT ISNULL(CONVERT(VARCHAR(50), NEWID()), '') AS ID, ISNULL(AVG(ApplicationTime), 0) AS 'AvgApplicationTime', ISNULL(AVG(ResponseTime), 0) AS 'AvgResponseTime', ISNULL(CAST(COUNT(CASE WHEN [IsAccepted] = 1 THEN 1 END) / COUNT(CASE WHEN [IsValid] = 1 THEN 1 END) AS float), 0) AS 'PctAccepted' FROM [Application] WHERE CreatedOn BETWEEN CAST(GETDATE()-30 AS date) AND CAST(GETDATE()-1 AS date) ``` As you can see, it gets data between 2 dates and does some simple aggregation. The idea of the cast is that I want to ignore the time and get everything for the date range regardless - so as of today, 15th Mar, I would it to fetch everything for 14th March 00:00:00 - 23:59:59 and 29 days previous. This does not happen - it picks up 3 rows (13th) - it should pick up all 5 rows. And yes, my system date is currently 15/03/2018 14:44 (UK time). Here's, the table and data: ``` CREATE TABLE [dbo].[Application] ( [Id] INT NOT NULL, [ApplicantId] INT NOT NULL, [LoanAmount] INT NOT NULL, [LoanTerm] SMALLINT NOT NULL, [EmailAddress] VARCHAR (254) NOT NULL, [MobilePhone] VARCHAR (11) NOT NULL, [House] VARCHAR (25) NOT NULL, [Street] VARCHAR (50) NOT NULL, [TownCity] VARCHAR (50) NOT NULL, [Postcode] VARCHAR (7) NOT NULL, [IpAddress] VARCHAR (39) NOT NULL, [IsValid] BIT NOT NULL, [IsAccepted] BIT NOT NULL, [Commission] DECIMAL (9, 2) NOT NULL, [Processors] VARCHAR (500) NOT NULL, [ResponseTime] SMALLINT NOT NULL, [ApplicationTime] SMALLINT NOT NULL, [CreatedOn] DATETIME NOT NULL, PRIMARY KEY CLUSTERED ([Id] ASC) ); INSERT INTO [dbo].[Application] ([Id], [ApplicantId], [LoanAmount], [LoanTerm], [EmailAddress], [MobilePhone], [House], [Street], [TownCity], [Postcode], [IpAddress], [IsValid], [IsAccepted], [Commission], [Processors], [ResponseTime], [ApplicationTime], [CreatedOn]) VALUES (1, 1, 300, 3, N'<EMAIL>', N'07957000000', N'1', N'Acacia Avenue', N'Suburbia', N'SB1 2RB', N'192.168.3.11', 1, 1, CAST(3.20 AS Decimal(9, 2)), N'1,2,3,4,5', 90, 600, N'2018-03-13 08:00:00') INSERT INTO [dbo].[Application] ([Id], [ApplicantId], [LoanAmount], [LoanTerm], [EmailAddress], [MobilePhone], [House], [Street], [TownCity], [Postcode], [IpAddress], [IsValid], [IsAccepted], [Commission], [Processors], [ResponseTime], [ApplicationTime], [CreatedOn]) VALUES (2, 2, 500, 12, N'<EMAIL>', N'0', N'1', N'a', N's', N's', N'1', 0, 1, CAST(5.00 AS Decimal(9, 2)), N'1', 60, 300, N'2018-03-14 16:00:00') INSERT INTO [dbo].[Application] ([Id], [ApplicantId], [LoanAmount], [LoanTerm], [EmailAddress], [MobilePhone], [House], [Street], [TownCity], [Postcode], [IpAddress], [IsValid], [IsAccepted], [Commission], [Processors], [ResponseTime], [ApplicationTime], [CreatedOn]) VALUES (3, 3, 1000, 6, N'<EMAIL>', N'0', N'1', N'a', N's', N's', N'1', 1, 1, CAST(7.00 AS Decimal(9, 2)), N'1', 75, 360, N'2018-03-13 10:00:00') INSERT INTO [dbo].[Application] ([Id], [ApplicantId], [LoanAmount], [LoanTerm], [EmailAddress], [MobilePhone], [House], [Street], [TownCity], [Postcode], [IpAddress], [IsValid], [IsAccepted], [Commission], [Processors], [ResponseTime], [ApplicationTime], [CreatedOn]) VALUES (4, 4, 2000, 24, N'<EMAIL>', N'0', N'1', N'a', N's', N's', N'1', 1, 1, CAST(20.00 AS Decimal(9, 2)), N'1', 30, 365, N'2018-03-14 11:00:00') INSERT INTO [dbo].[Application] ([Id], [ApplicantId], [LoanAmount], [LoanTerm], [EmailAddress], [MobilePhone], [House], [Street], [TownCity], [Postcode], [IpAddress], [IsValid], [IsAccepted], [Commission], 
[Processors], [ResponseTime], [ApplicationTime], [CreatedOn]) VALUES (5, 5, 3000, 18, N'<EMAIL>', N'0', N'1', N'a', N's', N's', N'1', 1, 1, CAST(40.00 AS Decimal(9, 2)), N'1', 45, 330, N'2018-03-13 12:00:00') ```<issue_comment>username_1: Try this out: ``` WHERE CreatedOn >= CAST(GETDATE()-30 AS date) AND CreatedOn < CAST(GETDATE() AS date) ``` The problem is your converting to date the day before today. Upvotes: 3 [selected_answer]<issue_comment>username_2: You can `CAST` your `CreatedOn` field as `DATE` to remove the time portion, which is getting in your way here... Perhaps ``` WHERE CAST(CreatedOn AS DATE) BETWEEN CAST(GETDATE()-30 AS date) AND CAST(GETDATE()-1 AS date) ``` **BUT** - `CAST`ing a field in the WHERE expression *may* make it non SARGable. See [here](https://www.sqlinthewild.co.za/index.php/2016/09/13/what-is-a-sargable-predicate/). So avoid this solution for large or production environments unless you know the expression will be SARGable. Use only as a test to refine your logic and options. (Even if there is no explicit index on CreatedOn - it *may* still suffer as SQL builds its own indexes all the time if no index exists explicitly. Always worth confirming whether it is SARGable so you know for sure.) To see what is happening - view your values in your SELECT - just to get an idea of what is working For example: ``` SELECT TOP 1000 CreatedOn ,CAST(GETDATE()-30 AS date) ,CAST(GETDATE()-1 AS date) FROM [Application] ``` Or see the other options for removing time values from datatime fields [here](https://stackoverflow.com/questions/1177449/best-approach-to-remove-time-part-of-datetime-in-sql-server) as you may want to coerce or round the time value instead Upvotes: 2 <issue_comment>username_3: Instead of trying to ignore the time value, just make sure that your search terms are accurate for it. Also, don't blindly add things like `ISNULL` to every column. Spend a few seconds thinking if it's relevant or not. `NEWID()` for example, is never going to return a `NULL` value to you. Adding that kind of code is poor programming which will lead to less legible code. Here's how I would write it to account for the time portions: ``` CREATE VIEW dbo.ApplicantStat AS SELECT CONVERT(VARCHAR(50), NEWID()) AS ID, COALESCE(AVG(ApplicationTime), 0) AS AvgApplicationTime, COALESCE(AVG(ResponseTime), 0) AS AvgResponseTime, COALESCE(CAST(COUNT(CASE WHEN [IsAccepted] = 1 THEN 1 END) / COUNT(CASE WHEN [IsValid] = 1 THEN 1 END) AS float), 0) AS PctAccepted FROM dbo.Application WHERE CreatedOn >= DATEADD(DAY, -30, CAST(GETDATE() AS DATE)) AND CreatedOn < CAST(GETDATE() AS DATE) ``` Upvotes: 2
2018/03/15
1,146
3,974
<issue_start>username_0: this is my project directory structure: [![enter image description here](https://i.stack.imgur.com/vAquG.png)](https://i.stack.imgur.com/vAquG.png) And this is my pom.xml: ``` 4.0.0 borsa borsa 1 UTF-8 UTF-8 aggiornamento maven-compiler-plugin 3.7.0 1.8 1.8 org.apache.maven.plugins maven-dependency-plugin copy-dependencies prepare-package copy-dependencies ${project.build.directory}/dist/libs org.apache.maven.plugins maven-jar-plugin 2.6 ${basedir}/target/dist true libs/ borsa.Application org.apache.maven.plugins maven-surefire-plugin 2.20.1 true org.apache.poi poi 3.17 org.jsoup jsoup 1.11.2 junit junit 4.12 org.apache.poi poi-ooxml 3.17 org.springframework spring-context 5.0.2.RELEASE ethereal ethereal 1 common-parser common-parser 1 common-util common-util 1 org.apache.logging.log4j log4j-api 2.10.0 org.apache.logging.log4j log4j-core 2.10.0 ``` **If I run maven package or maven install resources are not copied to target folder**, even if in the console appears the output: ``` [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ borsa --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 2 resources ``` However if I add in the pom the manual configuration of the resources plugin: ``` maven-resources-plugin 3.0.2 process-resources process-resources resources ${project.build.directory}/dist/resources ${basedir}/src/main/resources \* ``` resources get copied as expected. [![enter image description here](https://i.stack.imgur.com/BHauj.png)](https://i.stack.imgur.com/BHauj.png) So the question is: **shouldn't the resources-plugin with goal resources run automatically?** **Shouldn't the scr/main/resources folder be scanned automatically for resources and resources copied to target folder?** **Why does it work if I explicitly configure the resources-plugin and why doesn't it work if I don't?**<issue_comment>username_1: If you want files in `target/dist/resources` as shown in the output, you must configure the output directory in the plugin. The default output directory is `target/classes`. Upvotes: 1 <issue_comment>username_2: Found the answer. The default behaviour of the maven-resources-plugin is to package the resources ***inside the jar***. If you open the jar with a zip program you will see the resources inside it. To change the behaviour of the default resources plugin (maven-resources-plugin ) in the default phase (process-resources) with the default goal (resources) you must use the tag as child of : **EDIT:** the default behaviour of the maven-resources-plugin is to package the resources inside the jar ***alongside the target/classes folder***. ``` src/main/resources ${project.build.directory}/dist/resources ``` In this way resources will go outside the jar in the specified folder. If you want to send some resources in the jar and some outside: ``` src/main/resources ${project.build.directory}/dist/resources this\_file\_will\_go\_outside\_the\_ jar\_in\_the\_folder.txt this\_too.txt src/main/resources this\_file\_will\_go\_outside\_the\_ jar\_in\_the\_folder.txt this\_too.txt ``` **EDIT:** note that with the first option resources will still be copied into the target/classes folder (and in the jar/artifact). Excluded resources will not be present in the classpath so you will not be able to access them by running the code from Eclipse. You will only be able to access them from an executable jar with the targetPath folder specified as classpath in the manifest file. 
A better option is to configure the maven-jar-plugin and the maven-resources-plugin. See [this](https://stackoverflow.com/questions/49351806/eclipse-adds-an-exclusion-pattern-of-to-src-main-resources-how-to-read-a-res) and [this](https://stackoverflow.com/questions/49319549/null-results-when-reading-a-resource) answer. Upvotes: 4 [selected_answer]
2018/03/15
1,033
2,852
<issue_start>username_0: My H1 is not centering as I increase the font size; with a smaller font (30px) my H1 positions correctly.

**[I want my H1 to be in the center of my banner div and look like this.](https://i.stack.imgur.com/4LhkV.png)**

Currently my H1 sits too low from the center, and text-align:center; is not solving this issue. What do I need to add or remove to make my H1 center, or do I need to add or remove something in my banner?

[Jfiddle](https://jsfiddle.net/cu4h3j8t/4/)

```css
body {
  margin: 0;
  padding: 0;
  background-color: #ffffff;
  font-family: sans-serif;
}

.banner {
  width: 100%;
  background-color: #8F3144;
  height: 300px;
  top: 0px;
}

/* LOGO START*/

.banner>img {
  float: left;
  height: 103px;
  width: 140px;
  color: black;
}

/* LOGO END */

.mainh1 {
  font-weight: bolder;
  text-align: center;
  padding-top: 80px;
  color: #ffffff;
  font-size: 50px;
}

.nav {
  list-style: none;
  text-align: right;
  margin: 0;
}

.nav>li {
  display: inline-block;
  font-size: 20px;
  margin-right: 20px;
  padding-top: 20px;
  font-weight: bolder;
}

.nav>li>a {
  text-decoration: none;
  color: #ffffff;
}

.nav>li>a:hover {
  opacity: .5;
}
```

```html
![logo](logo.png)

* [Meets](#)
* Gallery

Harrison TEST TEST TEST TEST
============================
```<issue_comment>username_1: Add the following two properties to .mainh1:

```
margin-top:auto;
margin-bottom:auto;
```

This should solve the problem and keep it always centered, although it goes off screen with fonts that big. Also you can try placing it in a tag Upvotes: -1 <issue_comment>username_2: This is because of the size and position of the logo. You could use `absolute` positioning on the `h1` with `width: 100%` and remove the `padding`. See the snippet:

```css
body {
  margin: 0;
  padding: 0;
  background-color: #ffffff;
  font-family: sans-serif;
}

.banner {
  width: 100%;
  background-color: #8F3144;
  height: 300px;
  top: 0px;
}

/* LOGO START*/

.banner>img {
  float: left;
  height: 103px;
  width: 140px;
  color: black;
}

/* LOGO END */

.mainh1 {
  font-weight: bolder;
  text-align: center;
  color: #ffffff;
  font-size: 50px;
  position: absolute;
  width: 100%;
}

.nav {
  list-style: none;
  text-align: right;
  margin: 0;
}

.nav>li {
  display: inline-block;
  font-size: 20px;
  margin-right: 20px;
  padding-top: 20px;
  font-weight: bolder;
}

.nav>li>a {
  text-decoration: none;
  color: #ffffff;
}

.nav>li>a:hover {
  opacity: .5;
}
```

```html
![logo](logo.png)

* [Meets](#)
* Gallery

Harrison TEST TEST TEST TEST
============================
```

Upvotes: 0 <issue_comment>username_3: ```
clear: both
```

This is the answer I believe. Add this style to your `h1` tag. Upvotes: 1
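A flexbox-based variant may also be worth sketching; note this is an untested assumption against the fiddle, and the floated logo and the nav list would become flex items, so their placement would need revisiting:

```css
/* Sketch: center the heading vertically with flexbox instead of padding. */
.banner {
  display: flex;
  flex-direction: column;
  justify-content: center; /* vertically centers the children in the banner */
}

.mainh1 {
  margin: 0;
  padding-top: 0; /* the fixed 80px padding is what pushed the large font down */
  text-align: center;
}
```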
2018/03/15
891
2,816
<issue_start>username_0: Let me show this with an example:

```
Add
```

We have bound the click handler to the class add_main; the function body executed on clicking the add button is given below:

```
$.ajax({
    type: "POST",
    url : "url",
    dataType : 'json',
    data : { 'param1' : "param1value" },
    beforeSend: function(jqXHR , settings ){},
    success: function(response){
       // the response is json object.
       $(this).closest('.main_div').html('');
       // I have to assign the response data (that is json object ) to inner_data_section.
       $(this).closest('.main_div').find('inner_data_section').data('data1',response);
     },
    complete: function(jqXHR , settings){}
  });
```

**Problem**

As the "inner_data_section" is added to the div "main_div" later, `$(this).closest(...).find(...)` is unable to point to the inner_data_section, which is why the data is not bound to it either.

**My search**

I have searched and found the solution of assigning data as data-data1="some value"; I did this and it is accessible later as well.

**But in our case we have to assign a JSON object (a complex value) to the inner section div, and data-data1="some value" works only for simple values.**

I think I am having an issue getting the jQuery selector to point at the newly added HTML from the JSON response. How do I cope with this problem? Just a hint, not a complete solution.
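As a hint in code form, here is a sketch assuming the class names from the question (`add_main`, `main_div`, `inner_data_section`) and delegated event binding: capture the clicked element before the ajax call, because `this` inside `success` is not the clicked DOM element by default, and attach the complex object with `.data()` only after the new section exists:

```js
$(document).on('click', '.add_main', function () {
  var $btn = $(this); // keep a reference; `this` changes inside the callbacks

  $.ajax({
    type: 'POST',
    url: 'url',
    dataType: 'json',
    data: { 'param1': 'param1value' },
    success: function (response) {
      var $main = $btn.closest('.main_div');
      var $section = $('<div class="inner_data_section"></div>');

      // jQuery .data() can hold a full object, unlike a data-* attribute string
      $section.data('data1', response);

      $main.empty().append($section);
    }
  });
});
```

Also note the dot prefix when selecting the section later: `find('.inner_data_section')` matches the class, while `find('inner_data_section')` looks for a tag of that name.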
2018/03/15
625
1,823
<issue_start>username_0: I have a document that contains a grid (among others). One column in the grid has full hyperlinks leading to a ticketing system, i.e. <https://ticketsystem.internal.company.com/tickets/ticket1337>

I want to enable users to click on the URL and open the ticket in a new browser window. How do I do that?
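Since the grid widget is not named, only a generic sketch is possible: render the cell value as an anchor rather than plain text, with `target="_blank"` to open a new window/tab (the URL below is the example from the question; `rel="noopener"` avoids giving the opened page a handle back to yours):

```html
<!-- Hypothetical cell template -->
<a href="https://ticketsystem.internal.company.com/tickets/ticket1337"
   target="_blank" rel="noopener">ticket1337</a>
```

Most grid components expose some form of cell template or custom renderer where markup like this can be plugged in.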
2018/03/15
1,054
2,826
<issue_start>username_0: Following is the code that I have used to convert an Iterator[Char] to Seq[String]:

```
 val result = IOUtils.toByteArray(new FileInputStream (new File(fileDir)))
 val remove_comp = result.grouped(11).map{arr => arr.update(2, 32);arr}.flatMap{arr => arr.update(3, 32); arr}
 val convert_iter = remove_comp.map(_.toChar.toString).toSeq.mkString.split("\n")
 val rdd_input = Spark.sparkSession.sparkContext.parallelize(convert_iter)
```

val fileDir:

```
12**34567890
12@@34567890
12!!34567890
12¬¬34567890
12 '34567890
```

I am not happy with this code, as the data size is big and converting everything to a single string could exhaust the heap space.

```
val convert_iter = remove_comp.map(_.toChar)
convert_iter: Iterator[Char] = non-empty iterator
```

Is there a better way of coding this?<issue_comment>username_1: By completely disregarding corner cases about empty Strings etc I would start with something like:

```
val test = Iterable('s','f','\n','s','d','\n','s','v','y')

val (allButOne, last) = test.foldLeft(
  (Seq.empty[String], Seq.empty[Char])
) {
  case ((strings, chars), char) =>
    if (char == '\n') (strings :+ chars.mkString, Seq.empty)
    else (strings, chars :+ char)
}

val result = allButOne :+ last.mkString
```

I am sure it could be made more elegant, and handle corner cases better (once you define how you want them handled), but I think it is a nice starting point. But to be honest I am not entirely sure what you want to achieve. I just guessed that you want to group chars divided by `\n` together and turn them into `String`s. Upvotes: 1 <issue_comment>username_2: *Looking at your code*, I see that you are trying to *replace the special characters* such as `**`, `@@` and so on from the file that contains the following data

> 
> `12**34567890 
> 12@@34567890 
> 12!!34567890 
> 12¬¬34567890 
> 12 
> '34567890`
> 
> 

For that you can just read the data using *sparkContext textFile* and use regex `replaceAllIn`

```
val pattern = new Regex("[¬~!@#$^%&*\\(\\)_+={}\\[\\]|;:\"'<,>.?` /\\-]")
val result = sc.textFile(fileDir).map(line => pattern.replaceAllIn(line, ""))
```

and you should have your `result` as `RDD[String]`, which is also an *iterator*

```
1234567890
1234567890
1234567890
1234567890
12
34567890
```

**Updated**

If there are `\n` and `\r` in between the texts *at the 3rd and 4th places* and if the *result is all fixed-length 10-digit text*, then you can use the `wholeTextFiles` api of `sparkContext` and the following *regex* as

```
val pattern = new Regex("[¬~!@#$^%&*\\(\\)_+={}\\[\\]|;:\"'<,>.?` /\\-\r\n]")
val result = sc.wholeTextFiles(fileDir).flatMap(line => pattern.replaceAllIn(line._2, "").grouped(10))
```

You should get the output as

```
1234567890
1234567890
1234567890
1234567890
1234567890
```

I hope the answer is helpful Upvotes: 0
2018/03/15
1,361
3,384
<issue_start>username_0: I'm trying to import the following JSON into Hive

> 
> [{"time":1521115600,"latitude":44.3959,"longitude":26.1025,"altitude":53,"pm1":21.70905,"pm25":16.5,"pm10":14.60085,"gas1":0,"gas2":0.12,"gas3":0,"gas4":0,"temperature":null,"pressure":0,"humidity":0,"noise":0},{"time":1521115659,"latitude":44.3959,"longitude":26.1025,"altitude":53,"pm1":24.34045,"pm25":18.5,"pm10":16.37065,"gas1":0,"gas2":0.08,"gas3":0,"gas4":0,"temperature":null,"pressure":0,"humidity":0,"noise":0},{"time":1521115720,"latitude":44.3959,"longitude":26.1025,"altitude":53,"pm1":23.6826,"pm25":18,"pm10":15.9282,"gas1":0,"gas2":0,"gas3":0,"gas4":0,"temperature":null,"pressure":0,"humidity":0,"noise":0},{"time":1521115779,"latitude":44.3959,"longitude":26.1025,"altitude":53,"pm1":25.65615,"pm25":19.5,"pm10":17.25555,"gas1":0,"gas2":0.04,"gas3":0,"gas4":0,"temperature":null,"pressure":0,"humidity":0,"noise":0}]
> 
> 

```
CREATE TABLE json_serde (
  s array>)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES ( 'mapping.value' = 'value' )
STORED AS TEXTFILE
location '/user/hduser';
```

The import works, but if I try

```
Select * from json_serde;
```

it will return, from every document that is on hadoop /user/hduser, only the first element per file.

Is there good documentation on working with JSON arrays?
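A hedged sketch of one common approach with the openx JsonSerDe: keep the whole JSON array on a single line per file, declare the column as an array of structs, and unnest it with `LATERAL VIEW explode` so each array element becomes its own row. Note that the element type of `s` in the DDL above was lost in extraction; the commented reconstruction below takes the field names from the JSON sample, and the types are assumptions:

```sql
-- Guessed reconstruction of the column type (names from the JSON, types assumed):
-- s array<struct<time:bigint, latitude:double, longitude:double, altitude:int,
--                pm1:double, pm25:double, pm10:double, gas1:double, gas2:double,
--                gas3:double, gas4:double, temperature:double, pressure:double,
--                humidity:double, noise:double>>

-- One row per array element; `time` is backquoted because it is a Hive keyword.
SELECT m.`time`, m.latitude, m.longitude, m.pm25
FROM json_serde
LATERAL VIEW explode(s) t AS m;
```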
2018/03/15
1,075
3,120
<issue_start>username_0: I've defined the following custom parser:

```
newtype St = St Int

type TxsParser = ParsecT String St (State St)
```

Now to be able to run this parser, I have to use the [`runParserT`](https://hackage.haskell.org/package/parsec-3.1.13.0/docs/Text-Parsec.html#v:runParserT) function.

```
runParserT :: Stream s m t => ParsecT s u m a -> u -> SourceName -> s -> m (Either ParseError a)
```

Which instantiated to my custom parser reads:

```
runParserT :: ParsecT String St (State St) a -> St -> SourceName -> String -> State St (Either ParseError a)
```

But this means that if I want to evaluate the result of `runParserT` (which is a state monad) I have to supply another initial state (of type `St` in this case). For instance:

```
evalState (runParserT myParser (St 0) fp input) (St 0)
```

While this works, it seems wrong that I have to repeat the state twice. Does this mean that mixing `ParsecT` and the `State` monads is not a good idea?
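A hedged sketch of one way to avoid supplying the state twice: if the `St` in ParsecT's user-state slot and the `St` in the underlying `State` monad are meant to be the same thing, keep only one of them. Here the user state is dropped to `()`, so a single initial `St` remains:

```haskell
import Control.Monad.State (State, evalState)
import Text.Parsec (ParseError, ParsecT, SourceName, runParserT)

newtype St = St Int

type TxsParser = ParsecT String () (State St)

-- Only one initial St is supplied now, via evalState.
runTxsParser :: TxsParser a -> SourceName -> String -> Either ParseError a
runTxsParser p fp input = evalState (runParserT p () fp input) (St 0)
```

The state then lives in the inner monad and is reached with `lift get` / `lift put` (or the MonadState instance). Conversely, if only Parsec's `getState`/`putState` are used, the inner monad can be plain `Identity`, i.e. the simpler `Parsec String St`.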
2018/03/15
1,089
2,925
<issue_start>username_0: I am trying to set up a cron job to execute a Python test program (removing a file from a folder), but it doesn't work. I've tried different things:

- running the program with an sh script
- running the action directly with a command in the crontab

When I launch test.sh from a terminal, it works perfectly. When I run the rm command directly as a crontab command it works too, BUT nothing happens when the sh script is launched from the crontab...

This is my crontab:

```
*/5 * * * * run_auto_resp_ads.sh
37 14 * * * test.sh
18 14 * * * rm ~/Rendu/test_cron/lol.py
```

This is the content of test.sh:

```
#!/bin/sh
rm ~/Rendu/test_cron/lol2.py
```

When I check my cron logs, the tasks are running:

```
Mar 15 14:21:01 AcerA17 CROND[14905]: (mjz) CMD (test.sh)
Mar 15 14:18:01 AcerA17 CROND[12944]: (mjz) CMD (rm ~/Rendu/test_cron/lol.py)
```

I've also checked the file permissions. Any ideas please? Thanks a lot :)
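A hedged sketch of the usual fix: cron runs jobs with a minimal environment (PATH is typically just `/usr/bin:/bin`), so a script that works from an interactive shell can fail simply because `test.sh` is not found on cron's PATH. Invoking the script by absolute path and logging its output usually reveals the cause. The home directory below is an assumption based on the `(mjz)` user in the log:

```
# Sketch crontab: absolute path plus logging for debugging.
SHELL=/bin/sh
PATH=/usr/local/bin:/usr/bin:/bin
37 14 * * * /home/mjz/Rendu/test_cron/test.sh >> /tmp/test_cron.log 2>&1
```

Also make sure the script has the executable bit set (`chmod +x test.sh`).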
2018/03/15
1,057
3,335
<issue_start>username_0: I have entries in a data table *TableofDates* comprising three fields: ProjectID (type = dbText), Start (type = dbDate) and Finish (type = dbDate):

**TableofDates**

```
ProjectID Start       Finish
ABC       01/01/2018  09/09/2018
```

I wish to present the following dataset with individual DateA records (derived from [Start] in [TableofDates]) and DateB records (derived from [Finish] in [TableofDates])

**ToD UNION**

```
ProjectID DateA       DateB
ABC       01/01/2018
ABC                   09/09/2018
```

The following SQL Query produces almost the result required:

```
SELECT ProjectID,DateA, '' AS DateB FROM DtA
UNION
SELECT ProjectID, '' AS DateA,DateB FROM DtB;
```

The problem I have is that the inserted null strings force the field data to type dbText. If instead I insert Nulls, I get only the first date field reproduced:

```
SELECT ProjectID, DateA, Null AS DateB FROM DtA
UNION
SELECT ProjectID, Null AS DateA, DateB FROM DtB;
```

yields:

```
ToD UNION
ProjectID DateA       DateB
ABC       01/01/2018
ABC
```

Both parts of the Union work individually:

```
SELECT ProjectID,DateA, Null AS DateB FROM DtA;
```

yields

```
ToD UNION
ProjectID DateA       DateB
ABC       01/01/2018
```

While

```
SELECT ProjectID,Null AS DateA, DateB FROM DtB;
```

Yields

```
ToD UNION
ProjectID DateA       DateB
ABC                   09/09/2018
```

Any ideas how I can create a union of TableofDates with inserted "Null" dates as required, and maintain a Data Type of dbDate in the resultant dataset?<issue_comment>username_1: You can use a header row (a select statement that returns 0 records, but includes the correct types and labels)

```
SELECT CLng(1) As ProjectID, #2001-01-01# As DateA, #2001-01-01# As DateB FROM MSysObjects WHERE 1=0
UNION ALL SELECT ProjectID, DateA, Null FROM DtA
UNION ALL SELECT ProjectID, Null, DateB FROM DtB;
```

Your query doesn't work, because in Access, the first query of a union query determines the type and label of the field. Because the last column of the first query only contains `Null`, Access can't determine a field type, and chooses the wrong one. By using a query that doesn't return records, but sets the field type explicitly, you're avoiding that problem.

Note that the change from `UNION` (which typecasts and checks for duplicates) to `UNION ALL` (which doesn't do those things) also fixes the problem. So 2 solutions for the price of one. Upvotes: 3 [selected_answer]<issue_comment>username_2: A Union query will ignore field alias names except for those in the first SELECT statement. A NULL value is less than any date value, so a descending sort will place the NULLS first. This SQL produces your example output:

```
SELECT ProjectID, Start AS DateA, NULL AS DateB FROM TableOfDates
UNION ALL
SELECT ProjectID, NULL, Finish FROM TableOfDates
ORDER BY ProjectID, DateA DESC
```

This table:

```
ProjectID Start       Finish
ABC       01/02/2017  09/09/2018
DEF       01/02/2017  03/05/2018
```

Produced this result:

```
ProjectID DateA       DateB
ABC       01/01/2018
ABC                   09/09/2018
DEF       01/02/2017
DEF                   03/05/2018
```

Upvotes: 0
2018/03/15
2,102
7,284
<issue_start>username_0: I'm using the JAXB Unmarshaller to convert an XML string into a Java object structure (using Java 7; Java 8 is not an option at this time). I'm having a problem with lists of objects, though. (I apologize in advance for all the code included.)

This is the input XML string:

```
 900 AT0000000000018 900 1 900 0 900 1 910 2 false 3503 0 3503 0 3504 0 
```

This is the definition for the various objects:

```
import java.util.List;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class ClaimImportCompositeDTO {
    private String _claimId;
    private String _claimNumber;
    private ClaimDTO _claimDTO;
    private List<NotepadDTO> _notes;

    public String getClaimId() { return _claimId; }
    public String getClaimNumber() { return _claimNumber; }
    public ClaimDTO getClaimDTO() { return _claimDTO; }
    public List<NotepadDTO> getNotes() { return _notes; }

    public void setClaimId(String claimId) { this._claimId = claimId; }
    public void setClaimNumber(String claimNumber) { this._claimNumber = claimNumber; }
    public void setClaimDTO(ClaimDTO claimDTO) { this._claimDTO = claimDTO; }
    public void setNotes(List<NotepadDTO> notes) { this._notes = notes; }
}

public class ClaimDTO {
    private String _recordId;
    private String _version;
    private ClmAddress _clmAddress;
    private ClaimStatus _claimStatus;

    public String getRecordId() { return _recordId; }
    public String getVersion() { return _version; }
    public ClmAddress getClmAddress() { return _clmAddress; }
    public ClaimStatus getClaimStatus() { return _claimStatus; }

    public void setRecordId(String recordId) { this._recordId = recordId; }
    public void setVersion(String version) { this._version = version; }
    public void setClmAddress(ClmAddress clmAddress) { this._clmAddress = clmAddress; }
    public void setClaimStatus(ClaimStatus claimStatus) { this._claimStatus = claimStatus; }
}

public class ClmAddress {
    private String _recordId;
    private String _version;

    public String getRecordId() { return _recordId; }
    public String getVersion() { return _version; }

    public void setRecordId(String recordId) { this._recordId = recordId; }
    public void setVersion(String version) { this._version = version; }
}

import java.util.List;

public class ClaimStatus {
    private List<ClaimStatusDTO> _expired;
    private boolean _statusCompleteWorkItemFlag;

    public List<ClaimStatusDTO> getExpired() { return _expired; }
    public boolean isStatusCompleteWorkItemFlag() { return _statusCompleteWorkItemFlag; }

    public void setExpired(List<ClaimStatusDTO> expired) { this._expired = expired; }
    public void setStatusCompleteWorkItemFlag(boolean statusCompleteWorkItemFlag) { this._statusCompleteWorkItemFlag = statusCompleteWorkItemFlag; }
}

public class ClaimStatusDTO {
    private String _recordId;
    private String _version;

    public String getRecordId() { return _recordId; }
    public String getVersion() { return _version; }

    public void setRecordId(String recordId) { this._recordId = recordId; }
    public void setVersion(String version) { this._version = version; }
}

import java.util.List;

public class NotepadDTO {
    private String _recordId;
    private String _version;
    private List<NotepadTextDTO> _notepadText;

    public List<NotepadTextDTO> getNotepadText() { return _notepadText; }
    public String getRecordId() { return _recordId; }
    public String getVersion() { return _version; }

    public void setNotepadText(List<NotepadTextDTO> notepadText) { this._notepadText = notepadText; }
    public void setRecordId(String recordId) { this._recordId = recordId; }
    public void setVersion(String version) { this._version = version; }
}

public class NotepadTextDTO {
    private String _recordId;
    private String _version;

    public String getRecordId() { return _recordId; }
    public String getVersion() { return _version; }

    public void setRecordId(String recordId) { this._recordId = recordId; }
    public void setVersion(String version) { this._version = version; }
}
```

This is the output:

```
 900 AT0000000000018 900 1 900 0 null null false null null 
```

When I run the unmarshaller, the non-list data is read correctly, but anything in a list is skipped. Well, one object is created (regardless of how many are in the list) and all of its fields are null (i.e., for the claimStatus -> expired list or the notes list). I'm hoping I'm missing something fairly basic, but I've been unable to find what that may be. Any ideas?<issue_comment>username_1: You need the @XmlElement annotation on the list. Something like this.

```
@XmlElement(name = "notepadDTO")
private List<NotepadDTO> _notes;
```

Here's a tutorial I used: <https://howtodoinjava.com/jaxb/jaxb-exmaple-marshalling-and-unmarshalling-list-or-set-of-objects/> Upvotes: 0 <issue_comment>username_2: The problem is that the XML provided does not correspond to the structure of your POJOs. An easy way to fix it, following your existing setup and coding style, is to add @XmlElementWrapper on the fields that are lists of items and change their names. You need to change the 3 following classes to how they look below and it will work:

```
@XmlRootElement
@XmlAccessorType(XmlAccessType.PROPERTY)
public class ClaimImportCompositeDTO {
    private String _claimId;
    private String _claimNumber;
    private ClaimDTO _claimDTO;
    private List<NotepadDTO> _notepadDTO;

    public String getClaimId() { return _claimId; }
    public String getClaimNumber() { return _claimNumber; }
    public ClaimDTO getClaimDTO() { return _claimDTO; }

    @XmlElementWrapper(name = "notes")
    public List<NotepadDTO> getNotepadDTO() { return _notepadDTO; }

    public void setClaimId(String claimId) { this._claimId = claimId; }
    public void setClaimNumber(String claimNumber) { this._claimNumber = claimNumber; }
    public void setClaimDTO(ClaimDTO claimDTO) { this._claimDTO = claimDTO; }
    public void setNotepadDTO(List<NotepadDTO> notes) { this._notepadDTO = notes; }
}
```

ClaimStatus class would be:

```
@XmlAccessorType(XmlAccessType.PROPERTY)
public class ClaimStatus {
    private List<ClaimStatusDTO> _claimStatusDTO;
    private boolean _statusCompleteWorkItemFlag;

    @XmlElementWrapper(name = "expired")
    public List<ClaimStatusDTO> getClaimStatusDTO() { return _claimStatusDTO; }

    public void setClaimStatusDTO(List<ClaimStatusDTO> claimStatusDto) { this._claimStatusDTO = claimStatusDto; }

    public boolean isStatusCompleteWorkItemFlag() { return _statusCompleteWorkItemFlag; }

    public void setStatusCompleteWorkItemFlag(boolean statusCompleteWorkItemFlag) { this._statusCompleteWorkItemFlag = statusCompleteWorkItemFlag; }
}
```

And NotepadDTO:

```
@XmlAccessorType(XmlAccessType.PROPERTY)
public class NotepadDTO {
    private String _recordId;
    private String _version;
    private List<NotepadTextDTO> _notepadTextDTO;

    @XmlElementWrapper(name = "notepadText")
    public List<NotepadTextDTO> getNotepadTextDTO() { return _notepadTextDTO; }

    public String getRecordId() { return _recordId; }
    public String getVersion() { return _version; }

    public void setNotepadTextDTO(List<NotepadTextDTO> notepadText) { this._notepadTextDTO = notepadText; }
    public void setRecordId(String recordId) { this._recordId = recordId; }
    public void setVersion(String version) { this._version = version; }
}
```

Upvotes: 3 [selected_answer]
2018/03/15
1,098
4,300
<issue_start>username_0: When a user fills out the subscribe form or the contact form and the form is not valid, it redirects to the form view. The forms are partial forms that live in the index page, which is possibly the issue, but I'm not sure how to solve it; it is a single-page application with two actions, subscribe and contact. How can I get it to stay on the same page and show the error?

Here is the subscribe form (Contact is similar):

```
@model SubscribeViewModel

 Subscribe
```

Here is the Home Controller:

```
//Subscribe
[HttpPost]
public IActionResult Subscribe(SubscribeViewModel vm)
{
    if (ModelState.IsValid)
    {
        _mailService.SubscribeEmail(vm.Email);
        return RedirectToAction("Index", "Home");
    }
    return - **What to do here??;**
}

// Send Mail
[HttpPost]
public IActionResult Contact(ContactViewModel vm)
{
    if (ModelState.IsValid)
    {
        _mailService.SendEmail(vm.Name, vm.Email, vm.Subject, vm.Message);
        return RedirectToAction("Index", "Home");
    }
    return - **And here?;**
}
```<issue_comment>username_1: If the ViewModel is not valid, return the View that rendered the form originally, and pass the ViewModel you have received in the POST action. This ViewModel will contain errors in `ModelState.Errors` which can be displayed by MVC, e.g. with `@Html.ValidationSummary()`.

For this to work correctly, make sure that all properties of the ViewModel are posted back! If a property can not be changed by the user, render a hidden input for it so it will not get lost on the round trip.

```
[HttpPost]
public IActionResult Subscribe(SubscribeViewModel vm)
{
    if (!ModelState.IsValid)
    {
        return PartialView("_SubscribeForm", vm);
    }

    // model is valid
    // ...
}
```

Upvotes: 0 <issue_comment>username_2: You just need to replace `return - **What to do here??;**` with this code.

```
return View("Your View name", vm);
```

This repopulates all the fields that were already entered by the user, so the invalid values can be identified. Upvotes: 0 <issue_comment>username_3: I think you simply need `return View();`. As I understand it, all your actions are written in the Index view's controller, so returning View() goes back to the Index view automatically; and since ModelState.IsValid is false, the error messages will be shown. So I think returning View() is enough. Upvotes: 0 <issue_comment>username_4: You could use ajax to post the data and report back errors (which is probably the cleanest solution, but this needs jQuery scripting and modifying your post actions). Too much to post as an answer, and you probably need something very specific.

Using the "Redirect" approach in your code means redirecting back to the Home Page but also transferring the ModelState errors as well (exporting the ModelState from the post action and importing it to the Home Page action), which can be done and is reasonably straightforward, but does need some more coding (and probably more research) - try searching for the "PRG pattern" to find coding examples.

As a starting point, there is a blog post by <NAME> (see section 13. Use PRG Pattern for Data Modification) [here](https://web.archive.org/web/20130702160308/http://weblogs.asp.net/rashid/archive/2009/04/01/asp-net-mvc-best-practices-part-1.aspx) which should get you started, but there are other useful posts out there as well. The blog post describes how to use TempData and ActionFilters to import and export ModelStates (by decorating your actions with the attributes) and is very useful for redirects to avoid recreating ViewModels and for transferring ModelStates.
From what I see, there is no "one-liner quick fix", you just have to research and test this pattern, or go down the ajax/json route. Upvotes: 0 <issue_comment>username_5: you need `return View();` But also you'll want to add `[ChildActionOnly]` to your controller. ``` //Subscribe [ChildActionOnly] [HttpPost] public IActionResult Subscribe(SubscribeViewModel vm) { if (ModelState.IsValid) { _mailService.SubscribeEmail(vm.Email); return RedirectToAction("Index", "Home"); } return PartialView(vm) } ``` [ChildActionOnly](https://msdn.microsoft.com/en-us/library/system.web.mvc.childactiononlyattribute(v=vs.118).aspx) Upvotes: 1
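As a rough illustration of the ModelState-transfer idea from the PRG answer above, here is a hand-rolled sketch rather than the ActionFilter approach from the blog post; the TempData key and the error-joining format are arbitrary, and it requires `using System.Linq`:

```csharp
// POST action: stash the error messages, then redirect (PRG).
[HttpPost]
public IActionResult Subscribe(SubscribeViewModel vm)
{
    if (!ModelState.IsValid)
    {
        TempData["SubscribeErrors"] = string.Join("|",
            ModelState.Values.SelectMany(v => v.Errors)
                             .Select(e => e.ErrorMessage));
        return RedirectToAction("Index", "Home");
    }
    _mailService.SubscribeEmail(vm.Email);
    return RedirectToAction("Index", "Home");
}

// Index action: re-add the messages so @Html.ValidationSummary() shows them.
public IActionResult Index()
{
    if (TempData["SubscribeErrors"] is string errors)
        foreach (var message in errors.Split('|'))
            ModelState.AddModelError(string.Empty, message);
    return View();
}
```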
2018/03/15
365
1,146
<issue_start>username_0: I define an array like this: ``` [{foo:0}, true === false && { foobar:1}, {bar:2}] ``` My expected result would be that the middle item is not added at all when the middle condition is not met: ``` [ { foo: 0 }, { bar: 2 } ] ``` in fact it adds false as an array item: ``` [ { foo: 0 }, false, { bar: 2 } ] ``` Is there a way to prevent adding the `false` while maintaining this lightweight syntax (I know I could always use push or the spread operator)<issue_comment>username_1: As Denys suggested, you could do this: ```js const arr = [{foo:0}, true === false && { foobar:1}, {bar:2}].filter(el => el !== false); console.log(arr); ``` Upvotes: 0 <issue_comment>username_2: You could use `concat` with spread syntax and an empty array as neutral value. ```js var a = [].concat(...[ { foo: 0 }, true === false ? { foobar: 1 } : [], { bar: 2 } ]); console.log(a); ``` With apply ```js var a = Array.prototype.concat.apply([], [ { foo: 0 }, true === false ? { foobar: 1 } : [], { bar: 2 } ]); console.log(a); ``` Upvotes: 3 [selected_answer]
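The question mentions the spread operator as a known fallback; concretely, the conditional-spread idiom keeps the inline syntax and inserts nothing when the condition is false:

```js
const condition = true === false;
const arr = [
  { foo: 0 },
  ...(condition ? [{ foobar: 1 }] : []),
  { bar: 2 },
];

console.log(arr); // [ { foo: 0 }, { bar: 2 } ]
```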
2018/03/15
698
2,371
<issue_start>username_0: I'm developing an android app with multiple flavors like so:

```
sourceSets {
    main {
        res.srcDirs += ["headend/ott/res"]
    }
    flavor1 {
        res.srcDirs += ["src/module1/res-splash"]
    }
    flavor2 {
        java.srcDirs += ["src/module1/java"]
        res.srcDirs += ["src/module1/res"]
        res.srcDirs += ["src/module2/res"]
        assets.srcDirs += ["src/module1/assets"]
    }
    test {
        res.srcDirs += ["src/test/resources"]
    }
    ...
```

My problem is that, in flavor2, some of the module2 resources are supposed to be replacements for ones already present in module1, but with my current approach this causes the build to fail with duplicate resources. So what I need is a way to add "src/module1/res" to flavor2 but without including one specific file. I've tried

```
res{
    srcDirs += ["src/module1/res"]
    res.srcDirs += ["src/module1/res"]
    exclude 'src/module1/res/drawable/specific_file.xml'
}
```

But to no avail. Is this possible at all?<issue_comment>username_1: After looking at multiple answers [like this](https://discuss.gradle.org/t/how-can-i-exclude-certain-java-files-from-being-compiled/5287/4), the code that you have looks correct to me. However, [this bug](https://issuetracker.google.com/issues/36988285) stating that exclude paths are not implemented is still open.

This [alternate approach](https://stackoverflow.com/a/45581826/984830) which references [these docs](https://developer.android.com/studio/build/shrink-code.html#keep-resources) may work for you instead. I suggest adding a resource directory inside your flavour2 directory/module and using it to include a discard file.

```
sourceSets {
    flavor2 {
        res {
            srcDirs += ["src/module1/res"]
            srcDirs += ["src/module2/res"]
        }
    }
}
```

Then add `resources_discard.xml` to `module2/res/raw` with the following:

```
<?xml version="1.0" encoding="utf-8"?>
<resources xmlns:tools="http://schemas.android.com/tools"
    tools:discard="@drawable/specific_file" />
```

Upvotes: 1 <issue_comment>username_2: Try doing this. In one of my projects, I was able to exclude a Java file from compiling by not using the `srcDir` path in the `exclude` path. `exclude` usually works without prepending the `srcDir` path.

```
res{
    srcDirs += ["src/module1/res"]
    res.srcDirs += ["src/module1/res"]
    exclude 'drawable/specific_file.xml'
}
```

Upvotes: 0
2018/03/15
1,088
4,266
<issue_start>username_0: I'm trying to configure a Zip task based on one of the property inside sub-projects, but the property is not yet accessible at the time of configuring the task. For instance, I want to exclude all my projects that has `toexclude = true` from my zip file. So, the `build.gradle` of the sub-projects that I want to exclude starts with this: ``` ext.toexclude = true; ... ``` And my main `build.gradle` has this task: ``` task zipContent (type: Zip){ def excludedProjects = allprojects.findAll{Project p -> p.toexclude == true}.collect{it.name} println excludedProjects destinationDir = "/some/path" baseName = "myFile.zip" exclude excludedProjects from "/some/other/path" } ``` The problem is that `excludedProjects` is always empty. Indeed, when I am executing the task, I can see `[]`. I believe this is due to the fact that the property that I set in the subproject's `build.gradle` is not available at the moment the task is configured. As a proof, if I replace the first line of the task by this: ``` def excludedProjects = allprojects.collect{it.name} ``` The task prints out all of my project's name, and the zip contains nothing (which means the problem is in the `p.toexclude == true`). Also, if I try this: ``` task zipContent (type: Zip){ def excludedProjects = [] doFirst{ excludedProjects = allprojects.findAll{Project p -> p.toexclude == true}.collect{it.name} println "IN DOFIRST" println excludedProjects } println "IN TASK CONFIG" println excludedProjects destinationDir = "/some/path" baseName = "myFile.zip" exclude excludedProjects from "/some/other/path" } ``` The task prints out `IN TASK CONFIG` followed by an empty array, then `IN DOFIRST` with the array containing only the subprojects that I set `ext.toexclude == true`. So, is there a way to get the properties of the sub-projects at configuration time?<issue_comment>username_1: Just define `excludedProjects` outside the task ``` def excludedProjects = allprojects.findAll{Project p -> p.toexclude == true}.collect{it.name} task zipContent (type: Zip){ destinationDir = file("/some/path") baseName = "myFile.zip" exclude excludedProjects from "/some/other/path" } ``` Upvotes: 1 <issue_comment>username_2: Well, the crucial question is: At which point of the build is all necessary information available? Since we want to know each project in the build, where the extra property `toexclude` is set to `true` and it is possible (and by design) that the property is set via the build script, we need each build script to be evaluated. Now, we have two options: 1. By **default**, subprojects are evaluated after the parent (root) project. To ensure the evaluation of each project, we need to wait for the point of the build, where all projects are evaluated. Gradle provides a listener for that point: ``` gradle.addListener(new BuildAdapter() { @Override void projectsEvaluated(Gradle gradle) { tasks.getByPath('zipContent').with { exclude allprojects.findAll { it.toexclude }.collect{ it.name } } } }) ``` 2. Gradle provides the method `evaluationDependsOnChildren()`, to turn the evaluation order around. It may be possible to use your original approach by calling this method before querying the excluded projects. Since this method only applies on child projects, you may try to call `evaluationDependsOn(String)` for each project in the build to also apply for 'sibling' projects. Since this solution breaks Gradle default behavior, it may have undesired side effects. 
Upvotes: 4 [selected_answer]<issue_comment>username_3: You can call [evaluationDependsOnChildren()](https://docs.gradle.org/current/javadoc/org/gradle/api/Project.html#evaluationDependsOnChildren--) in the root project so that child projects are evaluated before the root Eg ``` evaluationDependsOnChildren() task zipContent (type: Zip) { ... } ``` Another option is to use an [afterEvaluate { ... }](https://docs.gradle.org/current/javadoc/org/gradle/api/Project.html#afterEvaluate-groovy.lang.Closure-) closure to delay evaluation Eg: ``` afterEvaluate { task zipContent (type: Zip) { ... } } ``` Upvotes: 1
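For completeness, a shorter variant of the listener-based approach above (a sketch: no BuildAdapter subclass is needed, and the `hasProperty` guard assumes only opting-out subprojects define the `toexclude` extra property):

```
gradle.projectsEvaluated {
    zipContent.exclude allprojects.findAll { it.hasProperty('toexclude') && it.toexclude }
                                  .collect { it.name }
}
```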
2018/03/15
683
2,374
<issue_start>username_0: How do you convert dotted keys into a JavaScript object and retain their values?

So I got this kind of response from an API and I need to parse it by key: value.

```
{
    "property": "personal_info.address.city",
    "description": "Missing field"
},
{
    "property": "personal_info.address.country",
    "description": "Missing field"
},
```

So I achieved this:

```
{
  'personal_info.address.city': 'Missing field',
  'personal_info.address.country': 'Missing field'
}

// by using this code (lodash)
_.mapValues(_.keyBy(obj, 'property'), function(o) {
  return o.description;
})
```

However, I need it to be like this:

```
{
  personal_info: {
    address: {
      city: 'Missing field',
      country: 'Missing field',
    }
  }
}
```

I found an answer on Stack Overflow about converting a dot-notation string into an object here: [Convert string with dot notation to JSON](https://stackoverflow.com/questions/22985676/convert-string-with-dot-notation-to-json) but I'm stuck since I'm changing the key itself.

EDIT: Changed test city and test country to reflect the description field (sorry)<issue_comment>username_1: You could use a `forEach()` loop with the `reduce()` method inside to get a result like this.

```js
const data = [{"property": "personal_info.address.city","description": "Missing field"},{"property": "personal_info.address.country","description": "Missing field"}]

const result = {}

data.forEach(function(o) {
  o.property.split('.').reduce(function(r, e, i, arr) {
    return r[e] = (r[e] || (arr[i + 1] ? {} : o.description))
  }, result)
})

console.log(result)
```

Upvotes: 3 <issue_comment>username_2: You could use [`_.set`](https://lodash.com/docs/#set) from lodash.

> 
> Sets the value at `path` of `object`. If a portion of `path` doesn't exist, it's created. Arrays are created for missing index properties while objects are created for all other missing properties. Use [`_.setWith`](https://lodash.com/docs/#setWith) to customize path creation.
> 
> 

```js
var array = [{
        property: "personal_info.address.city",
        description: "Missing field"
    }, {
        property: "personal_info.address.country",
        description: "Missing field"
    }],
    object = array.reduce((o, { property, description }) => _.set(o, property, description), {});

console.log(object);
```

Upvotes: 4 [selected_answer]
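A dependency-free variant of the same idea may also be useful (a sketch assuming the `{ property, description }` array shape from the question):

```js
const result = {};
for (const { property, description } of data) {
  const keys = property.split('.');
  let node = result;
  // walk/create the intermediate objects for all but the last key
  for (const key of keys.slice(0, -1)) {
    node = node[key] = node[key] || {};
  }
  // the last key receives the description as its value
  node[keys[keys.length - 1]] = description;
}
```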
2018/03/15
825
3,066
<issue_start>username_0: This is my root element, and in the data, "booking" is an object with sitebooking. The sitebooking object can hold an array of objects with first name and last name.

```
var app = new Vue({
    el: '#app',
    data: {
        booking: {
            sitebooking: [{
                firstname: "",
                lastname: ""
            },{
                firstname: "",
                lastname: ""
            }],
        }
    }
});
```

And this is my template (child component),

```
First name Last name
```

And I am looping through the booking.sitebooking object in the parent component to create multiple child components (each site booking will get one child component).

```
```

I am passing the value through the "my-sb" prop and assigning it to local data in the child component.

```
Vue.component('sitebooking', {
    template: '#sitebooking_template',
    props:["mySb"],
    data: function () {
        return {
            newcompsitebooking : this.mySb,
        }
    }
});
```

Till now everything works, but the strange behavior is that whenever I change the value in the child component, it updates the data of the parent component too. According to the Vue.js documentation, changes in a child component should be propagated back to the parent via emit. But I am not emitting data back to the parent, yet the value still updates automatically in the parent. Can anyone please help with this?<issue_comment>username_1: You're passing a pointer to the siteBooking object. The child can do anything it likes with that pointer and the parent won't react, but the object and its properties are still shared.

EDIT Cloning an object to make a new one is called [deep cloning](https://stackoverflow.com/questions/122102/what-is-the-most-efficient-way-to-deep-clone-an-object-in-javascript) Upvotes: 1 <issue_comment>username_2: I resolved this issue with the following changes.

Whenever you pass an object as a prop and assign it to a local variable in the child, it actually copies the reference to the parent's data, which leads to the problem. So if you do this, changes in the child will affect the parent data, which is not good. I found a good discussion here, <https://forum.vuejs.org/t/props-are-mutable-when-passed-as-an-object/2525/5>

**Solution**

To solve this issue, as suggested in @icecream_hobbit's answer, I tried to clone the prop object and then store the new copy of it in the local data.

**How to clone the object**

[Vuejs : How to pass an object as prop and have the component update sub-objects](https://stackoverflow.com/questions/49072799/vuejs-how-to-pass-an-object-as-prop-and-have-the-component-update-sub-objects)

[Is this a good way to clone an object in ES6?](https://stackoverflow.com/questions/39736397/is-this-a-good-way-to-clone-an-object-in-es6)

I modified my code like this now,

```
Vue.component('sitebooking', {
    template: '#sitebooking_template',
    props:["mySb"],
    data: function () {
        return {
            newcompsitebooking : {...this.mySb},
        }
    }
});
```

Now the issue is solved. Thanks to @icecream_hobbit. Upvotes: 0
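One caveat worth adding to the accepted fix (a sketch, not from the thread): object spread is a *shallow* copy, so any nested objects inside the prop would still be shared with the parent. For plain JSON-safe data, a deep clone avoids that:

```js
Vue.component('sitebooking', {
    template: '#sitebooking_template',
    props: ["mySb"],
    data: function () {
        return {
            // deep clone: safe only for plain data (no functions, Dates, etc.)
            newcompsitebooking: JSON.parse(JSON.stringify(this.mySb)),
        }
    }
});
```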
2018/03/15
1,335
5,056
<issue_start>username_0: Say we have dataframes set as follows: ``` df1 = pd.DataFrame(np.random.randint(0, 2, (10, 2)), columns=['Cow', 'Sheep']) df2 = pd.DataFrame(np.random.randint(0, 2, (10, 5)), columns=['Hungry', 'Scared', 'Happy', 'Bored', 'Sad']) df3 = pd.DataFrame(np.random.randint(0, 2, (10, 2)), columns=['Davids', 'Michaels']) df1.index.name = df2.index.name = df3.index.name = 'id' combos_to_test = pd.DataFrame([('Davids', 'Cow', 'Hungry'), ('Michaels', 'Cow', 'Hungry'), ('Davids', 'Cow', 'Scared'), ('Michaels', 'Cow', 'Scared'), ('Michaels', 'Sheep', 'Scared'), ('Davids', 'Sheep', 'Happy'), ('Michaels', 'Sheep', 'Happy'),]) ``` example : ``` DF1: DF2: DF3: id Cow Sheep id Hungry Scared Happy Bored Sad id Davids Michaels 0 0 1 0 0 1 1 0 1 0 1 0 1 0 0 1 1 0 0 1 1 1 0 1 2 0 0 2 1 0 0 1 1 2 0 0 3 1 0 3 0 0 1 0 1 3 0 1 4 1 0 4 0 0 1 1 0 4 0 1 5 1 1 5 0 0 1 1 0 5 1 0 6 1 1 6 1 0 1 1 0 6 1 0 7 1 0 7 1 1 1 1 0 7 1 1 8 1 1 8 1 1 1 1 0 8 1 0 9 1 0 9 0 1 1 0 0 9 1 0 ``` And I need a 4th dataframe, which finds (for each combination), when each `combos_to_test` is a column. The way I plan on doing this is to change the columns into: ``` df1.columns = Cow, Cow, Cow, Cow, Sheep, Sheep, Sheep df2.columns = Hungry, Hungry, Scared, Scared, Happy, Happy df3.columns = David, Michael, David, Michael, Michael, David, Michael ``` Then renaming all the cols to `col1, col2, col3, ..., col8` and then multiplying each dataframe by eachother (which will vectorise it - but take large amounts of memory). My dataset is obviously much bigger, and will be using numpy/pandas. The output df should look like: ``` ('Davids', 'Cow', 'Hungry') | ('Michaels', 'Cow', 'Hungry') | ('Davids', 'Cow', 'Scared') | ('Michaels', 'Cow', 'Scared') | ... 1) 0 1 0 0 2) 0 0 0 0 3) 0 1 0 0 4) 0 0 1 0 5) 0 0 0 0 6) 0 0 0 0 7) 0 0 0 0 8) 0 0 1 1 9) 1 0 0 0 10) 1 0 0 0 ```<issue_comment>username_1: The easiest way to copy a column is to use just: ``` df1['Cow_copy'] = df1['Cow'] ``` If you want to copy many columns, you could make a List of columns and loop through it and using the code above for each one. Upvotes: 0 <issue_comment>username_2: I can do this with `pd.concat` ``` df = pd.concat([df1, df2, df3], axis=1) pd.concat({ ctt: df.reindex(columns=ctt).prod(1) for ctt in map(tuple, combos_to_test.values) }, axis=1) Davids Michaels Cow Sheep Cow Sheep Hungry Scared Happy Hungry Scared Happy Scared id 0 0 0 0 0 0 0 0 1 1 1 0 1 1 0 1 2 0 0 0 0 0 0 0 3 0 0 0 0 0 1 0 4 1 1 0 0 0 0 0 5 0 0 0 1 1 1 1 6 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 ``` Upvotes: 3 [selected_answer]
2018/03/15
2,213
7,444
<issue_start>username_0: I am trying to crop an image before uploading it through an API. I am showing a modal ([`Dialog`](http://www.material-ui.com/#/components/dialog)) to do this, and using this library [react-image-crop](https://github.com/DominicTobias/react-image-crop) to achieve this. Here is the code snippet: ``` showCropImageModal() { const actions = [ , , ]; if (this.state.showImageCropper) { return ( {console.log(crop, pixel)}} onChange={(crop) => { console.log(crop); this.setState({crop}); }} /> ); } } ``` On "Crop" action I am handling it using the `handleCropClose` function: ``` handleCropClose(){ let {selectedFile, crop} = this.state const croppedImg = this.getCroppedImg(selectedFile, crop.width, crop.height, crop.x, crop.y, 2); console.log(croppedImg) this.setState({showImageCropper: false}) } ``` And here is `getCroppedImg` code: ``` getCroppedImg(imgObj, newWidth, newHeight, startX, startY, ratio) { /* the parameters: - the image element - the new width - the new height - the x point we start taking pixels - the y point we start taking pixels - the ratio */ // Set up canvas for thumbnail console.log(imgObj) var img = new Image(); img.src = this.state.selectedImageURL; var tnCanvas = this.refs.canvas; tnCanvas.width = newWidth; tnCanvas.height = newHeight; tnCanvas.getContext('2d').drawImage(img, startX, startY, newWidth, newHeight); return tnCanvas.toDataURL("image/png"); } ``` Now, I am not able to get the right preview or new image file object so that I could use that to show as preview in the modal there itself and than use the same to upload it. I am not even getting the right image ratio. Any help? Here is the image: [![enter image description here](https://i.stack.imgur.com/7yFw2.png)](https://i.stack.imgur.com/7yFw2.png)<issue_comment>username_1: **Solution** : First, use pixel coordinates: - change : `onChange={(crop) => { console.log(crop); this.setState({crop}); }}` - to `onChange={(crop, pixelCrop) => { console.log(crop); this.setState({crop, pixelCrop}); }}`. Use `this.state.pixelCrop` instead of `this.state.crop` for `getCroppedImg`. Then, update `getCroppedImg` to fetch the image asynchronously using a Promise and crop it. ``` getCroppedImg(imgObj, newWidth, newHeight, startX, startY, ratio) { /* the parameters: - the image element - the new width - the new height - the x point we start taking pixels - the y point we start taking pixels - the ratio */ return new Promise((resolve, reject) => { const img = new Image(); img.onload = resolve; img.onerror = reject; img.src = this.state.selectedImageURL; }).then(img => { // Set up canvas for thumbnail var tnCanvas = this.refs.canvas; tnCanvas.width = newWidth; tnCanvas.height = newHeight; tnCanvas .getContext('2d') .drawImage( img, startX, startY, newWidth, newHeight, 0, 0, newWidth, newHeight ); return tnCanvas.toDataURL("image/png"); }); } ``` **Explanation** : You are missing parameters to `drawImage`. You are asking the canvas to draw the image at position `(startX, startY)` and scale it to `(newWidth, newHeight)`. 
To crop the image you need [additional parameters](https://developer.mozilla.org/en/docs/Web/API/CanvasRenderingContext2D/drawImage) : > > > ``` > drawImage( > image, > sx, sy, sw, sh, > dx, dy, dw, dh > ); > > ``` > > Where : > > > [![drawImage](https://i.stack.imgur.com/LNgjx.png)](https://i.stack.imgur.com/LNgjx.png) > > > **Example** : ```js const img = new Image() const canvas = document.createElement('canvas') img.src = 'https://cmeimg-a.akamaihd.net/640/clsd/getty/991dda07ecb947f1834bf1aa89153cf6' const newWidth = 200 const newHeight = 200 const startX = 200 const startY = 100 img.onload = () => { canvas.width = newWidth; canvas.height = newHeight; canvas.getContext('2d').drawImage(img, startX, startY, newWidth, newHeight, 0, 0, newWidth, newHeight); } document.body.appendChild(canvas) document.body.appendChild(img) ``` Upvotes: 2 <issue_comment>username_2: `react-image-crop` using percent for scaling, make sure to calculate. Also make sure while create new object image on the fly to render virtual dom. Here, try this: ``` import React, { Component } from 'react'; import ReactCrop, { makeAspectCrop } from 'react-image-crop'; import { FlatButton, Dialog } from 'material-ui'; import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider'; import sample from './sample.png'; import 'react-image-crop/dist/ReactCrop.css'; class App extends Component { state = { showImageCropper: false, selectedImageURL: sample, crop: { x: 0, y: 0, // aspect: 16 / 9, }, selectedFile: null, croppedImage: sample }; showCropImageModal() { const actions = [ , , ]; if (this.state.showImageCropper) { return ( ); } } onCropComplete = (crop, pixels) => { } onCropChange = (crop) => { this.setState({ crop }); } // onImageLoaded = (image) => { // this.setState({ // crop: makeAspectCrop({ // x: 0, // y: 0, // // aspect: 10 / 4, // // width: 50, // }, image.naturalWidth / image.naturalHeight), // image, // }); // } handleCancel = () => { this.setState({ showImageCropper: false }); } handleCropClose = () => { let { crop } = this.state; // console.log("selectedFile", selectedFile); // console.log("crop",crop); const croppedImg = this.getCroppedImg(this.refImageCrop, crop); this.setState({ showImageCropper: false, croppedImage: croppedImg }) } getCroppedImg(srcImage,pixelCrop) { /* the parameters: - the image element - the new width - the new height - the x point we start taking pixels - the y point we start taking pixels - the ratio */ // Set up canvas for thumbnail // console.log(imgObj); // let img = new Image(); // img.src = this.state.selectedImageURL; // let tempCanvas = document.createElement('canvas'); // let tnCanvas = tempCanvas; // tnCanvas.width = newWidth; // tnCanvas.height = newHeight; // tnCanvas.getContext('2d').drawImage(img, startX, startY, newWidth, newHeight); // return tnCanvas; let img = new Image(); img.src = this.state.selectedImageURL; const targetX = srcImage.width * pixelCrop.x / 100; const targetY = srcImage.height * pixelCrop.y / 100; const targetWidth = srcImage.width * pixelCrop.width / 100; const targetHeight = srcImage.height * pixelCrop.height / 100; const canvas = document.createElement('canvas'); canvas.width = targetWidth; canvas.height = targetHeight; const ctx = canvas.getContext('2d'); ctx.drawImage( img, targetX, targetY, targetWidth, targetHeight, 0, 0, targetWidth, targetHeight ); return canvas.toDataURL('image/jpeg'); } handleOpen = () => { this.setState({ showImageCropper: true }); } render() { return ( { this.showCropImageModal() } ![]({this.state.selectedImageURL}) 
{this.refImageCrop = img}} alt="" /> ![]({this.state.croppedImage}) ); } } export default App; ``` Upvotes: 3 [selected_answer]
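To make the nine-argument `drawImage` call from the accepted approach concrete, here is a minimal framework-free helper (a TypeScript sketch for the browser; the `PixelCrop` shape and the PNG MIME type are assumptions, adjust to your crop library's output):

```ts
interface PixelCrop {
  x: number;      // left edge of the crop, in natural-image pixels
  y: number;      // top edge of the crop, in natural-image pixels
  width: number;
  height: number;
}

// Draws the source rectangle of `img` onto a canvas of exactly the crop size
// and returns it as a data URL. Assumes `img` has finished loading.
function cropToDataUrl(img: HTMLImageElement, crop: PixelCrop, mime = "image/png"): string {
  const canvas = document.createElement("canvas");
  canvas.width = crop.width;
  canvas.height = crop.height;
  const ctx = canvas.getContext("2d");
  if (!ctx) throw new Error("2d context unavailable");
  ctx.drawImage(
    img,
    crop.x, crop.y, crop.width, crop.height, // source rect (sx, sy, sw, sh)
    0, 0, crop.width, crop.height            // destination rect (dx, dy, dw, dh)
  );
  return canvas.toDataURL(mime);
}
```

Because the destination rectangle starts at (0, 0) and matches the source size, the result is a pure crop with no scaling; change `dw`/`dh` if you also want to resize.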
2018/03/15
4,029
16,148
<issue_start>username_0: I have created a custom camera and have implemented below code to crop the taken image, I have shown guides in the preview layer so I want to crop the image which appears in that area. ``` func imageByCropToRect(rect:CGRect, scale:Bool) -> UIImage { var rect = rect var scaleFactor: CGFloat = 1.0 if scale { scaleFactor = self.scale rect.origin.x *= scaleFactor rect.origin.y *= scaleFactor rect.size.width *= scaleFactor rect.size.height *= scaleFactor } var image: UIImage? = nil; if rect.size.width > 0 && rect.size.height > 0 { let imageRef = self.cgImage!.cropping(to: rect) image = UIImage(cgImage: imageRef!, scale: scaleFactor, orientation: self.imageOrientation) } return image! } ``` This code just works fine when & give the exact cropped image when the below line of code is commented, though I want the image streaming to be full screen so I have to use the below line of code. The image comes zoomed out sort of. ``` (self.previewLayer as! AVCaptureVideoPreviewLayer).videoGravity = AVLayerVideoGravity.resizeAspectFill ``` How do I solve this issue? Is the cropping code wrong? Here is the full Class code ``` import UIKit import AVFoundation class CameraViewController: UIViewController { @IBOutlet weak var guideImageView: UIImageView! @IBOutlet weak var guidesView: UIView! @IBOutlet weak var cameraPreviewView: UIView! @IBOutlet weak var cameraButtonView: UIView! @IBOutlet weak var captureButton: UIButton! var captureSession = AVCaptureSession() var previewLayer: CALayer! var captureDevice: AVCaptureDevice! /// This will be true when the user clicks on the click photo button. var takePhoto = false override func viewDidLoad() { super.viewDidLoad() } override func viewWillAppear(_ animated: Bool) { super.viewWillAppear(animated) captureSession = AVCaptureSession() previewLayer = CALayer() takePhoto = false requestAuthorization() } private func userinteractionToButton(_ interaction: Bool) { captureButton.isEnabled = interaction } /// This function will request authorization, If authorized then start the camera. private func requestAuthorization() { switch AVCaptureDevice.authorizationStatus(for: AVMediaType.video) { case .authorized: prepareCamera() case .denied, .restricted, .notDetermined: AVCaptureDevice.requestAccess(for: AVMediaType.video, completionHandler: { (granted) in if !Thread.isMainThread { DispatchQueue.main.async { if granted { self.prepareCamera() } else { let alert = UIAlertController(title: "unable_to_access_the_Camera", message: "to_enable_access_go_to_setting_privacy_camera_and_turn_on_camera_access_for_this_app", preferredStyle: UIAlertControllerStyle.alert) alert.addAction(UIAlertAction(title: "ok", style: .default, handler: {_ in self.navigationController?.popToRootViewController(animated: true) })) self.present(alert, animated: true, completion: nil) } } } else { if granted { self.prepareCamera() } else { let alert = UIAlertController(title: "unable_to_access_the_Camera", message: "to_enable_access_go_to_setting_privacy_camera_and_turn_on_camera_access_for_this_app", preferredStyle: UIAlertControllerStyle.alert) alert.addAction(UIAlertAction(title: "ok", style: .default, handler: {_ in self.navigationController?.popToRootViewController(animated: true) })) self.present(alert, animated: true, completion: nil) } } }) } } /// Will see if the primary camera is avilable, If found will call method which will asign the available device to the AVCaptureDevice. private func prepareCamera() { // Resets the session. 
self.captureSession.sessionPreset = AVCaptureSession.Preset.photo if #available(iOS 10.0, *) { let availableDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices self.assignCamera(availableDevices) } else { // Fallback on earlier versions // development, need to test this on iOS 8 if let availableDevices = AVCaptureDevice.default(for: AVMediaType.video) { self.assignCamera([availableDevices]) } else { self.showAlert() } } } /// Assigns AVCaptureDevice to the respected the variable, will begin the session. /// /// - Parameter availableDevices: [AVCaptureDevice] private func assignCamera(_ availableDevices: [AVCaptureDevice]) { if availableDevices.first != nil { captureDevice = availableDevices.first beginSession() } else { self.showAlert() } } /// Configures the camera settings and begins the session, this function will be responsible for showing the image on the UI. private func beginSession() { do { let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice) captureSession.addInput(captureDeviceInput) } catch { print(error.localizedDescription) } let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) self.previewLayer = previewLayer self.cameraPreviewView.layer.addSublayer(self.previewLayer) self.previewLayer.frame = self.view.layer.frame self.previewLayer.frame.origin.y = +self.cameraPreviewView.frame.origin.y (self.previewLayer as! AVCaptureVideoPreviewLayer).videoGravity = AVLayerVideoGravity.resizeAspectFill self.previewLayer.masksToBounds = true self.cameraPreviewView.clipsToBounds = true captureSession.startRunning() self.view.bringSubview(toFront: self.cameraPreviewView) self.view.bringSubview(toFront: self.cameraButtonView) self.view.bringSubview(toFront: self.guidesView) let dataOutput = AVCaptureVideoDataOutput() dataOutput.videoSettings = [((kCVPixelBufferPixelFormatTypeKey as NSString) as String):NSNumber(value:kCVPixelFormatType_32BGRA)] dataOutput.alwaysDiscardsLateVideoFrames = true if captureSession.canAddOutput(dataOutput) { captureSession.addOutput(dataOutput) } captureSession.commitConfiguration() let queue = DispatchQueue(label: "com.letsappit.camera") dataOutput.setSampleBufferDelegate(self, queue: queue) self.userinteractionToButton(true) } /// Get the UIImage from the given CMSampleBuffer. /// /// - Parameter buffer: CMSampleBuffer /// - Returns: UIImage? func getImageFromSampleBuffer(buffer:CMSampleBuffer, orientation: UIImageOrientation) -> UIImage? { if let pixelBuffer = CMSampleBufferGetImageBuffer(buffer) { let ciImage = CIImage(cvPixelBuffer: pixelBuffer) let context = CIContext() let imageRect = CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(pixelBuffer), height: CVPixelBufferGetHeight(pixelBuffer)) if let image = context.createCGImage(ciImage, from: imageRect) { return UIImage(cgImage: image, scale: UIScreen.main.scale, orientation: orientation) } } return nil } /// This function will destroy the capture session. func stopCaptureSession() { self.captureSession.stopRunning() if let inputs = captureSession.inputs as? 
[AVCaptureDeviceInput] {
            for input in inputs {
                self.captureSession.removeInput(input)
            }
        }
    }

    func showAlert() {
        let alert = UIAlertController(title: "Unable to access the camera", message: "It appears that either your device doesn't have camera or its broken", preferredStyle: .alert)
        alert.addAction(UIAlertAction(title: "cancel", style: .cancel, handler: {_ in
            self.navigationController?.dismiss(animated: true, completion: nil)
        }))
        self.present(alert, animated: true, completion: nil)
    }

    @IBAction func didTapClick(_ sender: Any) {
        userinteractionToButton(false)
        takePhoto = true
    }

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if segue.identifier == "showImage" {
            let vc = segue.destination as! ShowImageViewController
            vc.image = sender as! UIImage
        }
    }
}

extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        if connection.isVideoOrientationSupported {
            connection.videoOrientation = .portrait
        }
        if takePhoto {
            takePhoto = false

            // Rotation should be unlocked to work.
            var orientation = UIImageOrientation.up
            switch UIDevice.current.orientation {
            case .landscapeLeft:
                orientation = .left
            case .landscapeRight:
                orientation = .right
            case .portraitUpsideDown:
                orientation = .down
            default:
                orientation = .up
            }

            if let image = self.getImageFromSampleBuffer(buffer: sampleBuffer, orientation: orientation) {
                DispatchQueue.main.async {
                    let newImage = image.imageByCropToRect(rect: self.guideImageView.frame, scale: true)
                    self.stopCaptureSession()
                    self.previewLayer.removeFromSuperlayer()
                    self.performSegue(withIdentifier: "showImage", sender: newImage)
                }
            }
        }
    }
}
```

Here is the view hierarchy image

[![enter image description here](https://i.stack.imgur.com/9CYtD.png)](https://i.stack.imgur.com/9CYtD.png)
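No answer was recorded for this question, but the core difficulty it describes, mapping a rect from an aspect-fill preview back to image pixels, is plain geometry. Here is a sketch of that conversion (written in TypeScript for brevity; it translates line for line to Swift), assuming `resizeAspectFill` scales the image up to cover the view and centers the overflow:

```ts
interface Size { width: number; height: number; }
interface Rect { x: number; y: number; width: number; height: number; }

// Maps a rect given in preview-view coordinates (e.g. the guide view's frame)
// to image-pixel coordinates, assuming aspect-fill (cover + center) rendering.
function viewRectToImageRect(viewRect: Rect, view: Size, image: Size): Rect {
  // Aspect fill scales by the larger ratio so the image covers the view.
  const scale = Math.max(view.width / image.width, view.height / image.height);
  // The scaled image overflows the view; the overflow is split evenly on each side.
  const offsetX = (image.width * scale - view.width) / 2;
  const offsetY = (image.height * scale - view.height) / 2;
  return {
    x: (viewRect.x + offsetX) / scale,
    y: (viewRect.y + offsetY) / scale,
    width: viewRect.width / scale,
    height: viewRect.height / scale,
  };
}
```

Cropping the captured image with the rect this function returns, instead of the raw guide frame, accounts for the zoom that `resizeAspectFill` introduces.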
2018/03/15
560
2,015
<issue_start>username_0: I'm using knex version 3.10.10 in my Node app, connecting to a MySQL DB. My configuration of knex in the app uses the pool option.

1) Is there a need to EXPLICITLY return a connection to the pool after I have fired a query? If yes, how?

2) Is there a need to EXPLICITLY perform a check on a pool's connection before firing the query?

Thanks in advance<issue_comment>username_1: No. There is no need to do either. Knex handles a connection pool for you.

You can adjust the pool size if you need to by using the setting: `pool: { min: 0, max: 7 }` within your connection setup, and the documentation also includes a link to the library that Knex uses for pool handling if you care about the gory details. The knex documentation has a little info on this here: [link](http://knexjs.org/#Installation-pooling)

Each connection will be used by Knex for the duration of a query or a transaction, then released back to the pool.

BUT, if you implement transactions (i.e. multiple SQL statements to be saved or cancelled as a unit) without using Promises, then you will need to explicitly commit/rollback the transaction to properly complete it, which will also release the connection back to the pool when the transaction is complete. (See more on Knex transactions: [here](http://knexjs.org/#Transactions).)

Upvotes: 3 <issue_comment>username_2: There is no such info in the documentation, but based on the [source](https://github.com/tgriesser/knex/blob/master/src/util/make-knex.js#L73) [code](https://github.com/tgriesser/knex/blob/master/src/client.js#L275) you can access the `knex` pool like this

```
const knex = require('knex')(config);
const pool = knex.client.pool;
console.log(pool);
```

`knex` uses the [tarn](https://github.com/Vincit/tarn.js) pool under the hood, so you can check out its methods there.

P.S. I don't know where you got that `knex` version (3 point something), but the current version as of this answer is `0.14.4`

Upvotes: 2
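Putting the two answers together, here is a minimal sketch (TypeScript; the connection details and table name are placeholders) showing that queries borrow and release connections automatically, and that the only explicit call you need is `destroy()` at shutdown:

```ts
import knex from "knex"; // requires esModuleInterop; otherwise use require()

const db = knex({
  client: "mysql",
  connection: { host: "127.0.0.1", user: "app", password: "secret", database: "appdb" }, // placeholders
  pool: { min: 0, max: 7 }, // the pool-size knob mentioned in the first answer
});

async function main(): Promise<void> {
  // A connection is acquired for this query and released back to the pool automatically.
  const rows = await db.select("*").from("users").limit(1);
  console.log(rows);

  // Undocumented internal peek at the tarn pool, as in the second answer.
  console.log((db as any).client.pool);

  // The one explicit call you do need: tear the pool down when the app exits.
  await db.destroy();
}

main().catch(console.error);
```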
2018/03/15
752
3,178
<issue_start>username_0: I need my beta testers to have the beta app so they can test it, but at the same time they must have the production app to work in production. The problem is that it seems that once they become beta testers, they can only get the beta app from Google Play. Is there a way to have both versions on the same device without having to change the app name? Thanks a lot.<issue_comment>username_1: The short answer is No, it can't be done.

**Update: it turns out the suggestion below is not true...**

But you could sort of achieve this by defining multiple user accounts on the device. Create a second user account on the device that uses a different, not-in-the-beta Google account. User accounts have their own distinct set of installed apps, so you could then switch between the two user accounts to use the beta or prod app.

Upvotes: 0 <issue_comment>username_2: At the moment this is not possible by design. Beta testers are ideally users who get a slightly earlier version of your app, and use it day to day so you discover issues. The thinking behind this decision was that if you allowed both, then if issues were discovered, beta users wouldn't report them, they would just switch to the prod app. Also, as an app developer, the beta version should be "production ready" according to your internal QA.

This has value for other reasons. Beta users can't leave public reviews - instead they send private feedback. If you allowed both, then you wouldn't be able to have this feature.

A few well known apps (like Chrome) want users to be able to have both apps. For these apps, they have a separate package name for the Beta version and maintain a separate store listing: [Chrome Beta](https://play.google.com/store/apps/details?id=com.chrome.beta), [Chrome](https://play.google.com/store/apps/details?id=com.android.chrome). If you did this it would allow both to be installed. But you would have to maintain two store listings, two sets of reviews etc.

Upvotes: 4 [selected_answer]<issue_comment>username_3: Some time after releasing the app to production, a bug was reported that I couldn't replicate in the latest version. So I needed to check the production version. As they both have the same name, I knew I couldn't have both, so I tried to replace the beta version with the production one. This is what I found out about how to do it.

Step 1. On my phone I went to the app page, but the store insisted on installing the beta version.

Step 2. On my phone I changed the account to one without beta test access. The Play Store detected the other account and installed the beta version.

Step 3. Finally, I went to the store on my desktop and logged in with the normal account I have on my device! This worked! The production version was on my phone and I could test it!

So my solution to you is this: you can't have the two versions simultaneously on the same phone unless they have different names. But you can replace the beta with the production one if you have two accounts on the phone and A) log out from the account with access to the beta version and then install the production one, or B) install from a desktop.

Upvotes: 0
2018/03/15
728
3,038
<issue_start>username_0: >
> I am playing around with hyperledger-sawtooth. I have installed the
> sawtooth in ubuntu machine but identity transaction processor is not
> installed with sawtooth. so how can i use **identity-tp** command
>
>
2018/03/15
2,842
10,917
<issue_start>username_0: I am trying to create an internal app to upload files to Google Cloud. I don't want each individual user of this app to log in, so I'm using a service account.

I log in to the service account and everything is OK, but when I try to upload it gives me this error:

ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket

[![This is the error that I get](https://i.stack.imgur.com/ELTTA.png)](https://i.stack.imgur.com/ELTTA.png)

As you can see, I am logged in with both a service account and my personal account, and neither (service or personal) works<issue_comment>username_1: I can only think of a few things that might cause you to see this error:

1. Maybe you have an alias set up to a standalone installation of gsutil (which doesn't share credentials with gcloud)? **Edit**: it's also possible you're invoking the wrong gsutil entry point - make sure you're using `/google-cloud-sdk/bin/gsutil`, and not `/google-cloud-sdk/platform/gsutil/gsutil`. The `platform` path will not automatically know about your configured gcloud auth options.
2. Maybe your service account credentials have moved/are invalid now? If your boto file is referring to a keyfile path and the keyfile was moved, this might happen.
3. Maybe the gcloud boto file (that gcloud created to use with gsutil when you ran `gcloud auth login`) is gone. You can run `gsutil version -l` to see if it's shown in your config path. If gcloud's boto file is present, you should see a line similar to this:

config path(s): /Users/Daniel/.config/gcloud/legacy\_credentials/email<EMAIL>/.boto

You can run `gsutil version -l` to get a bit more info and look into the possibilities above. In particular, these attributes from the output will probably be the most helpful: `using cloud sdk`, `pass cloud sdk credentials to gsutil`, `config path(s)`, and `gsutil path`.

Upvotes: 3 <issue_comment>username_2: Does your service account actually have the required [permission](https://cloud.google.com/storage/docs/access-control/iam-permissions)? The [role(s)](https://cloud.google.com/storage/docs/access-control/iam-roles) that will give you this permission are roles/storage.objectViewer / roles/storage.objectAdmin / roles/storage.admin.

Please ensure the service account actually has the permissions in your Cloud Console, and then it should work.

--- UPDATE ---

Since you have the correct permission on the account, it's likely the correct account wasn't used in the gsutil command. This can happen if you have multiple installations of the gsutil tool; please ensure your gsutil points to the correct .boto file. There's a similar issue reported on the [github repo](https://github.com/GoogleCloudPlatform/gsutil/issues/457). You can see if the solution there works.

Ultimately, you can use a new machine / VM with a fresh install to test whether it works. You can do this easily by going to the Cloud Console and using the [Cloud Shell](https://cloud.google.com/shell/docs/quickstart). No real installation needed; it should be very simple to test. This should work, and it will basically isolate your issue (to one of multiple installations) on your original machine. After that, you basically just have to do a clean install to fix it.

Upvotes: 2 <issue_comment>username_3: It happened to me because I had an incomplete initialisation while running `gcloud init`.

I reinitialised the configuration using the `gcloud init` command and it worked fine.
Upvotes: 4 <issue_comment>username_4: If you installed *gsutil* using Python (without the gcloud SDK), it may help to run `gsutil config` and complete the initialisation steps.

Upvotes: 2 <issue_comment>username_5: Personally, I had an account with proper permissions registered, but I got that error as well despite verifying that my account was active using "sudo gcloud init"

What solved it for me was navigating to the ~/.gsutil directory and writing the following

```
sudo chown jovyan:jovyan *
```

which let my JupyterLab terminal run, not as root, but as the default jovyan user. After that it used my account, not Anonymous caller

Upvotes: 0 <issue_comment>username_6: I had a similar problem, and as always, it took me 2 hours but the solution was trivial, if only it was written somewhere...

I needed to log in (or authorize, whichever suits you) to *gsutil* in addition to being authorized with *gcloud*. I thought they were linked or whatever, but no.

After I ran `gsutil config` and authorized via the provided link (and the code that I pasted back to the console), it started working for me.

Note that I was also logged in to *gcloud* via a service account linked to my project, with the service account *.json* key saved locally (see `gcloud auth activate-service-account --help`).

Upvotes: 7 <issue_comment>username_7: I faced the same problem. It took me two days to get this thing working. I am writing about the whole setup; please refer to step 2 for the answer to the question. FYI my OS is Windows 10.

Step 1: Firstly, I faced problems installing gcloud, and this is what I did. The script (.\google-cloud-sdk\install.bat) which is supposed to add gcloud to the path was not working due to permission issues. I had to add the path manually in two places: 1) In the system variables, to the "PATH" variable I added the path to the gcloud bin, which in my case looks like C:\Users\774610\google-cloud-sdk\bin. 2) Additionally, gcloud needs Python, so to the "PATHEXT" variable I appended ".PY" at the end.

After performing these tasks gcloud started working.

Step 2: Even though gcloud was working, Maven was not able to connect to Cloud Storage, and the error was "401 Anonymous caller does not have storage.objects.list access to bucket"

I was pretty sure I did log in to my account and selected the correct project. I also tried adding the environment variable as shown in this documentation: "<https://cloud.google.com/docs/authentication/getting-started>"

Nothing seemed to be working even though all the credentials were perfectly set up.

While going through the gcloud documentation I came across this command - "gcloud auth application-default login" - which was exactly what I needed.

[Refer here for the difference between gcloud auth login and gcloud auth application-default login](https://stackoverflow.com/a/53307505/11919952)

In short, what this command does is obtain your credentials via a web flow and store them in 'the well-known location for Application Default Credentials', so any code/SDK you run will be able to find the credentials automatically

After this, Maven was successfully able to connect to Google Storage and do its stuff.

Hope this helps, thanks

Upvotes: 2 <issue_comment>username_8: I had the same issue; I tried `gsutil config`, and it recommended `gcloud auth login`, which opened Google in the browser. After I logged in, I could download the entire bucket and save it locally with `gsutil cp -r gs://my_bucket/Directory local_save_path`.
Upvotes: 3 <issue_comment>username_9: `gcloud auth login` **solved** my issue. You need both steps:

```
gcloud auth login
gcloud auth application-default login
```

Upvotes: 6 <issue_comment>username_10: Here is another way to edit roles:

```
gsutil iam ch allUsers:objectViewer gs://tf-learn-objectdetection
```

For more documentation:

```
gsutil iam help
```

Upvotes: 0 <issue_comment>username_11: Thank you for all the replies. I would like to share my own experience.

I had to log in as the user that is defined when installing GitLab Runner. By default, the user indicated in the installation doc is "gitlab-runner".

So, first, I added a password for this user:

```
passwd gitlab-runner
```

then:

```
su - gitlab-runner
gcloud auth login
gcloud auth application-default login
```

The issue is solved. Maybe there is a better way, by directly putting the Google auth files under /home/gitlab-runner

Upvotes: 1 <issue_comment>username_12: I faced the same issue. I used gcloud auth login and followed the link

Upvotes: 1 <issue_comment>username_13: Use this command to resolve some issues:

```
gsutil config
```

Follow the link in the browser to get a code, then paste it into your terminal.

Upvotes: 3 <issue_comment>username_14: If you are using a service account, you need to authorize it first; otherwise `gsutil` won't have permission to read/write

```
gcloud auth activate-service-account --key-file=service_account_file.json
```

Upvotes: 1 <issue_comment>username_15: Use gcloud auth login

1. Go to the link it shows
2. Copy the verification code
3. Paste the verification code

Upvotes: 0 <issue_comment>username_16: In my case, even after using the `gsutil` solutions discussed in other answers, I got the error. After checking other Google search results, I found out that the reason was that I was authenticating with "my user" while running `gsutil` as root. Thanks to the answer on the `gsutil` page on GitHub: <https://github.com/GoogleCloudPlatform/gsutil/issues/457>

Upvotes: 0 <issue_comment>username_17: **Let me explain what helped me, step by step:**

**My requirement was to [enable CORS](https://stackoverflow.com/q/65653801/9774005), but I faced the asked issue, so I followed the steps below:**

**On the browser side:**

1. Open the Google Cloud console in your browser.
2. Open the Cloud Shell editor.
3. Type `gcloud auth login`.
4. Now it will show a command with a URL.
5. Copy that command. Don't close the browser.

**On the PC GCloud software side:**

1. [Download GCloud Sdk Installer.exe](https://dl.google.com/dl/cloudsdk/channels/rapid/GoogleCloudSDKInstaller.exe)
2. Open GCloud on your PC. It will ask you to sign in via the browser.
3. Sign in with the correct email id.
4. Select your project from the shown list.
5. Paste the previously copied command.
6. Again it will ask you to sign in.
7. Select the proper account to sign in.
8. Now the GCloud cmd will show you another command with a URL `as output`.
9. Copy the `output`, open your browser, then paste it. Done! It will show something like: You are now logged in as `<EMAIL>`

*Now I'm able to set `CORS` without any exception. Hope these steps will be helpful for someone who is new to the issue.*

Upvotes: 0 <issue_comment>username_18: Looks like account information is not stored with gsutil

Step 1: Run gsutil config
Step 2: Copy the URL into a browser
Step 3: Select the account and grant permission
Step 4: Copy the key and provide it at the gsutil prompt (step 1 will be asking for this key to proceed)
Step 5: Run the command whose access was denied

Thank you username_6!!! You saved my day...
Upvotes: 0 <issue_comment>username_19: For me the problem was: I had installed gsutil following the instructions at <https://cloud.google.com/storage/docs/gsutil_install#deb>, and afterwards I used pip to install the dependencies of a project, one of which was `gsutil`. The two installations clashed, and the solution was to uninstall the pip one from the Python modules:

```
pip uninstall gsutil
```

Upvotes: 0
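If the same 401 shows up from application code rather than the CLI, the usual cause is identical: the client library is falling back to anonymous credentials. A sketch (TypeScript; the key-file path and bucket name are placeholders) using the Node.js Cloud Storage client with an explicit service-account key, which sidesteps any ambient-credential confusion:

```ts
import { Storage } from "@google-cloud/storage";

// Pointing directly at the service-account key avoids relying on
// whatever credentials gcloud/gsutil happen to have configured.
const storage = new Storage({ keyFilename: "/path/to/service-account.json" }); // placeholder path

async function listBucket(): Promise<void> {
  // This is the call that needs the storage.objects.list permission
  // from the error message.
  const [files] = await storage.bucket("my-bucket").getFiles(); // placeholder bucket
  for (const f of files) console.log(f.name);
}

listBucket().catch(console.error);
```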
2018/03/15
1,224
4,516
<issue_start>username_0: What I'm trying to do here is to compare the performance of single-threading and multi-threading by doing a simple, repetitive operation. I have two threads assigning the value 0xde to an array, each thread taking one half of the array (first and second), while the single-threaded version does the same work by itself from index 0 to the end.

What I do not understand is that even though those sub-threads are each doing half the work of the single thread (that is, the main thread), they're taking more time to finish their task! I don't expect it to take half the time of single-threading, but I can't possibly imagine why it would take longer. And even more surprisingly, if I switch the order and do the single-threading first, then I get the result I wanted. I could really use some help on this, as it is all messed up in my head. Thanks in advance!

P.S. I'm using a Raspberry Pi 3, which has 4 ARM cores, if that helps.

This is the result I got.

Multithreading1 : 46 ms Multithreading2 : 50 ms Singlethreading : 34 ms

```
#include <stdio.h>
#include <pthread.h>
#include <time.h>

#define SIZE 1000000

clock_t difference = 0;
clock_t difference1 = 0;
clock_t difference2 = 0;

void *substitute1(void *operand)
{
    int *arr = (int *)operand;
    int i = 0;
    clock_t before1 = clock();
    for (i = 0; i < (SIZE/2); i++) {
        arr[i] = 0x00de;
    }
    difference1 = clock() - before1;
    return NULL;
}

void *substitute2(void *operand)
{
    int *arr = (int *)operand;
    int i = 0;
    clock_t before2 = clock();
    for (i = (SIZE/2); i < SIZE; i++) {
        arr[i] = 0x00de;
    }
    difference2 = clock() - before2;
    return NULL;
}
```<issue_comment>username_1: Hi. Multithreading's performance can only be meaningfully measured with a large volume of data; with a very small volume of data you cannot measure the performance of a multithreaded application. The reasons: as you said, you have 4 processors in your system, which is enough to run your 2 threads in parallel. But why is it taking more time than a single thread?

> 1. To create a thread, the O/S needs to allocate memory for each thread, which takes time (even though it is a tiny bit).
> 2. When you create multiple threads, context switching is needed, which also takes time.
> 3. The memory allocated to the threads needs to be released, which also takes time.

So when you run a small operation with multiple threads, its performance will be the same as a single thread, or even worse; it is not suitable at all. So your results are to be expected in this case. To measure the performance of a multithreaded architecture, use a large amount of data with a complex operation; only then can you see the difference.

Now, just for understanding, see the following scenario.
Just consider that the sleep represents the total time required by a function to complete its task.

Try it like below and you can see the difference:

```
#include <stdio.h>
#include <unistd.h>

void callme()
{
    printf("In callme()\n");
    sleep(2);
}

int main(void)
{
    // read the system time here
    callme();
    callme();
    callme();
    callme();
    callme();
    // read the system time here and check how much time it took in a
    // single-threaded architecture: it will take more than 10 seconds
    return 0;
}
```

Now try it with a multithreaded architecture:

```
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

void *callme(void *arg)
{
    printf("In callme()\n");
    sleep(2);
    return NULL; // better: use pthread_exit(NULL);
}

int main(void)
{
    // read the system time here
    pthread_t thread1;
    pthread_t thread2;
    pthread_t thread3;
    pthread_t thread4;
    pthread_t thread5;

    pthread_create(&thread1, NULL, callme, NULL);
    pthread_create(&thread2, NULL, callme, NULL);
    pthread_create(&thread3, NULL, callme, NULL);
    pthread_create(&thread4, NULL, callme, NULL);
    pthread_create(&thread5, NULL, callme, NULL);

    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
    pthread_join(thread3, NULL);
    pthread_join(thread4, NULL);
    pthread_join(thread5, NULL);

    // read the system time here and check how much time it took:
    // it will take hardly 2.5 to 3 seconds, a saving of 7 to 7.5 seconds
    // over the single-threaded version
    return 0;
}
```

Hope this will help you to understand.

Upvotes: 2 <issue_comment>username_2: Performance improvement in multi-threaded programs comes from distributing the workload between multiple processing units. So your program would have to use the processor enough to justify splitting the workload up. However, all you are doing here is writing data to memory; there is no processing going on, so you are bound by your memory access, as explained [here](https://stackoverflow.com/a/17348330/4454124).

Upvotes: 2
2018/03/15
494
1,282
<issue_start>username_0: I am using HDInsight Spark 2.1, and in my Jupyter notebook I would like to load multiple Spark packages.

```
%%configure -f
{ "conf": {"spark.jars.packages": "com.databricks:spark-avro_2.11:3.2.0"}}
```

But when I try to do

```
%%configure -f
{ "conf": {"spark.jars.packages": "com.microsoft.azure.cosmosdb.spark,com.databricks:spark-avro_2.11:3.2.0"}}
```

OR

```
{ "conf": {"spark.jars.packages": ["com.databricks:spark-avro_2.11:3.2.0","com.microsoft.azure.cosmosdb.spark"] }}
```

I get an error. What am I doing wrong?<issue_comment>username_1: Try this

```
%%configure -f
{ "conf": {"spark.jars.packages": [
     "com.databricks:spark-avro_2.11:3.2.0",
     "com.microsoft.azure.cosmosdb.spark"
   ]
  }
}
```

Upvotes: 0 <issue_comment>username_2: Late reply, but hopefully still helpful:

```
%%configure -f
{ "conf": { "spark.jars.packages": "com.databricks:spark-avro_2.11:3.2.0,com.microsoft.azure:azure-cosmosdb-spark_2.3.0_2.11:1.2.2"} }
```

You can also add repositories the same way:

```
%%configure -f
{ "conf": { "spark.jars.packages": "com.databricks:spark-avro_2.11:3.2.0,com.microsoft.azure:azure-cosmosdb-spark_2.3.0_2.11:1.2.2", "spark.jars.repositories": "http://nexus.internal/repository/maven-public/"} }
```

Upvotes: 2
2018/03/15
1,533
5,170
<issue_start>username_0: I'm currently working with Bootstrap 4 in SCSS. I want to change the inner $grid-gutter-width on smartphones only.

According to \_grid.scss

`$grid-columns: 12 !default; $grid-gutter-width: 30px !default;`

On the Bootstrap site, it is said that:

> Updated grid sizes, mixins, and variables. Grid gutters now have a Sass map so you can specify specific gutter widths at each breakpoint.

I can't find the map, or how this can be done.<issue_comment>username_1: This looks like a mistake in the docs. There used to be a map, but [it was removed](https://github.com/twbs/bootstrap/issues/22944) before 4.0.0 was released.

However, it would be fairly easy to add this for just `xs` with SASS. For example 5px on mobile...

```
@media (min-width: map-get($grid-breakpoints, xs)) and (max-width: map-get($grid-breakpoints, sm)){
    .row > .col,
    .row > [class*="col-"] {
        padding-right: 5px;
        padding-left: 5px;
    }
}
```

<https://www.codeply.com/go/XgynFzTmGv>

Upvotes: 5 [selected_answer]<issue_comment>username_2: Same as Zim's answer but with the row fix and using the $grid-gutter-width variable. 10% nicer if you are using a preprocessor.

UPDATE: I added more styling to preserve the functionality of `.no-gutters`, which was broken before.

```
// HALF GUTTER WIDTH ON XS
@media (max-width: map-get($grid-breakpoints, sm)){
  .row:not(.no-gutters) {
    margin-right: -$grid-gutter-width / 4;
    margin-left: -$grid-gutter-width / 4;
  }

  .row:not(.no-gutters) > .col,
  .row:not(.no-gutters) > [class*="col-"] {
    padding-right: $grid-gutter-width / 4;
    padding-left: $grid-gutter-width / 4;
  }
}
```

Upvotes: 4 <issue_comment>username_2: The way the [Bootstrap documentation](https://getbootstrap.com/docs/4.2/utilities/spacing/#negative-margin) intends for this to be done is by setting your minimum gutter width on the `$grid-gutter-width` variable and then using the margin/padding helper classes all the way up for larger breakpoints (mobile first).

```
 Custom column padding

 Custom column padding
```

In the example they are setting a negative x margin on the row, then setting the same amount of x padding on each column. This will get fairly verbose and difficult to read if you are setting a different width for every breakpoint. I recommend using my alternate answer above.

Upvotes: 2 <issue_comment>username_3: Configurable map of gutter sizes across breakpoints:

**\_settings.scss**

```
// Grid columns
// Custom map of gutter widths across breakpoints.
$grid-gutter-widths: (
  xs: 16px,
  md: 20px,
);

// Default Bootstrap gutter width variable.
$grid-gutter-width: map-get($grid-gutter-widths, md);
```

**styles.scss**

```
@import './settings';
@import 'node_modules/bootstrap/scss/bootstrap';

.container {
  @each $breakpoint, $gutter in $grid-gutter-widths {
    @include media-breakpoint-up($breakpoint) {
      @include make-container($gutter);
    }
  }
}

.row {
  @each $breakpoint, $gutter in $grid-gutter-widths {
    @include media-breakpoint-up($breakpoint) {
      @include make-row($gutter);
    }
  }

  .row > .col,
  .row > [class*='col-'] {
    @each $breakpoint, $gutter in $grid-gutter-widths {
      @include media-breakpoint-up($breakpoint) {
        @include make-col-ready($gutter);
      }
    }
  }
}
```

Upvotes: 2 <issue_comment>username_4: Same as username_2's answer but with some automation that gives the possibility to choose a specific gutter width for every breakpoint:

```
/* Specify your own gutters for every breakpoint.
   I use media-breakpoint-up, so to avoid a default width it's better to always
   include "xs"; beyond that you can do whatever you want */
$grid-gutter-widths: (
  xs: 10px,
  /* "sm" takes the same width as "xs" automatically */
  md: 20px,
  lg: 30px
);

/* You don't need to change anything below */
@each $grid-breakpoint, $grid-gutter-width in $grid-gutter-widths {
  @include media-breakpoint-up($grid-breakpoint) {
    $grid-gutter-half-width: $grid-gutter-width / 2;
    .row:not(.no-gutters) {
      margin-right: -$grid-gutter-half-width;
      margin-left: -$grid-gutter-half-width;
    }
    .row:not(.no-gutters) > .col,
    .row:not(.no-gutters) > [class*="col-"] {
      padding-right: $grid-gutter-half-width;
      padding-left: $grid-gutter-half-width;
    }
  }
}
```

I'd recommend adjusting $container-max-widths afterwards, because there is a chance that it may change the original width a bit.

UPD: I've made a small npm package with the solution. You can also find a pretty detailed doc there: <https://github.com/DZakh/Custom-gutters-bootstrap-plugin>

Upvotes: 3 <issue_comment>username_5: I've just made a package that allows the customisation of the gutters and container margins per breakpoint: <https://github.com/BenceSzalai/bootstrap-responsive-grid>

It can be an effective option for those interested in a pure SCSS solution, as it changes the way the Bootstrap CSS is generated in the first place rather than adding overrides after the already generated CSS. It is ideal to use in a build chain, for example with Webpack compiling your `css` from npm packages.

Upvotes: 0
2018/03/15
502
1,947
<issue_start>username_0: I'm new to Pub/Sub and Dataflow/Beam. I have done a task in Spark and Kafka, and I want to do the same using Pub/Sub and Dataflow/Beam. From what I have understood so far, Kafka is similar to Pub/Sub, and Spark is similar to Dataflow/Beam.

The task is to take a JSON file and write it to a Pub/Sub topic. Then, using Beam/Dataflow, I need to get that data into a PCollection. How can I achieve this?<issue_comment>username_1: Pub/Sub is a streaming source/sink (it doesn't make sense to read/write to it only once). Dataflow Python SDK support for streaming is not yet available.

Documentation: <https://cloud.google.com/dataflow/release-notes/release-notes-python>.

Once streaming is available, you should be able to do this pretty trivially.

However, if you are going from file -> Pub/Sub and then Pub/Sub -> PCollection, you should be able to do this with a batch pipeline and drop out the Pub/Sub aspect. You can look at the basic file IO for Beam.

Upvotes: 2 <issue_comment>username_2: I solved the above problem. I'm able to continuously read data from a Pub/Sub topic, do some processing, and then write the result to a datastore.

```
with beam.Pipeline(options=options) as p:

    # Read from PubSub into a PCollection.
    lines = p | beam.io.ReadStringsFromPubSub(topic=known_args.input_topic)

    # Group and aggregate each JSON object.
    transformed = (lines
                   | 'Split' >> beam.FlatMap(lambda x: x.split("\n"))
                   | 'jsonParse' >> beam.ParDo(jsonParse())
                   | beam.WindowInto(window.FixedWindows(15,0))
                   | 'Combine' >> beam.CombinePerKey(sum))

    # Create Entity.
    transformed = transformed | 'create entity' >> beam.Map(
      EntityWrapper(config.NAMESPACE, config.KIND, config.ANCESTOR).make_entity)

    # Write to Datastore.
    transformed | 'write to datastore' >> WriteToDatastore(known_args.dataset_id)
```

Upvotes: 4 [selected_answer]
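The answers cover the Pub/Sub -> PCollection side; for completeness, the first half of the task, pushing a local JSON file onto the topic, can be sketched with the Pub/Sub client library. This sketch is in TypeScript using the Node.js client (method names per the current `@google-cloud/pubsub` docs); the file path and topic name are placeholders, and it assumes the file contains a JSON array of records:

```ts
import { readFile } from "node:fs/promises";
import { PubSub } from "@google-cloud/pubsub";

const pubsub = new PubSub();

// Reads a local JSON file and publishes each top-level record as one message.
async function publishJsonFile(path: string, topicName: string): Promise<void> {
  const records: unknown[] = JSON.parse(await readFile(path, "utf8"));
  const topic = pubsub.topic(topicName);
  for (const record of records) {
    // Pub/Sub message payloads are raw bytes, so serialize each record.
    await topic.publishMessage({ data: Buffer.from(JSON.stringify(record)) });
  }
}

publishJsonFile("events.json", "my-topic").catch(console.error); // placeholder names
```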
2018/03/15
948
2,612
<issue_start>username_0: I'm finding I want to update a struct pretty often, then pipe the result to another function. The need to update my struct keeps breaking up my pipes. I find myself doing this a lot: ``` my_struct = %{my_struct | my_field_in_struct: a_new_value} |> my_funct1 my_struct = %{my_struct | my_field_in_struct: a_new_value} |> my_funct2 my_struct = %{my_struct | my_field_in_struct: a_new_value} |> my_funct3 ``` I'd like to do something like: ``` my_struct |> %{ | my_field_in_struct: a_new_value} |> my_funct1 |> %{ | my_field_in_struct: a_new_value} |> my_funct2 |> %{ | my_field_in_struct: a_new_value} |> my_funct3 ``` The original syntax may not be all that bad, but still. I know I can use *Map.put()*, but then I would have to write a function in my module to convert the resulting map back to my struct type. Has anyone run into this tiny annoyance before? Is there a clean alternative?<issue_comment>username_1: You could also pass in an anonymous function if you *really* wanted to: ``` my_struct |> (&(%{ &1| my_field_in_struct: a_new_value})).() ``` or ``` my_struct |> (fn struct -> %{ struct| my_field_in_struct: a_new_value} end).() ``` but I don't think that looks very great / readable Upvotes: 2 <issue_comment>username_2: What is really great about Elixir, it has macros. So why would not you define your own pipe operator if this is a very common operation for your application? ```rb defmodule StructPipe do defmacro left ~>> right do {:%{}, [], [{:|, [], [left, right]}]} end end defmodule MyStruct do defstruct ~w|foo bar baz|a end defmodule StructPipe.Test do import StructPipe def test do %MyStruct{foo: 42} ~>> [bar: 3.14] ~>> [baz: "FOOBAR"] end end IO.inspect StructPipe.Test.test, label: "Resulting in" #⇒ Resulting in: %MyStruct{bar: 3.14, baz: "FOOBAR", foo: 42} ``` --- Note that it might be safely mixed with a normal [`Kernel.|>/2`](https://hexdocs.pm/elixir/Kernel.html#%7C%3E/2) pipe: ```rb %MyStruct{foo: 42} |> IO.inspect(label: "Ini") ~>> [bar: 3.14, baz: 3.14] |> IO.inspect(label: "Mid") ~>> [baz: "FOOBAR"] |> IO.inspect(label: "Aft") #⇒ Ini: %MyStruct{bar: nil, baz: nil, foo: 42} # Mid: %MyStruct{bar: 3.14, baz: 3.14, foo: 42} # Aft: %MyStruct{bar: 3.14, baz: "FOOBAR", foo: 42} ``` Upvotes: 3 <issue_comment>username_3: You can use [Map.replace!/3](https://hexdocs.pm/elixir/1.12/Map.html#replace!/3) now. It uses [:maps.update/3](https://www.erlang.org/doc/man/maps.html#update-3) under the hood, which is equivalent to the `%{struct | field: value}` syntax. Upvotes: 2 [selected_answer]
2018/03/15
1,302
4,851
<issue_start>username_0: I have a web app running correctly in Tomcat that I want to run on the new OpenLiberty server. The app is starting correctly inside OpenLiberty, but at the moment of database connection initiation it throws the following exception:

```
[Default Executor-thread-15] 2018-03-15 15:02:30 ERROR TomcatConnectionManager:41 - Loading jdbc/mysql/myaap failure
javax.naming.NameNotFoundException: java:/comp/env
 at com.ibm.ws.jndi.url.contexts.javacolon.internal.JavaURLName.(JavaURLName.java:83)
 at com.ibm.ws.jndi.url.contexts.javacolon.internal.JavaURLNameParser.parse(JavaURLNameParser.java:39)
 at com.ibm.ws.jndi.url.contexts.javacolon.internal.JavaURLNameParser.parse(JavaURLNameParser.java:60)
 at com.ibm.ws.jndi.url.contexts.javacolon.internal.JavaURLContext$NameUtil.(JavaURLContext.java:474)
 at com.ibm.ws.jndi.url.contexts.javacolon.internal.JavaURLContext.lookup(JavaURLContext.java:321)
 at com.ibm.ws.jndi.url.contexts.javacolon.internal.JavaURLContext.lookup(JavaURLContext.java:370)
 at org.apache.aries.jndi.DelegateContext.lookup(DelegateContext.java:161)
```

The above exception is thrown during the lookup phase:

```
Context initContext = new InitialContext();
Context envContext = (Context) initContext.lookup("java:/comp/env");
```

Is there any way to make it work on OpenLiberty with as few changes as possible?<issue_comment>username_1: On OpenLiberty the equivalent lookup would look like this:

```
Context initContext = new InitialContext();
Context envContext = (Context) initContext.lookup("java:comp/env");
```

The key is that you need to use `java:comp/...` instead of `java:/comp/...`

---

The reason why Tomcat is different than Liberty is because Tomcat is just a servlet container and Liberty conforms to the full Java EE specification. According to section [EE.5.2.2 of the Java EE 7 spec](http://download.oracle.com/otn-pub/jcp/java_ee-7-fr-spec/JavaEE_Platform_Spec.pdf):

> The application component's naming environment is composed of four logical
> namespaces, representing naming environments with different scopes. The four
> namespaces are:
>
> * java:comp – Names in this namespace are per-component (for example, per enterprise
> bean). Except for components in a web module, each component gets
> its own java:comp namespace, not shared with any other component. Components
> in a web module do not have their own private component namespace.
> See note below.
> * java:module – Names in this namespace are shared by all components in a
> module (for example, all enterprise beans in a single EJB module, or all components
> in a web module).
> * java:app – Names in this namespace are shared by all components in all modules
> in a single application, where "single application" means a single deployment
> unit, such as a single ear file, a single module deployed standalone, etc.
> For example, a war file and an EJB jar file in the same ear file would both have
> access to resources in the java:app namespace.
> * java:global – Names in this namespace are shared by all applications deployed
> in an application server instance. Note that an application server instance
> may represent a single server, a cluster of servers, an administrative
> domain containing many servers, or even more. The scope of an application
> server instance is product-dependent, but it must be possible to deploy multiple
> applications to a single application server instance.

Upvotes: 2 <issue_comment>username_2: Had a similar problem going between WebSphere and Tomcat.
I'm developing and testing on a Tomcat server, using utilities I can't change that handle the DB connection to our DB2. On WebSphere the code uses a constant set to "jdbc/COMPDB2" to retrieve the DataSource; when I configure Tomcat and my web.xml, the name resolves to "java:comp/env/jdbc/SFCCDB2". My workaround for the local workspace is to add a listener that copies the resource to the level the code expects in the InitialContext. I'm not very experienced with the server side of things, but this is working so far using TomEE 7.0.81.

```
InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/SFCCDB2");
javax.naming.Context envCtx = (javax.naming.Context) ctx.lookup("java:comp/env");
try{
    /*
     Added this because after redeploying code to the server it would
     error connecting to the DB with an SQLException "Datasource is closed"
    */
    DataSource dataSource = (DataSource) ctx.lookup("jdbc/COMPDB2");
    ctx.destroySubcontext("jdbc");
} catch (NamingException e){
    //Doesn't exist; safe to just add
}
ctx.createSubcontext("jdbc");
ctx.bind("jdbc/COMPDB2", ds);
ctx.close();
```

Upvotes: 0
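For completeness, a minimal sketch of the corrected Liberty-style lookup from the accepted answer, applied to the resource name from the question's stack trace (`jdbc/mysql/myaap` is assumed to be how the dataSource is named in server.xml):

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class LibertyLookup {
    public static DataSource dataSource() throws NamingException {
        InitialContext ctx = new InitialContext();
        // java:comp/env (no slash after the colon), then the resource name
        return (DataSource) ctx.lookup("java:comp/env/jdbc/mysql/myaap");
    }
}
```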
2018/03/15
895
3,445
<issue_start>username_0: For example, the following error is not caught by a regular try/catch on request: ``` (node:6432) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 pipe listeners added. Use emitter.setMaxListeners() to increase limit Error: Exceeded maxRedirects. Probably stuck in a redirect loop https://192.168.127.12/ at Redirect.onResponse (/XXX/node_modules/request/lib/redirect.js:98:27) at Request.onRequestResponse (/XXX/node_modules/request/request.js:990:22) at emitOne (events.js:115:13) at ClientRequest.emit (events.js:210:7) at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:565:21) at HTTPParser.parserOnHeadersComplete (_http_common.js:116:23) at TLSSocket.socketOnData (_http_client.js:454:20) at emitOne (events.js:115:13) at TLSSocket.emit (events.js:210:7) at addChunk (_stream_readable.js:266:12) ```<issue_comment>username_1: You can simply disable follow 3xx redirects in request options object `request({followRedirect: false}, ...)` Or change Nodejs `process.setMaxListeners(0);` to 0 (unlimite) Upvotes: 3 [selected_answer]<issue_comment>username_2: By default `request` will follow up to ten redirects. If you have a server that is misbehaving, for example redirecting to itself, request will follow the redirect up to ten times and then return an error. However, the request also listens for an incoming request body - you can pipe a Readable to the `request` object and this library will read and upload that response. This is done by registering a `request.on('pipe', (src) => ...)` listener in the request source code. The problem is that on each followed redirect, `request` redeclares the `.on('pipe', (src) => ...)` listener, so by the 7th redirect you have 7 listeners and so on. By default the EventEmitter warns after ten of the same event have been registered, which is why you get Possible memory leak detected exactly once, with the default settings for both the EventEmitter and for `request`. This is probably a mistake - if you did manage to pipe the request body to the 7th HTTP request, you would have seven listeners by that point, each firing independently, and it's not clear what would happen - you might end up with the request body duplicated seven times, or just written seven times to something that's not listening. However, there is a separate issue where this piped request body is only sent on the first request, not all of them, see #3138. So **is there actually a memory leak if you have lots of redirects?** Probably not... the listeners are registered on the request object, as long as that request object goes out of scope (you're not holding a reference to it somewhere) then it will eventually get garbage collected and the event emitters will be collected. **What is the fix?** In this library, either unregister and re-register the pipe handler each time a new redirect is issued, don't re-register the pipe handler if we are following a redirect. In your code, it's probably safe to either set maxRedirects to 9, or EventEmitter default to 11, either of which will resolve the issue, as long as your `request` objects are getting garbage collected (they probably are). Note **if you are getting `memory leak detected` for behavior other than redirect following** there may still be a memory leak, see for example <https://github.com/request/request/issues/2575>. Upvotes: 0
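A sketch combining both suggestions from the first answer (pick one of the two options; `followRedirect: false` makes `maxRedirects` irrelevant):

```js
const request = require('request');

request({
  url: 'https://192.168.127.12/',
  followRedirect: false, // option 1: don't follow redirects at all
  maxRedirects: 9        // option 2: stay below the 10-listener warning threshold
}, (err, res) => {
  if (err) return console.error(err.message);
  console.log(res.statusCode, res.headers.location);
});
```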
2018/03/15
864
3,255
<issue_start>username_0: How to looping multi array using `foreach()`? Or how to define `$looping` on `for()`? I'm tired of using `AND` logic on `foreach()`. This is my code:

```php
$b=["a","b","c","d","5"];
$a=["1","3","4","5"];

foreach($a as $a && $b as $b) {
    echo $a.$b;
}
// AND logic Error

$tipe=trim(fgets(STDIN));
$post=trim(fgets(STDIN));

if($tipe == "1") {
    $url="http://example.com/api"
    $postdata="post_data={$post[$x]}";
}

for($x=0;$x<10;$x++) {
    $ch=curl_init();
    curl_setopt($ch, CURLOPT_URL,$url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS,$postdata);
    curl_setopt($ch, CURLOPT_POST, 1);
    curl_exec($ch);
}
// $x not defined
```
2018/03/15
822
3,154
<issue_start>username_0: I have an array which I would like to split into multiple arrays based on a key(K1 etc key) value (A,AA etc values). My example:

```
array = [{"K1":"A","K2":"B","k3":"AA"},{"K1":"A","K2":"B","k3":"BB"},{"K1":"A","K2":"B","k3":"BB"},{"K1":"A","K2":"B","k3":"CC"}]
```

Should return where values AA, BB, CC are the variables/values for separation. The array is dynamic so it also could have for example DD, EE etc

```
array 1= [{"K1":"A","K2":"B","k3":"AA"}]
array 2= [{"K1":"A","K2":"B","k3":"BB"},{"K1":"A","K2":"B","k3":"BB"}]
array 3= [{"K1":"A","K2":"B","k3":"CC"}]
```

How do I achieve this
2018/03/15
836
2,586
<issue_start>username_0: I would like to replace the space characters inside an XML file, as for example: from:

```
<userName>Test User 1</userName>
```

to:

```
<userName>Test_User_1</userName>
```

Prefer using sed as this is the most feasible option at this moment. Appreciate any suggestion or feedback. Thanks<issue_comment>username_1: Don't parse HTML with regex, use a proper XML/HTML parser.

### theory :

According to the compiling theory, HTML can't be parsed using regex based on [finite state machine](http://en.wikipedia.org/wiki/Finite-state_machine). Due to hierarchical construction of HTML you need to use a [pushdown automaton](http://en.wikipedia.org/wiki/Pushdown_automaton) and manipulate [LALR](http://en.wikipedia.org/wiki/LR_parser) grammar using tool like [YACC](http://en.wikipedia.org/wiki/Yacc).

### realLife©®™ everyday tool in a [shell](/questions/tagged/shell "show questions tagged 'shell'") :

You can use one of the following :

[xmllint](http://xmlsoft.org/xmllint.html)
[xmlstarlet](http://xmlstar.sourceforge.net/docs.php)
[saxon-lint](https://github.com/sputnick-dev/saxon-lint) (my own project)

---

Check: [Using regular expressions with HTML tags](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags)

---

### Example using [xpath](/questions/tagged/xpath "show questions tagged 'xpath'") :

```
xmlstarlet edit -L -u '//userName' \
    -x 'translate(//userName/text(), " ", "_")' file.xml
```

Output :
--------

```
$ cat file.xml
<?xml version="1.0"?>
<userName>Test_User_1</userName>
```

Upvotes: 2 <issue_comment>username_1: Using [python](/questions/tagged/python "show questions tagged 'python'") and [xpath](/questions/tagged/xpath "show questions tagged 'xpath'") (for fun):

```
from lxml import etree

myXML = 'file.xml'
tree = etree.parse(myXML)
root = tree.getroot()
code = root.xpath("//userName")
code[0].text = code[0].text.replace(' ', '_')
print(code[0].text)
etree.ElementTree(root).write(myXML, pretty_print=True)
```

Output :
--------

```
$ cat file.xml
<?xml version="1.0"?>
<userName>Test_User_1</userName>
```

Upvotes: 0 <issue_comment>username_1: Using [perl](/questions/tagged/perl "show questions tagged 'perl'") also for fun :

```
#!/usr/bin/env perl
# edit file.xml file and save new one in new.xml

use strict;
use warnings;
use XML::LibXML;

my $xl = XML::LibXML->new();
my $xml = $xl->load_xml(location => 'file.xml');

for my $node ($xml->findnodes('//userName/text()')) {
    my $value = $node->getValue;
    print $value;
    $value =~ s/\s+/_/g;
    $node->setData($value);
}

$xml->toFile('new.xml');
```

Upvotes: 0
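If sed really is the only tool available, a fragile sketch that only copes with a single-line `<userName>` element with no attributes or nested tags (GNU sed; the `:a`/`ta` loop replaces one space per pass until none remain between the tags):

```sh
sed ':a; s/\(<userName>[^<]*\) \([^<]*<\/userName>\)/\1_\2/; ta' file.xml
```

Any of the parser-based answers above is the safer choice.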
2018/03/15
520
1,391
<issue_start>username_0: I'm looking for a way to take a string like below:

```
let s = '@someone hello that was awesome +5. Rock on!';
```

and strip out absolutely everything except for the first `+` or `-` and the immediately following number, using regex. The result would be:

```
+5
```

Other examples:

```
let s = '!#$#$%#$%^%^ -3000absdf' //-3000
let s = 'you get a +1 and a -2000 for your efforts' //+1
let s = '+++++++++++++++17 .' //+17
```<issue_comment>username_1: Instead of stripping the rest of the characters out, just match (non-globally) on `[+-]\d+`.

```js
let a = [
  '@someone hello that was awesome +5. Rock on!', //+5
  '!#$#$%#$%^%^ -3000absdf', //-3000
  'you get a +1 and a -2000 for your efforts', //+1
  '+++++++++++++++17 .' //+17
]

var r = /[+-]\d+/

a.forEach(function(s) {
  if(r.test(s)) console.log(s.match(r)[0])
})
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: Just match, and then check if there is a match

```js
var regExp = /([+-]\d+)/;

var match = regExp.exec('asdfadf +233 asdfadf -23434');
var number = match && match[0];
console.log(number);

var match = regExp.exec('asdfadf34+233jkjkjk asdfadf -23434');
var number = match && match[0];
console.log(number);

var match = regExp.exec('asdfadjkjkjk asdfadf ty3434');
var number = match && match[0];
console.log(number);
```

Upvotes: 1
2018/03/15
840
3,323
<issue_start>username_0: Using a route that needs props passed in to the components, I'm passing in a boolean that is observable from my store. When this value changes, components rendered by the route are not updated. ``` } /> ``` This component is Observable and injected with MyMobxStore. If I change that observable, the component "MyComponent" is not re-rendered or notified about the change. If I call any arbitrary "setState" on my top level component, it works. or if I add a dummy element such as ``` ``` that makes the current component re-render such that "MyComponent" that depends on this observable behaves as expected. Is there an obvious solution that I'm missing? I can add one of these hacks to make it work. I could wrap my "MyComponent" in another observable and pass it down that way, But that isn't conducive with my design. Though It doesn't matter that much. This is purely academic at this point. I'd like to know why it doesn't work. **Update** was asked to include more code. ```html import React, { Component } from 'react'; import { BrowserRouter as Router, Route, Switch, Link } from 'react-router-dom'; import {inject, observer } from 'mobx-react'; import {MyComponent, CoolChildComponent} from '.coomComponents'; @inject('LoaderStore') @observer class App extends Component { componentWillMount() { this.props.LoaderStore.setShowLocalLoader(true); setTimeout(() => { this.props.LoaderStore.setShowLocalLoader(false); }, 5000); } render() { return( {/\* note router container and "with router" wrapper one level up. \*/} {return }} /> ); } } export default App; ``` note, I know ways to get around this. What I don't understand is why MobX doesn't realize that render is using LoaderStore.showLocalLoader<issue_comment>username_1: sounds like you are missing an observer wrapper for MyComponent you can use @observer if you have support for decorators else you should enable decorators as shown [here](https://mobx.js.org/best/decorators.html), hopes that helps, if that's the issue here is a little explanation: when you set an observable in a mobx store you can react to a change of this observable in a few ways (i'm just listing 2 for more options see the mobx docs): 1. using autorun - [more info](https://mobx.js.org/refguide/autorun.html) 2. using @observable - [more info](https://mobx.js.org/refguide/observer-component.html) both this methods work on the same principles, just that in the option of observable what happens is that the render function works like autorun works it's pretty straight forward if you have an observable inside the function it will trigger it every time the observable in the function scope changes good luck! Upvotes: 0 <issue_comment>username_1: try the following: ``` render() { const showLocalLoader = this.props.LoaderStore.showLocalLoader return( {/\* note router container and "with router" wrapper one level up. \*/} {return }} /> ); } ``` this will cause the correct value to be sent as prop to the Route, there is an issue that sometimes the change in value doesn't trigger re-rendering when the observable is nested inside the return value of render. let me know if it worked Upvotes: 3 [selected_answer]
2018/03/15
430
1,706
<issue_start>username_0: I know there's DBPedia for Wikipedia, but does something like that exist for Wiktionary? I'd like to get something like <https://en.wiktionary.org/wiki/Category:en:Occupations> into JSON or similar format.
2018/03/15
635
2,546
<issue_start>username_0: I have the following code for the dynamic addition of annotation to java class

```
private void decorateWithSpecificAnnotation(final Set<Class<?>> domainClasses) {
    final ClassPool cp = ClassPool.getDefault();
    for (Class<?> c : domainClasses) {
        try {
            final CtClass cc = cp.get(c.getName());
            final ClassFile cfile = cc.getClassFile();
            final ConstPool cpool = cfile.getConstPool();

            final AnnotationsAttribute attr = new AnnotationsAttribute(cc.getClassFile().getConstPool(),
                    AnnotationsAttribute.visibleTag);
            final Annotation annot = new Annotation(Document.class.getName(), cpool);

            attr.addAnnotation(annot);
            cfile.addAttribute(attr);
        } catch (NotFoundException e) {
            throw new RuntimeException("Unexpected error occured during dynamic domain decoration", e);
        }
    }
}
```

After calling the above method when I do the following I see that the annotation is not present. What am I missing?

```
decorateWithSpecificAnnotation(domainClasses);
domainClasses.stream().forEach(d -> {
    System.out.println(d.isAnnotationPresent(Document.class));
});
```
2018/03/15
469
2,007
<issue_start>username_0: How can a react application be packaged and hosted without a server? The application is purely front end. Tried building the application with `npm run build`, but that created a build which would require a server to be hosted on.<issue_comment>username_1: You'll need a http server to host the site locally. But if you want to host it on the cloud, there are other options.

Once you have it bundled, you just have to copy the assets to a bucket on Amazon S3. It allows you to serve them as a static website. You can refer to more information here. <https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html>

Similarly, other cloud providers have their own equivalents.

Upvotes: 0 <issue_comment>username_2: Well no, it doesn't require a server to host it. As you said it's just front end. HOWEVER, if you want to navigate OTHER than via scripted navigation you'll need a server. Simply building your project and serving the index file as a static asset is sufficient to run the code on shared hosting. Once the index file has finished downloading it will execute the javascript that's imported (react in this case). But if you try to go to, say, website.com/blah, it's not going to work, as rendering specific routes in react requires rendering and returning javascript. Something you need a server for.

The react bundle is self executing, at least with create-react-app. You shouldn't have any issues.

Upvotes: 1 <issue_comment>username_3: You cannot load anything over the `http(s):` protocol without a server, by definition. Possibly you mean loading via a URL like `file:`, which is a local file system path.

The bundle is built to load assets by absolute URL. This is because your pages may have some deep path in the URL. So you need to configure the server so that your build folder is the root of the site. This means that if you try to run it from disk on a workstation, you need to put the content of your build directory at the root of the disk for it to work (`C:\` for example on Windows, or `/` if you are not on Windows).

Upvotes: 0
2018/03/15
645
1,993
<issue_start>username_0: I have 4 tables with the following structure:

```
**FiltersMain**
FilterMain_ID
FilterMain_Name_GR
FilterMain_Name_EN

**FiltersSub**
FilterSub_ID
FilterMain_ID
FilterSub_Name_GR
FilterSub_Name_EN

**cm**
cm_ID
product_id
Cat_Main_ID

**Filtra**
f_Id
product_id
FilterMain_ID
FilterSub_ID
```

The only known things I have at this time are the following:

`FilterMain_ID` of the `FiltersMain` table (which is 555)
`Cat_Main_ID` of the `cm` table (which is 222)

I want to count (with the `select count` method) the total number of records in the `Filtra` table BUT with the following conditions: having the already known `FilterMain_ID` of the `FiltersMain` table (which is 555) and the already known `Cat_Main_ID` of the `cm` table (which is 222).

I have already tried the following select count statement, but I am getting a completely wrong result.

```
SELECT COUNT(*) AS total_records
FROM FiltersMain, FiltersSub, Filtra, cm
WHERE FiltersMain.FilterMain_ID = FiltersSub.FilterMain_ID
  AND Filtra.FilterMain_ID = FiltersMain.FilterMain_ID
  AND cm.Cat_Main_ID = 222
  AND cm.product_id = Filtra.product_id
```

I am completely confused at this moment so any kind of help will be very appreciated<issue_comment>username_1:

```
SELECT
  (SELECT COUNT(*) FROM FiltersMain WHERE FilterMain_ID = '555') AS FiltersMainCount,
  (SELECT COUNT(*) FROM FiltersSub  WHERE FilterMain_ID = '555') AS FiltersSubCount,
  (SELECT COUNT(*) FROM cm          WHERE Cat_Main_ID   = '222') AS cmCount,
  (SELECT COUNT(*) FROM Filtra      WHERE FilterMain_ID = '555') AS FiltraCount
```

Upvotes: 0 <issue_comment>username_2: Probably this is what you're looking for?

```
SELECT COUNT(fa.f_Id) AS total_records
FROM Filtra fa
LEFT JOIN FiltersMain fm
       ON fm.FilterMain_ID = fa.FilterMain_ID
LEFT JOIN FiltersSub fs
       ON fs.FilterSub_ID = fa.FilterSub_ID
LEFT JOIN cm c
       ON c.product_id = fa.product_id
      AND c.Cat_Main_ID = 222
```

Upvotes: 3 [selected_answer]
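For reference, a variant of the accepted answer that also applies both known IDs from the question. This is a sketch: whether unmatched rows should still be counted (LEFT vs INNER join) depends on the data.

```sql
SELECT COUNT(fa.f_Id) AS total_records
FROM Filtra fa
INNER JOIN cm c
        ON c.product_id = fa.product_id
       AND c.Cat_Main_ID = 222
WHERE fa.FilterMain_ID = 555;
```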
2018/03/15
1,036
4,017
<issue_start>username_0: So I've come across some code that makes me uncomfortable, but I can't find a definitive answer as to whether it's actually problematic. We have an ASP.Net Web API that is primarily used by a message bus. There is a balancing process that needs to be started for several accounts. The balancing service method is asynchronous and returns a Task. The code is called like this:

```
foreach (AccountingGroup accountingGroup in Groups)
{
    ledgerService.CreateItemsAsync(accountingGroup.GLAccountingHeaderId);
}

return StatusCode(HttpStatusCode.NoContent);
```

This strikes me as wrong on quite a few levels. I get the intention: "We want to run this method on all of these groups, but we don't need to wait on them to finish." Obviously CancellationTokens aren't being used. They are relying on AWS to just kill the entire process if it runs too long, and that's not a refactor I can really get into right now anyways. I've been out of C# for a year and a half, and asynchronous code for 2.5 years, and feel like I knew the issue here at some point, but I just can't find it again. What is the proper way to handle this problem? Is it even a problem?<issue_comment>username_1: The correct way is to define your API method as async and then wait for all of the async methods to complete:

```
public async Task<IHttpActionResult> DoStuff()
{
    await Task.WhenAll(groups.Select(g => ledgerService.CreateItemsAsync(g.GLAccountingHeaderId)));
    return StatusCode(HttpStatusCode.NoContent);
}
```

Patrick's answer has an explanation of "why". It seems like a bad idea to pretend to the client that an action has been completed when it has not.

If you want to run these things in the background, you might look into using message queues like RabbitMq and develop a fail-safe way of ensuring that these tasks are completed. Feedback when things are failing is good. With your current approach, you have absolutely no way to find out if this code is failing, meaning that if it stops working you won't realise until it affects something else.

Upvotes: 2 <issue_comment>username_2:

> Is it even a problem?

Yes, there is a difference between not wanting to wait for it and actually being able to handle exceptions. For example, if your code fails, for whatever reason, you now return an HTTP 204, a success state. If you would await the result, and it fails, you will most likely get an HTTP 500.

> What is the proper way to handle this problem?

You should await the results, for example aggregating the tasks and calling `Task.WhenAll` on them, so you don't have to wait on each and every one of them separately.

Upvotes: 2 <issue_comment>username_3: No it is not ok, the server may shut down the app domain while the background work is running. The best way to handle this is to use a library for background work like <https://www.hangfire.io/>.

If you feel the work will be done within the next minute or so you could use the short term system [`HostingEnvironment.QueueBackgroundWorkItem(Func)`](https://learn.microsoft.com/en-us/dotnet/api/system.web.hosting.hostingenvironment.queuebackgroundworkitem?view=netframework-4.7.1#System_Web_Hosting_HostingEnvironment_QueueBackgroundWorkItem_System_Func_System_Threading_CancellationToken_System_Threading_Tasks_Task__), however I am not sure if this works with ASP.NET Core or not, it was designed to be used with the previous versions of ASP.NET.
EDIT: Found a reference, QueueBackgroundWorkItem indeed does not work in ASP.NET Core but there is [a similar way to handle these situations there](https://stackoverflow.com/questions/47863231/alternative-for-system-web-hosting-hostingenvironment-registerobject-in-asp-net). Upvotes: 3 [selected_answer]<issue_comment>username_4: You can use a QueueBackgroundWorkItem Please take a look at [Getting QueueBackgroundWorkItem to complete if the web page is closed](https://stackoverflow.com/questions/49280652/getting-queuebackgroundworkitem-to-complete-if-the-web-page-is-closed/49283461#49283461) Upvotes: 0
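A minimal sketch of the `QueueBackgroundWorkItem` option for classic ASP.NET (not ASP.NET Core), reusing the names from the question. Note that request-scoped services like `ledgerService` may be disposed once the response returns, so in real code you'd resolve dependencies inside the callback rather than capturing them:

```csharp
using System.Web.Hosting;

// inside the Web API action, replacing the bare foreach
HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
{
    foreach (AccountingGroup accountingGroup in Groups)
    {
        // the runtime signals this token when the app domain is shutting down
        cancellationToken.ThrowIfCancellationRequested();
        await ledgerService.CreateItemsAsync(accountingGroup.GLAccountingHeaderId);
    }
});

return StatusCode(HttpStatusCode.NoContent);
```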
2018/03/15
411
1,386
<issue_start>username_0: The following is the json that I am receiving: ``` { "total": 5, "responses": [{ "gender": "Female", "age": 66 }, { "gender": "Male", "age": 52 }] } ``` The following is the code I am using to receive and parse the json ``` // Declare a proxy to reference the hub. $.connection.hub.url = 'https://www.url...'; var res = $.connection.resHub; // Create a function that the hub can call to broadcast messages. res.client.broadcastRes = function (resp) { var now = new Date(); console.log(now.toLocaleTimeString(), 'signalR survey data received', JSON.parse(resp)); createChart(JSON.parse(resp.responses)); }; $.connection.hub.start(); ``` In the console I am being able to see the entire JSON response like I showed above by doing `console.log(JSON.parse(resp));` But I am getting the error > > Unexpected token u in JSON at position 0 > at JSON.parse () > > > When I am using `resp.responses` Where am I going wrong? Any kind of help would be greatly appreciated. Thanks.<issue_comment>username_1: Try this : `createChart(JSON.parse(resp).responses);` Upvotes: 1 <issue_comment>username_2: Well, basically, if `resp` is an string, then if you do `JSON.parse(resp.responses)` will fail because `responses` is `undefined`. You just want `JSON.parse(resp).responses`. This contains the array you need Upvotes: 1 [selected_answer]
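Restating the accepted fix as a sketch: parse once, then use the resulting object.

```js
res.client.broadcastRes = function (resp) {
    var parsed = JSON.parse(resp);   // resp is a string; parse it once
    console.log(parsed.total);       // 5
    createChart(parsed.responses);   // already an array, no second parse
};
```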
2018/03/15
353
1,207
<issue_start>username_0: A few weeks ago my website was hacked. So I decided to delete and reconfigure the whole Wordpress website and increase the security. After some research, I found a plugin named Wordfence which can monitor and trace all actions on the website. When I have a look at the log of the plugin, I found many actions with the following description:

... arrived from <http://www.Your-Website-Sucks>. net/WWW.example.COM and visited <https://example.COM/> 14-3-2018 01:25:47 (4 hours ago) IP: 192.168.3.11 Hostname: hehehe.your-website-sucks. net Browser: undefined Mozilla/5.0 (compatible; Your-Website-Sucks/2.0; +<http://www.Your-Website-Sucks>. net/WWW.example.COM)

I am a bit concerned about this because the name 'www.your-website-sucks. net' doesn't sound good. Is there anyone who knows this site and what to do against this?
2018/03/15
2,871
11,086
<issue_start>username_0: I'm using the below code to populate my Main Activity with a Fragment containing a ListView. I'm following a tutorial from <http://wptrafficanalyzer.in/blog/android-itemclicklistener-for-a-listview-with-images-and-text/>. I would like to know how to use an intent to open a separate activity / Fragment when each item is clicked in the ListView. For example, when the first item is clicked, it will open A Fragment, and when the second item is clicked, it will open B Fragment.

```
package com.nepalpolice.cdp;

import android.content.Intent;
import android.os.Bundle;
import android.support.annotation.Nullable;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentManager;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import android.widget.AdapterView.OnItemClickListener;
import android.widget.ListView;
import android.widget.SimpleAdapter;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

/**
 * Created by Sagar on 2017/09/23.
 */
public class club extends Fragment {

    // Array of strings storing country names
    String[] countries = new String[]{ "India", "Pakistan", "Sri Lanka", "China", "Bangladesh",
            "Nepal", "Afghanistan", "North Korea", "South Korea", "Japan" };

    // Array of integers points to images stored in /res/drawable-ldpi/
    int[] flags = new int[]{ R.drawable.eka, R.drawable.kat, R.drawable.rat, R.drawable.set, R.drawable.ann,
            R.drawable.kar, R.drawable.suk, R.drawable.sap, R.drawable.him, R.drawable.gor };

    // Array of strings to store currencies
    String[] currency = new String[]{ "Indian Rupee", "Pakistani Rupee", "Sri Lankan Rupee", "Renminbi",
            "Bangladeshi Taka", "Nepalese Rupee", "Afghani", "North Korean Won", "South Korean Won", "Japanese Yen" };

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.fragment_club, container, false);

        // Each row in the list stores country name, currency and flag
        List<HashMap<String, String>> aList = new ArrayList<HashMap<String, String>>();
        for (int i = 0; i < 10; i++) {
            HashMap<String, String> hm = new HashMap<String, String>();
            hm.put("txt", "Country : " + countries[i]);
            hm.put("cur", "Currency : " + currency[i]);
            hm.put("flag", Integer.toString(flags[i]));
            aList.add(hm);
        }

        // Keys used in Hashmap
        String[] from = {"flag", "txt", "cur"};

        // Ids of views in listview_layout
        int[] to = {R.id.flag, R.id.txt, R.id.cur};

        // Instantiating an adapter to store each items
        // R.layout.listview_layout defines the layout of each item
        SimpleAdapter adapter = new SimpleAdapter(getActivity(), aList, R.layout.listview_layout, from, to);

        // Getting a reference to listview of main.xml layout file
        ListView listView = (ListView) view.findViewById(R.id.listview);

        // Setting the adapter to the listView
        listView.setAdapter(adapter);
        return view;
    }

    // Item Click Listener for the listview
    AdapterView.OnItemClickListener itemClickListener = new AdapterView.OnItemClickListener() {
        @Override
        public void onItemClick(AdapterView<?> parent, View container, int position, long id) {
            if(position == 1/*or any other position*/){
                Fragment fragment = new notices();
                FragmentManager fragmentManager = getFragmentManager();
                fragmentManager.beginTransaction().replace(R.id.fragment_frame, fragment).addToBackStack(null).commit();
            } else if(position == 2){

            } // etc...
        }
    };
}
```

[![enter image description here](https://i.stack.imgur.com/g9rqY.png)](https://i.stack.imgur.com/g9rqY.png)<issue_comment>username_1: Inside `onItemClick` just make an `if` or `switch` to call your activity

```
// Item Click Listener for the listview
AdapterView.OnItemClickListener itemClickListener = new AdapterView.OnItemClickListener() {
    @Override
    public void onItemClick(AdapterView<?> parent, View container, int position, long id) {
        Intent intent;
        if(position == 1/*or any other position*/){
            intent = new Intent(YourActivity.this, OtherActivity1.class); // YourActivity is the activity containing this code; if this line causes problems, use a context value here
        } else if(position == 2){
            intent = new Intent(YourActivity.this, OtherActivity2.class);
        } // etc...
        // create intent to activity and call it
        startActivity(intent);
    }
};
```

or if you want to start the same activity with different parameters (which I usually do with a ListView), try this (I strongly recommend this one if your list is long and all your activities are going to be similar):

```
// Item Click Listener for the listview
AdapterView.OnItemClickListener itemClickListener = new AdapterView.OnItemClickListener() {
    @Override
    public void onItemClick(AdapterView<?> parent, View container, int position, long id) {
        Intent intent;
        intent = new Intent(YourActivity.this, SomeActivity.class);
        intent.putExtra("some_key", ((TextView)container).getText().toString()); // if your container is not a TextView, for example a custom layout, you'll need to change this line a bit to fit your needs
        // instead of calling different activities, call one activity
        // but with different parameters.
        startActivity(intent);
    }
};
```

*EDIT* You wrote "i am totally new to android". I want to give you advice about this then. When you have a list view, and you want to perform an action when the user clicks some item, and the action is a bit different for each item (such as a list of countries, where a click will view information about that country; all actions are similar: to view the information), then use one activity, and just call it with different parameters ([Intent's extras](https://stackoverflow.com/questions/5265913/how-to-use-putextra-and-getextra-for-string-data)). If you make a separate activity for every list item (if your list is long), it would be as bad as creating an SQL table for each user (do you know SQL?). Hope it helps. Tell me if you have any problems with the code, or if something is unclear for you.

*EDIT 2* If you have a problem with `new Intent(YourActivity.this/*here*/, [...])`, try to pass a context variable to your fragment and use it instead of `YourActivity.this`:

```
//in your Activity class
final Activity a_this = this;
```

then you need to pass `a_this` to your fragment, and use it instead of `YourActivity.this`:

```
intent = new Intent(a_this, SomeActivity.class);
```

If your fragment is nested in your activity, there will be no problem with passing `a_this` to the fragment.
Upvotes: 1 <issue_comment>username_2: Here is the working solution, I added

```
listView.setOnItemClickListener(itemClickListener);
```

above `return view;`

The complete code is

```java
package com.nepalpolice.cdp;

import android.content.Intent;
import android.os.Bundle;
import android.support.annotation.Nullable;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentManager;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import android.widget.AdapterView.OnItemClickListener;
import android.widget.LinearLayout;
import android.widget.ListView;
import android.widget.SimpleAdapter;
import android.widget.TextView;
import android.widget.Toast;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

/**
 * Created by Sagar on 2017/09/23.
 */
public class club extends Fragment {

    // Array of strings storing country names
    String[] countries = new String[]{ "India", "Pakistan", "Sri Lanka", "China", "Bangladesh",
            "Nepal", "Afghanistan", "North Korea", "South Korea", "Japan" };

    // Array of integers points to images stored in /res/drawable-ldpi/
    int[] flags = new int[]{ R.drawable.eka, R.drawable.kat, R.drawable.rat, R.drawable.set, R.drawable.ann,
            R.drawable.kar, R.drawable.suk, R.drawable.sap, R.drawable.him, R.drawable.gor };

    // Array of strings to store currencies
    String[] currency = new String[]{ "Indian Rupee", "Pakistani Rupee", "Sri Lankan Rupee", "Renminbi",
            "Bangladeshi Taka", "Nepalese Rupee", "Afghani", "North Korean Won", "South Korean Won", "Japanese Yen" };

    @Nullable
    @Override
    public View onCreateView(LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
        View view = inflater.inflate(R.layout.fragment_club, container, false);

        // Each row in the list stores country name, currency and flag
        List<HashMap<String, String>> aList = new ArrayList<HashMap<String, String>>();
        for (int i = 0; i < 10; i++) {
            HashMap<String, String> hm = new HashMap<String, String>();
            hm.put("txt", "Country : " + countries[i]);
            hm.put("cur", "Currency : " + currency[i]);
            hm.put("flag", Integer.toString(flags[i]));
            aList.add(hm);
        }

        // Keys used in Hashmap
        String[] from = {"flag", "txt", "cur"};

        // Ids of views in listview_layout
        int[] to = {R.id.flag, R.id.txt, R.id.cur};

        // Instantiating an adapter to store each items
        // R.layout.listview_layout defines the layout of each item
        SimpleAdapter adapter = new SimpleAdapter(getActivity(), aList, R.layout.listview_layout, from, to);

        // Getting a reference to listview of main.xml layout file
        ListView listView = (ListView) view.findViewById(R.id.listview);

        // Setting the adapter to the listView
        listView.setAdapter(adapter);
        listView.setOnItemClickListener(itemClickListener);
        return view;
    }

    // Item Click Listener for the listview
    OnItemClickListener itemClickListener = new OnItemClickListener() {
        @Override
        public void onItemClick(AdapterView<?> parent, View container, int position, long id) {
            // Getting the Container Layout of the ListView
            if(position == 1/*or any other position*/){
                Fragment fragment = new notices();
                FragmentManager fragmentManager = getFragmentManager();
                fragmentManager.beginTransaction().replace(R.id.fragment_frame, fragment).addToBackStack(null).commit();
            } else if(position == 2){

            } // etc...
        }
    };
}
```

Upvotes: 0
2018/03/15
775
2,411
<issue_start>username_0: I'm trying to copy the data from one sheet to the last row of another sheet. The reason why I am doing this is that I want to consolidate the data into a sheet which already exists and may already contain data. Below is my code so far, which only copies to A2 of the other sheet again. What approach should I take for this:

```
Sub Upload()

Dim Wb1 As Workbook
Dim Wb2 As Workbook
Dim MainPage As Worksheet
Set MainPage = Sheets("Main")
Dim r As Long

Application.DisplayAlerts = False
Application.ScreenUpdating = False

Set Wb1 = ActiveWorkbook
FileToOpen = Application.GetOpenFilename _
(Title:="Please choose a File", _
filefilter:="Excel File *.xlsx (*.xlsx),")

If FileToOpen = False Then
    MsgBox "No File Specified.", vbExclamation, "ERROR"
    Exit Sub
Else
    Set Wb2 = Workbooks.Open(Filename:=FileToOpen)

    With Wb2.Sheets("ALL TICKETS (excpt Open-OnHold)")
        srcLastRow = .Range("A:AJ").Find("*", SearchOrder:=xlByRows, SearchDirection:=xlPrevious).Row
        destLastRow = Wb1.Sheets("ALL TICKETS (excpt Open-OnHold)").Range("A:AJ").Find("*", SearchOrder:=xlByRows, SearchDirection:=xlPrevious).Row + 1

        Wb1.Sheets("ALL TICKETS (excpt Open-OnHold)").Range("A2:AJ" & destLastRow).Value = .Range("A2", "AJ" & srcLastRow).Value
    End With

    Wb2.Close
End If

End Sub
```<issue_comment>username_1: You know your copied range, so then you need to know the last row of the destination sheet:

```
dim lr as long
With Sheets("Destination")
    lr = .cells(.rows.count,1).end(xlup).row 'assumes column 1 is contiguous
End with
```

You can then take your source range (will use variable SrcRng) and paste to the new sheet, into a specific cell:

```
SrcRng.Copy Sheets("Destination").Cells(lr+1,1) 'this line does the copy and the paste
```

The rest of the copied range will be filled in.

---

Edit1: Hard to show the code in a comment...

```
Dim LRSrc as Long, LRDest as Long, SrcRng as Range
With Sheets("Source")
    LRSrc = .cells(.rows.count,1).end(xlup).row 'assumes column 1 is contiguous
    Set SrcRng = .Range("A1:AJ" & LRSrc)
End with
With Sheets("Destination")
    LRDest = .cells(.rows.count,1).end(xlup).row 'assumes column 1 is contiguous
    SrcRng.Copy .Cells(LRDest+1,1)
End with
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: Would this work for you? Define srcLastRow as below:

srcLastRow = Cells(Rows.Count, 36).End(xlUp).Row

Upvotes: 0
2018/03/15
370
1,483
<issue_start>username_0: I'm on a cross-platform project where the client wants a website, an iOS application and an Android app. I chose Cordova to build it. But I have one question: can I host my application like a classic website on a remote server?

Maxime<issue_comment>username_1: EDIT: the following is only for testing, not production

Cordova supports the browser platform. By running `cordova platform add browser` you add it to your project and with `cordova run browser` you run your app in a browser. To run it on a specific port, you need to run `cordova run browser -- --port=1234`.

You can add, build and run the iOS and Android platforms in the same project.

Take a look here <https://www.raymondcamden.com/2016/03/22/the-cordova-browser-platform/>

Upvotes: 0 <issue_comment>username_2: While you might be able to reuse parts of your HTML, CSS and JavaScript for the website, Cordova apps run on physical devices like phones and tablets and interact (access device capabilities) with the underlying operating system like Android and iOS.

When you have an existing server-based web app, and you want it running in a Cordova app, you'll have to migrate it.

Depending on what your application does you could use a thin Cordova client (think of it as a web browser embedded in a native app) that automatically redirects to your web site.

Link to officially supported platforms. <https://cordova.apache.org/docs/en/latest/guide/support/index.html>

Upvotes: 1
2018/03/15
492
2,028
<issue_start>username_0: How can I have TeamController@index and ProductController@index both show their lists (of teams and of products) inside one view, main.blade.php?<issue_comment>username_1: You can't show results from two controllers like that.

Create a view that includes both the view that TeamController@index and ProductController@index return. Be aware that both might be extending a layout, which will probably try to load your page twice, so keep in mind to split the views into smaller components and include only those.

More info here <https://laravel.com/docs/5.6/views#creating-views>

Upvotes: 0 <issue_comment>username_2: Looks like you want to show two datasets on one page. Basically, it means you have to execute two controller methods, but it's not necessary to follow each and everything that the official documentation says.

For example, if Products belong to a Team, you can execute only `TeamController@index` and show products as given below.

```
@foreach($teams as $team)
    @foreach($team->products as $product)
        {{ $product->name }}
    @endforeach
@endforeach
```

If teams and products are two different entities and do not have any relation, you can just pass teams and products like this:

**TeamController.php**

```
public function index()
{
    $teams = Team::all();
    $products = Product::all(); // Don't forget to include 'use App\Product'
    return view('index', compact(['teams', 'products']));
}
```

and then you can show teams and products like this:

**index.blade.php**

```
@foreach($teams as $team)
    {{ $team->name }}
@endforeach

@foreach($products as $product)
    {{ $product->name }}
@endforeach
```

Getting information from two different models does not mean you have to execute two different controller functions. Still, if you want to get data from two different controllers, you can set up **index.blade.php** to make two ajax requests that will get data from two different URLs (two different controller methods).

Let me know if you have any more questions.

Upvotes: 5 [selected_answer]
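A note on the first approach above: the `$team->products` loop assumes a `hasMany` relation on the Team model, something like this sketch (assuming a conventional `team_id` foreign key on the products table):

```php
// app/Team.php
class Team extends Model
{
    public function products()
    {
        // assumes products.team_id references teams.id
        return $this->hasMany(Product::class);
    }
}
```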
2018/03/15
863
3,455
<issue_start>username_0: I'm developing an application where a user may parse some binary files. Once he clicks the "parse" button, he first may select some files, which are parsed afterwards. While the application is processing the files, I'd like to display a modal dialog which informs the user about the progress (QProgressBar bar) and the already parsed files (QListView list / listModel).

My current approach is to override the exec() method of a QDialog subclass. This way I could just call

```
parseAssistant.exec()
```

The current implementation looks like this:

```
class ParseAssistant : public QDialog {
public:
    int exec();
};

int ParseAssistant::exec()
{
    bar->setMaximum(files.size());
    this->show();
    this->setModal(true);

    for (int i = 0; i < files.size(); i++)
    {
        PluginTable* table = parser.parse(files[i]);
        // do something with the table
        // saveTableintoDB();
        // update GUI
        // bar->setValue(i);
        // listModel->insertRow(0, new QStandardItem(files[i]));
    }

    this->hide();
    return QDialog::Accepted;
}
```

After this (blocking) method has run, the user either has parsed all files or canceled the progress somewhere. To achieve this I attempted to use QApplication::processEvents in the loop (which feels kinda laggy, as it only progresses when a file has finished parsing) or to outsource the heavy calculation(s) to some QtConcurrent implementation (::run, ::mapped). Unfortunately, I don't know how to return the program flow back to the exec() method once the QFuture has finished without relying on some CPU-intense loop like:

```
while (!future.isFinished()) { QApplication::processEvents(); }
```

Is there a smarter approach to having a modal dialog, which runs a heavy calculation (which may be canceled by the user) without blocking the event loop?<issue_comment>username_1: My personal approach would be:

* create a separate thread and do the processing there ([QThread](https://doc.qt.io/qt-5/qthread.html); std::thread should do the trick as well)
* provide a signal that informs about the file currently being processed
* possibly another signal informing about progress in %
* another signal informs that processing is done, emitted just before the thread ends
* provide your dialog with appropriate slots and connect them to the signals (as different threads are involved, make sure the connection type is `Qt::QueuedConnection`)

Upvotes: 0 <issue_comment>username_2: I wouldn't subclass `QDialog` in the first place, but just use a `QFutureWatcher` and connect the watcher's `finished` signal to the dialog's `close` slot, this way:

```
QDialog d;
QFutureWatcher<void> watcher;
QObject::connect(&watcher, &QFutureWatcher<void>::finished, &d, &QDialog::close);

QFuture<void> future = QtConcurrent::run(your_parse_function);
watcher.setFuture(future);

d.exec(); //control returns here when your_parse_function exits
```

The parse function could be a method in a QObject derived class, like this:

```
class Parser : public QObject
{
    Q_OBJECT

public:
    void parse()
    {
        for (int i = 0; i < files.size(); i++)
        {
            PluginTable* table = parser.parse(files[i]);
            emit fileParsed(i, files.size());
            // ...
        }
    }

signals:
    void fileParsed(int id, int count);
};
```

You can connect the `fileParsed` signal to a slot of choice, and from there set the progress bar value accordingly.

Upvotes: 2
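A sketch of that last wiring step, assuming the `Parser` class from the answer above. The `Qt::QueuedConnection` matters because `parse()` runs on a QtConcurrent worker thread while the progress bar lives in the GUI thread:

```cpp
#include <QtConcurrent>

Parser parser;
QProgressBar *bar = new QProgressBar(&d);

// queued connection: the lambda runs in the GUI thread that owns 'bar'
QObject::connect(&parser, &Parser::fileParsed, bar,
                 [bar](int id, int count) {
                     bar->setMaximum(count);
                     bar->setValue(id + 1);
                 },
                 Qt::QueuedConnection);

QFuture<void> future = QtConcurrent::run([&parser] { parser.parse(); });
```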
2018/03/15
1,004
3,171
<issue_start>username_0: Can someone explain to me (or redirect to resources) why in this particular case the type tag is not "properly" generated: ``` class A(s: Seq[_]*) def toto[T: TypeTag](p: Seq[T]): Seq[T] = { println(typeTag[T].tpe) p } val data = Seq( ("a", "a") ) val x = data.map(_._1) new A( toto(x), toto(data.map(_._2)), toto[String](data.map(_._2)) ) // output: // java.lang.String // Any // String ``` As far as I understand, it seems that as my class `A` takes "untyped" (well with existential types) sequences, then the compiler does not bother generate the proper type tag when not required explicitly (though it does know the type of `data.map(_._2)` it still uses `TypeTag[Any]`... ). But it looks quite strange and I wondered if there was a more scientific explanation to this phenomenom. Also, how can I force the compiler to generate a proper `TypeTag[String]` even if I don't want to create special variable (like this `x` variable above)?<issue_comment>username_1: Nice problem! I have an explanation, but I am not certain it's right (80%, let's say). As very often with Scala type inference question, you need to be aware of expected types. In this case, all arguments of `new A` are typed with expected type `Seq[_]`, which is the same as `Seq[Any]` because of covariance. So: 1. `toto(data.map(_._2))` is typed with expected type `Seq[Any]`; `data.map(_._2)` is typed with expected type `Seq[Any]`. The signature of `Seq#map` is ``` def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Seq[A], B, That]): That ``` so `That` is inferred based on the expected type and a suitable implicit `bf` is found. I am not actually sure if `B` is inferred to `String` or `Any`, but it probably doesn't matter. 2. In the `val x = data.map(_._1)`, there is no expected type so `B` is inferred to `String`, an implicit `bf` is found based on `A` and `B` and then `That` is inferred from the complete type of `bf`. 3. `toto(x)` is typed with expected type `Seq[Any]`; `x` is typed with expected type `Seq[Any]`, but it already has type `Seq[String]` and the expected type doesn't matter. Upvotes: 3 [selected_answer]<issue_comment>username_2: I would like to extend answer of @AlexeyRomanov by possible solution how to force compiler to evaluate specific type: From [here](https://stackoverflow.com/questions/6909053/enforce-type-difference) I took idea for forcing type difference: ``` sealed class =!=[A,B] trait LowerPriorityImplicits { implicit def equal[A]: =!=[A, A] = sys.error("should not be called") } object =!= extends LowerPriorityImplicits { implicit def nequal[A,B](implicit same: A =:= B = null): =!=[A,B] = if (same != null) sys.error("should not be called explicitly with same type") else new =!=[A,B] } ``` Now we can add limitation for parameter to `toto`: ``` class A(s: Seq[_]*) def toto[T: TypeTag](p: Seq[T])(implicit guard: T =!= Any): Seq[T] = { println(typeTag[T].tpe) p } val data = Seq(("a", "a")) val x = data.map(_._1) new A( toto(x), toto(data.map(_._2)), toto[String](data.map(_._2)) ) ``` And output I have ``` java.lang.String java.lang.String String ``` Upvotes: 1
2018/03/15
736
2,736
<issue_start>username_0: We are trying to update the framework of our program. We currently have it in version 4.5.2 and we want to update it to version 4.7.1 We have changed all the csproj of the solution, and when we compile in debug, the application compiles and works correctly. But when we do it in release, it fails us with the following error: ``` An attempt was made to load an assembly with an incorrect format: C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.7.1\Facades\System.IO.Compression.ZipFile.dll ``` We don't really know what's wrong, does anyone know what it could be? Thank you very much.<issue_comment>username_1: UPDATE: As Josh suggests below, now that 4.7.2 is available, upgrade to that .NET version for the best resolution of this problem. If stuck with 4.7.1: This probably isn't addressing the root of the problem, but if you want to get over this for the moment, then find the offending project and edit its settings (rclick project, 'Properties', then 'Build' tab.) Set 'Generate serialization assemblies' to 'Off' for Release mode. If it still complains, try adding the following s to your .csproj file (e.g. towards the bottom, just inside the enclosing root tag: ``` ``` Upvotes: 6 [selected_answer]<issue_comment>username_2: The root of the issue is that the assembly you are seeing in the error message has an incorrect entry in the .NET Framework unification table. That incorrect entry causes the assembly reference to not correctly unify with the assembly in the framework and leads to that error. This is documented as a [known issue in .NET Framework 4.7.1](https://github.com/Microsoft/dotnet/pull/670). As a workaround you can add these targets to your project. They will remove the `DesignFacadesToFilter` from the list of references passed to SGEN (and add them back once SGEN is done) ``` <\_FilterOutFromReferencePath Include="@(\_DesignTimeFacadeAssemblies\_Names->'%(OriginalIdentity)')" Condition="'@(DesignFacadesToFilter)' == '@(\_DesignTimeFacadeAssemblies\_Names)' and '%(Identity)' != ''" /> ``` **Edit**: If the above doesn't work, please share a detailed msbuild log to help understand why the target doesn't work. Another option (machine wide) is to add the following binding redirect to sgen.exe.config: ``` ``` This will only work on machines with .NET Framework 4.7.1. installed. Once .NET Framework 4.7.2 is installed on that machine, this workaround should be removed. Upvotes: 3 <issue_comment>username_3: This issue is fixed in the latest .net dev pack 4.7.2: <https://github.com/dotnet/sdk/issues/1630#issuecomment-415811457> <https://www.microsoft.com/net/download/thank-you/net472-developer-pack> Upvotes: 3
2018/03/15
507
1,995
<issue_start>username_0: I am trying to create a list of sorted characters from a file; however, there are quotation marks in the file, and they are messing up the order of my list, so I need to remove them prior to creating my list. I have tried countless approaches, but have been unsuccessful with all of them. Here is the block of code that deals with opening and splitting the file:

```
def openfile():
    filename = filedialog.askopenfilename(parent=root, title='Select a file.')
    if filename != None:
        thefile = open(filename, 'r')
        contents = thefile.read()
        print(contents)
        translator = str.maketrans('', '', string.punctuation)
        contents = contents.translate(translator)
        contents = contents.replace('"', '').replace("'", '')
        contents = contents.lower()
        wordList = contents.split()
        for word in wordList:
            letter = word.split()
            for letter in word:
                letter.replace('"', '').replace("'", '')
        print('\n', wordList)
    ttk.Button(root, text='Sort', command=splitfile).grid(row=1, column=1)
```

<issue_comment>username_1: If your file contains fancy unicode quotes, then you'll need to first convert them to regular `'`/`"` quotes. You can do this using the `unidecode` module:

```
from unidecode import unidecode

contents = unidecode(contents).replace('"', '').replace("'", '')
```

Now if you want to remove *all* punctuation, then you'll need to use a slightly different approach:

```
from unidecode import unidecode
import string

trans_table = str.maketrans('', '', string.punctuation)
contents = unidecode(contents).translate(trans_table)
```

Upvotes: 1 <issue_comment>username_2: This should work. `semi_final` is the list of words that still contain the quote characters:

```
Answer = []
for word in semi_final:
    a = []
    for l in word:
        if l != "'" and l != '"':
            a.append(l)
    Answer.append("".join(a))
```

Upvotes: 0
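For reference, a minimal, self-contained sketch of the same cleanup that also covers common "curly" Unicode quotes (the specific Unicode characters listed are an assumption about what such a file might contain):

```python
import string

# ASCII punctuation plus common curly quotes (assumed input characters).
QUOTES = "\u2018\u2019\u201c\u201d"
TRANSLATOR = str.maketrans('', '', string.punctuation + QUOTES)

def clean_words(text):
    """Lower-case the text, drop punctuation/quotes, return the sorted words."""
    return sorted(text.translate(TRANSLATOR).lower().split())

print(clean_words('He said \u201cdon\u2019t panic\u201d - twice!'))
# ['dont', 'he', 'panic', 'said', 'twice']
```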
2018/03/15
1,101
4,072
<issue_start>username_0: I'm trying to use the node-red-contrib-socketio package to emit a 'weather' event from Node-RED to a client, based on input from Weather Underground.

![Emit Weather Event from Node-Red](https://i.stack.imgur.com/0XEmN.jpg)

I'm using the following code in a Node-RED function node to process the input from Weather Underground and set the event:

![Setting 'weather' event](https://i.stack.imgur.com/3eqEi.jpg)

```
weather = msg.payload.weather;
msg.payload = {weather: weather};
msg.socketIOEvent = 'weather';
RED.util.setMessageProperty(msg, "socketIOEmit", "emit", true);
return msg;
```

Is this the correct way to set and emit the weather event?

For reference: I've bound the SocketIO Out node to Node_Red (so presumably port 1880 on localhost). I'm using the Unity game engine as the client to receive the event, with the Socket.IO library from the asset store: <https://assetstore.unity.com/packages/tools/network/socket-io-for-unity-21721>

Unity is listening for the weather event on the following URL:

```
ws://127.0.0.1:1880/socket.io/?EIO=4&transport=websocket
```

Currently Unity seems to be registering the connection but not the emitted weather event. My test C# script for handling the events in Unity is as follows:

```
using UnityEngine;
using SocketIO;

public class NodeNetwork : MonoBehaviour
{
    //Reference socket component
    static SocketIOComponent socket;

    void Start()
    {
        //Initialise reference to socket component
        socket = GetComponent<SocketIOComponent>();

        //Register callbacks for network events
        socket.On("open", OnConnected);
        socket.On("weather", OnWeather);
    }

    //Create a callback for the connection
    void OnConnected(SocketIOEvent e)
    {
        Debug.Log("Connected");
        //Emit a move call back to the server
        socket.Emit("client connected");
    }

    //Create a callback for receipt of weather events
    void OnWeather(SocketIOEvent e)
    {
        Debug.Log("New weather event received" + e.data);
        socket.Emit("weather received");
    }
}
```

Any advice would be appreciated.<issue_comment>username_1: From the code you provided and the owner of the asset, it seems to be the [DEPRECATED SocketIO Unity port](https://github.com/fpanettieri/unity-socket.io-DEPRECATED). I'd advise you to look into [socket.io-unity](https://github.com/floatinghotpot/socket.io-unity) (free from GitHub and $10 from the asset store), which is a revised version of Quobject's [SocketIoClientDotNet](https://github.com/Quobject/SocketIoClientDotNet) made to work with Unity3D.

I know this is not a concrete answer to your problem (apart from switching libraries), but the owner of your package said himself over 2 years ago that he had stopped development of it. Hence I think switching libraries would be a sensible primary option; I actually think you did things right, and it just doesn't work. I'm currently at work and can't test this out, sorry. I hope I was able to help in some way.

Upvotes: 1 <issue_comment>username_2: After further research and testing I got this working. The Node-RED flow was fine, but some sort of configuration issue prevented it from working. Reinstalling Node.js and Node-RED resolved the issue. Using `msg.socketIOEmit = "emit"` as advised by @hardillb works.

I tested several SocketIO solutions for Unity and ended up using [SocketIO for Native and WebGL builds by DASPETE](https://assetstore.unity.com/packages/tools/network/socketio-for-native-and-webgl-builds-76508), which is a $10 paid asset.

In order to deserialise the JSON I used SaladLab's [JSONNetLite Unity package](https://github.com/SaladLab/Json.Net.Unity3D/releases), which is a fork of Newtonsoft.Json.NET. To successfully use the package in a Unity WebGL build you need to add a [link.xml file](https://github.com/SaladLab/Json.Net.Unity3D/blob/master/src/UnityPackage/Assets/link.xml) to your assets folder. This adds exceptions to the default Unity bytecode stripping, which removes unused code from DLLs like the Newtonsoft.Json package.

I hope that helps if you have the same issues.

Upvotes: 1 [selected_answer]
2018/03/15
1,483
5,872
<issue_start>username_0: I have a UICollectionView with 2 rows and 10+ cells, deselected by default. When I click a cell it becomes selected, but when I click it again it does not deselect.

```
func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
    print(indexPath)
    let cell = collectionView.cellForItem(at: indexPath)
    let collectionActive: UIImageView = {
        let image = UIImageView(image: #imageLiteral(resourceName: "collectionActive"))
        image.contentMode = .scaleAspectFill
        return image
    }()
    let collectionInactive: UIImageView = {
        let image = UIImageView(image: #imageLiteral(resourceName: "collectionInactive"))
        image.contentMode = .scaleAspectFill
        return image
    }()
    if cell?.isSelected == true {
        cell?.backgroundView = collectionActive
    } else {
        cell?.backgroundView = collectionInactive
    }
}
```

How can I fix that problem?<issue_comment>username_1: In viewDidLoad():

```
collectionView.allowsMultipleSelection = true;
```

Afterward I implemented these methods:

```
func collectionView(collectionView: UICollectionView, didSelectItemAtIndexPath indexPath: NSIndexPath) {
    let cell = collectionView.cellForItemAtIndexPath(indexPath) as! MyCell
    cell.toggleSelected()
}

func collectionView(collectionView: UICollectionView, didDeselectItemAtIndexPath indexPath: NSIndexPath) {
    let cell = collectionView.cellForItemAtIndexPath(indexPath) as! MyCell
    cell.toggleSelected()
}
```

Finally, in my class:

```
class MyCell : UICollectionViewCell {
    func toggleSelected () {
        if (selected) {
            backgroundColor = UIColor.redColor()
        } else {
            backgroundColor = UIColor.whiteColor()
        }
    }
}
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: According to the [UICollectionView class doc,](https://developer.apple.com/documentation/uikit/uicollectionviewcell/1620138-selectedbackgroundview) you can use:

```
var selectedBackgroundView: UIView? { get set }
```

> You can use this view to give the cell a custom appearance when it is selected. When the cell is selected, this view is layered above the backgroundView and behind the contentView.

In your example, in the `cellForItem(at indexPath: IndexPath) -> UICollectionViewCell?` function you can set:

```
cell.backgroundView = collectionInactive
cell.selectedBackgroundView = collectionActive
```

Upvotes: 0 <issue_comment>username_3: If you don't want to enable multiple selection and only want one cell to be selected at a time, you can use the following delegate instead. If the cell is selected then this deselects all cells; otherwise, if the cell is not selected, it selects it as normal.

```
func collectionView(_ collectionView: UICollectionView, shouldSelectItemAt indexPath: IndexPath) -> Bool {
    let cell = collectionView.cellForItem(at: indexPath) as! CustomCell
    if cell.isSelected {
        collectionView.selectItem(at: nil, animated: true, scrollPosition: [])
        return false
    }
    return true
}
```

Upvotes: 2 <issue_comment>username_4: **For Swift 5+**

In viewDidLoad():

```
collectionView.allowsMultipleSelection = true
```

Afterward I implemented these methods:

```
func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
    let cell = collectionView.cellForItem(at: indexPath) as! MovieDetailsDateCollectionViewCell
    cell.toggleSelected()
}

func collectionView(_ collectionView: UICollectionView, didDeselectItemAt indexPath: IndexPath) {
    let cell = collectionView.cellForItem(at: indexPath) as! MovieDetailsDateCollectionViewCell
    cell.toggleSelected()
}
```

In the collection view cell class:

```
class MyCell : UICollectionViewCell {
    func toggleSelected () {
        if (isSelected) {
            backgroundColor = .red
        } else {
            backgroundColor = .white
        }
    }
}
```

Upvotes: 2 <issue_comment>username_5: If the cell is selected, just set `cell.isSelected = false` in the `shouldSelectItemAt` delegate, inside a `DispatchQueue.main.async { }` block. The state is then actually changed to `false` (very) soon after `shouldSelectItemAt` has been executed. It may look like a hack, but it actually works.

```
func collectionView(_ collectionView: UICollectionView, shouldSelectItemAt indexPath: IndexPath) -> Bool {
    if let cell = collectionView.cellForItem(at: indexPath), cell.isSelected {
        DispatchQueue.main.async {
            // change the isSelected state on next tick of the ui thread clock
            cell.isSelected = false
            self.collectionView(collectionView, didDeselectItemAt: indexPath)
        }
        return false
    }
    return true
}
```

Please let me know if you find/know any cons to doing this. Thanks

Upvotes: 0 <issue_comment>username_6: In iOS 14 and newer, you can set the `backgroundConfiguration` property of a cell. Once set, all necessary visual effects for selecting and deselecting work automatically. You can use one of the preconfigured configurations, like this:

```
cell.backgroundConfiguration = .listSidebarCell()
```

…or create a `UIBackgroundConfiguration` object from scratch. You can also change a preconfigured configuration before applying it. More info here: <https://developer.apple.com/documentation/uikit/uibackgroundconfiguration>

Upvotes: 0 <issue_comment>username_7:

```
override var isSelected: Bool {
    didSet {
        if self.isSelected {
            //This block will be executed whenever the cell's selection state is set to true (i.e. for the selected cell)
        } else {
            //This block will be executed whenever the cell's selection state is set to false (i.e. for the rest of the cells)
        }
    }
}
```

Add this to your cell. [Source](https://medium.com/hackernoon/uicollectionviewcell-selection-made-easy-41dae148379d)

Upvotes: 0
2018/03/15
722
2,551
<issue_start>username_0: Wondering if there's a way to make a text bubble show up when the user hovers over a metric in a Power BI report. I have tried using alt text, but that does not show up anywhere.<issue_comment>username_1: Currently, I don't think this is possible natively. There are some ideas related to this that you can vote for:

[Custom Alt Text (tooltip) On Hover Over Any Visual](https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/31501933-custom-alt-text-tooltip-on-hover-over-any-visual)

[Tooltip for Matrix visual (and others)](https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/31511665-tooltip-for-matrix-visual-and-others)

[Tooltips for Tables](https://ideas.powerbi.com/forums/265200-power-bi-ideas/suggestions/30979345-tooltips-for-tables)

There is also a [Dynamic Tooltip custom visual](https://appsource.microsoft.com/en-us/product/office/WA104380983) that might be close enough to what you are looking for.

Upvotes: 3 [selected_answer]<issue_comment>username_2: I had the same problem just now, and the documentation page [Create tooltips based on report pages in Power BI Desktop](https://learn.microsoft.com/en-us/power-bi/desktop-tooltips) describes how to do this. Probably this functionality was not around in March 2018 (when the accepted answer was posted).

Steps
=====

In summary, the steps are as follows (assuming you already have a visualization):

1. Create a new page, name it `Tooltip 1`
2. On the `Tooltip 1` page, go to Format → Page Information and switch on the Tooltip toggle
3. Go to Canvas settings and set the Type to `Tooltip`
4. Create a text box via Home → Text box, and fill it with the text you want to show when one hovers over your visualization
5. Select your visualization
6. Click on Format → Tooltip
7. Set Type to `Report page`
8. Set Page to `Tooltip 1`

Now when you hover over the visualization, the text box from step 4 will be shown.

Additions
=========

* You can mark the page `Tooltip 1` as hidden, such that it does not show up as a tab on your report.
* You can also customize the tooltip size by setting the Page size of the tooltip page to `Custom` with suitable width and height values.
* As described in the link, instead of text you can also create a whole visualization.

Upvotes: 4 <issue_comment>username_3: You can use a transparent shape as an overlay on the element you want the hover-over text to appear on:

* Turn Action on
* Type: Page navigation
* Destination: None
* Tooltip: add the text you want to display on hover

Upvotes: 0
2018/03/15
363
1,351
<issue_start>username_0: I have two methods for mapping an entity to a domain object:

```
RDomain entityToDomain(REntity rEntity)

/* this method ignores some of the fields in the domain. */
RDomain entityToDomainLight(REntity rEntity)
```

I'm getting **Ambiguous mapping methods found for mapping collection element** when I try to define a mapping method for a list of entities to domains:

```
List<RDomain> entitiesToDomains(List<REntity> rEntities)
```

Is there a way to define which method to use for mapping a collection of objects?<issue_comment>username_1: As far as I understand MapStruct, there is no way to tell a mapper

```
List<RDomain> entitiesToDomains(List<REntity> rEntities)
```

which of your two mapping methods it should use. But you can implement `entitiesToDomains` as a Java 8 default method on your mapper interface:

```
default List<RDomain> entitiesToDomains(List<REntity> rEntities) {
    List<RDomain> domains = new ArrayList<>();
    for (REntity r : rEntities) {
        // delegate to your dedicated mapper
        domains.add(entityToDomainLight(r));
    }
    return domains;
}
```

Upvotes: 0 <issue_comment>username_2: As @Filip suggested, it is better to do something like this:

```
RDomain entityToDomain(REntity rEntity);

@Named(value = "useMe")
RDomain entityToDomainLight(REntity rEntity);

@IterableMapping(qualifiedByName = "useMe")
List<RDomain> entitiesToDomains(List<REntity> rEntities);
```

Upvotes: 6 [selected_answer]
2018/03/15
322
1,135
<issue_start>username_0: So I'm creating a basic game (my first project using Python). There is a part where I use `random.choice`. I want to refer back to the same random number that it picked, so I wondered if it is possible to create a variable for that output. I've tried `str = randomint(1,7)` but that didn't give me the result I wanted.

```
# random module
import random

dice1 = ['1','2','3','4','5','6','7']
print (random.choice(dice1))
```

<issue_comment>username_1: Your use of `random.choice` is indeed giving you a random selection from your list `dice1`. You can store the return value of `random.choice` in a variable.

```
# random module
import random

dice1 = ['1','2','3','4','5','6','7']
random_number = random.choice(dice1)
```

Upvotes: 0 <issue_comment>username_2: Here is how you would generate and then store a random number in Python. If you want a number between two numbers, use [`random.randint(a,b)`](https://docs.python.org/3/library/random.html#random.randint). Note that using `randint` will give you an int and not a string.

```
import random

number = random.randint(1,7)
print(number)
```

Upvotes: 2
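To make the "refer back to the same number" part concrete, here is a small sketch: once the result is stored in a variable, every later use of that variable sees the same value, and nothing is re-rolled:

```python
import random

dice1 = ['1', '2', '3', '4', '5', '6', '7']
roll = random.choice(dice1)   # picked once and stored

print(roll)                   # e.g. '4'
print(roll)                   # the same value again
if roll == '7':
    print("lucky seven")      # comparisons reuse the stored pick too
```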
2018/03/15
2,769
11,608
<issue_start>username_0: I am using ExoPlayer in my activity. What I want is to smoothly play video in portrait and landscape mode. For this purpose, what I am doing is: in `onPause` I save the current player position, and in `onResume` I seek the player to that position. But while rotating it shows a jerk: the video is stopped for a while and then played from the saved position. My code is below; please help me with how I can smoothly switch between portrait and landscape mode. Thanks.

```
@Override
public void onPause() {
    super.onPause();
    if (mExoPlayerView != null && mExoPlayerView.getPlayer() != null) {
        mResumeWindow = mExoPlayerView.getPlayer().getCurrentWindowIndex();
        mResumePosition = Math.max(0, mExoPlayerView.getPlayer().getContentPosition());
        mExoPlayerView.getPlayer().release();
    }
}

@Override
public void onDestroy() {
    super.onDestroy();
    if (mExoPlayerView.getPlayer() != null)
        mExoPlayerView.getPlayer().release();
}

@Override
public void onSaveInstanceState(Bundle outState) {
    outState.putInt(STATE_RESUME_WINDOW, mResumeWindow);
    outState.putLong(STATE_RESUME_POSITION, mResumePosition);
    outState.putBoolean(STATE_PLAYER_FULLSCREEN, mExoPlayerFullscreen);
    super.onSaveInstanceState(outState);
}

@Override
protected void onResume() {
    super.onResume();
    if (mExoPlayerView == null) {
        mExoPlayerView = (SimpleExoPlayerView) findViewById(R.id.exoplayer);
        videoURL = getIntent().getStringExtra("url");
        postID = getIntent().getIntExtra("UserID", 0);
        String userAgent = Util.getUserAgent(Vid.this, getApplicationContext().getApplicationInfo().packageName);
        DefaultHttpDataSourceFactory httpDataSourceFactory = new DefaultHttpDataSourceFactory(userAgent, null,
                DefaultHttpDataSource.DEFAULT_CONNECT_TIMEOUT_MILLIS, DefaultHttpDataSource.DEFAULT_READ_TIMEOUT_MILLIS, true);
        DefaultDataSourceFactory dataSourceFactory = new DefaultDataSourceFactory(Vid.this, null, httpDataSourceFactory);
        Uri daUri = Uri.parse(videoURL);
        ExtractorsFactory extractorsFactory = new DefaultExtractorsFactory();
        if (daUri.toString().startsWith("https://player.vimeo"))
            mVideoSource = new HlsMediaSource(daUri, dataSourceFactory, 1, null, null);
        else
            mVideoSource = new ExtractorMediaSource(daUri, dataSourceFactory, extractorsFactory, null, null);
        initExoPlayer();
    } else {
        resumeExoPlayer();
    }
}

private void resumeExoPlayer() {
    boolean haveResumePosition = mResumeWindow != C.INDEX_UNSET;
    if (haveResumePosition) {
        hideKeyboard();
        hideProgress();
        mExoPlayerView.getPlayer().seekTo(mResumeWindow, mResumePosition);
    }
}

private void initExoPlayer() {
    BandwidthMeter bandwidthMeter = new DefaultBandwidthMeter();
    TrackSelection.Factory videoTrackSelectionFactory = new AdaptiveTrackSelection.Factory(bandwidthMeter);
    TrackSelector trackSelector = new DefaultTrackSelector(videoTrackSelectionFactory);
    LoadControl loadControl = new DefaultLoadControl();
    SimpleExoPlayer player = ExoPlayerFactory.newSimpleInstance(new DefaultRenderersFactory(this), trackSelector, loadControl);
    mExoPlayerView.setPlayer(player);
    boolean haveResumePosition = mResumeWindow != C.INDEX_UNSET;
    if (haveResumePosition) {
        hideKeyboard();
        hideProgress();
        mExoPlayerView.getPlayer().seekTo(mResumeWindow, mResumePosition);
    }
    mExoPlayerView.getPlayer().prepare(mVideoSource);
    mExoPlayerView.getPlayer().setPlayWhenReady(true);
    mExoPlayerView.getPlayer().addListener(new Player.EventListener() {
        @Override
        public void onTimelineChanged(Timeline timeline, Object manifest) { }

        @Override
        public void onTracksChanged(TrackGroupArray trackGroups, TrackSelectionArray trackSelections) { }

        @Override
        public void onLoadingChanged(boolean isLoading) { }

        @Override
        public void onPlayerStateChanged(boolean playWhenReady, int playbackState) {
            if (playbackState == ExoPlayer.STATE_ENDED) {
                hideProgress();
                mExoPlayerView.getPlayer().seekTo(0);
                mExoPlayerView.getPlayer().setPlayWhenReady(false);
            } else if (playbackState == ExoPlayer.STATE_BUFFERING) {
            } else if (playbackState == ExoPlayer.STATE_READY) {
                hideProgress();
                if (preferenceManager.getLoggedIn()) {
                    APIGetComment();
                }
            }
        }

        @Override
        public void onRepeatModeChanged(int repeatMode) { }

        @Override
        public void onShuffleModeEnabledChanged(boolean shuffleModeEnabled) { }

        @Override
        public void onPlayerError(ExoPlaybackException error) {
            hideProgress();
            finish();
        }

        @Override
        public void onPositionDiscontinuity(int reason) { }

        @Override
        public void onPlaybackParametersChanged(PlaybackParameters playbackParameters) { }

        @Override
        public void onSeekProcessed() { }
    });
}
```

<issue_comment>username_1: If you want the video to resume on orientation change, you can add this to your manifest: `android:configChanges="keyboardHidden|orientation|screenSize"`

```
android:icon="@mipmap/ic_launcher_2">
```

Upvotes: 2 <issue_comment>username_2: Finally, after wasting 2 days, I found it. Simply add this in the manifest and it will work on all Android versions:

```
android:configChanges="orientation|screenSize|layoutDirection"
```

Cheers!

Upvotes: 7 [selected_answer]<issue_comment>username_3: No need for any additional coding; simply add this line

```
android:configChanges="keyboardHidden|orientation|screenSize"
```

in your `AndroidManifest.xml`'s activity section.

Upvotes: 2 <issue_comment>username_4: I also wasted quite a lot of time on this. Take a look at it.

ExoPlayer 2.11.2:

```
implementation 'com.google.android.exoplayer:exoplayer:2.11.2'
```

STEP 1: Make an activity to which a string url is passed as an intent extra.

```
public class VideoPlayerActivity extends Activity {

    public static final String sURL_KEY = "STREAMING_URL_KEY";
    public static final String sTOAST_TEXT = "Unable to stream, no media found";
    static final String LOADING = "PLAYER_LOADING";
    static final String STOPPED = "PLAYER_STOPPED";
    static final String PAUSED = "PLAYER_PAUSED";
    static final String PLAYING = "PLAYER_PLAYING";
    static final String IDLE = "PLAYER_IDLE";
    private static final String TAG = "StreamMediaActivity";
    int orientation;
    private Uri streamUrl;
    private SimpleExoPlayer mPlayer;
    private PlayerView playerView;
    private ProgressBar progressBar;
    private String mPlayerStatus;
    private long mPlaybackPosition = 0L;
    private boolean mIsPlayWhenReady = true;
    private int mCurrentWindow = 0;
    private Display display;
    private String STATE_RESUME_WINDOW = "resumeWindow";
    private String STATE_RESUME_POSITION = "resumePosition";
    private String STATE_PLAYER_FULLSCREEN = "playerFullscreen";
    private boolean mExoPlayerFullscreen = false;

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        fullScreen();
        display = ((WindowManager) getSystemService(WINDOW_SERVICE)).getDefaultDisplay();
        orientation = display.getRotation();
        setContentView(R.layout.activity_video_player);
        playerView = findViewById(R.id.player_view);
        progressBar = findViewById(R.id.progressBar_player);

        // Pass a string uri to this class
        String urlString = getIntent().getStringExtra(sURL_KEY);
        if (urlString != null) {
            streamUrl = Uri.parse(urlString);
        } else {
            Toast.makeText(this, sTOAST_TEXT, Toast.LENGTH_LONG).show();
            finish();
        }
    }

    @Override
    protected void onStart() {
        super.onStart();
        initPlayer();
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (mPlaybackPosition != 0L && mPlayer != null) {
            mPlayer.seekTo(mCurrentWindow, mPlaybackPosition);
        }
    }

    @Override
    protected void onStop() {
        super.onStop();
    }

    @Override
    protected void onPause() {
        super.onPause();
        releasePlayer();
    }

    private void initPlayer() {
        // ESTABLISH THE DATA SOURCE FROM URL
        // here i'm playing local video file that's
        // why using the DefaultDataSourceFactory but you
        // may use DefaultHttpDataSourceFactory to stream
        // online videos
        DataSource.Factory dataSourceFactory = new DefaultDataSourceFactory(this,
                Util.getUserAgent(this, getApplicationInfo().name));
        MediaSource mediaSource = new ProgressiveMediaSource.Factory(dataSourceFactory).createMediaSource(streamUrl);

        // CREATE A NEW INSTANCE OF EXO PLAYER
        if (mPlayer == null) {
            mPlayer = new SimpleExoPlayer.Builder(this, new DefaultRenderersFactory(this)).build();
            playerView.setPlayer(mPlayer);
            progressBar.setVisibility(View.VISIBLE);
        }
        mPlayer.setPlayWhenReady(mIsPlayWhenReady);
        mPlayer.seekTo(mCurrentWindow, mPlaybackPosition);

        // PREPARE MEDIA PLAYER
        mPlayer.prepare(mediaSource, true, false);
        mPlayer.addListener(new Player.EventListener() {
            @Override
            public void onPlayerStateChanged(boolean playWhenReady, int playbackState) {
                switch (playbackState) {
                    case Player.STATE_BUFFERING:
                        mPlayerStatus = LOADING;
                        runOnUiThread(() -> progressBar.setVisibility(View.VISIBLE));
                        break;
                    case Player.STATE_ENDED:
                        mPlayerStatus = STOPPED;
                        break;
                    case Player.STATE_READY:
                        mPlayerStatus = (playWhenReady) ? PLAYING : PAUSED;
                        runOnUiThread(() -> progressBar.setVisibility(View.INVISIBLE));
                        break;
                    default:
                        mPlayerStatus = IDLE;
                        break;
                }
            }

            @Override
            public void onPlayerError(ExoPlaybackException error) {
                Toast.makeText(VideoPlayerActivity.this, "Something went wrong", Toast.LENGTH_SHORT).show();
                finish();
            }
        });
    }

    @Override
    protected void onSaveInstanceState(Bundle outState) {
        mExoPlayerFullscreen = !mExoPlayerFullscreen;
        super.onSaveInstanceState(outState);
        outState.putInt(STATE_RESUME_WINDOW, mCurrentWindow);
        outState.putLong(STATE_RESUME_POSITION, mPlaybackPosition);
        outState.putBoolean(STATE_PLAYER_FULLSCREEN, mExoPlayerFullscreen);
        super.onSaveInstanceState(outState);
    }

    public void fullScreen() {
        View decorView = getWindow().getDecorView();
        decorView.setSystemUiVisibility(View.SYSTEM_UI_FLAG_FULLSCREEN);
    }

    private void releasePlayer() {
        if (mPlayer != null) {
            mPlayer.stop();
            mPlaybackPosition = mPlayer.getCurrentPosition();
            mCurrentWindow = mPlayer.getCurrentWindowIndex();
            mIsPlayWhenReady = mPlayer.getPlayWhenReady();
            playerView.setPlayer(null);
            mPlayer.release();
            mPlayer = null;
        }
    }
}
```

STEP 2: Make the XML layout.

```
<?xml version="1.0" encoding="utf-8"?>
```

STEP 3: Start VideoPlayerActivity using an intent from another activity.

```
Intent streamVideoIntent = new Intent(context, VideoPlayerActivity.class);
streamVideoIntent.putExtra(sURL_KEY, stringUrl);
context.startActivity(streamVideoIntent);
```

STEP 4: Lastly, add the activity to the manifest.

```
```

Upvotes: 0
2018/03/15
546
2,120
<issue_start>username_0: I upgraded the `bokeh` library by executing the following command:

`!pip install --user --upgrade bokeh`

The Tornado version 5 dependency was installed. After upgrading the library from the DSX Notebook, the kernel stops responding after a restart. I see the "Slow Kernel connection" dialog. Opening another notebook for the same language displays the same error dialog.<issue_comment>username_1: Tornado 5.0 is incompatible with the current ipython kernel installation for DSX Notebooks. If you are experiencing the "Slow kernel connection" issue after upgrading the `bokeh` library, try the following steps:

* Switch to a Python kernel with another version (e.g. if you installed Tornado 5 for the "Python 2" kernel, switch the kernel to "Python 3.5")
* Execute the following command in a notebook code cell: `!rm -rf ~/.local/lib/` This command will remove all Python packages installed by the user
* All Python kernels should then start normally

If Tornado 5 was installed for both Python versions (Python 2 and Python 3), you could remove the user's packages by switching to the `R` kernel and executing the following command:

`system('rm -rf ~/.local/lib/')`

After that you could switch back to the Python kernel.

You could upgrade the `bokeh` library with the following command:

`!pip install --upgrade --upgrade-strategy only-if-needed bokeh`

In this case the Tornado package will not be updated.

Upvotes: 3 [selected_answer]<issue_comment>username_2: Just suggesting another way for users who do not want to remove every other package they may have installed in ~/.local/lib/: you can simply uninstall the Tornado that is installed in the user's Python environment.

For users having the issue with Python 3, switch to the R kernel and run the following:

```
system("/usr/local/src/conda3_runtime/home/envs/DSX-Python35-Spark/bin/pip uninstall -y tornado", intern=TRUE)
```

For Python 2:

```
system("/usr/local/src/bluemix_jupyter_bundle.v83/notebook/bin/pip uninstall -y tornado", intern=TRUE)
```

If the above pip binary locations change, you can try to locate them with

```
system("which pip", intern=TRUE)
```

Thanks, Charles.

Upvotes: 0
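To confirm which Tornado version a given kernel actually picks up before and after the cleanup, a quick check from a notebook cell (a sketch; `tornado.version` is the library's standard version string):

```python
import tornado

# Kernels here are reported to work with Tornado < 5
print(tornado.version)
```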
2018/03/15
962
2,525
<issue_start>username_0: I know there have been similar questions to my title, but I cannot find a similar problem to the problem I have outlined below.

I am trying to get the following 9 list comprehensions to be stored in one list, so that each of the list comprehensions is itself a list within a new list:

```
a = [(i,j) for i in range(3) for j in range(3)]
b = [(i,j) for i in range(3) for j in range(3,6)]
c = [(i,j) for i in range(3) for j in range(6,9)]
d = [(i,j) for i in range(3,6) for j in range(3)]
e = [(i,j) for i in range(3,6) for j in range(3,6)]
f = [(i,j) for i in range(3,6) for j in range(6,9)]
g = [(i,j) for i in range(6,9) for j in range(3)]
h = [(i,j) for i in range(6,9) for j in range(3,6)]
i = [(i,j) for i in range(6,9) for j in range(3,9)]
```

Specifically, these print out the indexes for the BLOCKS within a sudoku board. I would like to have each block as a list within one list. Would anyone be able to point me in the right direction? Thanks<issue_comment>username_1: I can think of two ways, both trivial rewrites of what you've got:

1) `master_list = [a,b,c,d,e,f,g,h,i]`

That is, just take the individually named lists you've got and put them in a list.

2)

```
master_list = [
    [(i,j) for i in range(3) for j in range(3)],
    [(i,j) for i in range(3) for j in range(3,6)],
    [(i,j) for i in range(3) for j in range(6,9)],
    [(i,j) for i in range(3,6) for j in range(3)],
    [(i,j) for i in range(3,6) for j in range(3,6)],
    [(i,j) for i in range(3,6) for j in range(6,9)],
    [(i,j) for i in range(6,9) for j in range(3)],
    [(i,j) for i in range(6,9) for j in range(3,6)],
    [(i,j) for i in range(6,9) for j in range(3,9)],
]
```

This is essentially the same, but without the intermediate variable names.

Upvotes: 2 <issue_comment>username_2: Rationalising a bit, you can define a function

```
def f(b, r):
    return [(i, j) for i in range(b, b+3) for j in range(*r)]
```

and drive it with the wanted ranges:

```
ml = [f(b, r) for b in range(0, 9, 3) for r in ((0, 3), (3, 6), (6, 9))]
```

Upvotes: 0 <issue_comment>username_3: Not sure if your last line is correct; fairly sure it should be

```
[(i,j) for i in range(6,9) for j in range(6,9)]
```

not

```
[(i,j) for i in range(6,9) for j in range(3,9)]
```

as this is for a 3x6 grid. Nonetheless, this is a cleaner list comp for creating nine 3x3 blocks:

```
[[(x0*3 + x, y0*3 + y) for x in range(3) for y in range(3)] for x0 in range(3) for y0 in range(3)]
```

Upvotes: 2 [selected_answer]
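As a quick sanity check of the nested comprehension in the accepted answer (a sketch, assuming a standard 9x9 board):

```python
blocks = [[(x0 * 3 + x, y0 * 3 + y) for x in range(3) for y in range(3)]
          for x0 in range(3) for y0 in range(3)]

assert len(blocks) == 9                          # nine 3x3 blocks
assert all(len(b) == 9 for b in blocks)          # nine cells per block
# together the blocks cover every cell of the 9x9 board exactly once
assert sorted(c for b in blocks for c in b) == [(i, j) for i in range(9) for j in range(9)]
```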
2018/03/15
353
1,269
<issue_start>username_0: I have encountered the usual Chrome "feature" of displaying "this type of file can harm your computer" when trying to download a file from a file-hosting website (on a Mac, with no "Keep" option). However, when the warning appears there is only ever a "Discard" button and not one to "Keep" the file, as I have seen when using Windows...

[![enter image description here](https://i.stack.imgur.com/fzTCU.png)](https://i.stack.imgur.com/fzTCU.png)

Can anybody suggest anything?<issue_comment>username_1: @DickKennedy helped resolve my woes with a workaround for this: go to Chrome settings and enable the option "Ask where to save each file before downloading" in the downloads section.

[![enter image description here](https://i.stack.imgur.com/F8rob.png)](https://i.stack.imgur.com/F8rob.png)

This isn't a fix, but it certainly gets the job done!

Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes - super annoying issue. Luckily, a small workaround that works is going to Window -> Downloads and clicking on "Keep" there.

Upvotes: 2 <issue_comment>username_3: Going to the downloads list (Command+Shift+J) shows you the Keep option.

[![Downloads list](https://i.stack.imgur.com/RAVkJ.png)](https://i.stack.imgur.com/RAVkJ.png)

Upvotes: 4
2018/03/15
361
967
<issue_start>username_0: Starting from a string with numbers and letters, I want to print it split into groups of two characters, with an `x` at the start of each group. I have this code:

```
d = "e345a46be5"
for i in d:
    print "x", i[::2],
```

and I get `x 43c48eaff x 43c48eaff x .....`. I want `xe3,x45,xa4,x6b,xe5`.<issue_comment>username_1: The third parameter of the slice is the step size. This means you are taking the first, third, fifth... item. I think the easiest way to do what you want is to use range:

```
d = "e345a46be5"
for i in range(0, len(d), 2):
    print "x", d[i:i+2]
```

range(start, stop, step) => range(0,5,2) => [0, 2, 4]

Upvotes: 2 [selected_answer]<issue_comment>username_2: This code should work on both Python 2.7 and Python 3:

```
for i in range(0,len(d),2):
    print ("x", d[i:i+2])
```

Here we use `i` as an index instead of a specific letter as you did, and `range(a, b, 2)` lets you go through a list starting at `a`, up to `b` excluded, with step 2.

Upvotes: 0
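For completeness, the same result as a single expression (a sketch; the comma separator is an assumption based on the `xe3,x45,...` output the asker wants):

```python
d = "e345a46be5"
pairs = ["x" + d[i:i + 2] for i in range(0, len(d), 2)]
print(",".join(pairs))  # xe3,x45,xa4,x6b,xe5
```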
2018/03/15
428
1,337
<issue_start>username_0: I am getting the error that my PostsController isn't initialized. Here is my routes.rb file:

```
Rails.application.routes.draw do
  root to: 'posts#index'

  scope module: 'blog' do
    get 'about' => 'pages#about', as: :about
    get 'contact' => 'pages#contact', as: :contact
    resources :posts
  end
end
```

The filename for my posts controller is: posts_controller.rb

The path is app/controller/blog/posts_controller.rb

Here's an excerpt of the posts_controller.rb file:

```
module Blog
  class PostsController < ApplicationController
    before_action :set_post, only: [:show, :edit, :update, :destroy]
```
2018/03/15
507
1,692
<issue_start>username_0: I'm trying to get logging information on the server side when a client tries to connect but fails using SSL. The obvious way to do this is to set the flag `-Djavax.net.debug=ssl`, but this won't work, because:

1. it's way too verbose
2. any of the modifiers (i.e. `-Djavax.net.debug=ssl:record:handshake` etc.) won't work (this is a known bug [referenced here](https://bugs.openjdk.java.net/browse/JDK-8044609), yet a lot of sites on the web suggest using these).

I've thought of capturing all standard output to a file and filtering from there, but there are other messages that are supposed to go to standard out that I'd also catch. Someone recommended using a logger, but I'm not sure if it's possible to capture javax.net.debug output using a logger.

**What I need**

Is there any way to output logging information for SSL that isn't 200+ lines per request, or is there any way to get the SSL logging to go to a separate file while leaving everything else unaffected?

Thanks!
2018/03/15
330
1,198
<issue_start>username_0: This is the JSON schema format which I get in the response and store in buttonSchema:

```
{
  "Name": "Test",
  "layoutSections": [
    {
      "layoutColumns": [
        {
          "layoutItems": [
            {
              "name": "test",
            },]
        },]
    },]
}
```

HTML file to read the JSON schema:

```
```

TypeScript file:

```
buttonSchema: any;

ngOnInit() {
    this.buttonSchema = this.authenticationService.buttonSchema;
    const s1 = this.buttonSchema.layoutsections;
    const s2 = s1.layoutcolumns;
    const s3 = s2.layoutItems;
}
```

How can I implement in TypeScript the logic that is achieved in the HTML?<issue_comment>username_1:

```
for (let i = 0; i < buttonSchema.layoutsections.length; i++) {
    for (let j = 0; j < buttonSchema.layoutsections[i].layoutcolumns.length; j++) {
        for (let k = 0; k < buttonSchema.layoutsections[i].layoutcolumns[j].layoutitems.length; k++) {
            const item = buttonSchema.layoutsections[i].layoutcolumns[j].layoutitems[k];
        }
    }
}
```

Upvotes: 2 [selected_answer]<issue_comment>username_2: Try this:

```
```

Upvotes: 0
2018/03/15
443
1,440
<issue_start>username_0: I would like to identify the key of a dictionary if a certain value is contained in its value, which can itself be a list. I tested the following code:

```
h={"Hi":[1,2],"du":3}
for book, product in h.items():
    if 1 in product:
        print(book)
```

and it gives me the error

```
if 1 in product:
TypeError: argument of type 'int' is not iterable
```

Can't figure out what is wrong here. Thanks for any help.<issue_comment>username_1: You should first check whether it is a `list`:

```
if isinstance(product, list) and 1 in product:
```

Upvotes: 2 <issue_comment>username_2: You get an error because `3` (the value of `"du"`) is not a list. Simply add a check that the item is indeed an instance of type list:

```
h={"Hi":[1,2],"du":3}
for book, product in h.items():
    if isinstance(product, list):
        if 1 in product:
            print(book)
```

Upvotes: 1 <issue_comment>username_3: The problem is that in one of the iterations, specifically when `book` is `"du"`, `product` is `3`, which is an `int`, not a `list`, and therefore is not iterable. You should check first if `product` is a list. If it is, check if 1 is in it; if it's not, check if it's equal to 1.

```
h={"Hi":[1,2],"du":3,"nn":1}
for book, product in h.items():
    if (isinstance(product, list) and 1 in product) or product == 1:
        print(book)
# prints both "Hi" and "nn"
```

Upvotes: 3 [selected_answer]
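A compact variant of the same idea that treats every value uniformly (a sketch; it wraps scalar values in a one-element list so the `in` test works for both cases):

```python
h = {"Hi": [1, 2], "du": 3}

def keys_with_value(d, target):
    """Yield keys whose value equals `target` or is a list containing it."""
    for key, value in d.items():
        values = value if isinstance(value, list) else [value]
        if target in values:
            yield key

print(list(keys_with_value(h, 1)))  # ['Hi']
print(list(keys_with_value(h, 3)))  # ['du']
```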
2018/03/15
742
2,768
<issue_start>username_0: I'm trying to create a simple countdown button that displays a paragraph when it reaches 0. The countdown button works, but the paragraph isn't displayed at 0. My code is below. Any ideas what I might be doing wrong? Thanks! Also, I know it's not the cleanest or most efficient code, but I'm just doing this to practice my JavaScript.

```js
function countdown(){
    var currentVal = document.getElementById("countdown").innerHTML;
    var newValue = 0;
    if (currentVal > 0){
        newValue = currentVal - 1;
    }
    document.getElementById("countdown").innerHTML = newValue;
    if (currentVal = 0){
        var vacation = document.getElementById("vacation");
        vacation.style.display = "block";
    }
}
```

```css
#vacation{
    display: none;
}
```

```html
Days until vacation
===================
10
Time for vacation woo!
```

<issue_comment>username_1: You wrote a wrong conditional statement: there is an assignment instead of a comparison. So, change it to `if (newValue === 0) {`.

Upvotes: 1 <issue_comment>username_2: You wrote a wrong conditional statement; it should be `if (currentVal === 0) {`.

Upvotes: 0 <issue_comment>username_3: You are using the assignment operator instead of the comparison operator for checking `currentVal`. See the updated snippet:

```js
function countdown(){
    var currentVal = document.getElementById("countdown").innerHTML;
    var newValue = 0;
    if (currentVal > 0){
        newValue = currentVal - 1;
    }
    document.getElementById("countdown").innerHTML = newValue;
    if (currentVal == 0){
        var vacation = document.getElementById("vacation");
        vacation.style.display = "block";
    }
}
```

```css
#vacation{
    display: none;
}
```

```html
Days until vacation
===================
10
Time for vacation woo!
```

Upvotes: 0 <issue_comment>username_4: In JavaScript you should use `===` for comparisons. Also, I suggest that you parse the currentValue as an integer, because `.innerHTML` returns a string. So replace `if (currentVal = 0){` with `if (currentVal === 0){`, use `var currentVal = parseInt(document.getElementById("countdown").innerHTML);`, and it should work!

```js
function countdown(){
    var currentVal = parseInt(document.getElementById("countdown").innerHTML); // integer parsing
    var newValue = 0;
    if (currentVal > 0) {
        newValue = currentVal - 1;
    }
    document.getElementById("countdown").innerHTML = newValue;
    if (currentVal === 0){ // currentVal is equal to newValue (and they are both integers) so you can use either of them
        var vacation = document.getElementById("vacation");
        vacation.style.display = "block";
    }
}
```

```css
#vacation{
    display: none;
}
```

```html
Days until vacation
===================
10
Time for vacation woo!
```

Upvotes: 2
2018/03/15
895
2,153
<issue_start>username_0: I have a data frame which has many columns with the common prefix "_B", e.g. '_B1', '_B2', ... '_Bn', so that I can grab the column names by:

```
allB <- c(grep( "_B" , names( my.df ), value = TRUE ) )
```

I wish to select the rows for which each of these _B* columns passes a single condition, like values >= some_cutoff.

Can someone tell me how to do that? My efforts with 'all()' and 'any()' failed.

```
set.seed(12345)
my.df <- data.frame(a = round(rnorm(10,5),1), m_b1= round(rnorm(10,4),1), m_b2=round(rnorm(10,4),1))
allB <- c(grep( "_b" , names( my.df ), value = TRUE ) )

> my.df
      a m_b1 m_b2
1   5.6  3.9  4.8
2   5.7  5.8  5.5
3   4.9  4.4  3.4
4   4.5  4.5  2.4
5   5.6  3.2  2.4
6   3.2  4.8  5.8
7   5.6  3.1  3.5
8   4.7  3.7  4.6
9   4.7  5.1  4.6
10  4.1  4.3  3.8
```

I wish to select rows for which every m_b1 and m_b2 value is >= 4.0.<issue_comment>username_1: We could use `filter_at` from `dplyr` and specify `all_vars` (used when all the values in the row must meet the condition; if any value in the row meeting it is enough, it would be `any_vars`):

```
library(dplyr)
my.df %>%
      filter_at(allB, all_vars(. >= some_cutoff))
```

### data

```
some_cutoff <- 3
my.df <- structure(list(`_B1` = c(1, 1, 9, 4, 10), `_B2` = c(2, 3, 12, 6, 12),
    V3 = c(3, 6, 13, 10, 13), V4 = c(4, 5, 16, 13, 18)),
    .Names = c("_B1", "_B2", "V3", "V4"), row.names = c(NA, -5L), class = "data.frame")
allB <- grep( "_B" , names( my.df ), value = TRUE )
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: In base `R`:

```
some_cutoff = 4
selectedCols <- my.df[grep("_b", names(my.df), fixed = T)]
selectedRows <- selectedCols[apply(selectedCols, 1, function(x) all(x >= some_cutoff)), ]
selectedRows
#   m_b1 m_b2
# 2  5.8  5.5
# 6  4.8  5.8
# 9  5.1  4.6
```

`grep()` is used to get the indices of columns with the pattern of interest, which is then used to subset `my.df`. `apply()` iterates over rows when the second argument, `MARGIN = 1`. The anonymous function returns `TRUE` if `all()` the entries match the condition. This logical vector is then used to subset `selectedCols`.

Upvotes: 1
2018/03/15
606
1,970
<issue_start>username_0: I used the canvas tag to crop my image, as in the following code:

```
autoCropImage(url){
    var img = new Image();
    const cropApp = this;
    let x = img.onload = function(){
        var canvas = document.createElement('canvas');
        var ctx = canvas.getContext('2d');
        const center_X = img.width/2, center_Y = img.height/2;
        let init_X=0, init_Y=0;
        ctx.drawImage(img, init_X, init_Y, img.width, img.height, 0, 0, img.width, img.height);
        let dataUrl = canvas.toDataURL("image/jpeg", 1.0);
        let dataUrl_short = dataUrl.replace("data:image/jpeg;base64,", "");
        return dataUrl;
    }();
    img.src=url;
    console.log(x);
    return x;
}
//log result:
//data:,
```

I expected the log result to be a base64 string, but the output is `data:,`. What's the problem in my code?<issue_comment>username_1: Your function is an [IIFE](https://developer.mozilla.org/en-US/docs/Glossary/IIFE). You are calling it **immediately** and then assigning its return value to `img.onload`. (Then you assign the value of `img.onload` to `x`).

This means that you are not assigning a function to `img.onload`, so nothing happens after the image loads.

It also means that the image hasn't loaded when you try to pass it as an argument to `ctx.drawImage`. Since the image hasn't loaded, there is no data to convert to a data URL with `canvas.toDataURL`.

---

You need to:

1. Remove the `()` from after the function definition so it gets assigned to `img.onload` and called *when the image loads*
2. Get rid of `let x =` because that will just be a copy of the function
3. Read [How do I return the response from an asynchronous call?](https://stackoverflow.com/questions/14220321/how-do-i-return-the-response-from-an-asynchronous-call)

Upvotes: 2 [selected_answer]<issue_comment>username_2: Are you wanting to return the data url without the `data:image/jpeg;base64` part? If so, you're returning `dataUrl` instead of `dataUrl_short` in the method.

Upvotes: 0
2018/03/15
609
2,102
<issue_start>username_0: For a text field, I would like to expose the entries that contain invalid characters. The list of invalid characters is unknown; I only know the list of accepted ones. For example, for the French language, the accepted list is `A-z, 1-9, [punc::], space, àéèçè, hyphen, etc.`

The list of invalid characters is unknown, yet I want anything unusual to resurface. For example, I would want `This is an 2-piece à-la-carte dessert` to pass, while `'Ã this Øs an apple'` pops up as an anomaly.

The 'not contain' notion in R does not behave as I would like. For example,

```
grep("[^(abc)]", c("abcdef", "defabc", "apple"))
```

(those that do not contain 'abc') matches all three, while

```
grep("(abc)", c("abcdef", "defabc", "apple"))
```

behaves correctly and matches only the first two.

Am I missing something? How can we do that in R? Also, how can we put the hyphen in the list of accepted characters?
2018/03/15
1,856
4,143
<issue_start>username_0: I'm running a simple bit of code to produce a plot similar to [this](https://i.stack.imgur.com/eTg1n.png). However, the plot I am getting from my code is empty - I assume this is a fairly simple error in my code, as I'm new to this. My code is:

```
import matplotlib.pyplot as plt
import numpy as np

omega_0 = 0.6911
w_0 = -0.77
a = []

for a in range(0,1,100):
    omega_phi = (omega_0*(a**(-3.0*w_0)))/((omega_0*(a**(-3.0*w_0))) + 1 - omega_0)
    w = w_0 + (((w_0*(1.0 - w_0**2.0))/(1.0 - 2.0*w_0 + 4.0*w_0**2))*((omega_phi/(1.0 - omega_phi)))) + ((((-1.0)*w_0*(1.0 - w_0**2.0))/(1.0 - 3.0*w_0 + 12.0*w_0**2))*((omega_phi/(1.0 - omega_phi))**2.0)) + ((((1.0)*w_0*(1.0 - w_0**2.0))/(1.0 - 4.0*w_0 + 24.0*w_0**2))*((omega_phi/(1.0 - omega_phi))**3.0))
    plt.plot(a,w)

plt.xlabel('a')
plt.ylabel('w')
plt.grid(True)
plt.show()
```

Any help is greatly appreciated.<issue_comment>username_1: There are several things going on. Firstly, you should not be calling `plot` in your loop. You call it once, where the arguments are your data points, in this case `a` and `w`. These should be lists.

Secondly, your `range` parameters were wrong; they should be `range(start, stop, step)`. In your code, you were going from 0 to 1 with a step size of 100, which immediately steps past 1 and yields only a single point. (I assume you wanted 100 points from 0 to 1 instead? I've used simpler parameters.)

This should give you the desired plot.

```
import matplotlib.pyplot as plt

omega_0 = 0.6911
w_0 = -0.77
a = []
w = []

for i in range(0, 100, 1):
    omega_phi = (omega_0*(i**(-3.0*w_0)))/((omega_0*(i**(-3.0*w_0))) + 1 - omega_0)
    w.append(w_0 + (((w_0*(1.0 - w_0**2.0))/(1.0 - 2.0*w_0 + 4.0*w_0**2))*((omega_phi/(1.0 - omega_phi)))) + ((((-1.0)*w_0*(1.0 - w_0**2.0))/(1.0 - 3.0*w_0 + 12.0*w_0**2))*((omega_phi/(1.0 - omega_phi))**2.0)) + ((((1.0)*w_0*(1.0 - w_0**2.0))/(1.0 - 4.0*w_0 + 24.0*w_0**2))*((omega_phi/(1.0 - omega_phi))**3.0)))
    a.append(i)

print w, a

plt.plot(a, w)
plt.xlabel('a')
plt.ylabel('w')
plt.grid(True)
plt.show()
```

Upvotes: 0 <issue_comment>username_2: `range(0,1,100)` gives you a single point. But you cannot draw a line through a single point. Possibly you want `range(0,100,1)` instead, which gives you 100 points between 0 and 99, or `np.linspace(0,1,100)`, which gives you 100 points between 0 and 1.

Next, you'd better use numpy to calculate your values. Finally, consider simplifying your equations a bit, such that they become readable.

```
import matplotlib.pyplot as plt
import numpy as np

omega_0 = 0.6911
w_0 = -0.77

a = np.arange(0,100,1) # or np.linspace(0,1,100) depending on what you want.

om = omega_0*(a**(-3.0*w_0))
omega_phi = om/(om + 1 - omega_0)
p = w_0*(1.0 - w_0**2.0)
q = omega_phi/(1.0 - omega_phi)

w = w_0 + p/(1.0 - 2.0*w_0 + 4.0*w_0**2)*q - \
          p/(1.0 - 3.0*w_0 + 12.0*w_0**2)*q**2.0 + \
          p/(1.0 - 4.0*w_0 + 24.0*w_0**2)*q**3.0

plt.plot(a,w)
plt.xlabel('a')
plt.ylabel('w')
plt.grid(True)
plt.show()
```

[![enter image description here](https://i.stack.imgur.com/2NQHE.png)](https://i.stack.imgur.com/2NQHE.png)

Upvotes: 1 <issue_comment>username_3: I suggest the following update of your original code. I assumed you wanted to display the curve with `a` going from 0 to 1 by 0.01 steps. I also added some comments for readability and for you to better understand the code:

```
import matplotlib.pyplot as plt
import numpy as np

# Define constants
omega_0 = 0.6911
w_0 = -0.77

# Set 'a' values from 0 to 1 by 0.01 step
a = np.arange(0, 1, 0.01)

# Set 'w' values associated with each 'a' value
omega_phi = (omega_0*(a**(-3.0*w_0)))/((omega_0*(a**(-3.0*w_0))) + 1 - omega_0)
w = w_0 + (((w_0*(1.0 - w_0**2.0))/(1.0 - 2.0*w_0 + 4.0*w_0**2))*((omega_phi/(1.0 - omega_phi)))) + ((((-1.0)*w_0*(1.0 - w_0**2.0))/(1.0 - 3.0*w_0 + 12.0*w_0**2))*((omega_phi/(1.0 - omega_phi))**2.0)) + ((((1.0)*w_0*(1.0 - w_0**2.0))/(1.0 - 4.0*w_0 + 24.0*w_0**2))*((omega_phi/(1.0 - omega_phi))**3.0))

# Plot the curve with all (a,w) points
plt.plot(a,w)
plt.xlabel('a')
plt.ylabel('w')
plt.grid(True)

# Show the curve
plt.show()
```

Upvotes: 1 [selected_answer]
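To see why the original loop produced an empty-looking plot, note that `range(0, 1, 100)` contains a single value, and `plt.plot` cannot draw a line through one point:

```python
import numpy as np

print(list(range(0, 1, 100)))  # [0] - only one point
print(np.linspace(0, 1, 5))    # [0.   0.25 0.5  0.75 1.  ] - evenly spaced floats from 0 to 1
```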
2018/03/15
1,342
2,906
<issue_start>username_0: I have a dataframe nf as follows:

```
   StationID             DateTime  Channel  Count
0          1  2017-10-01 00:00:00        1      1
1          1  2017-10-01 00:00:00        1    201
2          1  2017-10-01 00:00:00        1      8
3          1  2017-10-01 00:00:00        1      2
4          1  2017-10-01 00:00:00        1      0
5          1  2017-10-01 00:00:00        1      0
6          1  2017-10-01 00:00:00        1      0
7          1  2017-10-01 00:00:00        1      0
```

.......... and so on

I want to group the values by each hour, for each channel and StationID.

Required output:

```
Station ID             DateTime  Channel  Count
         1  2017-10-01 00:00:00        1    232
         1  2017-10-01 00:01:00        1     23
         2  2017-10-01 00:00:00        1    244...
```

...... and so on<issue_comment>username_1: I think you need [`groupby`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) with the aggregate `sum`; for `datetime`s with flooring by `hour`s add [`floor`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.floor.html) - it sets `minute`s and `second`s to `0`:

```
print (df)
   StationID             DateTime  Channel  Count
0          1  2017-12-01 00:00:00        1      1
1          1  2017-12-01 00:00:00        1    201
2          1  2017-12-01 00:10:00        1      8
3          1  2017-12-01 10:00:00        1      2
4          1  2017-10-01 10:50:00        1      0
5          1  2017-10-01 10:20:00        1      5
6          1  2017-10-01 08:10:00        1      4
7          1  2017-10-01 08:00:00        1      1

df['DateTime'] = pd.to_datetime(df['DateTime'])

df1 = (df.groupby(['StationID', df['DateTime'].dt.floor('H'), 'Channel'])['Count']
         .sum()
         .reset_index()
      )
print (df1)
   StationID            DateTime  Channel  Count
0          1 2017-10-01 08:00:00        1      5
1          1 2017-10-01 10:00:00        1      5
2          1 2017-12-01 00:00:00        1    210
3          1 2017-12-01 10:00:00        1      2

print (df['DateTime'].dt.floor('H'))
0   2017-12-01 00:00:00
1   2017-12-01 00:00:00
2   2017-12-01 00:00:00
3   2017-12-01 10:00:00
4   2017-10-01 10:00:00
5   2017-10-01 10:00:00
6   2017-10-01 08:00:00
7   2017-10-01 08:00:00
Name: DateTime, dtype: datetime64[ns]
```

But if dates are not important, only hours, use [`hour`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.hour.html):

```
df2 = (df.groupby(['StationID', df['DateTime'].dt.hour, 'Channel'])['Count']
         .sum()
         .reset_index()
      )
print (df2)
   StationID  DateTime  Channel  Count
0          1         0        1    210
1          1         8        1      5
2          1        10        1      7
```

Upvotes: 1 <issue_comment>username_2: Or you can use [`Grouper`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Grouper.html):

```
df.groupby([pd.Grouper(key='DateTime', freq='H'), 'Channel', 'StationID'])['Count'].sum()
```

Upvotes: 0
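Another common pattern for the same hourly aggregation (a sketch, assuming `DateTime` has already been converted with `pd.to_datetime`) is to resample within each group:

```python
out = (df.set_index('DateTime')
         .groupby(['StationID', 'Channel'])['Count']
         .resample('H')       # hourly buckets within each (StationID, Channel) group
         .sum()
         .reset_index())
```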
2018/03/15
540
1,892
<issue_start>username_0: I have not seen this issue in my Google searches. I am trying to install PyInstaller on my work Mac, on which I do not have administrative access.

```
$ pip install --user pyinstaller
Collecting pyinstaller
Requirement already satisfied: dis3 in ./Library/Python/2.7/lib/python/site-packages (from pyinstaller)
Requirement already satisfied: setuptools in ./Library/Python/2.7/lib/python/site-packages (from pyinstaller)
Requirement already satisfied: macholib>=1.8 in ./Library/Python/2.7/lib/python/site-packages (from pyinstaller)
Requirement already satisfied: pefile>=2017.8.1 in ./Library/Python/2.7/lib/python/site-packages (from pyinstaller)
Requirement already satisfied: altgraph>=0.15 in ./Library/Python/2.7/lib/python/site-packages (from macholib>=1.8->pyinstaller)
Requirement already satisfied: future in ./Library/Python/2.7/lib/python/site-packages (from pefile>=2017.8.1->pyinstaller)
Installing collected packages: pyinstaller
Successfully installed pyinstaller-3.3.1
```

The install seems successful, however...

```
$ pyinstaller
-bash: pyinstaller: command not found
```

I checked my PATH:

```
$ echo $PATH
/usr/local/git/current/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/share/dotnet:/Library/Frameworks/Mono.framework/Versions/Current/Commands:/Users/jc/Library/Python/2.7/lib/python/site-packages
```

What am I missing here? Thanks for reading.<issue_comment>username_1: I had to install the tar.gz file from here: <http://www.pyinstaller.org/downloads.html>

Then it can be run from the extracted folder:

```
$ python pyinstaller.py script.py
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: On my Mac, pip installed the binary in `/Users/%Username%/Library/Python/2.7/bin`. If you add that to your PATH it should work.

Upvotes: 0
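A quick, version-independent way to find the directory that actually needs to be on PATH for any `pip install --user` console script (a sketch using only the standard library):

```python
import os
import site

# pip --user places console scripts under <user-base>/bin on macOS/Linux
print(os.path.join(site.getuserbase(), "bin"))
```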
2018/03/15
2,693
8,690
<issue_start>username_0: I learned how to use Container Registry trigger for Google Cloud Functions deploy from the following tutorial. [Automatic serverless deployments with Cloud Source Repositories and Container Builder](https://cloudplatform.googleblog.com/2018/03/automatic-serverless-deployments-with-Cloud-Source-Repositories-and-Container-Builder.html) I have Google App engine flexible app. The runtime is Node.js. I want to deploy the app triggered by git push. Are there any good references? I'm using these example code. Manual deployment works normally. ``` * tree . ├── app.js ├── app.yaml └── package.json * app.js 'use strict'; const express = require('express'); const app = express(); app.get('/', (req, res) => { res.status(200).send('Hello, world!').end(); }); const PORT = process.env.PORT || 8080; app.listen(PORT, () => { console.log(`App listening on port ${PORT}`); console.log('Press Ctrl+C to quit.'); }); * app.yaml runtime: nodejs env: flex * package.json { "name": "appengine-hello-world", "description": "Simple Hello World Node.js sample for Google App Engine Flexible Environment.", "version": "0.0.1", "private": true, "license": "Apache-2.0", "author": "Google Inc.", "repository": { "type": "git", "url": "https://github.com/GoogleCloudPlatform/nodejs-docs-samples.git" }, "engines": { "node": ">=4.3.2" }, "scripts": { "deploy": "gcloud app deploy", "start": "node app.js", "lint": "samples lint", "pretest": "npm run lint", "system-test": "samples test app", "test": "npm run system-test", "e2e-test": "samples test deploy" }, "dependencies": { "express": "4.15.4" }, "devDependencies": { "@google-cloud/nodejs-repo-tools": "1.4.17" }, "cloud-repo-tools": { "test": { "app": { "msg": "Hello, world!" } }, "requiresKeyFile": true, "requiresProjectId": true } } * deploy command $ gcloud app deploy ``` --- Update 1 I found a similar question. [How to auto deploy google app engine flexible using Container Registry with Build Trigger](https://stackoverflow.com/questions/45389919/how-to-auto-deploy-google-app-engine-flexible-using-container-registry-with-buil) I added cloudbuild.yaml. ``` steps: # Build the Docker image. - name: gcr.io/cloud-builders/docker args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.'] # Push it to GCR. - name: gcr.io/cloud-builders/docker args: ['push', 'gcr.io/$PROJECT_ID/app'] # Deploy your Flex app from the image in GCR. - name: gcr.io/cloud-builders/gcloud args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/$PROJECT_ID/app'] # Note that this build pushes this image. images: ['gcr.io/$PROJECT_ID/app'] ``` However, I got an error. The error message is "**error loading template: yaml: line 5: did not find expected key**". I'm looking into it. --- Update 2 The reason was invalid yaml format. I changed it like the following. ``` steps: # Build the Docker image. - name: gcr.io/cloud-builders/docker args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.'] # Push it to GCR. - name: gcr.io/cloud-builders/docker args: ['push', 'gcr.io/$PROJECT_ID/app'] # Deploy your Flex app from the image in GCR. - name: gcr.io/cloud-builders/gcloud args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/$PROJECT_ID/app'] # Note that this build pushes this image. images: ['gcr.io/$PROJECT_ID/app'] ``` I got another error. The message is "**error loading template: unknown field "images" in cloudbuild\_go\_proto.BuildStep**" --- Update 3 I noticed that "images" indent was wrong. ``` steps: ... # Note that this build pushes this image. 
images: ['gcr.io/$PROJECT_ID/app']
```

I encountered a new error.

```
starting build "e3e00749-9c70-4ac7-a322-d096625b695a"

FETCHSOURCE
Initialized empty Git repository in /workspace/.git/
From https://source.developers.google.com/p/xxxx/r/bitbucket-zono-api-btc
 * branch            0da6c8bf209c72b6406f3801f3eb66d346187f4e -> FETCH_HEAD
HEAD is now at 0da6c8b fix invalid yaml
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
```

Yes: I don't have a Dockerfile, because I use the Google App Engine flexible environment Node.js runtime. Docker shouldn't be necessary there.

---

Update 4

I added a Dockerfile:

```
FROM gcr.io/google-appengine/nodejs
```

Then a new error occurred.

```
Step #2: ERROR: (gcloud.app.deploy) User [<EMAIL>] does not have permission to access app [xxxx] (or it may not exist): App Engine Admin API has not been used in project xxx before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/appengine.googleapis.com/overview?project=xxx then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
```

---

Update 5

I enabled the App Engine Admin API, then the next error came.

```
Step #2: Do you want to continue (Y/n)?
Step #2: WARNING: Unable to verify that the Appengine Flexible API is enabled for project [xxx]. You may not have permission to list enabled services on this project. If it is not enabled, this may cause problems in running your deployment. Please ask the project owner to ensure that the Appengine Flexible API has been enabled and that this account has permission to list enabled APIs.
Step #2: Beginning deployment of service [default]...
Step #2: WARNING: Deployment of service [default] will ignore the skip_files field in the configuration file, because the image has already been built.
Step #2: Updating service [default] (this may take several minutes)...
Step #2: ...............................................................................................................................failed.
Step #2: ERROR: (gcloud.app.deploy) Error Response: [9]
Step #2: Application startup error:
Step #2: npm ERR! path /app/package.json
Step #2: npm ERR! code ENOENT
Step #2: npm ERR! errno -2
Step #2: npm ERR! syscall open
Step #2: npm ERR! enoent ENOENT: no such file or directory, open '/app/package.json'
Step #2: npm ERR! enoent This is related to npm not being able to find a file.
```

I changed my code tree, but it did not work. I confirmed that the App Engine flexible API has been enabled. I have no idea what I should try next.

```
.
├── Dockerfile
├── app
│   ├── app.js
│   └── package.json
├── app.yaml
└── cloudbuild.yaml
```

---

Update 6

When I deploy manually, the artifact looks like the following.

```
us.gcr.io/xxxxx/appengine/default.20180316t000144
```

Should I use this artifact...? I'm confused.

---

Update 7

Two builds are executed; I don't know whether this is correct.

[![enter image description here](https://i.stack.imgur.com/w2fiE.png)](https://i.stack.imgur.com/w2fiE.png)<issue_comment>username_1: A tech guy helped me. I changed the directory structure and cloudbuild.yaml, and then it worked. Thanks.

```
* Code Tree
.
├── app │   ├── app.js │   ├── app.yaml │   └── package.json └── cloudbuild.yaml * cloudbuild.yaml steps: - name: gcr.io/cloud-builders/npm args: ['install', 'app'] - name: 'gcr.io/cloud-builders/gcloud' args: ['app', 'deploy', 'app/app.yaml'] ``` Upvotes: 1 <issue_comment>username_2: Your Dockerfile doesn't copy source to the image. You can move everything back to the same directory such that ``` . ├── app.js ├── app.yaml ├── cloudbuild.yaml ├── Dockerfile └── package.json ``` but it doesn't matter. Paste this into your Dockerfile and it should work: ``` FROM gcr.io/google-appengine/nodejs # Working directory is where files are stored, npm is installed, and the application is launched WORKDIR /app # Copy application to the /app directory. # Add only the package.json before running 'npm install' so 'npm install' is not run if there are only code changes, no package changes COPY package.json /app/package.json RUN npm install COPY . /app # Expose port so when container is launched you can curl/see it. EXPOSE 8080 # The command to execute when Docker image launches. CMD ["npm", "start"] ``` Edit: This is the cloudbuild.yaml I used: ``` steps: - name: gcr.io/cloud-builders/docker args: ['build', '-t', 'gcr.io/$PROJECT_ID/app', '.'] - name: gcr.io/cloud-builders/docker args: ['push', 'gcr.io/$PROJECT_ID/app'] - name: gcr.io/cloud-builders/gcloud args: ['app', 'deploy', 'app.yaml', '--image-url=gcr.io/$PROJECT_ID/app'] images: ['gcr.io/$PROJECT_ID/app'] ``` Upvotes: 2
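For iterating on a `cloudbuild.yaml` like the ones above without pushing a commit each time, the same steps can be run directly from a local checkout; a hedged sketch (at the time this command was `gcloud container builds submit`, later shortened to `gcloud builds submit`):

```sh
# run the cloudbuild.yaml steps against the current directory, outside the trigger
gcloud container builds submit --config cloudbuild.yaml .
```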
2018/03/15
4,295
12,339
<issue_start>username_0: [enter image description here](https://i.stack.imgur.com/JDNb2.png) ``` sparkDF = sqlContext.read.format('com.databricks.spark.csv').options(header='true').load('PR_DATA_35.csv') ``` > > **ERROR\_** > ---------------------------------------------------------------------------------------------------------------------------------------------------- Py4JJavaError Traceback (most recent call > last) in () > ----> 1 sparkDF = sqlContext.read.format('com.databricks.spark.csv') .options(header='true').load('PR\_DATA\_35.csv') > > > /home/ec2-user/spark/python/pyspark/sql/readwriter.pyc in load(self, > path, format, schema, \*\*options) > 157 self.options(\*\*options) > 158 if isinstance(path, basestring): > --> 159 return self.\_df(self.\_jreader.load(path)) > 160 elif path is not None: > 161 if type(path) != list: > > > /home/ec2-user/spark/python/lib/py4j-0.10.4-src.zip/py4j/java\_gateway.py > in **call**(self, \*args) 1131 answer = > self.gateway\_client.send\_command(command) 1132 return\_value > = get\_return\_value( > -> 1133 answer, self.gateway\_client, self.target\_id, self.name) 1134 1135 for temp\_arg in temp\_args: > > > /home/ec2-user/spark/python/pyspark/sql/utils.pyc in deco(\*a, \*\*kw) > 61 def deco(\*a, \*\*kw): > 62 try: > ---> 63 return f(\*a, \*\*kw) > 64 except py4j.protocol.Py4JJavaError as e: > 65 s = e.java\_exception.toString() > > > /home/ec2-user/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py > in get\_return\_value(answer, gateway\_client, target\_id, name) > 317 raise Py4JJavaError( > 318 "An error occurred while calling {0}{1}{2}.\n". > --> 319 format(target\_id, ".", name), value) > 320 else: > 321 raise Py4JError( > > > Py4JJavaError: An error occurred while calling o312.load. : > org.apache.spark.SparkException: Job aborted due to stage failure: > Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 > in stage 5.0 (TID 23, 172.31.17.233, executor 0): > java.io.FileNotFoundException: File file:/home/ec2-user/PR\_DATA\_35.csv > does not exist It is possible the underlying files have been updated. > You can explicitly invalidate the cache in Spark by running 'REFRESH > TABLE tableName' command in SQL or by recreating the Dataset/DataFrame > involved. 
at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:174) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:105) > at > org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown > Source) at > org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) > at > org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395) > at > org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234) > at > org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228) > at > org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827) > at > org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827) > at > org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) > at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) > at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) at > org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at > org.apache.spark.scheduler.Task.run(Task.scala:108) at > org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > > > Driver stacktrace: at > org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1499) > at > org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1487) > at > org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1486) > at > scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) > at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) > at > org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1486) > at > org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814) > at > org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:814) > at scala.Option.foreach(Option.scala:257) at > org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:814) > at > org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1714) > at > org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1669) > at > org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1658) > at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) > at > org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:630) > at org.apache.spark.SparkContext.runJob(SparkContext.scala:2022) at > org.apache.spark.SparkContext.runJob(SparkContext.scala:2043) at > org.apache.spark.SparkContext.runJob(SparkContext.scala:2062) at > org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:336) > at > org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38) > at > 
org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:2853) > at > org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2153) > at > org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2153) > at org.apache.spark.sql.Dataset$$anonfun$55.apply(Dataset.scala:2837) > at > org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65) > at org.apache.spark.sql.Dataset.withAction(Dataset.scala:2836) at > org.apache.spark.sql.Dataset.head(Dataset.scala:2153) at > org.apache.spark.sql.Dataset.take(Dataset.scala:2366) at > org.apache.spark.sql.execution.datasources.csv.TextInputCSVDataSource$.infer(CSVDataSource.scala:147) > at > org.apache.spark.sql.execution.datasources.csv.CSVDataSource.inferSchema(CSVDataSource.scala:62) > at > org.apache.spark.sql.execution.datasources.csv.CSVFileFormat.inferSchema(CSVFileFormat.scala:57) > at > org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:177) > at > org.apache.spark.sql.execution.datasources.DataSource$$anonfun$7.apply(DataSource.scala:177) > at scala.Option.orElse(Option.scala:289) at > org.apache.spark.sql.execution.datasources.DataSource.getOrInferFileFormatSchema(DataSource.scala:176) > at > org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366) > at > org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178) > at > org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:156) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) at > py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at > py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at > py4j.Gateway.invoke(Gateway.java:280) at > py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) > at py4j.commands.CallCommand.execute(CallCommand.java:79) at > py4j.GatewayConnection.run(GatewayConnection.java:214) at > java.lang.Thread.run(Thread.java:748) Caused by: > java.io.FileNotFoundException: File file:/home/ec2-user/PR\_DATA\_35.csv > does not exist It is possible the underlying files have been updated. > You can explicitly invalidate the cache in Spark by running 'REFRESH > TABLE tableName' command in SQL or by recreating the Dataset/DataFrame > involved. 
at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.org$apache$spark$sql$execution$datasources$FileScanRDD$$anon$$readCurrentFile(FileScanRDD.scala:127) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.nextIterator(FileScanRDD.scala:174) > at > org.apache.spark.sql.execution.datasources.FileScanRDD$$anon$1.hasNext(FileScanRDD.scala:105) > at > org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown > Source) at > org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) > at > org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395) > at > org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:234) > at > org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:228) > at > org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827) > at > org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$25.apply(RDD.scala:827) > at > org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) > at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323) > at org.apache.spark.rdd.RDD.iterator(RDD.scala:287) at > org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) at > org.apache.spark.scheduler.Task.run(Task.scala:108) at > org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:335) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > ... 1 more > > > Any help is appreciated. Thanks<issue_comment>username_1: Caused by: java.io.FileNotFoundException: File file:/home/ec2-user/PR\_DATA\_35.csv does not exist?? Upvotes: 1 <issue_comment>username_2: Your code is not finding the file you want to convert to a DataFrame ``` Py4JJavaError: An error occurred while calling o312.load.: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 23, 172.31.17.233, executor 0): java.io.FileNotFoundException: File file:/home/ec2 user/PR_DATA_35.csv does not exist It is possible the underlying files have been updated. ``` I have some recommendations 1. Verify the file is stored in your EC2 instance and It is in the same folder of your spark code. 2. Following the Databricks [guide](https://docs.databricks.com/spark/latest/data-sources/read-csv.html), You can specify the entire path to the file, for ec2 instances it should be something like: /home/ubuntu/project\_folder/csv\_file.csv 3. You can take a look to this [stackoverflow](https://stackoverflow.com/questions/29704333/spark-load-csv-file-as-dataframe?answertab=active#tab-top) question 4. This is the way I read csv files ``` # submit to spark on my dev env # ./bin/spark-submit /Users/estebance/Documents/Projects/tests/spark_csv.py # Spark version 2.0 and up, spark-csv is part of core Spark functionality and doesn't require a separate library. 
So you could just do, for example:

from pyspark import SparkContext
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local").appName("CsvReader").getOrCreate()
csv_df = spark.read.format("csv").option("header", "true").load("/Users/estebance/Documents/Projects/tests/sample.csv")
print(csv_df.head(2))
```

Hope this is helpful, best regards

Upvotes: 0 <issue_comment>username_3: Contrary to what others have said, this error normally occurs in Spark when you are reading data from a folder, modifying it, and saving on top of the data you initially read. Try saving the first set of data to a temporary folder first (a sketch of that pattern follows below), or use the solution described in this [answer](https://stackoverflow.com/a/51590446/5595338).

Upvotes: 0
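A minimal sketch of that temporary-folder workaround, with placeholder paths and a placeholder `value` column; the point is that Spark never reads and writes the same files within one job:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SafeOverwrite").getOrCreate()

df = spark.read.format("csv").option("header", "true").load("/data/in")
transformed = df.filter(df["value"] > 0)

# stage the result somewhere else first...
transformed.write.mode("overwrite").parquet("/data/tmp")

# ...then re-read the staged copy before overwriting the original location
spark.read.parquet("/data/tmp") \
    .write.mode("overwrite").option("header", "true").format("csv").save("/data/in")
```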
2018/03/15
1,655
4,895
<issue_start>username_0: So I have a script that I'm writing as a pet project that seemed simpler than it turns out to be...I have a for loop that is meant to increment a certain amount of times based on user input but in the for loop, I have a destination statement for a goto earlier in the script. The issue is that when that goto is used it breaks the for and causes the for loop to only provide 1 answer. Here is the code: ``` for /l %%x in (1, 1, %player%) do ( :defense call :defenders :defense_return call :colorEcho 0e %operator% echo. ) ``` The :defense\_return is the culprit but it's neccesary because I'm using %RANDOM% so I need it to reference back to when the %RANDOM% is used or else I just get the same output when I really want 2 different outputs. here is the defenders block: ``` :defenders set /a operator=%random%%%18+1 REM 707th SMB if %operator%== 1 set operator=Vigil REM G.R.O.M if %operator%== 2 set operator=Ela REM S.D.U if %operator%== 3 set operator=Lesion REM G.E.O if %operator%== 4 set operator=Mira REM S.A.T if %operator%== 5 set operator=Echo REM BOPE if %operator%== 6 set operator=Caviera REM Navy Seal if %operator%== 7 set operator=Valkyrie REM JTF2 if %operator%== 8 set operator=Frost REM S.A.S if %operator%== 9 set operator=Mute if %operator%== 10 set operator=Smoke REM SWAT if %operator%== 11 set operator=Castle if %operator%== 12 set operator=Pulse REM GIGN if %operator%== 13 set operator=Doc if %operator%== 14 set operator=Rook REM GSG9 if %operator%== 15 set operator=Jager if %operator%== 16 set operator=Bandit REM Spetsnaz if %operator%== 17 set operator=Tachanka if %operator%== 18 set operator=Kapkan goto :defense_return ``` I really want to make this script work out, but this for loop is causing me issues...Any help is much appreciated!<issue_comment>username_1: As I explained above you need to exit the CALL correctly and use Delayed Expansion. ``` @echo off setlocal EnableDelayedExpansion for /F "tokens=1,2 delims=#" %%a in ('"prompt #$H#$E# & echo on & for %%b in (1) do rem"') do ( set "DEL=%%a" ) for /l %%x in (1,1,2) do ( call :defenders call :colorEcho 0E !operator! echo. ) pause GOTO :EOF :defenders set /a operator=%random%%%18+1 REM 707th SMB if %operator%== 1 set operator=Vigil REM G.R.O.M if %operator%== 2 set operator=Ela REM S.D.U if %operator%== 3 set operator=Lesion REM G.E.O if %operator%== 4 set operator=Mira REM S.A.T if %operator%== 5 set operator=Echo REM BOPE if %operator%== 6 set operator=Caviera REM Navy Seal if %operator%== 7 set operator=Valkyrie REM JTF2 if %operator%== 8 set operator=Frost REM S.A.S if %operator%== 9 set operator=Mute if %operator%== 10 set operator=Smoke REM SWAT if %operator%== 11 set operator=Castle if %operator%== 12 set operator=Pulse REM GIGN if %operator%== 13 set operator=Doc if %operator%== 14 set operator=Rook REM GSG9 if %operator%== 15 set operator=Jager if %operator%== 16 set operator=Bandit REM Spetsnaz if %operator%== 17 set operator=Tachanka if %operator%== 18 set operator=Kapkan goto :eof :colorEcho "%~2" findstr /v /a:%1 /R "^$" "%~2" nul del "%~2" > nul 2>&1 goto :eof ``` Upvotes: 1 <issue_comment>username_2: You have a misconception here, as already explained in the comments... The `:defenders` block is a *subprogram*, that is, a code segment that you want to *call* (or *invoke*, or *execute*, or whichever) and when such a code segment ends you want *not* to "reference back" (goto) another place, but to *return* to the point where such a subprogram was called. 
The way to return to the command that follows the `call` command is `exit /B`, although `goto :EOF` also works for this purpose; it is just more confusing (in most programming languages, the *statement* that does this is called `return`).

Also, when you want to use the value of a variable that changes *inside* a `for` loop, you must use the `!variable!` construct instead of the `%variable%` one (and insert the `setlocal EnableDelayedExpansion` command at the beginning of the program); a standalone demo of this difference follows the code below.

Finally, your code may be improved if you use an [array](https://stackoverflow.com/a/10167990/778560).

This is the final version of your code after including all previously described modifications:

```
@echo off
setlocal EnableDelayedExpansion

rem Initialize "operator" array
set i=0
for %%a in (Vigil Ela Lesion Mira Echo Caviera Valkyrie Frost Mute Smoke Castle Pulse Doc Rook Jager Bandit Tachanka Kapkan) do (
   set /A i+=1
   set operator[!i!]=%%a
)

rem Get a Back-Space (ASCII 8) character (for :colorEcho routine)
for /F %%a in ('echo prompt $H ^| cmd') do set "BS=%%a"

set player=4
for /l %%x in (1, 1, %player%) do (
   call :defenders
   call :colorEcho 0e !operator!
   echo/
)
pause
goto :EOF

:defenders
set /a operator=%random%%%18+1
set operator=!operator[%operator%]!
exit /B

:colorEcho color text
set /P "=%BS% " > "%~2"
```

Upvotes: 3 [selected_answer]
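A standalone demo of the `%var%` vs `!var!` difference that the accepted answer relies on; run it and note that the percent form still shows the old value inside the loop:

```bat
@echo off
setlocal EnableDelayedExpansion
set "var=before"
for /L %%i in (1,1,1) do (
    set "var=after"
    echo percent expansion, read when the block is parsed: %var%
    echo bang expansion, read when the line runs: !var!
)
```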
2018/03/15
1,952
5,157
<issue_start>username_0: I'm trying to repeat an array N times with a FOR loop, but I'm stuck on the last array, which contains more elements. This is my code so far:

```
function data() {
    for ($i = 1 ; $i <= 50 ; $i++) {
        $magnitud = array('nombre' => 'medidor');

        for ($j = 1 ; $j <= 5 ; $j++) {
            $magnitude['magnitude'.$i] = array('date' => $date, 'value' => mt_rand(1,200));;
        }
    }
    return $magnitude;
}

for ($i = 1 ; $i <= 50 ; $i++) {
    $center->insertOne(['meter' => 'meter'.$i, 'data' => data()]);
}
```

I need to repeat the values of the array that contains the date and a value 5 times:

```
array('date' => $date, 'value' => mt_rand(1,200));
```

Something like this:

```
Date: 05/03/2015 Value: 25
Date: 10/12/2012 Value: 45
Date: 15/06/2005 Value: 67
Date: 26/05/2009 Value: 78
.
.
.
```

I've tried many ways, but I haven't succeeded. Could someone lend me a hand with how to repeat that array N times?

Just in case my question isn't clear, I attach this picture, which is what I get in Robo3T (a visual manager):

[![enter image description here](https://i.stack.imgur.com/a8BMF.png)](https://i.stack.imgur.com/a8BMF.png)

EDIT: To explain myself better, this is the JSON that I'm trying to get:

```
"_id" : ObjectId("5ab0c29db57dce17e0002bc0"),
"meter" : "meter1",
"data" : {
    "magnitude1" : {
        "date" : "20/10/2015 21:57:05",
        "value" : 192,
        "date" : "13/12/2015 22:51:15",
        "value" : 85,
        "date" : "15/05/2016 05:21:06",
        "value" : 65,
        "date" : "28/06/2017 15:32:26",
        "value" : 72,
        "date" : "02/11/2017 18:15:34",
        "value" : 12,
    },
    "magnitude2" : {
        "date" : "15/12/2009 15:21:32",
        "value" : 45,
        "date" : "12/05/2013 16:45:07",
        "value" : 96,
        "date" : "21/02/2015 03:06:12",
        "value" : 85,
        "date" : "15/08/2015 14:05:22",
        "value" : 78,
        "date" : "05/01/2017 21:12:32",
        "value" : 198,
    },
    .
    .
    .
}
```
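A hedged sketch of one way to build the structure described above. Two fixes matter: the inner loop in the question keys on `$i` instead of `$j`, so every pass overwrites the same entry, and a JSON object cannot repeat `date`/`value` keys, so each magnitude is modelled here as an array of sub-documents (`$center` and the date range are placeholders taken from the question's context):

```php
function data($magnitudes = 5, $readings = 5) {
    $data = array();
    for ($i = 1; $i <= $magnitudes; $i++) {
        $series = array();
        for ($j = 1; $j <= $readings; $j++) {   // append one entry per reading instead of keying on $i
            $series[] = array(
                'date'  => date('d/m/Y H:i:s', mt_rand(strtotime('2009-01-01'), time())),
                'value' => mt_rand(1, 200),
            );
        }
        $data['magnitude' . $i] = $series;
    }
    return $data;
}

for ($i = 1; $i <= 50; $i++) {
    $center->insertOne(array('meter' => 'meter' . $i, 'data' => data()));
}
```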
2018/03/15
1,231
3,665
<issue_start>username_0: I want to create a bit of code that saves a file out to a folder (PDF / DWG) and moves all files with a lower revision number than the file currently being saved (e.g. anything below #5) into a superseded folder. I cannot see how to set a condition for the revision number: I can't use a wildcard, as that would cause issues (other files in the folder would be picked up and moved incorrectly). I have the save function sorted, I just don't know where to start with the filing part.

Examples of the filenames:

Pdf/TE1801_200-01_{name}_#5.PDF

Dwg/TE1801_200-01_{name}_#5.DWG
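A hedged sketch in VBA (the host application isn't stated, so treat this as pseudocode to call from whatever save routine you already have): pull the number after the `#` out of each matching file name and move only files whose number is lower than the revision being saved. It assumes `folder` ends with a backslash and that a `Superseded` subfolder already exists:

```vb
' Move superseded revisions of one drawing, e.g. TE1801_200-01_{name}_#4.PDF
' when revision 5 is being saved (newRev = 5, ext = "PDF" or "DWG").
Sub MoveSuperseded(folder As String, baseName As String, newRev As Integer, ext As String)
    Dim f As String, rev As Integer, hashPos As Long
    f = Dir(folder & baseName & "_#*." & ext)
    Do While Len(f) > 0
        hashPos = InStrRev(f, "#")
        rev = Val(Mid$(f, hashPos + 1))   ' digits after the #; Val stops at the "."
        If rev < newRev Then
            Name folder & f As folder & "Superseded\" & f
        End If
        f = Dir
    Loop
End Sub
```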
2018/03/15
583
2,422
<issue_start>username_0: In my database, I have cities and states that appear to be stored from user input. I am trying to get only distinct locations, so my query looks like this:

```
SELECT Distinct C.City, C.State
FROM Customers C
```

The problem I ran into is that some cities have typos, so in my results there could be "Dallas" TX, "Dalas" TX, "Dallas," TX, and so on. Is there a way to filter for the "correct" spelling without having to sort through ten thousand rows?<issue_comment>username_1: Well, this seems like a huge problem. If you have ZIP codes, you can enrich the data. But judging by your question, this is not the case. I think you might import a table of unique states and cities and join it to your table. The non-matching rows can then be checked and updated. Still an awful job though. Good luck.

Upvotes: 2 [selected_answer]<issue_comment>username_2: Your design should be addressed; storing free-text cities and states tends to produce issues...well, like this. If you can address the design, do so...if you are forced to work with this, then I'm afraid you are in for some manual fun.

Create a table as city, state, equivalent_city, equivalent_state.

Select all distinct city/state values from your table and export them into a spreadsheet (copy and paste works too). Go through each line in this spreadsheet and assign the proper spelling to each city/state that you want it to appear under...ya, this can be a long tedious process (last time I did this, I delegated this fantastic task to the summer students). When you're done, import the spreadsheet back into your database as a table.

Anytime you need to refer to the city/state from your fact table, join to this translation table on city and state, then refer to equivalent_city and equivalent_state in your queries (a sketch of this setup follows below).

Advantage to this method is that the translation table grows as your users find new and interesting ways to spell cities (you will need a process to keep updating this table with new spellings)...there is an upper limit on screwy spellings out there, and you'll eventually capture the majority of them. Disadvantage is in maintaining the setup (and the disbelief that even though you thought you had all spellings of Dallas TX captured, some user will input 'Daalass tex').

The alternative is stated in the comments: better design prevents this horrible solution from being needed.

Upvotes: 0
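A hedged sketch of that translation-table approach; table and column names are illustrative:

```sql
CREATE TABLE CityStateMap (
    City            varchar(100),
    State           varchar(50),
    EquivalentCity  varchar(100),
    EquivalentState varchar(50)
);

-- seed with the distinct raw values, then correct the equivalents by hand
INSERT INTO CityStateMap (City, State, EquivalentCity, EquivalentState)
SELECT DISTINCT C.City, C.State, C.City, C.State
FROM Customers C;

-- example manual correction:
-- UPDATE CityStateMap SET EquivalentCity = 'Dallas' WHERE City IN ('Dalas', 'Dallas,') AND State = 'TX';

-- the original query, routed through the map
SELECT DISTINCT M.EquivalentCity, M.EquivalentState
FROM Customers C
JOIN CityStateMap M ON M.City = C.City AND M.State = C.State;
```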
2018/03/15
448
1,390
<issue_start>username_0: I'm trying to learn Vue and encountered this problem.

```js
Vue.component('alert', {
  props: ['type', 'bold', 'msg'],
  template: '**{{ bold }}** {{ msg }}'
});

var componentProps = new Vue(
  {
    el: '#app',
  }
);
```

```html
```

This is the output in the inspector. As you can see, the `type` prop is not applied there.

```
**Slow down.** You might crash.
```

Link to codepen => <https://codepen.io/dakata911/pen/XEKbyq?editors=1010><issue_comment>username_1: Interpolation doesn't work on attributes; use `:` to bind instead: `:class="type"` or `:class="[ type, other, ... ]"` or `:class="{ 'someClass': true, 'other-class': false, 'another': method() }"`

And you can have both a `:class="..."` binding and a plain `class="normal class attribute"` on the same element/tag.

Upvotes: 2 <issue_comment>username_2: In Vue 2 you [**can't** use interpolations in attributes anymore](https://v2.vuejs.org/v2/guide/migration.html#Interpolation-within-Attributes-removed). You have [several possible syntaxes for class and style bindings](https://v2.vuejs.org/v2/guide/class-and-style.html) now. In your specific case you can use:

```html
```

Demo below.

```js
new Vue({
  el: '#app',
  data: {
    type: 'warning'
  }
})
```

```css
.alert { background: yellow; }
.alert-warning { color: red }
```

```html
Warning!
```

Upvotes: 4 [selected_answer]
2018/03/15
458
1,493
<issue_start>username_0: This is driving me nuts. When running any ant command (even `ant -version`), I always get the following error on Mac OSX:

Exception in thread "main" java.lang.UnsupportedClassVersionError: org/apache/tools/ant/launch/Launcher : Unsupported major.minor version 52.0

I removed and re-installed ant entirely, and still get the same error.

[![enter image description here](https://i.stack.imgur.com/tYd8y.png)](https://i.stack.imgur.com/tYd8y.png)

For our company we have to run Java 7 to run the command, so I created an alias. This used to work; I'm not sure what changed.<issue_comment>username_1: I see you have Java 1.7 installed, and after that you have installed Ant 1.10. Ant 1.10 requires JDK 8.

Refer: <http://ant.apache.org/>

> The Apache Ant team currently maintains two lines of development. The 1.9.x releases require Java5 at runtime and **1.10.x requires Java8 at runtime**. Both lines are based off of Ant 1.9.7 and the 1.9.x releases are mostly bug fix releases while additional new features are developed for 1.10.x. **We recommend using 1.10.x unless you are required to use versions of Java prior to Java8 during the build process.**

Try using Ant 1.9.x if you can't use Java 8.

Upvotes: 5 [selected_answer]<issue_comment>username_2: Here is a list of the minimum Java version for each Ant version:

[![enter image description here](https://i.stack.imgur.com/BK9o8.png)](https://i.stack.imgur.com/BK9o8.png)

Upvotes: 0
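A hedged sketch of that downgrade path (the version number and install prefix are illustrative; any 1.9.x release from the Apache archive works the same way, and no admin rights are needed for a home-directory install):

```sh
# confirm which Java and Ant the shell resolves
java -version     # Java 7 reports "1.7.0_xx"; class file version 52.0 means Java 8
which ant

# unpack an Ant 1.9.x release and put it first on PATH
cd ~
curl -O https://archive.apache.org/dist/ant/binaries/apache-ant-1.9.9-bin.tar.gz
tar xzf apache-ant-1.9.9-bin.tar.gz
export ANT_HOME="$HOME/apache-ant-1.9.9"
export PATH="$ANT_HOME/bin:$PATH"
ant -version
```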
2018/03/15
662
1,881
<issue_start>username_0: How can I make a progressive web app get added to the app drawer, not restricted to only the home screen? An example is the Twitter web app: it can be added to the home screen and the app drawer too. This is my manifest file; I don't know if there is something I should add to it.

```
{
  "name": "Weather",
  "short_name": "Weather",
  "icons": [{
    "src": "images/icons/icon-128x128.png",
    "sizes": "128x128",
    "type": "image/png"
  }, {
    "src": "images/icons/icon-144x144.png",
    "sizes": "144x144",
    "type": "image/png"
  }, {
    "src": "images/icons/icon-152x152.png",
    "sizes": "152x152",
    "type": "image/png"
  }, {
    "src": "images/icons/icon-192x192.png",
    "sizes": "192x192",
    "type": "image/png"
  }, {
    "src": "images/icons/icon-256x256.png",
    "sizes": "256x256",
    "type": "image/png"
  }],
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#3E4EB8",
  "theme_color": "#2F3BA2"
}
```
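The manifest itself is probably not the blocker here. On Android, Chrome only places a PWA in the app drawer when it can install it as a WebAPK, and that (as of roughly Chrome 57+) also requires the site to be served over HTTPS and to register a service worker with a fetch handler; the manifest shown already covers the name, 192px icon, start_url, and standalone display. A minimal sketch, assuming a root-scoped `sw.js`:

```js
// in the page: installability requires a registered service worker
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// in sw.js: a fetch handler is part of the install criteria
self.addEventListener('fetch', function (event) {
  // pass-through for now; caching strategies can be added later
});
```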
2018/03/15
394
1,606
<issue_start>username_0: I want to show a toast on button `click`, and that button is in a fragment. I have tried several ways to get a context for the `toast`, but it is not showing on button click.

**This is my code**

```
public class Bottom_Sheet_Fragment extends Fragment {
    Button addComment;

    public Bottom_Sheet_Fragment() {
        // Required empty public constructor
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
        View view=inflater.inflate(R.layout.fragment_bottom__sheet, container, false);
        addComment=(Button) container.findViewById(R.id.addCommentBtn);
        addComment.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                Toast.makeText(getActivity(), "Added", Toast.LENGTH_SHORT).show();
            }
        });
        return view;
    }
}
```

This is the fragment layout

```
```

This layout file is included in another activity with an `include` statement. I don't know where the problem is; the fragment works like a bottom sheet.<issue_comment>username_1: Try this: `addComment = view.findViewById(R.id.addCommentBtn);`

Because you must get the button from the layout that was inflated.

Upvotes: 0 <issue_comment>username_2: You need to use `view.findViewById`.

**Try This**

```
addComment=(Button) view.findViewById(R.id.addCommentBtn);
addComment.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        Toast.makeText(getActivity(), "Added", Toast.LENGTH_SHORT).show();
    }
});
```

Upvotes: -1
2018/03/15
1,581
6,179
<issue_start>username_0: I'm looking for a way to update only set properties in Dapper, i.e. update an entity's property only if it's not null. I'm solving the problem with a rather crude approach, as shown below, but I believe there should be a cleaner Dapper way of doing this.

```
public void UpdateCustomer(Customer cust)
{
    try
    {
        StringBuilder sb = new StringBuilder("UPDATE CUSTOMER_SETUP SET DATE_MODIFIED = @DATE_MODIFIED ");

        if(cust.BUSINESSNAME != null)
            sb.Append(",BUSINESSNAME = @BUSINESSNAME ");

        if (cust.BUSINESS_ADDRESS != null)
            sb.Append(",BUSINESS_ADDRESS = @BUSINESS_ADDRESS ");

        if (cust.CONTACT_NAME != null)
            sb.Append(",CONTACT_NAME = @CONTACT_NAME ");

        if (cust.CONTACT_TITLE != null)
            sb.Append(",CONTACT_TITLE = @CONTACT_TITLE ");

        if (cust.CONTACT_PHONE1 != null)
            sb.Append(",CONTACT_PHONE1 = @CONTACT_PHONE1 ");

        if (cust.CONTACT_PHONE2 != null)
            sb.Append(",CONTACT_PHONE2 = @CONTACT_PHONE2 ");

        if (cust.CONTACT_EMAIL != null)
            sb.Append(",CONTACT_EMAIL = @CONTACT_EMAIL ");

        if (cust.CONTACT_URL != null)
            sb.Append(",CONTACT_URL = @CONTACT_URL ");

        if (cust.DATE_CREATED != null)
            sb.Append(",DATE_CREATED = @DATE_CREATED ");

        if (cust.CUSTOMER_TYPE != null)
            sb.Append(",CUSTOMER_TYPE = @CUSTOMER_TYPE ");

        if (cust.SUBSCRIPTION_TYPE != null)
            sb.Append(",SUBSCRIPTION_TYPE = @SUBSCRIPTION_TYPE ");

        sb.Append("WHERE ID = @ID ");
        sb.Append("; SELECT CAST(SCOPE_IDENTITY() as int ");

        var sql = sb.ToString();

        using (connection = new SqlConnection(connectString))
        {
            connection.Execute(sql, cust);
        }
    }
    catch (Exception ex)
    {
        throw ex;
    }
}
```<issue_comment>username_1: The feature you are looking for is called Change Tracking. This feature is one part of the bigger Unit of Work pattern. Dapper does not support change tracking.

There are a few Dapper add-ons that support it at different levels. Refer to ~~[this](https://blog.falafel.com/implementing-a-generic-repository-with-dapper-extensions/)~~ [this](https://web.archive.org/web/20160807040540/http://blog.falafel.com/implementing-a-generic-repository-with-dapper-extensions/) blog post for a comparison chart. As mentioned in the chart, Dapper.Contrib and Dapper.Rainbow support it in different ways.

As @MarcGravell said in a comment, `null` values for POCO properties are common. A `null` does not always mean "do not update that field"; it may also mean "set that DB field to `null` (or `DBNull`)". As there is no single guaranteed meaning for a property value being `null`, most ORMs implement this the same way Dapper does.

Upvotes: 3 <issue_comment>username_2: We wrap Dapper.Rainbow's `Snapshotter` to work like a change tracker. You need an instance of the db object for it to work. It works perfectly for us, delivering a dictionary you could quite easily use to generate the SQL you're after. It might look something like this:

```
public class Foo{public string Name{get;set;}}

var foo = new Foo();
var snapshotter = Snapshotter.Start(foo);
foo.Name = "A new name";
var dynparams = snapshotter.Diff(); //we basically wrap the snapshotter to give a dict here, but it's basically the same thing

foreach(var name in dynparams.ParameterNames){
    sb.Append($",{name} = @{dynparams[name]} ");
}
```

Upvotes: 2 <issue_comment>username_3: Roll your own. Add a private collection to the model, and copy the data when reading. Compare old to new when updating, and build the SQL statement as needed. Would there be a net performance gain from adding this pre-database processing, over letting the columns "dry fire" at the database vendor?
I think most database vendors detect when column data is unchanged and will exclude those columns where it matters, such as for an update trigger on specific columns.

Upvotes: 0 <issue_comment>username_4: Here is an example of the kind of update query you want, using **Dapper**. It might help you.

```
public async Task<bool> UpdateDataByIdAsync(Data data)
{
    using MySqlConnection connection = new MySqlConnection("your connection string");
    const string sqlQuery = @"Update Datas
                              Set name = @name, description = @description, tags = @tags
                              where data_id = @data_id;";
    var rowAffected = await connection.ExecuteAsync(sqlQuery, data);
    return rowAffected > 0;
}
```

Upvotes: 0 <issue_comment>username_5: Here is a snippet found on GitHub, if you are using SimpleCRUD:

```
/// <summary>
/// Updates table T with the values in param.
/// The table must have a key named "Id" and the value of id must be included in the "param" anon object.
/// The Id value is used as the "where" clause in the generated SQL.
/// </summary>
/// <typeparam name="T">Type to update. Translates to table name</typeparam>
/// <param name="param">An anonymous object with key=value types</param>
/// <returns>The Id of the updated row. If no row was updated or id was not part of fields, returns null</returns>
public static object UpdateFields<T>(this IDbConnection connection, object param, IDbTransaction transaction = null, int? commandTimeOut = null, CommandType? commandType = null)
{
    var names = new List<string>();
    object id = null;
    foreach (PropertyDescriptor property in TypeDescriptor.GetProperties(param))
    {
        if (!"Id".Equals(property.Name, StringComparison.InvariantCultureIgnoreCase))
            names.Add(property.Name);
        else
            id = property.GetValue(param);
    }
    if (id != null && names.Count > 0)
    {
        var sql = string.Format("UPDATE {1} SET {0} WHERE Id=@Id",
            string.Join(",", names.Select(t => { t = t + "=@" + t; return t; })),
            GetTableName(typeof(T)));
        if (Debugger.IsAttached)
            Trace.WriteLine(string.Format("UpdateFields: {0}", sql));
        return connection.Execute(sql, param, transaction, commandTimeOut, commandType) > 0 ? id : null;
    }
    return null;
}

public static object UpdateFields<T>(this IDbConnection connection, object fields, CommandDefinition commandDefinition)
{
    return UpdateFields<T>(connection, fields, commandDefinition.Transaction, commandDefinition.CommandTimeout, commandDefinition.CommandType);
}
```

Upvotes: 0
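A minimal sketch of the "roll your own" change-tracking idea from the earlier answer, kept deliberately small (only a few properties shown; the `Snapshot` call and helper names are assumptions for illustration, not part of any Dapper API):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int ID { get; set; }
    public string BUSINESSNAME { get; set; }
    public string CONTACT_NAME { get; set; }
    public DateTime DATE_MODIFIED { get; set; }

    private Dictionary<string, object> _original;

    // call right after materializing the row from Dapper
    public void Snapshot() =>
        _original = GetType().GetProperties()
            .ToDictionary(p => p.Name, p => p.GetValue(this));

    // names of the properties whose values differ from the snapshot
    public IEnumerable<string> ChangedColumns() =>
        GetType().GetProperties()
            .Where(p => p.Name != "ID" && !Equals(p.GetValue(this), _original[p.Name]))
            .Select(p => p.Name);
}

// usage inside an update method, mirroring the question's pattern:
var cols = cust.ChangedColumns().ToList();
if (cols.Count > 0)
{
    var sql = "UPDATE CUSTOMER_SETUP SET "
            + string.Join(", ", cols.Select(c => c + " = @" + c))
            + " WHERE ID = @ID";
    connection.Execute(sql, cust);
}
```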