Dataset schema:
date — string, length 10 to 10
nb_tokens — int64, 60 to 629k
text_size — int64, 234 to 1.02M
content — string, length 234 to 1.02M
2018/03/14
1,141
3,234
<issue_start>username_0: I just implemented a Google map on the website, but there is a very small gap, maybe 1-2px, between the map and the footer. I was wondering if there was any way to make them sit flush together and not have a tiny gap. Thanks for any help or advice. Not sure how I can solve this! Maps and footer html ``` ![](assets/img/logo.png) ##### Thank you We would like to thank you for taking the time and visiting thebeckwood.co.uk. If you have any queries please don't hesitate to use the contact us button or give us a quick phone call. ![](assets/img/hygeine.png) ##### Navigation * [Home](index.html) * [Menu](menu.html) * [Gallery](gallery.html) * [About](about.html) * [Book](book.html) * [Contact](contact.html) [Contact us](contact.html) © 2018 <NAME>. ``` Footer CSS ``` #footerlogo { height:90px; margin-top: 20px; } #myFooter { background-color: #3d280c; color: white; padding-top: 30px } #myFooter .footer-copyright { background-color: #35240a; padding-top: 3px; padding-bottom: 3px; text-align: center; } #myFooter .row { margin-bottom: 30px; } #myFooter .navbar-brand { margin-top: 45px; height: 65px; } #myfooter .navbar-brand>img { height: 50px; width: auto; } #myFooter .footer-copyright p { margin: 10px; color: #ccc; } #myFooter ul { list-style-type: none; padding-left: 0; line-height: 1.7; } #myFooter h5 { font-size: 25px; margin-top: 30px; color: #FFB03B; font-family: 'Satisfy', cursive; } #myFooter h2 a{ font-size: 50px; text-align: center; color: #fff; } #myFooter a { color: #d2d1d1; text-decoration: none; } #myFooter a:hover, #myFooter a:focus { text-decoration: none; color: #FFB03B; } #myFooter .social-networks { text-align: center; padding-top: 30px; padding-bottom: 16px; } #myFooter .social-networks a { font-size: 32px; color: #f9f9f9; padding: 10px; transition: 0.2s; } #myFooter .social-networks a:hover { text-decoration: none; } #myFooter .facebook:hover { color: #002659; } #myFooter .google:hover { color: #ef1a1a; } #myFooter .twitter:hover { 
color: #00aced; } #myFooter .btn { color: white; background-color: #c48529; border-radius: 20px; border: none; width: 150px; display: block; margin: 0 auto; margin-top: 10px; line-height: 25px; } #myFooter .btn:hover { color: #593c12; } ```<issue_comment>username_1: There could be a number of reasons, such as padding and margins set somewhere else. You can move the footer up a bit by setting a negative `margin-top`. As a quick fix, try adding this: ``` #myFooter { background-color: #3d280c; color: white; padding-top: 30px; margin-top: -5px; } ``` Upvotes: 0 <issue_comment>username_2: Just add ``` iframe { display:block; } ``` or ``` #maps { display: block; } ``` in your case. The default display property of an iframe is inline, which means it is placed on the text baseline. The gap you're seeing is the space reserved below the baseline for letter descenders, so it has nothing to do with Google Maps in particular. Upvotes: 2 [selected_answer]
2018/03/14
690
2,148
<issue_start>username_0: I've been developing a web site and I'm testing for responsiveness. Everything seems fine but one hurdle that I've been unable to overcome is the side navigation bar height not always being the size of the web site. My application displays additional components based on user selection so the height of the web site can be different. I've set the css of the navigation bar to have a height of 100% and this is fine for the screen I'm using to develop the site but whenever I change my responsiveness to say 1080p x 720p my side bar doesn't persist to the height of the screen and I'm left with white space (please see photo below). [![enter image description here](https://i.stack.imgur.com/Xud2G.png)](https://i.stack.imgur.com/Xud2G.png) My css for the side navigation is as follows: ``` .side-nav { min-height: 100% !important; width: 240px; position: absolute; z-index: 999; left: 0; background-color: #00a56b; padding-top: 20px; } ``` I was considering using media queries to change the percentage of the min-height value for the .side-nav class based on the screen height but is there a more effective way to achieve my goal? 
**(EDIT) html div structure:** ``` ![](/img/LOGO - SMALL.png) Navigation [New Incident](#!incident/new "New Incident") [Search Incident](#!incident/search "Search Incident") [Reports](#!reports "Reports") [Manager Setup](#!manager "Manger Setup") ```<issue_comment>username_1: You can add this CSS for the side navbar: ``` .side-nav{ width: 240px; position: absolute; z-index: 999; left: 0; background-color: #00a56b; padding-top: 20px; top: 0; bottom: 0; height: 100vh; } ``` Upvotes: 0 <issue_comment>username_1: Another way to achieve this type of design is with display: table. CSS: ``` .parent-div{ display: table; width: 100%; } .side-nav, .main-content{ display: table-cell; vertical-align: top; } .side-nav{ width: 240px; height: 100vh; background-color: #00a56b; } .main-content{ width: calc(100% - 260px); padding-left: 20px; } ``` HTML: ``` ``` Upvotes: 2 [selected_answer]
2018/03/14
595
1,912
<issue_start>username_0: I have an array of Vector3's from the polygon collider. I want to have an array of the indexes of all the Vector3's that are higher than a certain y. ``` private void Awake() { polygonCollider2D = GetComponent<PolygonCollider2D>(); float lowestY = polygonCollider2D.points.Min(x => x.y); // So in the sentence below I want to have an array of indexes instead of an array of vector3's. topPoints = polygonCollider2D.points.Where(x => x.y > lowestY).ToArray(); } ``` Can I do this with LINQ?<issue_comment>username_1: `Select` allows you to capture the index. You can first select indexes and filter with the `-1` 'filter sentinel' value I used below: ``` topPointsIndexes = polygonCollider2D.points .Select((x, index) => (x.y > lowestY) ? index : -1) .Where(i => i >= 0) .ToArray(); ``` (or first expand the point and index together, and then filter, as username_2 did in his answer) Upvotes: 0 <issue_comment>username_2: Yes, you can use an overload of [`Select`](https://msdn.microsoft.com/en-us/library/bb534869(v=vs.110).aspx) that includes the index like so ``` var topPointIndexes = polygonCollider2D.points .Select((p,i) => new{Point=p, Index=i}) .Where(x => x.Point.y > lowestY) .Select(x => x.Index) .ToArray(); ``` Another option is to just create the set of indexes up front ``` var points = polygonCollider2D.points; var topPointIndexes = Enumerable.Range(0, points.Count()) .Where(i => points[i].y > lowestY) .ToArray(); ``` Upvotes: 1 <issue_comment>username_3: ``` indexesOfTopPoints = polygonCollider2D.points.Select((x, index) => index) .Where(x => polygonCollider2D.points[x].y > lowestY).ToArray(); ``` Upvotes: 1 [selected_answer]<issue_comment>username_4: Try it ``` topPoints = polygonCollider2D.points.Select((x, i) => new {obj = x, i = i}).Where(x => x.obj.y > lowestY).Select(x => x.i).ToArray(); ``` Upvotes: 0
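The index-projection pattern in the answers above — pair each element with its index, filter, then keep only the index — is language-agnostic. A minimal sketch of the same logic in Python, using `enumerate` in place of LINQ's indexed `Select` (the `points` data below is hypothetical, standing in for `PolygonCollider2D.points`):

```python
# Points as (x, y) tuples; in Unity these would be Vector2 entries
# from PolygonCollider2D.points.
points = [(0.0, 1.0), (1.0, -2.0), (2.0, 0.5), (3.0, -2.0)]

lowest_y = min(y for _, y in points)

# Pair each point with its index, keep points above the minimum,
# project the index -- mirroring Select((p, i) => ...).Where(...).Select(...)
top_point_indexes = [i for i, (_, y) in enumerate(points) if y > lowest_y]
```

Note that, as in the C# versions, points tied with the minimum y are excluded because the comparison is strict.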
2018/03/14
667
2,023
<issue_start>username_0: I want to display a float that represents the timer and I am trying to format it like this: > > 00:00:00 (Minutes:Seconds:Milliseconds) > > > ``` public static string ConvertToTime(float t){ TimeSpan ts = TimeSpan.FromSeconds(t); return string.Format("{0:00}:{1:00}:{2:00}", ts.Minutes, ts.Seconds, ts.Milliseconds); } ``` But this gives the full milliseconds, not the lower precision, even though I defined the format with 00. > > For example, if the timer is 3.4234063f it should output 00:03:42, not 00:03:423. > > > It's such a basic thing, but I can't resolve it when using TimeSpan.
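A note on the formatting question above: in .NET composite formatting, `{2:00}` sets a *minimum* of two digits but never truncates, so the 0–999 `Milliseconds` value prints in full; dividing it by 10 (integer division) yields hundredths. The arithmetic, sketched in Python as a hypothetical helper (not the poster's code):

```python
def convert_to_time(t: float) -> str:
    """Format a seconds value as MM:SS:hh (hundredths, not full milliseconds)."""
    minutes = int(t // 60)
    seconds = int(t % 60)
    # Truncate the fractional part to two digits, like ts.Milliseconds / 10 in C#
    hundredths = int((t - int(t)) * 100)
    return f"{minutes:02d}:{seconds:02d}:{hundredths:02d}"
```

For example, `convert_to_time(3.4234063)` gives `"00:03:42"`, matching the output the question asks for.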
2018/03/14
584
1,856
<issue_start>username_0: We have a use case where we want to partition the data frame by a column value and then write each partition into a single file. I did the following things to do the same: ``` val df = spark.read.format("csv").load("hdfs:///tmp/PartitionKeyedDataset.csv") df.repartition($"_c1") df.rdd.saveAsTextFile("s3://dfdf/test1234") ``` When I do: ``` df.rdd.partitions.size ``` I get only 62 partitions. But the distinct values for the column number 10,214 (got it by running df.select("_c1").distinct.count). I can't use: ``` df.write.partitionBy("_c1").save("s3://dfdf/test123") ``` as this creates a folder in the destination with the partition name. We don't want this. We want only files to be dumped.<issue_comment>username_1: I made a silly mistake of not assigning to a new variable; hence, I saw the same number of partitions. Below is the updated code: ``` val df = spark.read.format("csv").load("hdfs:///tmp/PartitionKeyedDataset.csv") val df2 = df.repartition($"_c1") df2.rdd.saveAsTextFile("s3://dfdf/test1234") ``` repartition will only create 200 partitions by default, as the default value for spark.sql.shuffle.partitions is 200. I have set this value to the number of unique values I have for the column on which I want to partition. ``` spark.conf.set("spark.sql.shuffle.partitions", "10214") ``` After this, I got 10214 partitions and the write operation created 10214 files in S3. Upvotes: 2 <issue_comment>username_2: You need to assign the new dataframe to a variable and use that instead. Currently in your code the `repartition` part does not actually do anything. ``` val df = spark.read.format("csv").load("hdfs:///tmp/PartitionKeyedDataset.csv") val df2 = df.repartition($"_c1") df2.rdd.saveAsTextFile("s3://dfdf/test1234") ``` Although it is possible to change the `spark.sql.shuffle.partitions` setting, that is not as flexible. Upvotes: 0
2018/03/14
490
1,610
<issue_start>username_0: I'm trying to bundle a NativeScript App with the snapshot flag like this: ``` tns build android --bundle --env.snapshot ``` The following error appears: ``` ERROR in NativeScriptSnapshot. Snapshot generation failed! Target architecture: x86 # Script run failed in @736:2461 ReferenceError: com is not defined # # Fatal error in ../src/snapshot/mksnapshot.cc, line 175 # Check failed: blob.data. # ``` Anyone have an idea how to fix that?
2018/03/14
405
1,449
<issue_start>username_0: I have been looking for ways to read/write to shared folders in a Windows Machine via my **Xamarin.Forms** App. So far I found two **.Net** libraries that I thought would solve my problem: [SharpCifs](https://github.com/ume05rw/SharpCifs.Std) and [Xamarin.Android.jCIFS](https://github.com/sushihangover/Xamarin.Android.jCIFS); nevertheless, they are a port and a binding of JCIFS, respectively, and as stated in [this info](https://lists.samba.org/archive/jcifs/2013-December/010123.html) JCIFS only supports SMB1, which is being deactivated on many Windows Machines since WCRY (as soon as I disable SMB1 on the remote PC, those libraries stop working.) So, is there any .NET SMBv2+ Client Library available? Or, what would be an alternative to achieve this task (read/write to shared folders in a Windows Machine via my **Xamarin.Forms** App)?<issue_comment>username_1: Visuality Systems sells two SMB2/3 libraries. One (NQE) is a C lib and can be ported to .NET. Another one, jNQ, is pure Java. In both cases, you will need to develop a thin .NET wrapper. Upvotes: 3 [selected_answer]<issue_comment>username_1: NQE is shipped as source, so it is easier to modify. jNQ is shipped as binaries; however, it is easier to use. Also, jNQ may be cheaper, imho. I am not an expert in writing .NET wrappers, but I do not see issues in either approach. Just know that both SMB libraries use threads internally. Upvotes: 0
2018/03/14
866
2,878
<issue_start>username_0: I needed a way to pull 10% of the files in a folder, at random, for sampling after every "run." Luckily, my current files are numbered numerically and sequentially. So my current method is to list file names, parse the numerical portion, pull max and min values, count the number of files and multiply by .1, then use `random.sample` to get a "random [10%] sample." I also write these names to a .txt then use `shutil.copy` to move the actual files. Obviously, this does not work if I have an outlier, i.e. if I have a file `345.txt` among other files from `513.txt - 678.txt`. I was wondering if there was a direct way to simply pull a number of files from a folder, randomly? I have looked it up and cannot find a better method. Thanks.<issue_comment>username_1: This will give you the list of names in the folder, with mypath being the path to the folder. ``` from os import listdir from os.path import isfile, join from random import shuffle onlyfiles = [f for f in listdir(mypath) if isfile(join(mypath, f))] shuffle(onlyfiles) # shuffles in place and returns None small_list = onlyfiles[:len(onlyfiles)//10] ``` This should work Upvotes: 1 <issue_comment>username_2: You can use the following strategy: 1. Use `list = os.listdir(path)` to get all your files in the directory as a list of paths. 2. Next, count your files with the `count = len(list)` function. 3. Using the `count` number you can get a random item number like that: `random_position = random.randrange(0, count)` 4. Repeat step 3 and save the values in a list until you get enough positions (count/10 in your case) 5. After that you can get the required file names like that: `list[random_position]` Use a `for` cycle for iterating. Hope this helps! Upvotes: 0 <issue_comment>username_3: Using `numpy.random.choice(array, N)` you can select `N` items at random from an array. 
``` import numpy as np import os # list all files in dir files = [f for f in os.listdir('.') if os.path.isfile(f)] # select 0.1 of the files randomly random_files = np.random.choice(files, int(len(files)*.1)) ``` Upvotes: 4 <issue_comment>username_4: I was unable to get the other methods to work easily with my code, but I came up with this. ``` output_folder = 'C:/path/to/folder' for x in range(int(len(files) *.1)): to_copy = choice(files) shutil.copy(os.path.join(subdir, to_copy), output_folder) ``` Upvotes: 3 [selected_answer]<issue_comment>username_5: Based on Karl's solution (which did not work for me under Win 10, Python 3.x), I came up with this: ``` import numpy as np import os # List all files in dir files = os.listdir("C:/Users/.../Myfiles") # Select 0.5 of the files randomly random_files = np.random.choice(files, int(len(files)*.5)) # Get the remaining files other_files = [x for x in files if x not in random_files] # Do something with the files for x in random_files: print(x) ``` Upvotes: 0
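One caveat with the NumPy-based answers above: `np.random.choice` samples *with* replacement by default, so the same file name can appear twice in the sample; pass `replace=False`, or use the standard library's `random.sample`, which never repeats an item. A dependency-free sketch (the file names below are hypothetical, mimicking the question's `513.txt - 678.txt` range):

```python
import random

# Hypothetical directory listing; in practice this would come from
# os.listdir / os.path.isfile as in the answers above.
files = [f"{i}.txt" for i in range(513, 679)]

k = max(1, len(files) // 10)       # 10% of the files, at least one
sampled = random.sample(files, k)  # without replacement: no duplicates
```

Each sampled name could then be copied with `shutil.copy` exactly as in the question.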
2018/03/14
433
1,439
<issue_start>username_0: I have a div that I want to style based on a condition. If styleOne is true I want a background colour of red. If styleTwo is true, I want the background colour to be blue. I've got half of it working with the below code. ```html ``` Is it possible to add a condition to say: * if styleOne is true, do this * if styleTwo is true, do this? **Edit** I think I've resolved it. It works. Not sure if it's the best way: ```html ```<issue_comment>username_1: For a single style attribute, you can use the following syntax: ```html ``` I assumed that the background color should not be set if neither `style1` nor `style2` is `true`. --- Since the question title mentions `ngStyle`, here is the equivalent syntax with that directive: ```html ``` Upvotes: 9 [selected_answer]<issue_comment>username_2: You can use an inline if inside your ngStyle: ``` [ngStyle]="styleOne?{'background-color': 'red'} : {'background-color': 'blue'}" ``` A better way, in my opinion, is to store your background color inside a variable and then set the background-color to the variable's value: ``` [style.background-color]="myColorVariable" ``` Upvotes: 6 <issue_comment>username_3: ``` [ngStyle]="{'opacity': is_mail_sent ? '0.5' : '1' }" ``` Upvotes: 5 <issue_comment>username_4: ``` ``` Upvotes: 3 <issue_comment>username_5: ``` [ngStyle]="{ 'top': yourVar === true ? widthColumHalf + 'px': '302px' }" ``` Upvotes: 2
2018/03/14
823
3,091
<issue_start>username_0: I'm building some test operations against an Angular application using Protractor. I'm trying to keep my locators as easy to read and maintain as possible, and was trying to use "element chaining" to do this. Based on everything that I've read here in SO and the Protractor documentation, I think the following locator strategy should work: The xpPanelPayment variable is defined just for readability. ``` let xpPanelPayment = "//div [@class='panel-heading' and text()='Payment']/following-sibling::div [@class='panel-body']"; this.pnlPayment = element(by.xpath(`${xpPanelPayment}`)); this.valTotalPayment = element(by.xpath(`${xpPanelPayment}`)) .element(by.xpath(`//strong [text()='Total Payment:']/../following-sibling::div/strong`)); ``` What I would prefer is: ``` this.valTotalPayment = this.pnlPayment .element(by.xpath(`//strong [text()='Total Payment:']/../following-sibling::div/strong`)); ``` But when I try that, I get an error that seems to indicate that this.pnlPayment is undefined. Perhaps this is a clue? 
Here is the method that makes use of those locators: ``` const Receipt = require('./Receipt.js').Receipt; exports.verifyTotalPayment = (payment) => { it(`Receipt Validation - Verify total payment $${payment}`, () => { console.log(`Receipt.pnlPayment.locator() = '${Receipt.pnlPayment.locator()}'`); console.log(`Receipt.valTotalPayment.locator() = '${Receipt.valTotalPayment.locator()}'`); expect(Receipt.valTotalPayment.getText()).toEqual(`$${payment}`); }); } ``` Here are the contents of the run log: ``` Receipt.pnlPayment.locator() = 'By(xpath, //div [@class='panel-heading' and text()='Payment']/following-sibling::div [@class='panel-body'])' Receipt.valTotalPayment.locator() = 'By(xpath, //strong [text()='Total Payment:']/../following-sibling::div/strong)' [09:31:07] W/element - more than one element found for locator By(xpath, //strong [text()='Total Payment:']/../following-sibling::div/strong) - the first result will be used ``` It appears that the "parent" portion of valTotalPayment is completely ignored. What did I do wrong with my specification for valTotalPayment? If I use the entire xpath string without referencing the parent object, valTotalPayment finds the correct element, but that defeats what I'm trying to do.<issue_comment>username_1: The issue comes from your XPath for `valTotalPayment`. You expect to find `valTotalPayment` among `pnlPayment`'s descendants, but you used `//` instead of `./` for `valTotalPayment`. `//` matches any element node in the whole page; `./` matches any descendant element node of the previous/parent element. Finally, a CSS selector is the first option when writing a locator; XPath is second. And you can mix CSS selectors and XPath in an element chain if necessary: `element(by.css()).element(by.xpath()).element(by.css())....` Upvotes: 3 <issue_comment>username_2: It works for me: ``` var parent = element(by.css('.parent-class')); var child = element(by.css('.child-class')); parent.element(child.locator()).getText(); ``` Upvotes: 0
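The `//` versus `./` scoping distinction discussed above is easy to verify outside Protractor. Python's `xml.etree.ElementTree` only supports relative paths, but it demonstrates the behavior that `./` gives you: a search started from a child node only sees that node's descendants. The markup below is a hypothetical fragment standing in for the payment panel:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring(
    "<div>"
    "  <div id='payment'><strong>Total Payment:</strong></div>"
    "  <strong>elsewhere</strong>"
    "</div>"
)

panel = root.find(".//div[@id='payment']")

# Relative search: only descendants of `panel` are considered,
# like chaining element(...) with an XPath that starts with ./
inside = [e.text for e in panel.findall(".//strong")]   # ["Total Payment:"]

# Searching from the document root finds both <strong> nodes,
# which is what a leading // does in an XPath locator
everywhere = [e.text for e in root.findall(".//strong")]
```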
2018/03/14
849
3,395
<issue_start>username_0: When I try to load a URL in the `WebView` it only shows a blank screen. If I load <https://www.google.com> or <https://www.facebook.com> it is working fine. ``` package com.example.hp.cccapp; import android.support.v7.app.AppCompatActivity; import android.os.Bundle; import android.webkit.WebView; import android.webkit.WebViewClient; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); WebView webb=(WebView)findViewById(R.id.web1); webb.setWebViewClient(new WebViewClient()); //webb.loadUrl("https://www.google.com/"); webb.loadUrl("https://192.168.2.29/ccc/"); } } ``` Can anyone suggest how I can do this so my `WebView` can handle this HTTPS URL?<issue_comment>username_1: Try adding `setJavaScriptEnabled(true)`. You can also change ``` webb.setWebViewClient(new WebViewClient()); ``` to `webView.setWebChromeClient(new WebChromeClient());` ``` public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); WebView webb=(WebView)findViewById(R.id.web1); webb.setWebViewClient(new WebViewClient()); webb.getSettings().setJavaScriptEnabled(true); //webb.loadUrl("https://www.google.com/"); webb.loadUrl("https://192.168.2.29/ccc/"); } } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I'll share the solution that worked for me; it gave me access to the web page: ``` import android.net.http.SslError; import android.os.Bundle; import android.support.v7.app.AppCompatActivity; import android.webkit.GeolocationPermissions; import android.webkit.SslErrorHandler; import android.webkit.WebChromeClient; import android.webkit.WebView; import android.webkit.WebViewClient; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle 
savedInstanceState) { WebView webView; super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); webView = (WebView)findViewById(R.id.web1); webView.getSettings().setJavaScriptEnabled(true); webView.getSettings().setAppCacheEnabled(true); webView.getSettings().setDatabaseEnabled(true); webView.getSettings().setDomStorageEnabled(true); webView.getSettings().setSupportZoom(true); webView.getSettings().setJavaScriptCanOpenWindowsAutomatically(true); webView.getSettings().setBuiltInZoomControls(true); webView.getSettings().setGeolocationEnabled(true); webView.setWebViewClient(new WebViewClient() { @Override public void onReceivedSslError(WebView view, SslErrorHandler handler, SslError error) { handler.proceed(); } }); webView.loadUrl("https://192.168.2.29/ccc/"); webView.setWebChromeClient(new WebChromeClient() { @Override public void onGeolocationPermissionsShowPrompt(String origin, GeolocationPermissions.Callback callback) { callback.invoke(origin,true,false); } }); } ``` } Upvotes: 2
2018/03/14
735
2,873
<issue_start>username_0: I have a problem with a scrolling div. The code is like this. HTML: ```css ul {height: 100px; overflow: scroll;} li {height: 25px;} ``` ```html * * * * * * * * * * * * * * * * ``` All I need is for ul#sub-menu-item to auto-scroll to the element with the class li.current-menu-item when the page loads. Can someone help me find a method to do this?
2018/03/14
749
3,092
<issue_start>username_0: So I have an LDAP server that has one port that directs to LDAP consumers and another port that directs to LDAP providers. However, when I make a write request using PHP's `ldap_add` function, the LDAP provider throws a `code 10 : Referral` error (this is the error that wants me to follow the referral). Why do I have to follow the referral when I am already talking to the master/provider server? From what I read, only a slave/consumer should send back a referral when you try to write to that server (as you are not allowed to write to a consumer).
2018/03/14
648
2,236
<issue_start>username_0: I installed Wireshark on macOS High Sierra and captured some network traffic while making HTTP calls to a local server using CURL. The traffic captured in Wireshark only showed TCP packets. When looking at the data within the TCP packets I could see the HTTP packets, but these were not recognized by Wireshark as the packet protocol. Any way to make it properly parse the HTTP packets? Here's an example capture: [![enter image description here](https://i.stack.imgur.com/uae2m.png)](https://i.stack.imgur.com/uae2m.png) One guess I had was that Wireshark only recognises a packet as HTTP if it's on port 80. If this is so, is there any way to change this setting? P.S. No HTTPS involved here, just plain old HTTP from a client to a REST API.<issue_comment>username_1: Ok, figured out the issue. My server was exposed on port 5000 (which is the default Flask port). Turns out that port 5000 is conventionally used for IPA packets, which is a GSM over IP protocol. Wireshark apparently used the port number to determine the type of packet, and so it misclassified it as an IPA packet. Once I moved my server to another port (e.g. 5001) - the problem was gone. P.S. See <https://osqa-ask.wireshark.org/questions/9240/data-which-has-been-sent-over-tcpip-has-been-recognized-by-wireshark-as-ipa-protocol> for more details. Upvotes: 3 [selected_answer]<issue_comment>username_2: To supplement @MartanRubin's answer, it's also possible to indicate to Wireshark that port 5000 is not GSM over IP. In *Edit → Preferences → Protocols → GSM over IP* remove port 5000 from the "TCP port(s)" field: [![wireshark preferences](https://i.stack.imgur.com/3ES15.png)](https://i.stack.imgur.com/3ES15.png) To persist the preference you also need to add 5000 to the HTTP protocol's "TCP port(s)" field. Then they survive restart (tested in a custom profile). Note, however, that when you open the GSM over IP protocol's preferences, 5000 is still there, but doesn't have an effect. 
But when I save it (click OK button), my `/home/user/.config/wireshark/profiles/CustomProfile/decode_as_entries` gets messed up again, and I need to repeat the process on both protocol's "TCP port(s)" field. A counter-intuitive UI, I would say. Upvotes: 0
2018/03/14
653
2,166
<issue_start>username_0: Here's my `ProspectAPIsService` ``` import {Injectable} from "@angular/core"; import {HttpClient, HttpParams} from "@angular/common/http"; @Injectable() export class ProspectAPIsService { constructor(private http: HttpClient) { } public getOneProspect(nome) { return this.http.get('assets/mocks/prospect.json').toPromise(); } } ``` I'm fetching data by reading a dummy JSON file. Here's the data in that file: ``` [ {"nome":"Dam", "cog":"prova"}, {"nome":"luc", "cog":"prova2"} ] ``` I have to use Promises over Observables. I really don't know how to use the parameter 'nome' that I'm getting in the `getOneProspect` method for filtering the result data.
2018/03/14
675
2,162
<issue_start>username_0: I need to figure out the best way to put together an array where multiple keys have the same value. For example, I need to return ***LARGE*** if any of the following values are provided: ``` "lrge", "lrg", "lg" ``` I think it should be in a form of multidimensional array. Something like: ``` $myArr= array ( "color" = array ( "RED" => array("red", "rd", "r"), "BLUE" => array("blue", "blu", "bl") ), "size" = array ( "LARGE" => array("lrge", "lrg", "lg"), "SMALL" => array("smal", "sml", "sm") ) ); ``` Having a blank moment on how to use it: ``` $cat = "size"; $val = "lrg"; echo ... // need to return LARGE ```
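The lookup the question describes, mapping a category plus a synonym back to its canonical value, can be done by inverting the nested array once. Here is a minimal sketch of that idea in Python rather than PHP (the data mirrors the question; the helper name `canonical` is illustrative):

```python
# Nested mapping: category -> canonical value -> list of synonyms.
CATEGORIES = {
    "size": {"LARGE": ["lrge", "lrg", "lg"], "SMALL": ["smal", "sml", "sm"]},
    "color": {"RED": ["red", "rd", "r"], "BLUE": ["blue", "blu", "bl"]},
}

# Invert once: category -> synonym -> canonical value, so lookups are O(1).
LOOKUP = {
    cat: {syn: canon for canon, syns in values.items() for syn in syns}
    for cat, values in CATEGORIES.items()
}

def canonical(cat, val):
    # Returns e.g. "LARGE" for ("size", "lrg"); None if unknown.
    return LOOKUP.get(cat, {}).get(val)

print(canonical("size", "lrg"))  # LARGE
```

The same inversion works in PHP with a pair of nested `foreach` loops building a flat `$lookup[$cat][$syn]` array.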
2018/03/14
775
2,141
<issue_start>username_0: Trying to determine a sensible way to clean dates (character), then put those dates in a proper date format via `input` function, but maintain sensible variable names (and possibly even preserve the original variable names) once the char-to-number process is executed. The dates are being cleaned with an array (replacing `'..'` with `'01'`, or `'....'` with `0101`) since there are about 75 variables that have dates as strings. Ex. - ``` data sample; input d1 $ d2 $ d3 $ d4 $ d5 $; cards; 200103.. 20070905 20060222 2007.... 199801.. ; run; data clean; set sample; array dt_cln(5) d1-d5; array fl_dt (5) f1-f5; *clean out '..'/'....', replace with '01'/'0101'; do i=1 to 5; if substr(dt_cln(i),5,4) = '....' then do; dt_cln(i) = substr(dt_cln(i),1,4) || '0101'; end; else if substr(dt_cln(i),7,2) = '..' then do; dt_cln(i) = substr(dt_cln(i),1,6) || '01'; end; end; *change to number; do i=1 to 5; fl_dt(i)=input(dt_cln(i),yymmdd8.); end; format f: date9.; drop i d:; run; ``` What would be the best way to approach this?<issue_comment>username_1: You cannot preserve the original names and convert from character to numeric directly - however, with a bit of macro code you could drop all the old character variables and rename the numeric versions you've created. E.g. ``` %macro rename_loop(); %local i; %do i = 1 %to 5; f&i = d&i %end; %mend; ``` Then in your data step add a rename statement at the end, after your drop statement: ``` rename %rename_loop; ``` Otherwise, your existing approach is already pretty good. You could perhaps simplify the cleaning process a bit, e.g. 
remove your first do-loop and do the following within the second one: ``` fl_dt(i)=input(tranwrd(dt_cln(i),'..','01'),yymmdd8.); ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: ``` data want; set sample; array var1 newd1-newd5; array var2 d:; do over var2; var1=input(ifc(index(var2,'.')^=0,put(prxchange('s/((\.){1,})/0101/',-1,var2),8.),var2),yymmdd8.); end; format newd1-newd5 yymmddn8.; drop d:; run; ``` Upvotes: 0
2018/03/14
931
3,048
<issue_start>username_0: I am trying to figure out how to remove capital letters from a string using Python but without the `for` loop. I’m trying to do this while traversing a list using a `while` loop. So how can I remove the capital letters in a provided string?<issue_comment>username_1: You have a few options here: 1) If you simply want to convert all upper-case letters to lower-case, then `.lower()` is the simplest approach. ``` s = 'ThiS iS A PyTHon StrinG witH SomE CAPitaL LettErs' ``` Gives: ``` this is a python string with some capital letters ``` 2) If you want to completely remove them, then `re.sub()` is a simple approach. ``` import re print(re.sub(r'[A-Z]', '', s)) ``` Gives: ``` hi i yon trin wit om ita ettrs ``` 3) For a list of strings, you could use a list comprehension: ``` #Option1 [i.lower() for i in s]) #Option2 import re [re.sub(r'[A-Z]', '', i) for i in s]) #Option3 (as mentioned by @JohnColeman) [''.join([j for j in i if not j.isupper()]) for i in s] ``` Upvotes: 2 <issue_comment>username_2: Strings are immutable, so you can’t literally remove characters from them, but you can create a new string that skips over those characters. The simplest way is: ``` s = ''.join(ch for ch in s if not ch.isupper()) ``` If you want to do this without `for` for some reason (like an assignment requirement), we can write this out as an explicit loop, and then convert it to a `while`. So: ``` result = [] for ch in s: if not ch.isupper(): result.append(ch) s = ''.join(result) ``` To change the loop, we have to manually setup and next the iterator, but it may be easier to understand with just a plain int as an index instead of an iterator: ``` result = [] i = 0 while i < len(s): ch = s[i] if not ch.isupper(): result.append(ch) i += 1 s = ''.join(result) ``` Of course this is more verbose, slightly less efficient, and easier to get wrong, but otherwise it’s basically equivalent, and it meets your strange requirements. 
In real life, there might be better ways to do this—e.g., `str.translate` with a map from all caps to None should be pretty fast if you only care about ASCII caps—but I assume your teacher doesn’t want you thinking in those directions, they want you thinking about the loops explicitly. (Of course there is a loop in `str.translate`, or `re.sub`, etc., that loop is just hidden under the covers where you don’t see it.) If you need to do this to multiple strings in a list, you’d wrap it up in a function, and apply it to each string in the list, using a comprehension—or you can write it out as a loop statement, and convert it to a `while` loop, if you prefer, in exactly the same way. For example: ``` def remove_caps(s): result = [] i = 0 while i < len(s): ch = s[i] if not ch.isupper(): result.append(ch) i += 1 return ''.join(result) strings = ['aBC', 'Abc', 'abc', ''] new_strings = [] i = 0 while i < len(strings): new_strings.append(remove_caps(strings[i])) i += 1 ``` Upvotes: 3
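The `str.translate` alternative mentioned above can be sketched like this, assuming only ASCII capitals matter: a translation table mapping each uppercase letter to `None` deletes those characters in one pass.

```python
import string

# Translation table: every ASCII uppercase letter maps to None (= delete).
DROP_CAPS = str.maketrans('', '', string.ascii_uppercase)

s = 'HeLLo WoRld'
print(s.translate(DROP_CAPS))  # eo old
```

The loop still happens, but inside `str.translate`, hidden under the covers as the answer says.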
2018/03/14
1,363
4,626
<issue_start>username_0: I created a MySQL database with phpMyAdmin in my local server. In this database I store the names and the favourite NBA teams of my friends. This is obviously a many-to-many relationship. For this reason, I created three tables: one with the id & name of my friends, one with the id & name of the teams and one with friends\_id and teams\_id (which is the relations table). This is more clearly shown by the following MySQL script: ``` CREATE TABLE `friends` ( `id` int(4) NOT NULL AUTO_INCREMENT, `name` varchar(30) NOT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `teams` ( `id` int(4) NOT NULL AUTO_INCREMENT, `name` varchar(30) NOT NULL, PRIMARY KEY (`id`) ); CREATE TABLE `relations` ( `friends_id` int(4) NOT NULL, `teams_id` int(4) NOT NULL ); ``` I want to produce a JSON output of these data, and for this reason I run the following PHP script: ``` php $dbServername = 'localhost'; $dbUsername = 'root'; $dbPassword = ''; $dbName = 'Friends'; $conn = mysqli_connect($dbServername, $dbUsername, $dbPassword, $dbName); header('Content-Type: application/json'); $sql = 'SELECT * FROM friends;'; $result = mysqli_query($conn, $sql); $resultCheck = mysqli_num_rows($result); $arr = []; if ($resultCheck > 0) { while ($row = mysqli_fetch_assoc($result)) { $arr[] = $row; } } echo json_encode($arr, JSON_PRETTY_PRINT); $sql = 'SELECT * FROM teams;'; $result = mysqli_query($conn, $sql); $resultCheck = mysqli_num_rows($result); $arr = []; if ($resultCheck > 0) { while ($row = mysqli_fetch_assoc($result)) { $arr[] = $row; } } echo json_encode($arr, JSON_PRETTY_PRINT); $sql = 'SELECT * FROM relations;'; $result = mysqli_query($conn, $sql); $resultCheck = mysqli_num_rows($result); $arr = []; if ($resultCheck > 0) { while ($row = mysqli_fetch_assoc($result)) { $arr[] = $row; } } echo json_encode($arr, JSON_PRETTY_PRINT); ?> ``` However this does not give a valid JSON output overall but three distinct JSON arrays.
For example, something like this: ``` [..., { "id": "3", "name": "<NAME>", }, ...] [..., { "id": "4", "name": "<NAME>", }, ...] [..., { "friends_id": "3", "teams_id": "4" }, ...] ``` How can I print all of my tables while producing a valid JSON output?<issue_comment>username_1: You need not write multiple SQL queries to retrieve this. You can do the following ``` php $dbServername = 'localhost'; $dbUsername = 'root'; $dbPassword = ''; $dbName = 'Friends'; $conn = mysqli_connect($dbServername, $dbUsername, $dbPassword, $dbName); header('Content-Type: application/json'); $sql = 'SELECT * FROM friends INNER JOIN relations ON friends.id=relations.friends_id INNER JOIN teams ON relations.teams_id=teams.id'; $result = mysqli_query($conn, $sql); $resultCheck = mysqli_num_rows($result); $arr = []; if ($resultCheck > 0) { while ($row = mysqli_fetch_assoc($result)) { $arr[] = $row; } } echo json_encode($arr, JSON_PRETTY_PRINT); ?> ``` Upvotes: 0 <issue_comment>username_2: Because this question relates to [How to join arrays with MySQL from 3 tables of many-to-many relationship](https://stackoverflow.com/questions/49279952/how-to-join-arrays-with-mysql-from-3-tables-of-many-to-many-relationship) I'm posting a MySQL-only answer.
**Query** ``` SELECT CONCAT( "[" , GROUP_CONCAT(json_records.json) , "]" ) AS json FROM ( SELECT CONCAT( "{" , '"id"' , ":" , '"' , friends.id , '"' , "," , '"name"' , ":" , '"' , friends.name , '"' , "," , '"team"' , ":" , "[" , GROUP_CONCAT('"', teams.name, '"') , "]" , "}" ) AS json FROM friends INNER JOIN relations ON friends.id = relations.friends_id INNER JOIN teams ON relations.teams_id = teams.id WHERE friends.id IN(SELECT id FROM friends) #select the friends you need GROUP BY friends.id ) AS json_records ``` **Result** ``` | json | |--------------------------------------------------------------------------------------------------------------------------------------------------| | [{"id":"1","name":"<NAME>","team":["Cleveland Cavaliers"]},{"id":"2","name":"<NAME>","team":["Boston Celtics","Cleveland Cavaliers"]}] | ``` **demo** <http://www.sqlfiddle.com/#!9/4cd244/61> Upvotes: 1
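For comparison, the same many-to-many nesting can also be assembled in application code once the three tables are fetched. A minimal Python sketch with illustrative rows shaped like the question's tables (the friend and team names are made up):

```python
import json

# Illustrative rows, as the three SELECTs might return them.
friends = [{"id": "1", "name": "Ann"}, {"id": "2", "name": "Bob"}]
teams = [{"id": "4", "name": "Cavaliers"}, {"id": "5", "name": "Celtics"}]
relations = [{"friends_id": "1", "teams_id": "4"},
             {"friends_id": "2", "teams_id": "4"},
             {"friends_id": "2", "teams_id": "5"}]

# Index team names by id, then nest each friend's teams under the friend.
team_name = {t["id"]: t["name"] for t in teams}
result = [
    {"id": f["id"], "name": f["name"],
     "teams": [team_name[r["teams_id"]] for r in relations
               if r["friends_id"] == f["id"]]}
    for f in friends
]
print(json.dumps(result))
```

The same shape can be built in PHP by looping over the joined rows and grouping on the friend id before the single `json_encode` call.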
2018/03/14
570
2,319
<issue_start>username_0: Let's say I have five view controllers and I want to go to the specific view controller RootViewController ==> FirstViewController ==> SecondViewController ==> ThirdViewController ==> FourthViewController(Modally presented having a button) and all other controllers I connected through push method. My task is I want to go to the firstViewController from FourthViewController when button is clicked. Any help? ``` for controller in self.navigationController!.viewControllers as Array { if controller.isKind(of: HomeViewController.self) { self.navigationController!.popToViewController(controller, animated: true) break } } ``` this is the code I have done.<issue_comment>username_1: Add delegate in FourthViewController: ``` self.dismiss(animated: true) { self.delegate.popToFirstVC() } ``` Add `func popToFirstVC()` in ThirdViewController. Use [popToViewController](https://developer.apple.com/documentation/uikit/uinavigationcontroller/1621871-poptoviewcontroller): ``` func popToFirstVC() { if let firstViewController = self.navigationController?.viewControllers[1] { self.navigationController?.popToViewController(firstViewController, animated: true) } } ``` or better ``` guard let viewControllers = self.navigationController?.viewControllers else { return } for firstViewController in viewControllers { if firstViewController is FirstViewController { self.navigationController?.popToViewController(firstViewController, animated: true) break } } ``` There is still such an option. Add an Observer for this function and call where necessary. But I would do it only in the most extreme cases. ``` func popToThisVC() { if let topController = UIApplication.topViewController() { topController.navigationController?.popToViewController(self, animated: true) } } ``` Upvotes: 4 [selected_answer]<issue_comment>username_2: You mentioned connected, hence I reckon you used storyboards and segues, in that case why not create an unwind segue? 
It's a bit hard to show a snippet of an unwind segue here via text only, but I think [Unwind segue blog from medium](https://medium.com/@mimicatcodes/create-unwind-segues-in-swift-3-8793f7d23c6f) has your answer. Upvotes: 0
2018/03/14
305
823
<issue_start>username_0: How to get the difference between two arrays in Javascript, like `Array([1, 2, 3, 7], [3, 2, 1, 4, 5]); //[1,2,3]`?<issue_comment>username_1: Pretty easy if you're supporting "newer" browsers. ``` Array.prototype.diff = function (x) { return this.filter(function (y) { return x.indexOf(y) === -1; }); }; ``` then you can call it like `[1, 2, 3, 4, 5, 6].diff([2, 4, 6]);` Upvotes: -1 <issue_comment>username_2: There is a little function that compares 2 arrays and gets the difference. The first parameter is your array and the second the one you want to compare to ```js var array1 = [1, 2, 3, 7] var array2 = [3, 2, 1, 4, 5] Diff = function(a, d){ return a.filter(_=>d.indexOf(_)<0) } console.log(Diff(array1, array2)) console.log(Diff(array2, array1)) ``` Upvotes: 1
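For reference, the same operation (elements of the first array that are absent from the second) sketched in Python; building a set first makes each membership test O(1) instead of the O(n) `indexOf` scan used above:

```python
def diff(a, b):
    # Elements of a not present in b, keeping a's original order.
    exclude = set(b)
    return [x for x in a if x not in exclude]

print(diff([1, 2, 3, 7], [3, 2, 1, 4, 5]))  # [7]
print(diff([3, 2, 1, 4, 5], [1, 2, 3, 7]))  # [4, 5]
```

Like the JS versions, this is a one-sided difference; swap the arguments for the other side.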
2018/03/14
732
2,489
<issue_start>username_0: **The Problem:** When I access a (32-bit) `DLL` via URL like ***<http://localhost/somepath/some.dll?action>*** IIS always thinks I want to download the file (with file size 0 byte) instead of executing the dll. **What I tried so far:** * added an entry for this specific DLL in ISAPI- and CGI-Restrictions * enabled the "ISAPI-dll" Handler for \*.dll with feature permissions read, script and execute. * IIS User / AppPool Identity have full access rights to the physical location of the dll * App-Pool is running in classic mode and 32 bit applications are enabled * I deleted the MIME-Type Entry for \*.dll Still, any browser prompts a download window. I'm running out of ideas now. I'm currently using IIS 8.5 on Windows Server 2012 R2. The same application is running without troubles in IIS 5 on Windows 2000 SP4. Any help or idea is appreciated!<issue_comment>username_1: As Windows 2000 was `32-bit` and Windows Server 2012 R2 is `64-bit` (and as you said your DLL is `32-bit`), I expect that your issue is possibly linked to [this issue.](https://blogs.msdn.microsoft.com/irfanahm/2008/12/15/how-to-use-a-32-bit-dll-in-asp-net-page-which-is-hosted-on-64-bit-iis/) --- Has your DLL been registered on the server side by using the command `regsvr32` (run that command from the windows directory)? * You may also try to set the MIME type Entry like this: `Extension` > `Type` `.dll` > `Assembly` (Remove the stars and keep the dot on the extension) * You may try to run the IIS host process in 32 bit mode * You may try to create a wrapper of your dll's and host this wrapper dll's in COM+ Finally, according to [this similar issue](https://forums.iis.net/t/1166609.aspx?IIS%207%205%20DLL%20downloads%20instead%20of%20running%20What%20am%20I%20missing%20), you may refer to [this](https://learn.microsoft.com/en-us/iis/configuration/system.webServer/handlers/add) and double-check that the Handler is properly set up.
Upvotes: 1 <issue_comment>username_2: I know this is an old question, but what I discovered was that the web browser was actually caching the the file download. I proved this by completely stopping IIS and accessing the URL and still it prompted me to download the file. I then restarted IIS, confirmed the same issue existed still, tried from a new private window and the DLL ran instead of prompting me to download. In short, try private mode or a different web browser after you've configured everything. Upvotes: 2
2018/03/14
1,070
3,477
<issue_start>username_0: I know such questions are previously answered and I applied all the possible solutions. I defined all the variables before the foreach loop but still, it's not working. Here is my code: ``` $settings_table = $wpdb->prefix."wpsp_settings"; $sel_setting = $wpdb->get_results("select * from $settings_table"); $school_name = ""; $school_logo = ""; $school_add = ""; $school_city = ""; $school_state = ""; $school_country = ""; $school_number = ""; $school_email = ""; $school_site = ""; foreach( $sel_setting as $setting ) : ($setting->id == 1) ? $school_name = $setting->option_value : $school_name = ""; ($setting->id == 2) ? $school_logo = $setting->option_value : $school_logo = ""; ($setting->id == 6) ? $school_add = $setting->option_value : $school_add = ""; ($setting->id == 7) ? $school_city = $setting->option_value : $school_city = ""; ($setting->id == 8) ? $school_state = $setting->option_value : $school_state = ""; ($setting->id == 9) ? $school_country = $setting->option_value : $school_country = ""; ($setting->id == 10) ? $school_number = $setting->option_value : $school_number = ""; ($setting->id == 12) ? $school_email = $setting->option_value : $school_email = ""; ($setting->id == 13) ? $school_site = $setting->option_value : $school_site = ""; endforeach; ?> ```<issue_comment>username_1: You are resetting the values each time round the loop, as for each item you're saying... ``` ($setting->id == 1) ? $school_name = $setting->option_value : $school_name = ""; ``` As this loop has different values for $setting->id, this will reset all of the values which don't match. You would be better off with a `switch... case...` structure... ``` foreach( $sel_setting as $setting ) { switch ($setting->id) { case (1): $school_name = $setting->option_value; break; case (2): $school_logo = $setting->option_value; break; // Same for all the others.
} } ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: It doesn't make sense: ``` foreach( $sel_setting as $setting ) : ($setting->id == 1) ? $school_name = $setting->option_value : $school_name = ""; ($setting->id == 2) ? $school_logo = $setting->option_value : $school_logo = ""; ($setting->id == 6) ? $school_add = $setting->option_value : $school_add = ""; ($setting->id == 7) ? $school_city = $setting->option_value : $school_city = ""; ($setting->id == 8) ? $school_state = $setting->option_value : $school_state = ""; ($setting->id == 9) ? $school_country = $setting->option_value : $school_country = ""; ($setting->id == 10) ? $school_number = $setting->option_value : $school_number = ""; ($setting->id == 12) ? $school_email = $setting->option_value : $school_email = ""; ($setting->id == 13) ? $school_site = $setting->option_value : $school_site = ""; endforeach; ``` For example, if $setting->id == 5, you set all variables to a blank string, OR if $setting->id == 1 you set $school\_name to option\_value BUT at the same time set all other variables to a blank string. A simple solution is to use a switch / case statement like below: ``` foreach( $sel_setting as $setting ) { switch ($setting->id) { case 1: $school_name = $setting->option_value; break; case 2: $school_logo = $setting->option_value; break; ... case 13: $school_site = $setting->option_value; break; } } ``` Upvotes: 1
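The switch/case fix above also has a table-driven equivalent: map each setting id to the field it fills and look the id up. A minimal Python sketch of the same idea (field names follow the question; the `collect` helper is illustrative):

```python
# Map each settings-row id to the field it populates (ids from the question).
FIELD_BY_ID = {1: "school_name", 2: "school_logo", 6: "school_add",
               7: "school_city", 8: "school_state", 9: "school_country",
               10: "school_number", 12: "school_email", 13: "school_site"}

def collect(settings_rows):
    # settings_rows: iterable of (id, option_value) pairs.
    out = {}
    for row_id, value in settings_rows:
        field = FIELD_BY_ID.get(row_id)
        if field is not None:   # unknown ids are simply ignored
            out[field] = value
    return out

print(collect([(1, "My School"), (13, "example.org")]))
```

A row with an unmapped id no longer clobbers the other fields, which is exactly the bug in the ternary chain.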
2018/03/14
413
1,331
<issue_start>username_0: ...because the row is only evaluated once and the next row is called for evaluation. But the next row is now the previous row. How do I account for this? ``` For i = 5 To Range("A" & "65536").End(xlUp).Row Step 1 If Application.WorksheetFunction.CountIf(Range("A" & i), "#N/A") = 1 Then Range("A" & i).EntireRow.Delete End If Next i ```<issue_comment>username_1: Loop backwards (and use Rows.Count rather than hard-coding 65536) as new versions of Excel have a capacity of more than a million rows. ``` For i = Range("A" & Rows.Count).End(xlUp).Row To 5 Step -1 If Application.WorksheetFunction.CountIf(Range("A" & i), "#N/A") = 1 Then Range("A" & i).EntireRow.Delete End If Next i ``` Upvotes: 1 <issue_comment>username_2: You can delete your rows all at once, using `Union`. Like this: ``` Sub test() Dim i As Long Dim deleteRange As Range For i = 5 To Range("A" & "65536").End(xlUp).Row Step 1 If Application.WorksheetFunction.CountIf(Range("A" & i), "#N/A") = 1 Then If deleteRange Is Nothing Then Set deleteRange = Range("A" & i).EntireRow Else: Set deleteRange = Union(deleteRange, Range("A" & i).EntireRow) End If End If Next i deleteRange.Delete End Sub ``` Upvotes: 3 [selected_answer]
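The reason the backward loop (or the deferred `Union` delete) works generalizes beyond VBA: deleting while walking forward shifts the remaining items down, so the index skips the element that slid into the freed slot. A small Python illustration of both directions:

```python
items = ['keep', 'drop', 'drop', 'keep']

# Forward deletion: after del, the next element slides into slot i
# and the i += 1 steps right past it.
forward = items.copy()
i = 0
while i < len(forward):
    if forward[i] == 'drop':
        del forward[i]
    i += 1

# Iterating from the end avoids the problem: deletions only move
# elements we have already examined.
backward = items.copy()
for i in range(len(backward) - 1, -1, -1):
    if backward[i] == 'drop':
        del backward[i]

print(forward)   # ['keep', 'drop', 'keep'] -- one 'drop' survived
print(backward)  # ['keep', 'keep']
```

Collecting everything first and deleting in one pass, as the `Union` answer does, is the third option and is usually fastest in Excel because each row deletion is expensive.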
2018/03/14
440
1,561
<issue_start>username_0: I have developed a school management system, which is connected to a database. Now, I want to take backup of tables. My idea is to generate an SQL file of each table that will later be used for backup. I achieved this goal manually in Oracle SQL Developer (attached a screen shot), first exporting the SQL file and then importing those files. Now I want to do this programmatically using C#. I have searched a lot on Google, but found nothing useful. [![screenshot of oracle sql developer](https://i.stack.imgur.com/d7A3g.png)](https://i.stack.imgur.com/d7A3g.png)
2018/03/14
489
1,327
<issue_start>username_0: ``` var str1 = "Sarah"; var str2 = "Tom"; var strTable = " | "+ str1 +" | "+ str2 +" | Age | | --- | --- | --- | | Jill | Smith | 50 | "; $scope.rTable= strTable; ``` I am trying to pass HTML code in `$scope.rTable` but instead of rendering the table it shows the HTML code as it is in the output. i.e. ``` | Sarah | Tom | Age | | --- | --- | --- | | Jill | Smith | 50 | ``` I want it like: ![enter image description here](https://i.stack.imgur.com/KBBsu.png)<issue_comment>username_1: It's an improper way to code. The code should be like **In Controller** ``` $scope.str1 = "Sarah"; $scope.str2 = "Tom"; ``` **In HTML** *Considering your controller name as DemoController* ``` | {{str1}} | {{str2}} | Age | | --- | --- | --- | ``` And if your data is huge it's recommended to use an Array of Objects with ng-repeat. You can read it here -> <https://docs.angularjs.org/api/ng/directive/ngRepeat> Upvotes: 2 <issue_comment>username_2: **Use ng-bind-html and $sce.** Controller ``` app.controller('MainCtrl', function($scope, $sce) { var str1 = "Sarah"; var str2 = "Tom"; var strTable = " | " + str1 + " | " + str2 + " | Age | | --- | --- | --- | | Jill | Smith | 50 | "; $scope.rTable = $sce.trustAsHtml(strTable); }); ``` HTML ``` ``` Upvotes: 2 [selected_answer]
2018/03/14
1,061
3,143
<issue_start>username_0: I have loaded my data into a pandas dataframe and one of the columns in my dataframe has values like the following. I need to count each fruit's occurrences and pass the values to a dataProvider for plotting a graph. ``` ************************ Data in the Dataframe ************************ orange apple grapes mango orange orange orange mango apple ``` For example, I wanted to pass the values into the dataProvider in the below format. ``` "dataProvider": [{ "flavor": "orange", "count": 4 }, { "flavor": "apple", "count": 2 }, { "flavor": "grapes", "count": 1 }, { "flavor": "mango", "count": 2 }], ``` Basically what I wanted to get is the following format from the above data. ``` [{ "flavor": "orange", "count": 4 }, { "flavor": "apple", "count": 2 }, { "flavor": "grapes", "count": 1 }, { "flavor": "mango", "count": 2 }] ```<issue_comment>username_1: I think you need [`groupby`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) with [`size`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html) or [`Series.value_counts`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html) for the count, then convert the index to a column by [`reset_index`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html) and last convert to a `list of dict`s by [`DataFrame.to_dict`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_dict.html): ``` print (df) flavor 0 orange 1 apple 2 grapes 3 mango 4 orange 5 orange 6 orange 7 mango 8 apple d = df.groupby('flavor', sort=False).size().reset_index(name='count').to_dict('r') print (d) [{'count': 4, 'flavor': 'orange'}, {'count': 2, 'flavor': 'apple'}, {'count': 1, 'flavor': 'grapes'}, {'count': 2, 'flavor': 'mango'}] ``` --- ``` d = (df['flavor'].value_counts(sort=False) .rename_axis('flavor') .reset_index(name='count') .to_dict('r')) print (d) [{'count': 1, 'flavor': 'grapes'},
{'count': 2, 'flavor': 'apple'}, {'count': 2, 'flavor': 'mango'}, {'count': 4, 'flavor': 'orange'}] ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Assuming your data frame, `df`, looks like: ``` flavor 0 orange 1 apple 2 grapes 3 mango 4 orange 5 orange 6 orange 7 mango 8 apple ``` You could use `pd.factorize` in a comprehension: ``` f, u = pd.factorize(df.flavor) [dict(count=c, flavor=f) for c, f in zip(np.bincount(f), u)] [{'count': 4, 'flavor': 'orange'}, {'count': 2, 'flavor': 'apple'}, {'count': 1, 'flavor': 'grapes'}, {'count': 2, 'flavor': 'mango'}] ``` --- Alternatively, you could have used `pd.Series.value_counts` to perform a similar task as `factorize` and `bincount` ``` s = df.flavor.value_counts() [dict(count=c, flavor=f) for c, f in zip(s.values, s.index)] [{'count': 4, 'flavor': 'orange'}, {'count': 2, 'flavor': 'apple'}, {'count': 1, 'flavor': 'grapes'}, {'count': 2, 'flavor': 'mango'}] ``` Upvotes: 2
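Outside pandas, the same flavor/count list can be produced with the standard library; `collections.Counter` keeps first-seen order on modern Python (dicts preserve insertion order), which matches the `groupby(sort=False)` result above:

```python
from collections import Counter

flavors = ['orange', 'apple', 'grapes', 'mango', 'orange',
           'orange', 'orange', 'mango', 'apple']

counts = Counter(flavors)
data_provider = [{'flavor': f, 'count': c} for f, c in counts.items()]
print(data_provider)
```

This is a reasonable fallback when the column is small enough to pull out as a plain list (`df['flavor'].tolist()`).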
2018/03/14
367
1,296
<issue_start>username_0: This is kind of a beginners question. What I'm basically trying to do is loop different words in an HTML page's header. For instance, I would want a header that says "Paint your car the color of \_\_\_\_\_" where the empty space loops through the different words of "red", "blue", "green", "purple" etc... I've been looking everywhere, but I can't seem to find anything. If someone can point me in the right direction of a link or something, that'd be much appreciated! Cheers<issue_comment>username_1: Use a span: ``` Paint your car the color of ``` Then change the innerHTML of custom-text as needed. <https://www.w3schools.com/jsref/met_document_getelementbyid.asp> <https://www.w3schools.com/jsref/prop_html_innerhtml.asp> Upvotes: 0 <issue_comment>username_2: Here is an example that should help you out. ```js const colors = ['red', 'blue', 'purple']; const duration = 1000; let index = colors.length - 1; const element = document.getElementById('page-header-color'); function updateElementText() { index = index < colors.length - 1 ? index + 1 : 0; element.innerText = colors[index]; } updateElementText(); setInterval(updateElementText, duration); ``` ```html Paint your car the color of =========================== ``` Upvotes: 1
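The wrap-around index in the snippet above is the core of the trick: step through the list and reset to zero past the end. For reference, Python expresses the same rotation directly with `itertools.cycle` (a sketch of the index logic only, not tied to any DOM):

```python
from itertools import cycle, islice

# cycle() yields red, blue, purple, red, blue, ... forever;
# islice() takes the first five "ticks" worth of values.
colors = cycle(['red', 'blue', 'purple'])
first_five = list(islice(colors, 5))
print(first_five)  # ['red', 'blue', 'purple', 'red', 'blue']
```

In the browser, the `index = index < length - 1 ? index + 1 : 0` line plays the role of `cycle`, and `setInterval` plays the role of pulling the next value.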
2018/03/14
550
1,955
<issue_start>username_0: I create a class like this ``` public class Something{ private int foo; private int bar; public Something(int f){ setFoo(f); } public int getFoo(){ return foo; } public void setFoo(int f){ this.foo = f; } public int getBar(){ return bar; } public void setBar(int b){ this.bar = b; } } ``` How can I create a new instance of this class with something like this > > Something smt = new Something(15) **.setBar(10)**; > > > When I try to do it it marks an error saying that it's a **void** when smt requires a **Something object**. I don't really know what this is called in English, but I hope my question is clear<issue_comment>username_1: This is because `setBar(..)` is not returning anything. You should do something like ``` Something smt = new Something(15); smt.setBar(10); ``` Upvotes: 2 <issue_comment>username_2: You are talking about the fluent builder "pattern". Simply have your `void` setters return `Something` and add `return this;` as the last statement in the method body. E.g.: ``` public Something setBar(int b){ this.bar = b; return this; } ``` You can then chain method invocations while "building" your `Something`, e.g.: `Something mySomething = new Something(42).setBar(42).set...` Upvotes: 4 [selected_answer]<issue_comment>username_3: Define the class this way and it will work: ``` public class Something{ private int foo; private int bar; public Something(int f){ setFoo(f); } public int getFoo(){ return foo; } public Something setFoo(int f){ this.foo = f; return this; } public int getBar(){ return bar; } public Something setBar(int b){ this.bar = b; return this; } } ``` This way anytime you use a setter you return the instance. You can even chain the setters. Upvotes: 1
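The same fluent idea ports to other languages; a Python sketch mirroring the accepted answer, where each setter returns `self` so calls chain:

```python
class Something:
    def __init__(self, foo):
        self.foo = foo
        self.bar = 0

    def set_foo(self, f):
        self.foo = f
        return self  # returning self is what enables chaining

    def set_bar(self, b):
        self.bar = b
        return self

# The constructor call and the setter chain read as one expression.
smt = Something(15).set_bar(10)
print(smt.foo, smt.bar)  # 15 10
```

As in the Java version, the only change from a plain setter is the `return self` line.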
2018/03/14
1,616
4,720
<issue_start>username_0: There are many examples online of how to plot a couple of lines in d3 if you add the svg object only once, such as ``` svg.selectAll("line") .data(dataset) .enter().append("line") .style("stroke", "black") // colour the line .attr("x1", function(d) { console.log(d); return xScale(d.x1); }) .attr("y1", function(d) { return yScale(d.y1); }) .attr("x2", function(d) { return xScale(d.x2); }) .attr("y2", function(d) { return yScale(d.y2); }); ``` This plot creates one line. I want to create many different lines in an array, something like ``` var svg = d3.select("body") .append("svg") .attr("width", w) .attr("height", h); for (a_ind=1; a_ind<3; a_ind++){ dataset_a=dataset.filter(function(d) { return (d.a==a_ind)}) svg.selectAll("line") .data(dataset_a) - //!!! using new dataset in each cycle .enter().append("line") .style("stroke", "black") // colour the line .attr("x1", function(d) { console.log(d); return xScale(d.x1); }) .attr("y1", function(d) { return yScale(d.y1); }) .attr("x2", function(d) { return xScale(d.x2); }) .attr("y2", function(d) { return yScale(d.y2); }); } ``` I was told it's impossible. Or maybe there is a way? And also, how do I access a line from dataset\_a if I want to delete it with a click of the mouse?<issue_comment>username_1: I would do something like this. Make each data set (1 data set per line), [an array inside the final data array](https://groups.google.com/forum/#!topic/d3-js/8XLzUYLoFnY) `.enter().append()` will then work properly. To remove the line on click, I added an event handler that will select the line just clicked and remove it.
``` var data = [[dataset_a], [dataset_b], [dataset_c], [dataset_d], [dataset_e]]; var xValue = function(d){return d.x;} var yValue = function(d){return d.y;} var lineFunction = d3.line() .x(function(d) { return xScale(xValue(d)); }) .y(function(d) { return yScale(yValue(d)); }); var lines = d3.select("svg").selectAll("path") lines.data(data) .enter().append("path") .attr("d", lineFunction) .on("click", function(d){ d3.select(this).remove(); }); ``` Upvotes: 1 <issue_comment>username_2: Well, if you want to plot lines, I suggest that you append...s! The thing with a D3 enter selection is quite simple: the number of appended elements is the number of objects in the data array that doesn't match any element. So, you just need a data array with several objects. For instance, let's create 50 of them: ``` var data = d3.range(50).map(function(d) { return { x1: Math.random() * 300, x2: Math.random() * 300, y1: Math.random() * 150, y2: Math.random() * 150, } }); ``` And, as in the below demo I'm selecting `null`, all of them will be in the enter selection. Here is the demo: ```js var svg = d3.select("svg"); var data = d3.range(50).map(function(d) { return { x1: Math.random() * 300, x2: Math.random() * 300, y1: Math.random() * 150, y2: Math.random() * 150, } }); var color = d3.scaleOrdinal(d3.schemeCategory20); var lines = svg.selectAll(null) .data(data) .enter() .append("line") .attr("x1", function(d) { return d.x1 }) .attr("x2", function(d) { return d.x2 }) .attr("y1", function(d) { return d.y1 }) .attr("y2", function(d) { return d.y2 }) .style("stroke", function(_, i) { return color(i) }) .style("stroke-width", 1); ``` ```html ``` Finally, a tip: as this is JavaScript you can use `for` loops anywhere you want. However, **do not** use `for` loops to append elements in a D3 code. It's unnecessary and not idiomatic. That being said, whoever told you that it is *impossible* was wrong, it's clearly possible. 
Here is a demo (but don't do that, it's a very cumbersome and ugly code): ```js var svg = d3.select("svg"); var data = d3.range(50).map(function(d, i) { return { x1: Math.random() * 300, x2: Math.random() * 300, y1: Math.random() * 150, y2: Math.random() * 150, id: "id" + i } }); var color = d3.scaleOrdinal(d3.schemeCategory20); for (var i = 0; i < data.length; i++) { var filteredData = data.filter(function(d) { return d.id === "id" + i }); var lines = svg.selectAll(null) .data(filteredData) .enter() .append("line") .attr("x1", function(d) { return d.x1 }) .attr("x2", function(d) { return d.x2 }) .attr("y1", function(d) { return d.y1 }) .attr("y2", function(d) { return d.y2 }) .style("stroke", function() { return color(i) }) .style("stroke-width", 1); } ``` ```html ``` Upvotes: 2
2018/03/14
1,454
4,504
<issue_start>username_0: I'm programming a simple text adventure game with Python 3 and the `cmd` module. I need to somehow trigger the game over method, but I didn't find a solution in the documentation. The `cmd` module has the `do_quit()` function, but that needs user input, and `quit()` or `exit()` kills the whole program, whereas I just need to get out of `cmdloop()`. Any idea how to deal with this? Thanks in advance! ``` def moveDirection(direction): global location if direction in rooms[location]: if rooms[rooms[location][direction]].get(UNLOCKED, True) == True: print('You move to the %s.' % direction) location = rooms[location][direction] if location == 'Hallway' and bGuardAlive == True: print("Game over! Guard caught you!") printLocation(location) else: print("Door is locked") else: print('You cannot move in that direction') def main(): printLocation(location) GameLoop().cmdloop() class GameLoop(cmd.Cmd): prompt = '\n> ' def do_quit(self, arg): """Quit the game.""" return True ```
2018/03/14
1,369
4,120
<issue_start>username_0: ``` # Redirecting only stderr to a pipe. exec 3>&1 # Save current "value" of stdout. ls -l 2>&1 >&3 3>&- | grep bad 3>&- # Close fd 3 for 'grep' (but not 'ls'). # ^^^^ ^^^^ exec 3>&- # Now close it for the remainder of the script. ``` I was looking through <https://www.tldp.org/LDP/abs/html/io-redirection.html> trying to understand how input and output are redirected in bash. I don't understand how the second line of code closes fd 3 for 'grep' but not for ls. What I understand is stderr of ls is directed to stdout and then stdout is redirected to fd 3 and then fd 3 is closed before the pipe command. Where am I going wrong?
2018/03/14
364
1,535
<issue_start>username_0: I'm creating an App, with some classmates, and we want to share our Activities with others, so we don't have to do it all over again. Is that possible?<issue_comment>username_1: You can copy the activity class and the XML layout of the activity from one project to another. The activity class file goes in the source folder and the xml layout goes in the layouts folder. To declare the activity in the manifest.xml you have to add something like: ``` ``` with the correct name and label (You have to declare it in the manifest, otherwise it won't work). You can try to open other activities from the main activity with: ``` Intent myIntent = new Intent(this, MyActivityName.class); startActivity(myIntent); ``` In that case, you can add a Button to open the new activities and test if it works: ``` Button clickButton = (Button) findViewById(R.id.clickButton); clickButton.setOnClickListener( new OnClickListener() { @Override public void onClick(View v) { // inside the listener, "this" is the listener itself, so use the view's context Intent myIntent = new Intent(v.getContext(), MyActivityName.class); startActivity(myIntent); } }); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: You can copy the activity class into the Java folder in your project and the XML layout of the activity from one project to another; paste it in the layout folder (under "res"). Remember to fix the package name and define the activity in the manifest! You should enter a path according to the package structure in your project. Good luck! Upvotes: 0
2018/03/14
692
2,637
<issue_start>username_0: Learning Redux and React, and I'm having an issue where I have the store created and passed over to my component through react-redux, but I get an empty object when logging in the console. ``` import { createStore, applyMiddleware } from 'redux'; import logger from 'redux-logger'; import thunk from 'redux-thunk'; import uuid from 'uuid'; var defaultState = { tasks: [{ key: uuid.v4(), name: 'learn Redux', description: 'Learn how to create a completely statefully managed application.', priority: 1, notes: [{ key: uuid.v4(), content: 'Creation of the store is paramount. One must import {createStore, applyMiddleware from redux package}, then define the root reducer, and create the store with applymiddleware, and then export the store.' }], }, ] }; var root = (state = defaultState, action) => { return state; }; var store = createStore(root, applyMiddleware(thunk,logger)); export default store; ``` I think the issue may lie with how I'm passing it to the component, but that also could be wrong. Just for good measure, here is my App component. ``` import React, { Component } from 'react'; import './App.css'; import store from './store/createStore'; import { Provider } from 'react-redux'; class App extends Component { render() { console.log(this.props); // let tasks = this.props.tasks.map(x => { // return {x.name} // }) return ( Nothing to see here. ==================== ); } } export default App; ```<issue_comment>username_1: The `Provider` "provides" the store to components placed below it that use `connect()`. You can't place the `Provider` within this component's render function and expect it to change the props passed to it. It's already too late at that point. The props are what they are. That has to happen above this component in the tree, either in another component or during your `ReactDOM.render` call. Upvotes: 2 <issue_comment>username_2: The redux state does not automatically show up as props everywhere; and rightfully so.
If it did, the performance would be devastating unless you had a custom `shouldComponentUpdate` everywhere. What you need is [`connect`](https://github.com/reactjs/react-redux/blob/master/docs/api.md#connectmapstatetoprops-mapdispatchtoprops-mergeprops-options), which connects the redux state to your component. For your example, it'll be something like: ``` import { connect } from 'react-redux'; ... // Replace last line with this: export default connect( state => ({ tasks: state.tasks }), null, )(App); ``` Now, `this.props.tasks` will be the `tasks` in your redux state. Upvotes: 1 [selected_answer]
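The first argument to `connect` is just a pure function from the redux state to a props object, so its behaviour can be sketched in isolation with plain JavaScript, no React or Redux required (the state shape below mirrors the `tasks` array from the question, with keys trimmed for brevity):

```javascript
// mapStateToProps: a pure function from the store's state to the component's props.
const mapStateToProps = state => ({ tasks: state.tasks });

// A state shaped like the defaultState in the question.
const state = {
  tasks: [{ name: 'learn Redux', priority: 1 }],
};

const props = mapStateToProps(state);
console.log(props.tasks.length);  // 1
console.log(props.tasks[0].name); // learn Redux
```

`connect` calls this function with the current store state on every relevant update and merges the result into the wrapped component's props.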
2018/03/14
636
2,556
<issue_start>username_0: I'm having problems where two Date fields are updated to the exact same date when only one should be. I'm trying to figure out why this is happening and how I can update only the one date field I want updated, and leave the other at its original value. I'm using Hibernate with JPA on a MySQL database, in case that is part of the reason. I have a persistence entity that looks something like this: ``` @NamedQueries({ @NamedQuery(name="MyObject.updateItem", query="UPDATE MyObject m SET m.item = :item, m.lastUpdate = :updated WHERE m.id = :id") }) @Entity @Table(name="entries") public class MyObject implements Serializable { @Id @GeneratedValue(strategy=GenerationType.IDENTITY) private Long id; private String item; @Column(columnDefinition = "TIMESTAMP", nullable = false) private Date dateCreated = new Date(); @Column(columnDefinition = "TIMESTAMP", nullable = false) private Date lastUpdate = new Date(); // after here standard constructors, getters, setters, etc. } ``` When from my DAO I call the `NamedQuery` and provide the correct parameters, I find that both `lastUpdate` and `dateCreated` are changed. Is there any reason for this and how can I prevent this from happening? Is this caused because I initialize the two date fields in the entity class? I'm using the `TIMESTAMP` column definition because I want to be able to perform queries with `<` or `>`.
2018/03/14
4,119
15,958
<issue_start>username_0: I am using hibernate to connect my mysql database and perform transactions. I am using a single SessionFactory throughout the application and i don't have other connections to the database, yet, i am receiving the exception below: ``` java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3008) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3466) ... 21 common frames omitted Wrapped by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure The last packet successfully received from the server was 526 milliseconds ago. The last packet sent successfully to the server was 1 milliseconds ago. at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at com.mysql.jdbc.Util.handleNewInstance(Util.java:425) at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:989) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3556) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3456) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3897) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2524) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2677) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2545) at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:4842) at org.hibernate.engine.jdbc.connections.internal.PooledConnections.poll(PooledConnections.java:84) at org.hibernate.engine.jdbc.connections.internal.DriverManagerConnectionProviderImpl.getConnection(DriverManagerConnectionProviderImpl.java:186) at 
org.hibernate.internal.NonContextualJdbcConnectionAccess.obtainConnection(NonContextualJdbcConnectionAccess.java:35) at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:99) ... 11 common frames omitted Wrapped by: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:115) at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:111) at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:97) at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.acquireConnectionIfNeeded(LogicalConnectionManagedImpl.java:102) at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getPhysicalConnection(LogicalConnectionManagedImpl.java:129) at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.getConnectionForTransactionManagement(LogicalConnectionManagedImpl.java:247) at org.hibernate.resource.jdbc.internal.LogicalConnectionManagedImpl.begin(LogicalConnectionManagedImpl.java:254) at org.hibernate.resource.transaction.backend.jdbc.internal.JdbcResourceLocalTransactionCoordinatorImpl$TransactionDriverControlImpl.begin(JdbcResourceLocalTransactionCoordinatorImpl.java:203) at org.hibernate.engine.transaction.internal.TransactionImpl.begin(TransactionImpl.java:56) at org.hibernate.internal.AbstractSharedSessionContract.beginTransaction(AbstractSharedSessionContract.java:387) at com.kitaplist.common.book.dao.HibernateBookDao.find(HibernateBookDao.java:56) at com.kitaplist.common.Collector.lambda$collectMetaBooksAndNewBooks$1(Collector.java:137) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) ``` This is how i create my SessionFactory: ``` public static SessionFactory getSessionFactory() { if (sessionFactory == null) { sessionFactory = new Configuration() .configure() .addAnnotatedClass(Seller.class) .addAnnotatedClass(Book.class) .buildSessionFactory(); } return sessionFactory; } ``` and here is the function that I use in my BookDao: ``` @Override public void save(Book book) { Session session = sessionFactory.openSession(); Transaction tx = session.beginTransaction(); try { session.save(book); } catch (Exception e) { e.printStackTrace(); } finally { try { tx.commit(); } catch (Exception e) { e.printStackTrace(); } finally { session.close(); } } } ``` my application is a crawler crawls a book object from web and saves the object to the database through the above save function. I couldn't find the reason behind this exception. on the command console, i can see that the connection is re-established after it is lost, here : ``` SLF4J: A number (289) of logging calls during the initialization phase have been intercepted and are SLF4J: now being replayed. These are subject to the filtering rules of the underlying logging system. SLF4J: See also http://www.slf4j.org/codes.html#replay Wed Mar 14 16:36:29 UTC 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. Wed Mar 14 16:36:29 UTC 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. 
According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. Wed Mar 14 16:36:30 UTC 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. Wed Mar 14 16:36:29 UTC 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. Wed Mar 14 16:47:14 UTC 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. 
Wed Mar 14 16:47:17 UTC 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification. ``` I would appreciate any help.<issue_comment>username_1: Based on your problem description, I believe the connection is dropped because it becomes idle... Try to append '?autoReconnect=true' to the end of your database's JDBC URL... and see whether the problem stops happening... However, if you are not able to connect to the database even once, I suggest checking the following items: 1. Do a ping command to your database IP from the server host 2. Do a telnet command to your database to see if you can reach the database port 3. Check whether MySQL has rules about which IPs can talk to it (I know that Postgres has this feature; I do not know if MySQL does) 4. Check whether MySQL does something like dropping idle connections... 5. Check your JDBC connection params Upvotes: 2 <issue_comment>username_2: Since your problem involves the network, the best idea might be to add a JDBC connection pool to your setup. This will ensure that connections don't get stale while your application is running, e.g. due to a database server-side timeout. [HikariCP](https://github.com/brettwooldridge/HikariCP) is by far my favourite pool as it sets sensible defaults, e.g. on-borrow connection validation that can't be disabled. Since you are using Hibernate, HikariCP shows how to set everything up [on their wiki page](https://github.com/brettwooldridge/HikariCP/wiki/Hibernate4). The right setup however might depend on other libraries you are using e.g.
shared `java.sql.DataSource` bean in Spring Boot. There are other pooling libraries you can use but please take few minutes to read below before going that route: * ["My benchmark doesn't show a difference."](https://github.com/brettwooldridge/HikariCP/wiki/%22My-benchmark-doesn't-show-a-difference.%22) * [Pool Analysis](https://github.com/brettwooldridge/HikariCP/wiki/Pool-Analysis) If using a pooling library with on-borrow connection validation won't solve this problem you'd have to dive into the network setup. Perhaps between your application and the database there is a firewall which terminates any network connection e.g. by imposing a hard timeout. Normally this situation is solvable by setting a connection validation interval in the pool which is lower than the timeout imposed by the network. Upvotes: 3 <issue_comment>username_3: The main culprit is [`wait_timeout`](http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_wait_timeout). Its default value is 28800 sec i.e. 8 hours. From the doc: > > The number of seconds the server waits for activity on a noninteractive connection before closing it. > > > The error you are receiving is caused when the DB connection is idle(not performing any DB query) for `wait_timeout` secs. After this time, MySQL drops the connection and you when your code makes any DB calls, it gets this error. Increase this value(to say 1 day) and you can circumvent this problem. --- However, to fix this issue, put `autoReconnect=true` in the DB connect string, like below: ``` jdbc:mysql://db_user:db_user@localhost/mydb?autoReconnect=true ``` This will cause the code to automatically reconnect the connection when the connection is dropped after `wait_timeout` secs. Upvotes: 2 <issue_comment>username_4: You can use ``` sessionFactory.isClosed(); ``` to determine if the connection is still open. Replace your `getSessionFactory()` method like this. 
``` public static SessionFactory getSessionFactory() { if (sessionFactory == null || sessionFactory.isClosed()) { sessionFactory = new Configuration() .configure() .addAnnotatedClass(Seller.class) .addAnnotatedClass(Book.class) .buildSessionFactory(); } return sessionFactory; } ``` Upvotes: 2 <issue_comment>username_5: Well, when we look at the attached log, it says ``` The last packet successfully received from the server was 526 milliseconds ago. The last packet sent successfully to the server was 1 milliseconds ago. ``` It's not a timeout issue, for sure, as the idle wait time is around 500 ms. ``` java.io.EOFException: Can not read response from server. Expected to read 4 bytes, read 0 bytes before connection was unexpectedly lost. at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3008) Wrapped by: org.hibernate.exception.JDBCConnectionException: Unable to acquire JDBC Connection at org.hibernate.exception.internal.SQLStateConversionDelegate.convert(SQLStateConversionDelegate.java:115) ``` It looks like the database connection is getting blocked/interrupted. It most likely means that the database has restarted or the network connection to the database has been broken (e.g. a NAT connection has timed out), or it may be a firewall issue. AutoReconnect is not recommended. From MySQL [here](http://pages.citebite.com/p4x3a0r8pmhm) > > Should the driver try to re-establish stale and/or dead connections? If enabled the driver will throw an exception for a queries issued on a stale or dead connection, which belong to the current transaction, but will attempt reconnect before the next query issued on the connection in a new transaction. The use of this feature is not recommended, because it has side effects related to session state and data consistency when applications don't handle SQLExceptions properly, and is only designed to be used when you are unable to configure your application to handle SQLExceptions resulting from dead and stale connections properly.
Alternatively, as a last option, > investigate setting the MySQL server variable "wait\_timeout" to a high > value, rather than the default of 8 hours. > > > Additional suggestion: to get rid of the warning "Establishing SSL connection without server's identity verification is not recommended", use `useSSL=false` in the MySQL connection string, e.g.: ``` jdbc:mysql://localhost:3306/Peoples?autoReconnect=true&useSSL=false ``` Upvotes: 2 <issue_comment>username_6: ``` at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:989) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3556) at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3456) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3897) ``` From your stacktrace it seems Hibernate is not able to obtain connections from the database because the DB server has gone away. ``` // Check return value, if we get a java.io.EOFException, the server has gone away. We'll pass it on up the exception chain and let someone higher up // decide what to do (barf, reconnect, etc). ``` **Check source here:** <http://grepcode.com/file/repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.36/com/mysql/jdbc/MysqlIO.java#3774> **Possible explanation:** The problem seems to be on the database server side, not in your code. You may need to tweak your MySQL server settings, and the problem will be solved. The data you are sending is larger than the packet that is sent over the network to the database. **Causes and solutions:** <https://dev.mysql.com/doc/refman/5.7/en/gone-away.html> <http://befused.com/mysql/server-has-gone-away> **Other solutions:** In the my.cnf file of MySQL, add the following settings: ``` [mysqld] max_allowed_packet=256M ``` Upvotes: 2
2018/03/14
615
2,387
<issue_start>username_0: I'm building a memory game using React. I have an array of cards where each card can be "matching" (the user has just clicked on it) or "matched" (the user has found both cards and they are now completely shown). My issue is that when I try to set the matching state using `setState` the state changes for every card and not just the clicked one. Here's what I have: ``` import React from 'react'; import ReactDOM from 'react-dom'; import Card from './card'; import './index.css'; class Game extends React.Component { constructor() { super(); this.state = { cards: Array(4).fill( { matching: false, matched: false, } ), currentlyMatching: false, } } handleCardClick(index, type) { let currentCardsState = this.state.cards.slice(); currentCardsState[index].matching = true; this.setState({cards: currentCardsState}); } renderCard(index, type) { return this.handleCardClick(index, type)} />; }; render() { return {this.renderCard(0,"red")} {this.renderCard(1, "green")} {this.renderCard(2, "red")} {this.renderCard(3, "green")} ; }; } ReactDOM.render( , document.getElementById('root') ); ```<issue_comment>username_1: And this is exactly what is expected: you are setting in the state a new array of objects and for this reason the render is called again for each of them Upvotes: -1 <issue_comment>username_2: The problem you have is that you are not creating 4 independent objects for every card. You are creating *one* object which appears in the array four times. That means that changing any index affects all indices. That's how `Array.fill` works. 
To create four independent states, you need something like this:

```
const cards = [];
for (var i = 0; i < 4; i++) {
  cards.push({
    matched: false,
    matching: false
  });
}
```

Upvotes: 2 [selected_answer]<issue_comment>username_3: You can add a `shouldComponentUpdate` to each `Card` component to prevent unnecessary re-renders:

```
shouldComponentUpdate(prevProps, prevState) {
  return prevProps !== this.props
}
```

Or you can specifically target a single prop:

```
shouldComponentUpdate(prevProps, prevState) {
  return prevProps.matched !== this.props.matched
}
```

Upvotes: 0
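The aliasing pitfall behind the accepted answer is not specific to JavaScript. As a language-neutral sketch (Python here, with the same field names as the question), repeating one object reference behaves like `Array(4).fill(obj)`, while building a fresh object per slot behaves like the loop above:

```python
# Pitfall: the same dict object repeated four times (like Array(4).fill(obj)).
shared = [{"matching": False, "matched": False}] * 4
shared[0]["matching"] = True  # mutates the one shared object

# Fix: build an independent dict per card (like pushing a new object per loop turn).
independent = [{"matching": False, "matched": False} for _ in range(4)]
independent[0]["matching"] = True  # only the first card changes

print([c["matching"] for c in shared])       # [True, True, True, True]
print([c["matching"] for c in independent])  # [True, False, False, False]
```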
2018/03/14
824
2,711
<issue_start>username_0: There is a loop.

```
for (int i = 0; i < n; ++i) {
    //...
    v[i] = o.f(i);
    //...
}
```

Each `v[i] = o.f(i)` is independent of all the other `v[i] = o.f(i)`. `n` can be any value and it may not be a multiple of the number of cores. What is the simplest way to use all the cores to do this?<issue_comment>username_1: The [`ExecutionPolicy`](http://en.cppreference.com/w/cpp/algorithm/execution_policy_tag_t) overloads of the algorithms in `<algorithm>` exist for this purpose. [`std::transform`](http://en.cppreference.com/w/cpp/algorithm/transform) applies a function to each element of a source range to assign to a destination range. `v.begin()` is an acceptable destination, so long as `v` is of appropriate size. Your snippet assumes this when it uses `v[i]`, so I will too. We then need an iterator that gives the values `[0, n)` as our source, so [`boost::counting_iterator`](http://www.boost.org/doc/libs/1_50_0/libs/iterator/doc/counting_iterator.html). Finally we need a `Callable` that will apply `o.f` to our values, so let's capture `o` in a lambda.

```
#include <algorithm>
#include <execution>
#include <boost/iterator/counting_iterator.hpp>

// assert(v.size() >= n)
std::transform(std::execution::par,
               boost::counting_iterator<int>(0),
               boost::counting_iterator<int>(n),
               v.begin(),
               [&o](int i){ return o.f(i); });
```

If `o.f` does not perform any "vectorization-unsafe operations", you are able to use [`std::execution::par_unseq`](http://en.cppreference.com/w/cpp/algorithm/execution_policy_tag_t), which *may* interleave calls on the same thread (i.e.
unroll the loop and use SIMD instructions) Upvotes: 3 <issue_comment>username_2: In the land of existing compilers, and remembering that M/S can't even get this stuff right for C++11, never mind about C++17/20, the C++11 answer goes something like:

```
typedef decltype(v)::value_type R;
std::vector<std::future<R>> fut(n);
for (int i = 0; i < n; ++i)
    fut[i] = std::async(std::launch::async, [&o, i]{ return o.f(i); });
for (int i = 0; i < n; ++i)
    v[i] = fut[i].get();
```

@arne suggests we can do better by throttling the number of tasks by considering the number of processors (P), which is true, though the above code will give you a clear indication on whether you will really benefit from multi-threading the method f. Given we only want to launch X jobs simultaneously, where X is > P, < 3*P depending on the variation in job complexity (note I am relying on a signed index):

```
typedef decltype(v)::value_type R;
std::vector<std::future<R>> fut(n);
for (ssize_t i = 0, j = -X; j < n; ++i, ++j) {
    if (i < n)
        fut[i] = std::async(std::launch::async, [&o, i]{ return o.f(i); });
    if (j >= 0)
        v.push_back(fut[j].get());
}
```

I'm not claiming the above code is "great", but if the jobs are complex enough for us to need multithreading, the cost of looping a few extra times isn't going to be noticed. You will notice that if X > n the loop will spin a few times in the middle, but it will produce the correct result :-) Upvotes: 2
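The throttled-futures idea above is not C++-specific. As a rough sketch in Python (assuming only that `f(i)` is an independent, per-index computation, as in the question), a pool with a bounded worker count plays the role of launching at most X jobs at a time:

```python
from concurrent.futures import ThreadPoolExecutor

def f(i):
    # stand-in for the expensive, independent o.f(i) from the question
    return i * i

n, max_jobs = 8, 3  # n need not be a multiple of the worker count
with ThreadPoolExecutor(max_workers=max_jobs) as pool:
    # map preserves input order, so results line up with indices like v[i]
    v = list(pool.map(f, range(n)))

print(v)  # [0, 1, 4, 9, 16, 25, 36, 49]
```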
2018/03/14
754
2,774
<issue_start>username_0: One annoying thing about writing js apps is that a nested object can cause an error and break down the entire app.

```
if (result.applicant._id === null || applicant_id !== result.applicant._id.toString()) {
    console.log('redirect user');
}
```

Given the above code, it can be dangerous: what if `result.applicant._id` is null? Then `toString` will be invalid because it can't be `undefined.toString()`. How do I ensure `toString()` works in this case? I can do

```
if (result.applicant._id === null || applicant_id !== (result.applicant._id && result.applicant._id.toString())) {}
```

but that's so unclean. I found that I'll have much duplication just for the sake of checking that something exists using js.<issue_comment>username_1: Your version works without ever hitting `undefined.toString()` because the if condition will be short-circuited, as `result.applicant._id === null` would evaluate as true and never evaluate `applicant_id !== result.applicant._id.toString()`. The test is already there, no need to add extra checks in this case. **Update** Just realised the `===` will not match `undefined`. Just change the first part to `result.applicant._id == null`, which will match `undefined` and `null`.

```
if (result.applicant._id == null || applicant_id !== result.applicant._id.toString()) {
    console.log('redirect user');
}
```

I know you may end up with linting warnings but in this case that's exactly what you want. Upvotes: 2 [selected_answer]<issue_comment>username_2: A bit of short-handing might give you the cleaner code you're looking for. As long as `0` is not a valid application id:

```
const id = result.application._id || 0;
if (!id || applicant_id !== id.toString()) {
    // redirect
}
```

**Edit:** To explain: the `||` in the variable declaration assigns the first `truthy` value, or, failing that, the second.
Therefore, if the value is undefined, it will fall to the second value (`0`), which is still falsey and will fail the check, but can still have `.toString()` called on it. **Edit 2:** If your ids are just numbers (which is what it looks like) then you actually don't need to convert to a string at all: just let JavaScript's coercion do the work for you by using `!=` instead of `!==`

```
const id = result.application._id || 0;
if (!id || applicant_id != id) {
    // redirect
}
```

In JavaScript, `12 == '12'` is true. `12 === '12'` is false. Generally the advice is to use `===` unless you consciously *want* to take advantage of coercion, and this seems like a good case for it :) Upvotes: 0 <issue_comment>username_3: The shortest way could be this: **.: UPDATED :.**

```
const applicant = result.applicant || {}
if (applicant_id !== `${applicant._id}`) {
    console.log('redirect user');
}
```

Upvotes: 0
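The `|| {}` defaulting trick in the last answer has a close analogue in other languages. A small Python sketch (the dict shapes are illustrative, mirroring the thread's objects) of the same "substitute an empty object, then compare" idea:

```python
def should_redirect(result, applicant_id):
    # (result.get("applicant") or {}) mirrors `result.applicant || {}`:
    # a missing or None applicant collapses to an empty dict instead of blowing up.
    applicant = (result or {}).get("applicant") or {}
    return applicant_id != str(applicant.get("_id"))

print(should_redirect({"applicant": {"_id": 42}}, "42"))  # False: ids match
print(should_redirect({"applicant": {}}, "42"))           # True: no id at all
print(should_redirect({}, "42"))                          # True: no applicant
```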
2018/03/14
1,065
3,536
<issue_start>username_0: I have a set of div elements inside a container; `.div-to-hide` is displayed by default whilst `.div-to-show` is hidden. When I click on `.set`, `.div-to-hide` should hide and `.div-to-show` should become visible. The next click should return the previously clicked element to its default state. I need to display two buttons on click inside `.div-to-show`.

```
<div class="container">
  <div class="set">
    <div class="div-to-hide">Some text</div>
    <div class="div-to-show"></div>
  </div>
  <div class="set">
    <div class="div-to-hide">Some text</div>
    <div class="div-to-show"></div>
  </div>
  <div class="set">
    <div class="div-to-hide">Some text</div>
    <div class="div-to-show"></div>
  </div>
</div>
```

So far I have this:

```
let lastClicked;
$('.container').on('click', function(e) {
  if (this == lastClicked) {
    lastClicked = '';
    $('.div-to-hide').show();
    $(this).children('.div-to-hide').hide();
  } else {
    lastClicked = this;
    $('.div-to-hide').hide();
    $(this).children('.div-to-hide').show();
    $(this).children('.div-to-show').hide();
  }
});
```

Can't get it to work properly though... I don't know what I am missing... Any help is deeply appreciated! UPDATE: got it working! Thanks everyone!<issue_comment>username_1: Consider using class toggling instead.

```
$('.set').on('click', function(e) {
  $('.set').removeClass('hidden-child');
  $(this).addClass('hidden-child');
});
```

css:

```
.hidden-child .div-to-hide, .div-to-show {
  display: none;
}

.hidden-child .div-to-show, .div-to-hide {
  display: block;
}
```

This will make your code easier to reason about, and lets css control the display (style) rules. Edit: changed class name for clarity; expanded explanation; corrected answer to conform to question Upvotes: 1 <issue_comment>username_2: First, you are not using [delegation](http://api.jquery.com/on/#direct-and-delegated-events) (second parameter on the $.on() function) to define the `.set` element as your `this` inside the function. If I understood correctly, you want to show the elements on the last one clicked and hide the rest.
You don't really need to know which one you last clicked to do that ``` $('.container').on('click', '.set', function (e) { // Now "this" is the clicked .set element var $this = $(this); // We'll get the children of .set we want to manipulate var $div_to_hide = $this.find(".div-to-hide"); var $div_to_show = $this.find(".div-to-show"); // If it's already visible, there's no need to do anything if ($div_to_show.is(":visible")) { $div_to_hide.show(); $div_to_show.hide(); } // Now we get the other .sets var $other_sets = $this.siblings(".set"); // This second way works for more complex hierarchies. Uncomment if you need it // var $other_sets = $this.closest(".container").find(".set").not(this); // We reset ALL af them $other_sets.find(".div-to-show").hide(); $other_sets.find(".div-to-hide").show(); }); ``` Upvotes: 2 <issue_comment>username_3: Try to make use of ***[siblings()](https://api.jquery.com/siblings/)*** jQuery to hide and show other divs and ***[toggle()](https://api.jquery.com/toggle/)*** jQuery to show and hide itself and also you will need to set `click()` event on `.set`, not in `.container` ```js $(document).on('click', '.set', function(e) { $(this).find('.hide').toggle(); $(this).find('.show').toggle(); $(this).siblings('.set').find('.hide').show(); $(this).siblings('.set').find('.show').hide(); }); ``` ```css .show { display: none; } .set div { padding: 10px; font: 13px Verdana; font-weight: bold; background: red; color: #ffffff; margin-bottom: 10px; cursor: pointer; } ``` ```html 1 Hide 1 Show 2 Hide 2 Show 3 Hide 3 Show ``` Upvotes: 0
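Underneath all three jQuery answers is one small piece of state logic: at most one set is "open", clicking it again closes it, and clicking a sibling opens that one and resets the rest. A framework-free Python sketch of that state machine (class and method names are illustrative, not part of the thread):

```python
class ToggleGroup:
    """Tracks which of n sets is showing its .div-to-show."""

    def __init__(self, n):
        self.n = n
        self.open_index = None  # None means every set is in its default state

    def click(self, i):
        # Clicking the open set closes it; clicking any other set opens it
        # and implicitly resets the previously open one.
        self.open_index = None if self.open_index == i else i

    def states(self):
        # True = .div-to-show visible for that set
        return [i == self.open_index for i in range(self.n)]

g = ToggleGroup(3)
g.click(1)
print(g.states())  # [False, True, False]
g.click(2)
print(g.states())  # [False, False, True]
g.click(2)
print(g.states())  # [False, False, False]
```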
2018/03/14
327
1,177
<issue_start>username_0: I'm learning Unity and I have successfully instantiated a new game object into my scene (a cube). Now I'm playing with the Canvas UI and I'm trying to download an asset bundle with images and show them on the UI, but I can't find examples on Google. Can someone post me an example of how to load an image into the Canvas from an asset bundle? Thanks!!!<issue_comment>username_1: Select the image from the asset bundle. Set the texture type to Sprite (2D and UI). Then simply drag and drop your image into the canvas. Upvotes: 0 <issue_comment>username_2: There are a few things to do to get this to work: You'll need to create a UnityEngine.UI.Image (sprites don't work on a Canvas on their own). Assign the `Image.sprite` property by grabbing the Texture2D out of the bundle, and if needed, you can create a sprite using the `Sprite.Create()` method, which takes a Texture2D. In other words, an Image has a Sprite, and a Sprite is made from a Texture2D.

```
Texture2D tex = myAssetBundle.LoadAsset<Texture2D>("myTex");
Sprite mySprite = Sprite.Create(tex, new Rect(0.0f, 0.0f, tex.width, tex.height), new Vector2(0.5f, 0.5f), 100.0f);
```

Upvotes: 3 [selected_answer]
2018/03/14
615
2,524
<issue_start>username_0: I have 2 questions regarding context creation. When I access the main context via singleton:

```
let appDelegate = UIApplication.shared.delegate as! AppDelegate
let managedContext = appDelegate.managedObjectContext
```

Is that the SAME context every time? And if I then create a child context like this:

```
let bgContext = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
```

with the above main context as parent, is that the SAME child context or is it generating a completely new child context? Thanks!<issue_comment>username_1: Yeah, as long as it's a stored property, `appDelegate.managedObjectContext` is the same object every time. And to your second question, that's a new child context. You can go from [here](https://developer.apple.com/documentation/coredata/nsmanagedobjectcontext/1506792-concurrencytype) to read more from Apple's documentation. Upvotes: 2 [selected_answer]<issue_comment>username_2:

> When I access the main context via singleton [...] Is that the SAME context every time?

There's nothing magical or opaque about it. You're getting the managed object context from your app delegate class. Look at your `AppDelegate.swift` and see how it creates that context. This kind of code almost always means that you're getting the same one every time, but it doesn't have to be that way. Go look at your code and see.

> And if I then create a child context like this [...] is that the SAME child context or is it generating a completely new child context?

That line of code initializes a new managed object context. That's what the `NSManagedObjectContext(...)` syntax implies: creating a new object using a specific initializer method. Upvotes: 1 <issue_comment>username_3: This is a computed property.

```
appDelegate.managedObjectContext
```

So somewhere in AppDelegate's implementation, you most likely could find the line:

```
NSManagedObjectContext(...)
```

In this case, AppDelegate is creating its own managed object context inside its implementation. `NSManagedObjectContext` is a class, and calling `NSManagedObjectContext(...)` instantiates a new object from that class. So if you called `NSManagedObjectContext(...)` somewhere else, that would be instantiating another object using `NSManagedObjectContext` as its blueprint. I hope that makes sense. Also: [Read under Parent Store for the answer about child contexts](https://developer.apple.com/documentation/coredata/nsmanagedobjectcontext) Upvotes: 0
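The distinction the answers draw, a stored property handing back one cached object versus an initializer call producing a fresh object each time, can be sketched in a few lines of Python (the class names here are illustrative, not Core Data API):

```python
class Context:
    """Stand-in for a managed object context."""
    pass

class AppDelegate:
    def __init__(self):
        self._ctx = Context()  # created once, like a stored property

    @property
    def managed_object_context(self):
        return self._ctx  # same object on every access

delegate = AppDelegate()
print(delegate.managed_object_context is delegate.managed_object_context)  # True
print(Context() is Context())  # False: each initializer call makes a new object
```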
2018/03/14
667
2,737
<issue_start>username_0: My homework project is a form which contains 3 checkboxes organized in one groupbox. I need to know how many of these 3 checkboxes have been selected by the user, because each selected checkbox should increase the price. I don't need to know the exact checkbox that has been selected, because each one of them adds the same amount. All I need to know is how many have been selected by the user. I cannot use a checkbox list in this homework. It should be just 3 checkboxes, not a checkbox list. I was thinking about using a list or collection, but I do not know how to make a collection of controls.<issue_comment>username_1: Try this:

```
int i = Groupbox.Controls.OfType<CheckBox>().Where(c => c.Checked).Count();
```

Or

```
int i = Groupbox.Controls.OfType<CheckBox>().Count(c => c.Checked);
```

Upvotes: 3 <issue_comment>username_2: Maybe something like this, if you aren't familiar with LINQ:

```
int count = 0;
foreach (object control in groupBox1.Controls)
{
    if (control.GetType() == typeof(CheckBox))
    {
        var checkbox = (CheckBox)control;
        if (checkbox.Checked) count++;
    }
}
```

Upvotes: 0 <issue_comment>username_3: There are two ways to go about this. **List/make the Checkboxes yourself** I assume that thus far you have only worked with the Designer to create controls. If you give all the Controls a name, you can form an array of CheckBoxes like this:

```
CheckBox[] CheckBoxes = { checkBox1, checkBox2, /*...*/ };
```

If the amount is variable, it can become necessary to create the CheckBoxes yourself. This is not as hard as it sounds; indeed the Designer can do nothing you could not do as well. WindowsForms works with partial classes where the Designer writes one part of the class and you write another. What the Designer wrote is executed with the call to `InitializeComponent()` in the constructor. The hard part when creating elements yourself is usually putting them into the proper containers with the proper alignment.
**Keep it in the code behind** Accessing the UI to get data is a bit frowned upon. Instead, all data should be in the code behind. So there should be 3 bools, 1 per CheckBox, or maybe a counter variable. You can either have 1 Check and Uncheck event handler per CheckBox that sets the appropriate bool, or one Check and one Uncheck handler that you use for all 3 checkboxes, which increases/decreases the counter variable. Getting the UI informed of changes to the code behind is a bit tricky. I am not sure if you need some label updated, but whatever code you use to increment/decrement the counter should also force a re-calculation. Upvotes: 0
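The LINQ one-liner in the first answer is just "filter the container's children by type, then count the checked ones". The same shape in Python, with stand-in control classes (not WinForms API) to show why the type filter matters:

```python
class CheckBox:
    def __init__(self, checked=False):
        self.checked = checked

class Label:
    """A non-checkbox control, so the container holds mixed types."""
    pass

controls = [CheckBox(True), Label(), CheckBox(False), CheckBox(True)]

# Rough equivalent of Controls.OfType<CheckBox>().Count(c => c.Checked)
checked_count = sum(1 for c in controls
                    if isinstance(c, CheckBox) and c.checked)
print(checked_count)  # 2
```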
2018/03/14
424
1,618
<issue_start>username_0: So I've written a test that logs in a user:

```
describe('Login', () => {
  beforeEach(async () => {
    await device.reloadReactNative()
  })

  it('Should grant access to a user with valid credentials', async () => {
    test code
  })
})
```

And now I'm writing a new spec to log out a user, so instead of writing the same test code again, I want the login spec to run within the log out spec. I would imagine it would look something like:

```
describe('Log Out', () => {
  beforeEach(async () => {
    await device.reloadReactNative()
    it('Should grant access to a user with valid credentials')
  })

  it('A User Logs Out', async () => {
    test code
  })
```

How do I get Detox to run the first login test before continuing with the new steps? The `beforeEach` call to `it('Should grant access to a user with valid credentials')` doesn't work unfortunately, so I'm missing something in the syntax.<issue_comment>username_1: This has no relation to Detox; the describe/it API belongs to the test runner you are using. Anyway, use functions:

```
describe('Login', () => {
  beforeEach(async () => {
    await device.reloadReactNative();
    await grantAccessToUserWithValidCredentials();
  });

  it('A User Logs Out', async () => {
    // here the app is ready for your specific log out use case
  });

  async function grantAccessToUserWithValidCredentials() {
    //grant it
  }
});
```

Upvotes: 2 <issue_comment>username_2: Best practice is to use Drivers in your tests. You can check out these slides: <http://slides.com/shlomitoussiacohen/testing-react-components#/7> Upvotes: 0
2018/03/14
1,982
7,430
<issue_start>username_0: I have written a password strength checker that dynamically checks the strength of the password keyed into a `ttk.Entry` widget. I have applied the [criteria](https://stackoverflow.com/a/32542964/5722359) by @ePi272314 and the [tutorial on adding validation](https://stackoverflow.com/a/4140988/5722359) by @ByranOakley to a `ttk.Entry` widget. The python script for this password strength checking ttk.Entry widget is given below and it works. Presently, I like to express method `_passwordStrength()` as a `@staticmethod` so that other classes may use it. For this to happen, I need to pass `self.pwstrength`, a `tk.StringVar()`, into it. Also, I need to include this `tk.StringVar` when I register the `@staticmethod`. However, I am having difficulty implementing this. I tried something like: ``` vcmd = (self.register(lambda:App._passwordStrength(self.pwstrength),'%P')) ``` but got the error msg: ```none Exception in Tkinter callback Traceback (most recent call last): File "/usr/lib/python3.5/tkinter/__init__.py", line 1552, in __call__ args = self.subst(*args) TypeError: 'str' object is not callable ``` Appreciate guidance on how to register the `@staticmethod _passwordStrength()` when it contains a `tk.StringVar` input parameter. 
**Python Script:** ``` import tkinter as tk import tkinter.ttk as ttk import re class App(ttk.Frame): def __init__(self, parent=None, *args, **kwargs): ttk.Frame.__init__(self, parent, style='self.TFrame') self.style=ttk.Style() self.style.configure('self.TFrame', background='pink', borderwidth=10, relief='raised') self.pwstrength = tk.StringVar() label = ttk.Label(self, text="Password: ") vcmd = (self.register(self._passwordStrength), '%P') ePassword = ttk.Entry(self, validate="key", validatecommand=vcmd) warnLabel = ttk.Label(self, textvariable=self.pwstrength) label.grid(row=0, column=0, sticky='w', padx=20, pady=20) ePassword.grid(row=0, column=1, sticky='w') warnLabel.grid(row=1, column=1, sticky='w') def _passwordStrength(self, P): '''Check password strength. A password is considered strong if: 8 characters length or more 1 digit or more 1 symbol or more 1 uppercase letter or more 1 lowercase letter or more''' password = P print (password, len(password)) # check length if password == '': self.pwstrength.set('') return True # check length if len(password) < 8: self.pwstrength.set('Password is too short') return True # check for digits if not re.search(r"\d", password): self.pwstrength.set('Password missing a number.') return True # check for uppercase if not re.search(r"[A-Z]", password): self.pwstrength.set('Password missing upper case letter.') return True # check for lowercase if not re.search(r"[a-z]", password): self.pwstrength.set('Password missing lower case letter.') return True # check for symbols if not re.search(r"\W", password): self.pwstrength.set('Password missing a symbol.') return True # Passed all checks. 
self.pwstrength.set('Strong password provided.') return True if __name__=='__main__': root = tk.Tk() root.geometry('300x300+700+250') root.title('Password Strength Check') app=App(root) app.grid(row=0, column=0, sticky='nsew') root.rowconfigure(0, weight=1) root.columnconfigure(0, weight=1) root.mainloop() ```<issue_comment>username_1: You don't need to set something as a static method for it to be used by other classes. So long as your class has all the required attributes and methods, it will work: ``` class X: def __init__(self): self.a = 0 def act_on_a(self): self.a = self.a + 1 class Y: def __init__(self, a): self.a = a y = Y(a=3) X.act_on_a(y) print(y.a) # prints 4 ``` In general, there is very little use for `staticmethod` except for namespacing. Often things I see as static methods should really be top level python functions. Upvotes: 0 <issue_comment>username_2: You can do it by modifying your `_passwordStrength()` function to accept a `tk.StringVar` argument along with [`functools.partial`](https://docs.python.org/3/library/functools.html#functools.partial) to supply this to the now static function (which can't reference `self.pwstrength` since it's no longer a regular method with a `self` argument). 
Here's what I mean: ``` import functools import tkinter as tk import tkinter.ttk as ttk import re class App(ttk.Frame): def __init__(self, parent=None, *args, **kwargs): ttk.Frame.__init__(self, parent, style='self.TFrame') self.style=ttk.Style() self.style.configure('self.TFrame', background='pink', borderwidth=10, relief='raised') self.pwstrength = tk.StringVar() label = ttk.Label(self, text="Password: ") # vcmd = (self.register(self._passwordStrength), '%P') valcommand = self.register( functools.partial(App._passwordStrength, self.pwstrength) ) vcmd = (valcommand, '%P') ePassword = ttk.Entry(self, validate="key", validatecommand=vcmd) warnLabel = ttk.Label(self, textvariable=self.pwstrength) label.grid(row=0, column=0, sticky='w', padx=20, pady=20) ePassword.grid(row=0, column=1, sticky='w') warnLabel.grid(row=1, column=1, sticky='w') @staticmethod def _passwordStrength(svar, P): '''Check password strength. A password is considered strong if: 8 characters length or more 1 digit or more 1 symbol or more 1 uppercase letter or more 1 lowercase letter or more''' password = P print (password, len(password)) # check length if password == '': svar.set('') return True # check length if len(password) < 8: svar.set('Password is too short') return True # check for digits if not re.search(r"\d", password): svar.set('Password missing a number.') return True # check for uppercase if not re.search(r"[A-Z]", password): svar.set('Password missing upper case letter.') return True # check for lowercase if not re.search(r"[a-z]", password): svar.set('Password missing lower case letter.') return True # check for symbols if not re.search(r"\W", password): svar.set('Password missing a symbol.') return True # Passed all checks. 
svar.set('Strong password provided.') return True if __name__=='__main__': root = tk.Tk() root.geometry('300x300+700+250') root.title('Password Strength Check') app=App(root) app.grid(row=0, column=0, sticky='nsew') root.rowconfigure(0, weight=1) root.columnconfigure(0, weight=1) root.mainloop() ``` Upvotes: 2 [selected_answer]
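Since the accepted answer's `_passwordStrength` no longer touches `self`, the checks can be pulled out further into a pure function with no tkinter dependency at all, which makes them reusable by other classes and unit-testable. A sketch, keeping the message strings from the thread:

```python
import re

def password_strength(password):
    """Return the same feedback messages as the thread's checker, as a string."""
    if password == '':
        return ''
    if len(password) < 8:
        return 'Password is too short'
    if not re.search(r"\d", password):
        return 'Password missing a number.'
    if not re.search(r"[A-Z]", password):
        return 'Password missing upper case letter.'
    if not re.search(r"[a-z]", password):
        return 'Password missing lower case letter.'
    if not re.search(r"\W", password):
        return 'Password missing a symbol.'
    return 'Strong password provided.'

print(password_strength("abc"))       # Password is too short
print(password_strength("Abcdef1!"))  # Strong password provided.
```

A validatecommand could then simply do `svar.set(password_strength(P))` and return `True`, keeping the UI wiring and the rules separate.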
2018/03/14
227
840
<issue_start>username_0: Would appreciate if you could answer my questions: 1) If I created a Z transaction code for maintaining a Z table that has authorization group not equals to `&NC&` and data browser/table view maint is set to `Display/Maintenance Allowed with restriction`, Does that mean the Z tcode will also be restricted to few users? 2) Is there a way to know who are authorized to run a certain Z Tcode created for maintaining a table?<issue_comment>username_1: Ad. 1.) No, it won't. Ad. 2.) Yes, there is. Upvotes: 1 <issue_comment>username_2: To accomplish this in SUIM go like this: *User* -> *Users by Complex Selection Criteria* -> *By Transaction Authorizations* [![enter image description here](https://i.stack.imgur.com/i7iyw.png)](https://i.stack.imgur.com/i7iyw.png) There enter tcode and press F8. Upvotes: 0
2018/03/14
1,346
3,267
<issue_start>username_0: I'm trying to aggregate from the end of a date range instead of from the beginning. Despite the fact that I would think that adding `closed='right'` to the grouper would solve the issue, it doesn't. Please let me know how I can achieve my desired output shown at the bottom, thanks.

```
import pandas as pd
df = pd.DataFrame(columns=['date','number'])
df['date'] = pd.date_range('1/1/2000', periods=8, freq='T')
df['number'] = pd.Series(range(8))
df

                 date  number
0 2000-01-01 00:00:00       0
1 2000-01-01 00:01:00       1
2 2000-01-01 00:02:00       2
3 2000-01-01 00:03:00       3
4 2000-01-01 00:04:00       4
5 2000-01-01 00:05:00       5
6 2000-01-01 00:06:00       6
7 2000-01-01 00:07:00       7
```

With the groupby and aggregation of the date I get the following. Since I have 8 dates and I'm grouping by periods of 3 it must choose whether to truncate the earliest date group or the oldest date group, and it chooses the oldest date group (the oldest date group has a count of 2):

```
df.groupby(pd.Grouper(key='date', freq='3T')).agg('count')

date                 number
2000-01-01 00:00:00       3
2000-01-01 00:03:00       3
2000-01-01 00:06:00       2
```

My desired output is to instead truncate the *earliest* date group:

```
date                 number
2000-01-01 00:00:00       2
2000-01-01 00:02:00       3
2000-01-01 00:05:00       3
```

Please let me know how this can be achieved, I'm hopeful there's just a parameter that can be set that I've overlooked. Note that this is similar to [this](https://stackoverflow.com/questions/48340463/how-to-understand-closed-and-label-arguments-in-pandas-resample-method) question, but my question is specific to the date truncation. EDIT: To reframe the question (thanks Alexdor) the default behavior in pandas is to bin by period [0, 3), [3, 6), [6, 9) but instead I'd like to bin by (-1, 2], (2, 5], (5, 8]<issue_comment>username_1: This is one hack, which lets you group by a constant group size, counting bottom up.
```
from itertools import chain

def grouper(x, k=3):
    n = len(x.index)
    return list(chain.from_iterable(
        [[0] * int(n // k)] + [[i] * k for i in range(1, int(n / k) + 1)]))

df['grouper'] = grouper(df, 3)

res = df.groupby('grouper', as_index=False)\
        .agg({'date': 'first', 'number': 'count'})\
        .drop('grouper', 1)

#                  date  number
# 0 2000-01-01 00:00:00       2
# 1 2000-01-01 00:02:00       3
# 2 2000-01-01 00:05:00       3
```

Upvotes: 0 <issue_comment>username_2: It seems like the grouper function builds up the bins starting from the oldest time in the series that you pass to it. I couldn't see a way to make it build up the bins from the newest time, but it's fairly easy to construct the bins from scratch.

```
freq = '3min'
minTime = df.date.min()
maxTime = df.date.max()
deltaT = pd.Timedelta(freq)
minTime -= deltaT - (maxTime - minTime) % deltaT  # adjust min time to start of first bin
r = pd.date_range(start=minTime, end=maxTime, freq=freq)
df.groupby(pd.cut(df["date"], r)).agg('count')
```

Gives

```
                                            date  number
date
(1999-12-31 23:58:00, 2000-01-01 00:01:00]     2       2
(2000-01-01 00:01:00, 2000-01-01 00:04:00]     3       3
(2000-01-01 00:04:00, 2000-01-01 00:07:00]     3       3
```

Upvotes: 2 [selected_answer]
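The accepted answer's origin adjustment is plain modular arithmetic. A pandas-free sketch with integers standing in for the timestamps (minutes 0..7, bin width 3) reproduces the desired 2/3/3 split and the same bin edges:

```python
times = list(range(8))  # stand-ins for the 8 one-minute timestamps
delta = 3               # bin width, like pd.Timedelta('3min')

lo, hi = min(times), max(times)
# Same adjustment as the answer: pull the first edge back so the LAST
# bin ends exactly at the newest time.
lo -= delta - (hi - lo) % delta

edges = list(range(lo, hi + 1, delta))  # [-2, 1, 4, 7]
# Count values in each right-closed interval (a, b], as pd.cut does.
counts = [sum(a < t <= b for t in times)
          for a, b in zip(edges, edges[1:])]
print(edges)   # [-2, 1, 4, 7]
print(counts)  # [2, 3, 3]
```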
2018/03/14
722
2,212
<issue_start>username_0: I have set up my Sublime Text editor as follows: [![enter image description here](https://i.stack.imgur.com/Mxkx9.png)](https://i.stack.imgur.com/Mxkx9.png) I have done it using `View > Groups > Max Columns: 2`. I am trying to replicate it in Visual Studio Code, but could not find any options for that. I am using Visual Studio Code version 1.21 in Ubuntu 16.04.4.<issue_comment>username_1: It seems that you can't. There is a request for that feature on the vscode GitHub; see [here](https://github.com/Microsoft/vscode/issues/9443) for yourself. Upvotes: 3 [selected_answer]<issue_comment>username_2: "Toggle editor group layout": you can also toggle between horizontal and vertical split layouts in VS Code. To do so, use the Toggle Editor Group Layout command from the View menu, then the Split Editor command. Upvotes: 2 <issue_comment>username_3: In current VS Code¹ you can find different window groupings and layouts in the following menu:

* `View > Editor Layout`

To make a new horizontal split (two editors stacked on top of each other), select `Split Down`, or the preset `Two Rows`. [![Screenshot of VS Code Menu](https://i.stack.imgur.com/ST7pD.png)](https://i.stack.imgur.com/ST7pD.png)

---

Footnote:

> 1: Grid layouts were added in [May 2018 version 1.24](https://code.visualstudio.com/updates/v1_24#_editor-grid-layout)

Upvotes: 3 <issue_comment>username_4: Right-click on the top of the terminal window (where you have the PROBLEM OUTPUT TERMINAL ...) and select 'move panel to right' or 'move panel to left' if you prefer the left side.
Upvotes: 0 <issue_comment>username_5: VSCode 1.77 (March 2023) offers a new contextual menu directly on the empty editor tabs space, from which you can trigger an horizontal or vertical split: See [issue 175667](https://github.com/microsoft/vscode/issues/175667) and [commit e7157fe](https://github.com/microsoft/vscode/commit/e7157fe5727ef844ab7b17cd115afddbbc7eefe2), available in [VSCode insider](https://code.visualstudio.com/insiders/). [![https://user-images.githubusercontent.com/900690/221912202-80089fae-3045-4948-af49-2cf717b1f478.gif](https://i.stack.imgur.com/ribkH.gif)](https://i.stack.imgur.com/ribkH.gif) Upvotes: 0
2018/03/14
358
1,419
<issue_start>username_0: Given the following scenario: * I have a source.xlsx file with multiple worksheets (worksheet1, worksheet2, workshToCopy, etc) and I generate the destination.xlsx based on another .xlsx template file that has different worksheets than the source.xlsx. I want to add a worksheet from the source file to the destination file. For now I was able to add an empty worksheet to my destination file like this: ``` if (FileExists(outputFile) && FileExists(inputFile)) { var inputPackage = new ExcelPackage(inputFile); var outputPackage = new ExcelPackage(outputFile); var summaryInputWorksheet = inputPackage.Workbook.Worksheets[ExcelSummaryHelper.SummaryWorksheet]; outputPackage.Workbook.Worksheets.Add(summaryInputWorksheet.Name); outputPackage.Workbook.Worksheets.MoveToEnd(summaryInputWorksheet.Name); outputPackage.Save(); } ``` What's the best approach to copy the content of workshToCopy from source.xlsx to destination.xlsx's new worksheet using the EPPlus library ?<issue_comment>username_1: Solved. There's an overload for the Add method on the ExcelWorksheets class that looks like this: ``` ExcelWorksheets.Add(string Name, ExcelWorksheet Copy) ``` Can't believe I haven't seen it. Upvotes: 5 [selected_answer]<issue_comment>username_2: Couldn't you just clone the existing one? ```cs ExcelWorksheets clonedWorksheet = currentExcelWorksheet.Clone(); ``` Upvotes: 1
2018/03/14
609
2,589
<issue_start>username_0: I have 2 data sources: DB and server. When I start the application, I call the method from the repository (MyRepository):

```
public Observable<List<MyObj>> fetchMyObjs() {
    Observable<List<MyObj>> localData = mLocalDataSource.fetchMyObjs();
    Observable<List<MyObj>> remoteData = mRemoteDataSource.fetchMyObjs();
    return Observable.concat(localData, remoteData);
}
```

I subscribe to it as follows:

```
mMyRepository.fetchMyObjs()
        .compose(applySchedulers())
        .subscribe(
                myObjs -> {
                    //do something
                },
                throwable -> {
                    //handle error
                }
        );
```

I expect that the data from the database will be loaded faster, and when the download of the data from the network is completed, I will simply update the data in the Activity. When the Internet is connected, everything works well. But when we open the application without a network connection, `mRemoteDataSource.fetchMyObjs();` throws `UnknownHostException` and the whole Observable ends on it (the subscriber for `localData` is never called, although the logs show that the data was taken from the database). And when I try to call the `fetchMyObjs()` method again from the MyRepository class (via SwipeRefresh), the subscriber for `localData` is triggered. How can I make sure that, when the network is off at application startup, the subscriber still works for `localData`?<issue_comment>username_1: Try some of the error handling operators: <https://github.com/ReactiveX/RxJava/wiki/Error-Handling-Operators> I'd guess onErrorResumeNext( ) will be fine, but you have to test it yourself. Maybe something like this would work for you:

```
Observable<List<MyObj>> remoteData = mRemoteDataSource.fetchMyObjs()
        .onErrorResumeNext()
```

Additionally, I am not in a position to judge if your idea is right or not, but maybe it's worth thinking about rebuilding this flow.
It is not the right thing to ignore errors - that's for sure ;) Upvotes: 2 [selected_answer]<issue_comment>username_2: You can observe your chain with [observeOn(Scheduler scheduler, boolean delayError)](http://reactivex.io/RxJava/2.x/javadoc/io/reactivex/Observable.html#observeOn-io.reactivex.Scheduler-boolean-) and *delayError* set to **true**. > > delayError - indicates if the onError notification may not cut ahead of onNext notification on the other side of the scheduling boundary. If true a sequence ending in onError will be replayed in the same order as was received from upstream > > > Upvotes: 0
<issue_start>username_0: I have been trying to create a filter which prefixes some input. The prefix should consist of some Ansible variables, in my case inventory_dir and role_name. I tried to implement the following code: ``` from ansible import errors def role_file(self): try: return inventory_dir + "/roles/" + role_name except Exception as e: raise errors.AnsibleFilterError( 'role_file plugin error: {0}, self={1},'.format(str(e), str(self))) class FilterModule(object): ''' prefix a file resource to the inventory directory ''' def filters(self): return { 'role_file': role_file } ``` and my playbook looks as follows: ``` --- - hosts: messagebus tasks: - debug: msg: "Hello World {{ 'abc' | role_file }}" ``` I get the following error message: fatal: [localhost]: FAILED! => {"msg": "role_file plugin error: global name 'inventory_dir' is not defined, self=abc,"} Can anybody see what the issue is with the implementation? Thanks in advance<issue_comment>username_1: Answer from the comments: > > You can define variable `inv_prefix: "{{ inventory_dir + '/roles/' + role_name }}"` and use it in `copy`/`template` as `src: "{{ inv_prefix }}/myfile"` > > > Upvotes: 1 <issue_comment>username_2: You can also pass the variables along to the Python filter, like: ``` {{ 'abc' | role_file(inventory_dir, role_name) }} def role_file(value, inventory_dir, role_name): try: return inventory_dir + "/roles/" + role_name ``` Upvotes: 0
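Building on username_2's idea of passing the variables into the filter explicitly, a runnable sketch of the plugin might look like the following. The `FilterModule.filters()` shape is the standard Ansible filter-plugin entry point; the paths and names used in the demo are purely illustrative, and a real plugin would keep the `try/except` raising `errors.AnsibleFilterError` as in the question:

```python
# Sketch of the filter plugin with the Ansible variables passed in
# explicitly, so the function needs no hidden globals.  Jinja2 passes
# the piped value as the first positional argument; the rest come from
# the call site:  {{ 'vars.yml' | role_file(inventory_dir, role_name) }}

def role_file(filename, inventory_dir, role_name):
    """Prefix a file name with <inventory_dir>/roles/<role_name>/."""
    return "{0}/roles/{1}/{2}".format(inventory_dir, role_name, filename)


class FilterModule(object):
    """Standard filter-plugin entry point: maps filter names to callables."""

    def filters(self):
        return {"role_file": role_file}


if __name__ == "__main__":
    # Plain-Python check of what the template call would render
    # (illustrative inventory path and role name).
    print(role_file("vars.yml", "/etc/ansible/inventory", "webserver"))
```

Calling it from a template as `{{ 'vars.yml' | role_file(inventory_dir, role_name) }}` avoids the "global name 'inventory_dir' is not defined" error, because the variables travel through the filter arguments rather than module globals.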
<issue_start>username_0: With xdebug enabled I can reproduce an error: ``` composer create-project laravel/laravel cd laravel composer require proengsoft/laravel-jsvalidation php artisan vendor:publish --provider="Proengsoft\JsValidation\JsValidationServiceProvider" --tag=public ``` Error: ``` PHP Warning: Uncaught League\Flysystem\Plugin\PluginNotFoundException: Plugin not found for method: read in /tmp/laravel/vendor/league/flysystem/src/Plugin/PluggableTrait.php:49 ``` Stack trace: But without xdebug enabled, everything runs fine. I am wondering if this is happening only for me or also for others, before reporting it to xdebug. ``` php -v PHP 7.1.15-1+ubuntu16.04.1+deb.sury.org+2 Package: php-xdebug Version: 2.6.0+2.5.5-1+ubuntu16.04.1+deb.sury.org+1 ``` Composer.lock for reference <https://gist.github.com/amenk/9d63975cf4aabf86288b79fb95e8156c> I tracked it down to the following function in Flysystem: ``` public function invokePluginOnFilesystem($method, $arguments, $prefix) { $filesystem = $this->getFilesystem($prefix); try { return $this->invokePlugin($method, $arguments, $filesystem); } catch (PluginNotFoundException $e) { // Let it pass, it's ok, don't panic. } $callback = [$filesystem, $method]; return call_user_func_array($callback, $arguments); } ``` The exception is thrown in invokePlugin() but caught afterwards (if xdebug is off). If Xdebug is on, that no longer works. I have a 1G memory limit for PHP-CLI in place. **Bug Reported**: <https://bugs.xdebug.org/view.php?id=1535><issue_comment>username_1: This is not really a question, but a bug report. I can easily reproduce all kinds of wonkyness due to exceptions. Please file a bug report at <https://bugs.xdebug.org> — preferably with a much smaller test case Upvotes: 3 [selected_answer]<issue_comment>username_2: Some more info / quick fix: I was using xdebug.collect_params=4; with xdebug.collect_params=1 the bug does not appear.
Also, the bug only appears after updating to PHP 7.1.15, and it does not appear on PHP 7.2. There is also a warning about huge scripts in the docs: <https://xdebug.org/docs/all_settings> > > The setting defaults to 0 because for very large scripts it may use > huge amounts of memory and therefore make it impossible for the huge > script to run. You can most safely turn this setting on, but you can > expect some problems in scripts with a lot of function calls and/or > huge data structures as parameters. Xdebug 2 will not have this > problem with increased memory usage, as it will never store this > information in memory. Instead it will only be written to disk. This > means that you need to have a look at the disk usage though. > > > Upvotes: 0
<issue_start>username_0: I'm trying to post my data from this JS: ``` $.ajax({ type: 'POST', url: '/url', data: { arr: tdValues }, success: function () { location.reload(); } }); ``` All the code in the controller works fine and the data from the JS is not null: ``` @PostMapping("/url") public ModelAndView deleteQuestions(@RequestParam(value = "arr[]") String[] tdValues) { ModelAndView modelAndView = new ModelAndView(); modelAndView.setViewName("page"); return modelAndView; } ``` but after that I get an exception like: ``` Required String[] parameter 'arr[]' is not present ``` Are there any suggestions on how to fix it? Stacktrace: > > Required String[] parameter 'arr' is not present;org.springframework.web.bind.MissingServletRequestParameterException: Required String[] parameter 'arr' is not present at org.springframework.web.method.annotation.RequestParamMethodArgumentResolver.handleMissingValue(RequestParamMethodArgumentResolver.java:198) at org.springframework.web.method.annotation.AbstractNamedValueMethodArgumentResolver.resolveArgument(AbstractNamedValueMethodArgumentResolver.java:109) at org.springframework.web.method.support.HandlerMethodArgumentResolverComposite.resolveArgument(HandlerMethodArgumentResolverComposite.java:121) at org.springframework.web.method.support.InvocableHandlerMethod.getMethodArgumentValues(InvocableHandlerMethod.java:158) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:128) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:97) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738) at
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:967) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:872) at javax.servlet.http.HttpServlet.service(HttpServlet.java:661) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137) at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.authentication.rememberme.RememberMeAuthenticationFilter.doFilter(RememberMeAuthenticationFilter.java:158) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:200) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:64) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105) at 
org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214) at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:108) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81) at 
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459) at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) at java.lang.Thread.run(Thread.java:748) > > 
><issue_comment>username_1: Add the required attribute (set to false) to @RequestParam. ``` public ModelAndView deleteQuestions(@RequestParam(required=false, value = "arr[]") String[] tdValues) { ``` Or try ``` public ModelAndView deleteQuestions(@RequestParam(value = "arr") String[] tdValues) { ``` Upvotes: 0 <issue_comment>username_2: Couple of changes. 1. In your JS code: ``` $.ajax({ type: 'POST', url: '/url/?arr='+tdValues, success: function () { location.reload(); } }); ``` 2. In your controller code, do this: ``` public ModelAndView deleteQuestions(@RequestParam(value = "arr") String[] tdValues) { ``` Upvotes: 0 <issue_comment>username_3: My problem was solved this way: ``` public @ResponseBody ModelAndView deleteQuestions(@RequestParam("arr") Optional<String[]> tdValues) {} ``` and with this JS: ``` $.ajax({ type: 'POST', url: '/url?arr='+tdValues, error : function() { console.log("error"); }, success: function () { location.reload() } }); ``` Upvotes: 3 [selected_answer]
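Background on why the parameter name matters here: by default jQuery serializes an array-valued field with a `[]` suffix (`arr[]=q1&arr[]=q2`), while its `traditional: true` setting repeats the bare name (`arr=q1&arr=q2`), which is what `@RequestParam("arr")` expects. The following is a simplified sketch of the two styles in plain Node — it is not jQuery's actual implementation, just an illustration of the difference:

```javascript
// Simplified re-implementation of the two array-serialization styles
// jQuery can produce ("deep" default vs. traditional).
function serialize(data, traditional) {
  const pairs = [];
  for (const [key, value] of Object.entries(data)) {
    const values = Array.isArray(value) ? value : [value];
    // Default style appends "[]" to array keys; traditional does not.
    const name = Array.isArray(value) && !traditional ? key + "[]" : key;
    for (const v of values) {
      pairs.push(encodeURIComponent(name) + "=" + encodeURIComponent(v));
    }
  }
  return pairs.join("&");
}

console.log(serialize({ arr: ["q1", "q2"] }, false)); // arr%5B%5D=q1&arr%5B%5D=q2
console.log(serialize({ arr: ["q1", "q2"] }, true));  // arr=q1&arr=q2
```

So the two consistent fixes are: keep `@RequestParam(value = "arr[]")` on the Spring side, or add `traditional: true` to the `$.ajax` call so the parameter arrives as plain `arr`.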
<issue_start>username_0: I'm searching for something like ``` liftPredMaybe :: (a -> Bool) -> a -> Maybe a liftPredMaybe p a | p a = Just a | otherwise = Nothing ``` Is there such a function in Haskell already?<issue_comment>username_1: Not quite a ready-made solution, but with `guard` (from `Control.Monad`) and `(<$)` (from `Data.Functor`) we can write: ``` ensure :: Alternative f => (a -> Bool) -> a -> f a ensure p a = a <$ guard (p a) ``` (Thanks to <NAME> for suggesting a nice name for this function.) A more pointfree spelling of dubious taste is `\p -> (<$) <*> guard . p`. Upvotes: 4 [selected_answer]<issue_comment>username_2: One way to compose it is like this, using `Control.Monad`: ``` liftPredM :: MonadPlus m => (a -> Bool) -> a -> m a liftPredM p = mfilter p . return ``` Another alternative is to use `Data.Foldable`: ``` liftPredF :: (a -> Bool) -> a -> Maybe a liftPredF f = find f . pure ``` This is, however, less general, so I'd lean towards favouring the `MonadPlus`-based implementation. In both cases, though, the idea is to first lift a 'naked' value into a container using either `pure` or `return`, and then apply a filter. Often, you don't need to declare an actual function for this; instead, you can just inline the composition where you need it. Examples: ``` Prelude Control.Monad> liftPredMaybe even 42 :: Maybe Integer Just 42 Prelude Control.Monad> liftPredMaybe (even . length) "foo" :: Maybe String Nothing Prelude Control.Monad> liftPredMaybe (odd . length) "foo" :: Maybe String Just "foo" ``` Upvotes: 3
<issue_start>username_0: I'm using NiFi to connect 2 systems: * The source one, generating events in a Kafka topic * The destination one, where I will only consider the Oracle database. I need to reduce the JSON events coming in on the Kafka topic and push them into the appropriate tables. No major issues in doing this, but... The source system is generating too many events, and the destination database triggers processes for every modification, and it is not sized to handle that many processes. So I'm doing bulk updates in my DB, using the `PutSQL Processor` behind a `Text Processor` + `Update Attribute Processor` + `ReplaceText Processor` (as shown here for example: <https://community.hortonworks.com/articles/91849/design-nifi-flow-for-using-putsql-processor-to-per.html>). But this workflow only lets me update my DB based on a number of elements to put in it (my batch size). **I would like to bulk update on a regular, time-based basis.** The reason is that source events are not coming in linearly, and the destination database cannot accept being more than 5 minutes "away" from the source. So I need to schedule my bulk update at worst every 5 minutes. I can't see right now how to do this. Please could you tell me which processors/solution you would use? *PS: Of course, tons of better solutions exist, like not triggering heavy processes on each commit in my destination database, but changing this "good old system" is not affordable right now.* Cheers, Olivier<issue_comment>username_1: I'd suggest using the `Wait` and `Notify` processors in tandem to set up a "gate" which holds flowfiles in a queue until the `Notify` processor (with a run schedule of ~5 minutes) sends the "trigger" flowfile. Koji Kawamura has written [an extensive article documenting this behavior pattern](https://ijokarumawak.github.io/nifi/2017/02/02/nifi-notify-batch/). Upvotes: 2 <issue_comment>username_2: Well... The answer is pretty simple indeed. You just need to go to the "Schedule" tab of the processor.
I'm now running the 1.6.0-SNAPSHOT (by the way, it looks like this option has been there for a long time... I just did not notice it) and it provides scheduling with the ability to set up a cron scheduler, which perfectly answers the need. Upvotes: 2 [selected_answer]
<issue_start>username_0: I have a problem with my stored procedure in SQL Server 2017 Developer. It gets file's modification date using `xp_cmdshell`, returns `varchar` into temp table and trying to convert this to date. The stored procedure is working when I execute it in SSMS manually, but fails when I put it into a job step. Error from Job History: > > Conversion failed when converting date and/or time from character string. [SQLSTATE 22007] (Error 241) > > > I have marked the fragment of code that fails (when I tried to execute SP in job without this, it executed successfully). When I execute only the "problem" code, it's working fine and returns such data: Code: ``` select top(1) cast(left(mdate,20) as date) data_ from t_lomag_temp_table ``` Results: ``` 2018-03-14 ``` Code for stored procedure: ``` declare @lomag_bak_file_source nvarchar(400) = /*source folder*/ declare @restore_status int declare @bak_list as table (id int identity , plik varchar(200) , data_bak date) declare @baza_nazwa varchar(200) declare @xp_cmdshell_dir varchar(1000) declare @counter int declare @loop_limit int declare @bak_files_status int set @restore_status = 0 set @bak_files_status = 0 set @counter = 1 ; truncate table t_lomag_temp_table ; insert into @bak_list select concat(dl.baza,'_db.bak'), null from t_lomag_database_list dl ; select @loop_limit = max(id) from @bak_list while @counter <= @loop_limit begin select @baza_nazwa = bl.plik from @bak_list bl where bl.id = @counter set @xp_cmdshell_dir = concat('dir ',@lomag_bak_file_source,@baza_nazwa) insert t_lomag_temp_table exec master.dbo.xp_cmdshell @xp_cmdshell_dir set rowcount 5 delete from t_lomag_temp_table set rowcount 0 /* JOB PROBLEM CODE varchar to date*/ update @bak_list set data_bak = x.data_ from (select top(1) cast(left(mdate, 20) as date) data_ from t_lomag_temp_table) x where @baza_nazwa = plik set @counter = @counter + 1 end ; begin transaction delete from logistyka.dbo.t_lomag_restore_dates commit transaction ; 
begin transaction insert into logistyka.dbo.t_lomag_restore_dates select plik, data_bak, 1, 0 from @bak_list ; select @bak_files_status = min(rd.date_status_fl) from logistyka.dbo.t_lomag_restore_dates rd ; commit transaction ; ```<issue_comment>username_1: You have a bad date. You can find it using `try_convert()` or `try_cast()`: ``` select mdate from t_lomag_temp_table where try_convert(date, left(mdate, 20)) is null; ``` Your code succeeds because it is -- presumably -- easy to find valid dates in the table. The stored procedure fails because it looks at more of the values. Upvotes: 1 <issue_comment>username_2: Just for kicks and giggles run this and see if any values that cannot be converted to a date (and are throwing the error) are returned. If so, then figure out how to handle these values gracefully, either by running an UPDATE statement to make them date-convertible, or adding a WHERE clause to exclude them from your query: ``` SELECT mdate FROM t_lomag_temp_table WHERE ISDATE(LEFT(mdate, 20)) = 0 ``` Upvotes: 0
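The "find the unconvertible rows first" idea from both answers can also be sketched outside SQL Server. Here is a small Python analogue of `TRY_CONVERT` — the sample values and the date format are made up for illustration, since `dir` output formats vary by locale:

```python
from datetime import datetime

# Illustrative values like the ones xp_cmdshell's `dir` output might yield.
mdates = ["2018-03-14", "14.03.2018", "not a date", None]

def try_convert_date(value, fmt="%Y-%m-%d"):
    """Return a date if `value` parses with `fmt`, else None (like TRY_CONVERT)."""
    if value is None:
        return None
    try:
        return datetime.strptime(value[:20].strip(), fmt).date()
    except ValueError:
        return None

# The rows that would make CAST(... AS date) blow up.
bad_rows = [v for v in mdates if try_convert_date(v) is None]
print(bad_rows)  # ['14.03.2018', 'not a date', None]
```

The same pattern applies in T-SQL: filter on `TRY_CONVERT(date, ...) IS NULL` (or `ISDATE(...) = 0`) to isolate the offending rows before attempting the real conversion.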
<issue_start>username_0: I am building `asp.net/c#` tabs, each including a button. I want to change the color of a button once it is clicked; when I then click another button, that button should change color and the first one should get its old color back. I have used a class on `active`, but it only changes the color for a second. This is my `asp.net` code: ``` ``` This is the JavaScript. When I am doing window.location.href it displays the default color again. ``` function setColor(btn, par) { if (par == 0) { window.location.href = "Default.aspx"; document.getElementById("tab1").style.backgroundColor = "#ff0000"; document.getElementById("tab2").style.backgroundColor = "#00bcd4"; document.getElementById("tab3").style.backgroundColor = "#00bcd4"; } else if (par == 1) { window.location.href = "Default2.aspx"; document.getElementById("tab1").style.backgroundColor = "#00bcd4"; document.getElementById("tab2").style.backgroundColor = "#ff0000"; document.getElementById("tab3").style.backgroundColor = "#00bcd4"; } else if (par == 2) { window.location.href = "Default3.aspx"; document.getElementById("tab1").style.backgroundColor = "#00bcd4"; document.getElementById("tab2").style.backgroundColor = "#00bcd4"; document.getElementById("tab3").style.backgroundColor = "#ff0000"; } } ```<issue_comment>username_1: This can be accomplished with a simple CSS rule and no JavaScript at all. > > I have used Cascading style sheet class on active but it will change > it for 1 sec. > > > Your problem was that you tried the `:active` pseudo-class (which, for a button, only applies while the button is "actively" being clicked) instead of the `:focus` pseudo-class.
```css .button { background-color:aqua; } /* default color for all buttons */ .button:focus { background-color: rgba(255, 75, 75, .5); } ``` ```html ``` To keep the button's "focus" color, even when it loses the focus to another non-button element, you'd set up a `click` event handler for each button that adds the same class to the clicked button and removes it from all the others: ```js // Get all the relevant buttons into an array var buttons = Array.prototype.slice.call(document.querySelectorAll(".button")); // Loop through the buttons buttons.forEach(function(btn){ // Set up a click event handler for the button btn.addEventListener("click", function(){ // Loop through all the buttons and reset the colors back to default buttons.forEach(function(btn){ btn.classList.remove("focus"); }); this.classList.add("focus"); // Add the class to the one button that got clicked }); }); ``` ```css .button { background-color:aqua; } /* default color for all buttons */ .focus { background-color: rgba(255, 75, 75, .5); } ``` ```html Other things to click on: ``` Upvotes: 2 <issue_comment>username_2: ``` window.location.href = "pageName.aspx" ``` makes you go to another page. In this case the other page loads up. This makes you lose the colors that you have set. You can set the color from code-behind to not lose it. Or check which page you are on with window.location.href, and depending on this result set the color on load. **EDIT:** Since you have them on the master page you can do the following. ``` protected void Page_Load(object sender, EventArgs e) { SetCurrentPage(); } private void SetCurrentPage() { var pageName = Request.Url.AbsolutePath; switch (pageName) { case "/Default.aspx": t1.Attributes["class"] = "button tabColor"; break; case "/Default2.aspx": t2.Attributes["class"] = "button tabColor"; break; case "/Default3.aspx": t3.Attributes["class"] = "button tabColor"; break; } } ``` Add `runat="server"` to the controls so you can access them in code-behind.
``` ``` Make a CSS class for the tab color: ``` .tabColor { background-color: #ff0000; } ``` And to answer your question about how to access a Master page control from a child page: ``` var someControl = this.Master.FindControl("controlName"); ``` Upvotes: 0
<issue_start>username_0: I know how to set the inserted value if it is an input control. Example: ``` ``` This code will still display your inserted value after the submit button is clicked. But how should I apply it with the `dropdown` control? I tried to do it like this, but it's not working: ``` php if(!empty($genders)){ foreach ($genders as $row) { echo '<option value="'.$row['id'].'"'; if($row['id'] == $_SESSION['regData']['gender']){ echo set_select('gender', $_SESSION['regData']['gender']); } else { echo set_select('gender', $row['id']); } echo ''.$row['gender_title'].''; } } ?> ``` What is the correct way to apply it? Thank you.
<issue_start>username_0: So, I have a simple SQL query, which however seems to be bugged (or my `where-clause` is written wrong), as it doesn't return a value if I select on a specific field (`matflag`) with a specific value (`50`). The query is basically a `select from table1` with a subquery on `table2` where the `where-clause` just checks if the returned field from the subquery exists in `table1`: ``` Select distinct t1.matnum as matnum, t1.matflag as matflag, t1.factory as factory from table1 t1, (select matnum from table2 where technical_value = 'XX') t2 where t1.matnum = t2.matnum and t1.matnum = '60000000'; ``` This returns this output: ``` +----------+---------+---------+ | MATNUM | MATFLAG | FACTORY | +----------+---------+---------+ | 60000000 | | 001000 | | 60000000 | | 002000 | | 60000000 | | 003000 | | 60000000 | | 004000 | | 60000000 | | 005000 | +----------+---------+---------+ ``` If I, however add `and t1.matflag != '50'` to the end of the `where-clause`, the whole output disappears. ``` Select distinct t1.matnum as matnum, t1.matflag as matflag, t1.factory as factory from table1 t1, (select matnum from table2 where technical_value = 'XX') t2 where t1.matnum = t2.matnum and t1.matnum = '60000000' and t1.matflag != '50'; ``` Output: ``` +----------+---------+---------+ | MATNUM | MATFLAG | FACTORY | +----------+---------+---------+ ``` Additional information for the column `matflag`: It is a varchar2(2 Char) column, either filled with nothing or the value '50' or the value '10' or '20'. 
Now, if I change the where clause from `and t1.matflag != '50'` to `and t1.matflag is null`, the output is correct again: ``` Select distinct t1.matnum as matnum, t1.matflag as matflag, t1.factory as factory from table1 t1, (select matnum from table2 where technical_value = 'XX') t2 where t1.matnum = t2.matnum and t1.matnum = '60000000' and t1.matflag is null; ``` So this returns this output: ``` +----------+---------+---------+ | MATNUM | MATFLAG | FACTORY | +----------+---------+---------+ | 60000000 | | 001000 | +----------+---------+---------+ .... and so on, have a look at the first table above ``` So how does it return something if I select on `is null` but not if I select on `!= '50'`? (Sidenote: changing `!=` to `<>` didn't help either) How does `matflag is null` apply but `matflag != '50'` not? We run the Oracle Database 11g Release 11.2.0.3.0 - 64bit Production.<issue_comment>username_1: Learn how to use proper explicit `JOIN` syntax: ``` Select distinct t1.matnum as matnum, t1.matflag as matflag, t1.factory as factory from table1 t1 join table2 t2 on t1.matnum = t2.matnum where t2.technical_value = 'XX' and t1.matnum = '60000000'; ``` Then learn about `NULL` values and how they fail almost every comparison, including `<>`. The logic you want is: ``` where t2.technical_value = 'XX' and t1.matnum = '60000000' and (matflag <> '50' or matflag is null) ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: In SQL `NULL` means "unknown" value, thus any comparison with it will result in `NULL`. Try `COALESCE(t1.matflag, '0') <> '50'`... Upvotes: 1
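The three-valued-logic behaviour described in username_1's answer can be reproduced with any SQL engine, not just Oracle. Here is a minimal sketch using Python's built-in sqlite3; the table and values are made up for illustration:

```python
import sqlite3

# Minimal reproduction: a NULL matflag fails *every* comparison, so
# "matflag != '50'" silently drops the NULL rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (matflag TEXT)")
conn.executemany("INSERT INTO t1 VALUES (?)", [("50",), ("10",), (None,)])

# NULL != '50' evaluates to UNKNOWN, which WHERE treats like FALSE.
without_null = conn.execute(
    "SELECT matflag FROM t1 WHERE matflag != '50'").fetchall()
print(without_null)  # [('10',)] -- the NULL row has vanished

# Handling NULL explicitly keeps it, as in the accepted answer.
with_null = conn.execute(
    "SELECT matflag FROM t1 WHERE matflag != '50' OR matflag IS NULL"
).fetchall()
print(with_null)  # the '10' row plus the NULL row
```

The same `... OR column IS NULL` (or a `COALESCE` with a matching type) is the portable fix in Oracle as well.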
2018/03/14
654
2,094
<issue_start>username_0: I get the following error: > > 18/03/14 15:31:11 ERROR ApplicationMaster: User class threw exception: > org.apache.spark.sql.AnalysisException: Table or view not found: > products; line 1 pos 42 > > > This is my code: ``` val spark = SparkSession .builder() .appName("Test") .getOrCreate() val products = spark.read.parquet(productsPath) products.createGlobalTempView("products") val q1 = spark.sql("SELECT PERCENTILE(product_price, 0.25) FROM products").map(_.getAs[Double](0)).collect.apply(0) ``` What am I doing wrong? Is it possible to do the same thing in Spark without using `sql`?<issue_comment>username_1: All the global temporary views are created in Spark preserved temporary `global_temp` database. Below should work- ```scala val q1 = spark.sql("""SELECT PERCENTILE(product_price, 0.25) FROM global_temp.products""").map(_.getAs[Double](0)).collect.apply(0) ``` Spark has 2 different types of views, `Tempview` and `globalTempView`, see post [here](https://stackoverflow.com/questions/42774187/spark-createorreplacetempview-vs-createglobaltempview) for more details. 
Upvotes: 1 <issue_comment>username_2: **TEMPORARY VIEW** Just use `createOrReplaceTempView` as ``` products.createOrReplaceTempView("products") val q1 = spark.sql("SELECT PERCENTILE(product_price, 0.25) FROM products").map(_.getAs[Double](0)).collect.apply(0) ``` **GLOBAL TEMPORARY VIEW** If you use [global temp view](https://spark.apache.org/docs/latest/sql-programming-guide.html#global-temporary-view) then you should do ``` products.createGlobalTempView("products") val q1 = spark.sql("SELECT PERCENTILE(product_price, 0.25) FROM global_temp.products").map(_.getAs[Double](0)).collect.apply(0) ``` Upvotes: 4 [selected_answer]<issue_comment>username_3: If you want to use sql API you can try ``` import org.apache.spark.sql.expressions.Window val wdw = Window.partitionBy($"Field1", $"Field2").orderBy($"Field".asc) products.withColumn("percentile",functions.ntile(100).over(wdw)) ``` Upvotes: 1
2018/03/14
1,328
5,212
<issue_start>username_0: Problem
=======

I have forms that are hosted by a 3rd party service. We'll call this www.3rdpartyform.com. I have my site, www.mysite.com. I want to be able to track traffic using google analytics campaigns going to www.3rdpartyform.com.

My Solution
===========

I've created a landing page www.mysite.com/redirected. It's used like so:

www.mysite.com/redirected/?redirected_url=www.3rdpartyform.com/theform&utm_googlestuff=stuff

The page is essentially this:

```
<?php $redirection_link = htmlspecialchars($_GET['redirected_url']); ?>

(google analytics script)

var retryAttempts = 0;

function checkIfAnalyticsLoaded() {
    console.log('Checking if GA loaded.');
    if (window.ga && ga.create) {
        console.log('GA loaded.');
        redirect();
    } else if (window.urchinTracker) {
        console.log('Old GA loaded.');
        redirect();
    } else if (retryAttempts < 10) {
        retryAttempts += 1;
        setTimeout(checkIfAnalyticsLoaded, 500);
    } else {
        console.log('GA not loaded')
        redirect();
    }
}

function redirect() {
    console.log('Redirecting');
    setTimeout(function () {
        window.location.href = '<?php echo $redirection_link; ?>';
    }, <?php echo $time_to_redirect * 1000 ?>);
}

checkIfAnalyticsLoaded();
```

How I think it will work
------------------------

Someone clicks the link, the page loads my google analytics and sees the campaign URL parameters and logs the hit, then the redirect takes the user to the form at www.3rdpartyform.com/theform.

My Question
===========

I tried searching for a solution but could not think of what to search for that gave relevant results. Is there a better solution to my current problem than what I'm attempting? Would google analytics even register a view/campaign (even if it's a bounce) with this solution?

**Thanks!**

Updated Solution
================

This is my updated solution so far based off of [Max](https://stackoverflow.com/users/359650/max)'s response.
To negate possible negative impacts that a redirect would have on my SEO, I've added nofollow, noindex metadata. Not 100% on this solution but seems logical.

To address the **Race Condition** I've added a function to check if GA is loaded.

As for the **Poor UX/Speed** issue, I don't see this as a problem with my current implementation/target audience configuration at this time. I've updated my solution above.

**Issues Remaining**

Would like to use GA events as suggested by [SMX](https://stackoverflow.com/users/4006592/smx) to prevent hits to this page being counted as bounces. Haven't messed with GA events yet though and would need to learn about them, but have to move on to the next project for now.<issue_comment>username_1: Yes, this is probably the most reasonable solution for your analytics. GA will register hits on your landing page and attribute campaign information to them, as intended. Of course, note that what you'll be measuring is landing page visits: there's no way you can track actions people perform on your third-party forms site (pageviews, clicks, etc.).

The way you currently have it set up, all visits to your landing page will likely count as bounces, since the user will touch your site only once. To prevent this, you could add a line of javascript in your header to send another hit (e.g. event) to GA.

Upvotes: 0 <issue_comment>username_2: *This type of intermediate page with JavaScript redirects may be bad for SEO, but my SEO days are behind me and I'm not sure there, so I'm going to leave this point out.*

Other problems you didn't mention:

* **Race condition**: since the GA snippet is async, there is actually no guarantee it will load and track the pageview before the redirect occurs. Increasing the timeout is not a solution because of the below problem.
* **Poor UX/Speed**: before your users get a chance to load the 3rd party website, they will be subjected to 1 additional page load + 1 sec timeout, potentially several seconds of unnecessary wait, not great. Facebook do something similar, but they have a super fast infrastructure, yours is probably significantly slower. The solutions: **[Measurement Protocol](https://developers.google.com/analytics/devguides/collection/protocol/v1/)**: this would allow you [to track the redirect via `PHP`](https://github.com/theiconic/php-ga-measurement-protocol), thus guaranteeing tracking, and once the API call has been made by PHP, you could issue a `302` redirect, thus removing the timeout, the ugly redirect and potentially also the additional page load (you could have the client send an AJAX request to /redirected/ to make that call). This solution is quite some work however. [**Hit callback**](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#hitCallback): this solution is probably the best "value" (close to 100% reliability, not much implementation work). The idea is to disable the default behaviour of links (eg `return false;`) and use GA's [`hitcallback`](https://developers.google.com/analytics/devguides/collection/analyticsjs/field-reference#hitCallback) functionality to [track the click, then allow the link redirect to proceed](https://support.google.com/analytics/answer/1136920?hl=en). Upvotes: 3 [selected_answer]
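The Measurement Protocol option boils down to POSTing a URL-encoded hit to `https://www.google-analytics.com/collect`. A rough Python sketch of building such a payload (the parameter names come from the public Measurement Protocol v1 reference; the tracking and client IDs below are placeholders, not real values):

```python
from urllib.parse import urlencode

def build_pageview_payload(tracking_id, client_id, page_path):
    """Build a Measurement Protocol v1 body for a single pageview hit."""
    params = {
        "v": "1",            # protocol version
        "tid": tracking_id,  # GA property id, e.g. UA-XXXX-Y (placeholder)
        "cid": client_id,    # anonymous client id (a UUID in practice)
        "t": "pageview",     # hit type
        "dp": page_path,     # document path being tracked
    }
    return urlencode(params)

payload = build_pageview_payload("UA-XXXX-Y", "555", "/redirected")
print(payload)  # v=1&tid=UA-XXXX-Y&cid=555&t=pageview&dp=%2Fredirected
```

Server-side code would send this body before issuing the `302`, which removes both the race condition and the extra client-side wait.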
2018/03/14
785
2,569
<issue_start>username_0: I have a dataframe and I am looking to calculate the mean based on store and all stores. I created code to calculate the mean but I am looking for a way that is more efficient. DF ``` Cashier# Store# Sales Refunds 001 001 100 1 002 001 150 2 003 001 200 2 004 002 400 1 005 002 600 4 ``` DF-Desired ``` Cashier# Store# Sales Refunds Sales_StoreAvg Sales_All_Stores_Avg 001 001 100 1 150 290 002 001 150 2 150 290 003 001 200 2 150 290 004 002 400 1 500 290 005 002 600 4 500 290 ``` My Attempt I created two additional dataframes then did a left join ``` df.groupby(['Store#']).sum().reset_index().groupby('Sales').mean() ```<issue_comment>username_1: Use this, with `transform` and `assign`: ``` df.assign(Sales_StoreAvg = df.groupby('Store#')['Sales'].transform('mean'), Sales_All_Stores_Avg = df['Sales'].mean()).astype(int) ``` Output: ``` Cashier# Store# Sales Refunds Sales_All_Stores_Avg Sales_StoreAvg 0 1 1 100 1 290 150 1 2 1 150 2 290 150 2 3 1 200 2 290 150 3 4 2 400 1 290 500 4 5 2 600 4 290 500 ``` Upvotes: 2 <issue_comment>username_2: I think you need [`DataFrameGroupBy.transform`](https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html) for a new column filled with aggregate values computed by `mean`: ``` df['Sales_StoreAvg'] = df.groupby('Store#')['Sales'].transform('mean') df['Sales_All_Stores_Avg'] = df['Sales'].mean() print (df) Cashier# Store# Sales Refunds Sales_StoreAvg Sales_All_Stores_Avg 0 1 1 100 1 150 290.0 1 2 1 150 2 150 290.0 2 3 1 200 2 150 290.0 3 4 2 400 1 500 290.0 4 5 2 600 4 500 290.0 ``` Upvotes: 4 [selected_answer]
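Both answers rely on the fact that `transform` (unlike an aggregation) returns a result aligned to the original rows, so it can be assigned straight back as a column. A self-contained sketch rebuilding the sample frame (assuming pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({
    "Cashier#": [1, 2, 3, 4, 5],
    "Store#":   [1, 1, 1, 2, 2],
    "Sales":    [100, 150, 200, 400, 600],
    "Refunds":  [1, 2, 2, 1, 4],
})

# Per-store mean, broadcast back to every row of that store.
df["Sales_StoreAvg"] = df.groupby("Store#")["Sales"].transform("mean")

# Scalar overall mean, broadcast to every row.
df["Sales_All_Stores_Avg"] = df["Sales"].mean()

print(df)
```

Store 1 rows get 150, store 2 rows get 500, and every row gets the overall mean of 290 — no intermediate frames or joins needed.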
2018/03/14
846
3,035
<issue_start>username_0: Updated question based on comments: Project P is made up of submodules/mini-projects A, B, C, D, E.

> Please note that A, B, C, D, E are directories which house their own projects, e.g. A: Web, B: Analytics, C: Devops, D: does_somethings, E: Extra_features and so on. In other words, each of A-E is its own repository.

A can have b1, b2, b3 branches which were created or checked out by user1. B can have x1, x2, x3 branches, again by user1. And so on. So each subfolder A, B, C, D, E can have multiple unmerged/merged branches.

My question is, is there a command that will automatically tell me what branch is active on which repository (only A, B, C, D, E, i.e. first level only) present under P? Right now I'm `cd`'ing into each subfolder and then typing 'git branch'. So if I have 10 subfolders, I have to cd into them 10 times and do git branch another 10 times.

I checked this: <https://stackoverflow.com/a/2421063/4590025>

`git log --graph --pretty=oneline --abbrev-commit`

but that is not what I'm looking for. I am looking for something like a bird's eye view.<issue_comment>username_1: Let's consider a subfolder as a feature one developer is working on. Locating a piece of work is easier when the team follows a strategy of descriptive branch naming. Example: the branch name "P/A/b1" tells you that there is work in progress in the P/A/b1 location.

Upvotes: 0 <issue_comment>username_2: Git only works on one repository at a time. A repository consists of object and reference databases and additional files as described in [the documentation](https://www.kernel.org/pub/software/scm/git/docs/gitrepository-layout.html). A normal (non-bare) repository has one single work-tree, in which you do your work. A work-tree can contain subdirectories, but these are just directories within the work-tree.

A work-tree can also contain, as sub-directories, *submodules*.
These are Git repositories in their own right but are referenced by the containing *superproject* (the higher level Git repository). If you are working with submodules, there are Git commands for dealing with each submodule (e.g., `git submodule foreach`). Essentially, these run sub-commands inside the sub-repositories. [See the `git submodule` documentation for details.](https://www.kernel.org/pub/software/scm/git/docs/git-submodule.html) This just automates what I'm about to suggest in the next paragraph. If you are using `git submodule foreach` itself, you still have to write the command. Otherwise, e.g., you have a top level directory that contains *N* sub-directories each of which is an independent repository, you *must* run *N* separate `git` commands within each sub-directory to inspect the independent repositories. There is no Git command to do that. It's pretty trivial to write a *shell* command (with a loop) that does it, though: ``` for i in */; do \ (cd $i && echo -n "${i}: " && git rev-parse --abbrev-ref HEAD); \ done ``` (this assumes a BSD or Linux compatible `echo`). Upvotes: 4
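For what it's worth, the same bird's-eye view can be produced without running `git` at all: in a normal checkout, each repository records its current branch in `.git/HEAD` as a line like `ref: refs/heads/master`. A rough Python sketch (not a Git command — just an illustration that only handles the symbolic-ref case; a detached HEAD stores a raw commit hash instead):

```python
import os

def current_branch(repo_dir):
    """Return the checked-out branch of repo_dir, or None if it is not a
    repository (or HEAD is detached / in an unexpected format)."""
    head_path = os.path.join(repo_dir, ".git", "HEAD")
    try:
        with open(head_path) as f:
            head = f.read().strip()
    except OSError:
        return None
    if head.startswith("ref: refs/heads/"):
        return head[len("ref: refs/heads/"):]
    return None

def branches_under(parent_dir):
    """Map each first-level subdirectory of parent_dir to its current branch."""
    return {name: current_branch(os.path.join(parent_dir, name))
            for name in sorted(os.listdir(parent_dir))
            if os.path.isdir(os.path.join(parent_dir, name))}

# For the layout in the question, branches_under("P") would yield something
# like {"A": "b1", "B": "x2", ...} (names here are the question's examples).
```

This skips submodule machinery entirely and works for any directory of independent repositories.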
2018/03/14
677
2,725
<issue_start>username_0: I have a panel dataset in Stata that contains payroll data for 261 employers over two years. Each agency has a unique ID variable, as does each employee. Each row of data is a pay period. I'm trying to figure out how to count the number of employees for each agency.

I was easily able to count the number of pay periods per employee using `by employee: gen pp_id = _n` but this does not work for counting employees within an agency. I've tried using `egen employeecount = count(employee), by(agency)`, but that seems to add up the value of the employee IDs rather than counting the number (so an agency with employees 5, 15, and 20 would have employeecount 40 instead of 3).

Is there a solution for this? Is there another way entirely that I should be approaching this? Thank you!
2018/03/14
446
1,634
<issue_start>username_0: I try to reloadData on UITableView at viewWillAppear. But the deselectRow animation is not working well. How can I do reloadData & deselectRow animation?

```
override func viewWillAppear(_ animated: Bool) {
    self.tableView.reloadData()
    if let indexPathForSelectedRow = tableView.indexPathForSelectedRow {
        self.tableView.deselectRow(at: indexPathForSelectedRow, animated: true)
    }
    super.viewWillAppear(animated)
}
```

and below is different. The fade animation duration is a little bit short.

```
override func viewWillAppear(_ animated: Bool) {
    if (self.tableView.indexPathForSelectedRow != nil){
        self.tableView.reloadRows(at: [self.tableView.indexPathForSelectedRow!], with: .fade)
    }
    super.viewWillAppear(animated)
}
```<issue_comment>username_1: I can update the cell with the deselect rows animation.

```
override func viewWillAppear(_ animated: Bool) {
    var indexes = self.tableView.indexPathsForVisibleRows
    var i = 0
    for records in self.tableView.visibleCells {
        records.textLabel?.text = textDataArray[indexes![i].row]
        i = i + 1
    }
}
```

Upvotes: 0 <issue_comment>username_2: Instead of reloading the whole table, you can just update the contents of the cell directly:

```
override func viewWillAppear(_ animated: Bool) {
    let indices = self.tableView.indexPathsForVisibleRows ?? []
    for index in indices {
        guard let cell = self.tableView.cellForRow(at: index) else { continue }
        cell.textLabel?.text = textDataArray[index.row]
    }
    super.viewWillAppear(animated)
}
```

Upvotes: 2 [selected_answer]
2018/03/14
1,856
3,396
<issue_start>username_0:

```
file_location3 = "F:/python/course1_downloads/City_Zhvi_AllHomes.csv"
housing = pd.read_csv(file_location3)
housing.set_index(['State','RegionName'],inplace=True)
housing = housing.iloc[:, 49:]
housing = housing.groupby(pd.PeriodIndex(housing.columns,freq='Q'),axis=1).mean()
data = housing
data = data.iloc[:,'2008q3' : '2009q2']
```

The error that I am getting is:

> cannot do slice indexing on <class 'pandas.core.indexes.period.PeriodIndex'> with these indexers [2008q3] of <class 'str'>

Now I'm getting another error
-----------------------------

```
def price_ratio(row):
    return (row['2008q3'] - row['2009q2']) / row['2008q3']

data['up&down'] = data.apply(price_ratio, axis=1)
```

This gives me the error: `KeyError: ('2008q3', 'occurred at index 0')`<issue_comment>username_1: Try:

```
data.loc[:,'2008q3':'2009q2']
```

Upvotes: 3 <issue_comment>username_2: Thanks @Scott for helping me out. After trying a lot, I got it to work. I converted the data I had to a DataFrame and then performed the above operation, and it worked:

```
data = pd.DataFrame(housing)
data = data.loc[:,'2008q3':'2009q2']
data = data.reset_index()
data.columns = ['State', 'RegionName', '2008Q3', '2008Q4', '2009Q1', '2009Q2']

def price_ratio(difference):
    return difference['2008Q3'] - difference['2009Q2']

data['Diff'] = data.apply(price_ratio,axis=1)
```

Upvotes: 2 <issue_comment>username_3: Transforming the columns like so:

```
data.columns = data.columns.astype(str)
```

will fix the issue.
You can visualize the problem: With PeriodIndex: ``` >>> print(data.columns) Index([2000Q1, 2000Q2, 2000Q3, 2000Q4, 2001Q1, 2001Q2, 2001Q3, 2001Q4, 2002Q1, 2002Q2, 2002Q3, 2002Q4, 2003Q1, 2003Q2, 2003Q3, 2003Q4, 2004Q1, 2004Q2, 2004Q3, 2004Q4, 2005Q1, 2005Q2, 2005Q3, 2005Q4, 2006Q1, 2006Q2, 2006Q3, 2006Q4, 2007Q1, 2007Q2, 2007Q3, 2007Q4, 2008Q1, 2008Q2, 2008Q3, 2008Q4, 2009Q1, 2009Q2, 2009Q3, 2009Q4, 2010Q1, 2010Q2, 2010Q3, 2010Q4, 2011Q1, 2011Q2, 2011Q3, 2011Q4, 2012Q1, 2012Q2, 2012Q3, 2012Q4, 2013Q1, 2013Q2, 2013Q3, 2013Q4, 2014Q1, 2014Q2, 2014Q3, 2014Q4, 2015Q1, 2015Q2, 2015Q3, 2015Q4, 2016Q1, 2016Q2, 2016Q3], dtype='object') ``` After setting `data.columns = data.columns.astype(str)` ``` >>> print(data.columns) Index(['2000Q1', '2000Q2', '2000Q3', '2000Q4', '2001Q1', '2001Q2', '2001Q3', '2001Q4', '2002Q1', '2002Q2', '2002Q3', '2002Q4', '2003Q1', '2003Q2', '2003Q3', '2003Q4', '2004Q1', '2004Q2', '2004Q3', '2004Q4', '2005Q1', '2005Q2', '2005Q3', '2005Q4', '2006Q1', '2006Q2', '2006Q3', '2006Q4', '2007Q1', '2007Q2', '2007Q3', '2007Q4', '2008Q1', '2008Q2', '2008Q3', '2008Q4', '2009Q1', '2009Q2', '2009Q3', '2009Q4', '2010Q1', '2010Q2', '2010Q3', '2010Q4', '2011Q1', '2011Q2', '2011Q3', '2011Q4', '2012Q1', '2012Q2', '2012Q3', '2012Q4', '2013Q1', '2013Q2', '2013Q3', '2013Q4', '2014Q1', '2014Q2', '2014Q3', '2014Q4', '2015Q1', '2015Q2', '2015Q3', '2015Q4', '2016Q1', '2016Q2', '2016Q3'], dtype='object') ``` You'll know it worked because `data.loc['Texas'].loc['Austin'].loc['2002Q3']` will work, instead of having to use `data.loc['Texas'].loc['Austin'].loc[pd.Period('2002Q3')]` if you need them in lowercase, e.g. `2001q3` instead of `2001Q3`: ``` data.columns = list(map(str.lower, data.columns.astype(str))) ``` Upvotes: 1
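The underlying issue in this thread is that `groupby(pd.PeriodIndex(...), axis=1).mean()` leaves `Period` objects as column labels, which `.iloc` cannot slice with strings at all. A small sketch of the failure and the fixes from the answers (assuming pandas is installed; the three-column frame is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame([[1.0, 2.0, 3.0]],
                  columns=pd.PeriodIndex(["2008Q3", "2008Q4", "2009Q1"], freq="Q"))

# .iloc is purely positional, so string labels reproduce the question's error.
try:
    df.iloc[:, "2008Q3":"2009Q1"]
    iloc_failed = False
except (TypeError, ValueError) as exc:
    iloc_failed = True
    print("iloc failed:", exc)

# .loc slices by label, and quarter strings are parsed against a PeriodIndex.
by_label = df.loc[:, "2008Q3":"2008Q4"]

# After astype(str), the columns are plain strings and behave like any label.
df.columns = df.columns.astype(str)
by_string = df.loc[:, "2008Q3":"2008Q4"]
print(list(by_string.columns))
```

Either fix — `.loc` or `astype(str)` — avoids positional indexing on `Period` labels.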
2018/03/14
662
2,509
<issue_start>username_0: Kotlin introduces the wonderful concept of Data Classes. These classes will derive the `equals()/hashCode()`, `toString()`, `getters()/setters()`, and a `copy()` function based on the properties declared in the constructor:

`data class KotlinUser(val name: String, val age: Int)`

In Java, this would look something like:

```
public class JavaUser {

    public JavaUser(String name, int age) {
        ...
    }

    //getters
    //setters
    //equals()/hashCode()
    //toString()
}
```

My question is about the packaging of these data class files in Kotlin. Coming from Java I would store `JavaUser` in its own Class file under: `org.package.foo.JavaUser`

Due to the simplicity of a Data Class, do we store Data Class files the same way in Kotlin? (I.e. `org.package.foo.KotlinUser` and separate files for each Data Class).

Also, is it frowned upon to store multiple Data Classes in one Class file?:

`org.package.foo.DataClasses` contains:

```
data class Foo(val a: String, val b: String)
data class Bar(val a: Int, val b: Int)
```

I looked around in the idioms/coding style sections of the Kotlin Documentation and could not find anything about this (maybe I skimmed past it though). What is the best practice? Thanks!<issue_comment>username_1: The book [Kotlin in Action](https://www.manning.com/books/kotlin-in-action) says about source code layout (chapter 2.2.3, page 27):

> In Kotlin, you can put multiple classes in the same file and choose any name for that file.

> ...

> In most cases, however, it's still good practice to follow Java's directory layout and to organize files into directories according to the package structure. Sticking to that structure is especially important in projects where Kotlin is mixed with Java.

> ...

> But you shouldn't hesitate to pull multiple classes into the same file, especially if the classes are small (and in Kotlin, they often are).
> > > So to answer your questions: it depends :) Upvotes: 3 <issue_comment>username_2: The coding style conventions give [quite explicit guidance](http://kotlinlang.org/docs/reference/coding-conventions.html#source-file-organization) on this: > > Placing multiple declarations (classes, top-level functions or properties) in the same Kotlin source file is encouraged as long as these declarations are closely related to each other semantically and the file size remains reasonable (not exceeding a few hundred lines). > > > Upvotes: 5 [selected_answer]
2018/03/14
459
1,769
<issue_start>username_0: I just get started with Android studio but I have ran into a problem in the beginning stage of setup. I have created my virtual device using AVD manager but whenever i hit 'run' button, it ask me to select device to run on but the drop box(Prefer android virtual device) doesnt show mine. what am i missing?[enter image description here](https://i.stack.imgur.com/bBCzE.png) I have added link of the pictures. i apology for the inconvinience. this site doesnt allow me to post pictures yet
2018/03/14
982
2,261
<issue_start>username_0: Hey guys I dont usually use regex so I need a bit of help to get some matches from the below string. I only want the information in bold to match the regex expression, any help or explanation would be appreciated thanks. '"FM 2222 RD / RIVER PLACE BLVD","0:",,"18:","00","**2008-08-14**","**CRASH/LEAVING THE SCENE**","30.39452568","(30.39452568-97.84551164)","-97.84551164","18:00:00","4","2008-08-14 18:00:00-06:00","20085043619"'<issue_comment>username_1: If what you need is to detect the date and the field next to it, you could use the following regex: ``` expression = '"\d{4}-\d{2}-\d{2}","[^"]*"' ``` Working example: ``` import re my_str = '"FM 2222 RD / RIVER PLACE BLVD","0:",,"18:","00","2008-08-14","CRASH/LEAVING THE SCENE","30.39452568","(30.39452568-97.84551164)","-97.84551164","18:00:00","4","2008-08-14 18:00:00-06:00","20085043619"' expression = '"\d{4}-\d{2}-\d{2}","[^"]*"' re.findall(expression, my_str) # returns ['"2008-08-14","CRASH/LEAVING THE SCENE"'] ``` Upvotes: 2 <issue_comment>username_2: The string you provided looks like a line from a csv file and another option is to use Pythons [csv module](https://docs.python.org/3/library/csv.html). 
Since I don't know if you have a file or list full of these strings, this example shows how you could take this single string and read it into [`csv.reader`](https://docs.python.org/3/library/csv.html#csv.reader) using [`io.StringIO`](https://docs.python.org/3/library/io.html?highlight=io%20stringio#io.StringIO) (Python 3.6.4) ``` In[2]: s = '"FM 2222 RD / RIVER PLACE BLVD","0:",,"18:","00","2008-08-14","CRASH/LEAVING THE SCENE","30.39452568","(30.39452568-97.84551164)","-97.84551164","18:00:00","4","2008-08-14 18:00:00-06:00","20085043619"' ...: In[3]: import csv ...: import io ...: ...: reader = csv.reader(io.StringIO(s)) ...: for line in reader: ...: address, a, b, c, d, date, msg, *stuff = line ...: print(date) ...: print(msg) ...: 2008-08-14 CRASH/LEAVING THE SCENE ``` If you had the actual csv file with the header, you could use [`csv.DictReader`](https://docs.python.org/3/library/csv.html#csv.DictReader) and then do something like `print(line['date'])` (assuming the key was 'date'). Upvotes: 0
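A small stdlib-only variation on the regex answer, using named groups so the two captured fields are labelled (the group names `date` and `desc` are my own, not from the question):

```python
import re

line = ('"FM 2222 RD / RIVER PLACE BLVD","0:",,"18:","00","2008-08-14",'
        '"CRASH/LEAVING THE SCENE","30.39452568","(30.39452568-97.84551164)",'
        '"-97.84551164","18:00:00","4","2008-08-14 18:00:00-06:00","20085043619"')

# A quoted ISO date immediately followed by another quoted field; named
# groups let us pull each piece out by name instead of by position.
pattern = re.compile(r'"(?P<date>\d{4}-\d{2}-\d{2})","(?P<desc>[^"]*)"')
match = pattern.search(line)
print(match.group("date"))  # 2008-08-14
print(match.group("desc"))  # CRASH/LEAVING THE SCENE
```

The trailing `"2008-08-14 18:00:00-06:00"` field does not match because the date there is followed by a space, not by `","`.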
2018/03/14
688
2,746
<issue_start>username_0: When trying to use [Microsoft Dynamics 365 SDK Core Assemblies](https://www.nuget.org/packages/Microsoft.CrmSdk.CoreAssemblies/) in a .NET Core 2.0 project, the following error occurs at runtime simply by `using Microsoft.Xrm.Sdk`: > > TypeLoadException: Could not load type > 'System.ServiceModel.Description.MetadataConversionError' from > assembly 'System.ServiceModel, Version=4.0.0.0, Culture=neutral, > PublicKeyToken=b77a5c561934e089'. > > > It looks like the Core Assemblies (Microsoft.Xrm.Sdk.Client) may simply not be compatible with anything other than ~net4x. Is there any obvious way to get around this error or load the WCF `System.ServiceModel` class/interfaces needed by `Microsoft.Xrm.Sdk` in the context of target `netcoreapp2.0`? Is it possible to use [Microsoft.Windows.Compatibility](https://www.nuget.org/packages/Microsoft.Windows.Compatibility) to bridge the gap? It looks like the Microsoft.Windows.Compatibility pack [documentation](https://learn.microsoft.com/en-us/dotnet/core/porting/windows-compat-pack) indicates **Windows Communication Foundation (WCF)** classes/interfaces are "available". How can I use the compatibility pack to perhaps load `System.ServiceModel.Description`? Thank you for any help you can provide!<issue_comment>username_1: I tried all possible things and can say that SDK, ServiceModel etc are not compatible with .net core and never will be, according to multiple discussions on github. 
However, I was able to do this:

* Use XrmToolBox and crmsvcutil.exe to generate models (optional)
* place them in a netstandard2 project
* reference the XRM SDK from nuget
* the SDK works under .net core in the part where LINQ queries and raw QueryExpressions are translated to subclasses of OrganizationRequest
* write a custom IOrganizationService which serializes OrganizationRequests and sends them to some other app
* the other app is a .net core web api which references that project and the XRM SDK, but runs on **full framework on windows** and executes the actual requests, serializes the responses and sends them back.

IMPORTANT EDIT: I found out that SDK 2016 doesn't work reliably in .net core on linux due to various reasons, and stopped at 2011 (the nuget package is `Microsoft.Xrm.Sdk.2011`). It works fine except in one case: when you do `context.AddObject` and pass an Entity **with no ID**. The SDK relies on p/invoking a native Windows library to create a sequential UUID and crashes on Linux. You can overcome this by setting the ID prior to calling `.AddObject()`.

Upvotes: 4 [selected_answer]<issue_comment>username_2: I had the same issue and it got resolved when I selected the template Console Application (.Net Framework) in Visual Studio instead of Console Application (.Net Core).

Upvotes: 0
2018/03/14
1,117
3,515
<issue_start>username_0: I am using C++ in native mode with Visual Studio 2017 and I am **trying** to compile and run the example code found at [Debugging a Parallel Application in Visual Studio](https://learn.microsoft.com/en-us/visualstudio/debugger/walkthrough-debugging-a-parallel-application). For the record, I program in C not C++. I am clueless when it comes to method declarations (among many other things). I suspect correcting the error is simple, but I simply don't know how. In other words, I am currently RTFineM.

I simply copied and pasted the example given in the url above and ran into 2 problems. First it complained about something being deprecated but a simple define took care of that problem. Second it complained about not being able to convert a type into another as stated in the title. The RunFunc class causing the problem is declared as follows:

```
class RunFunc
{
    Func& m_Func;
    int m_o;
public:
    RunFunc(Func func,int o):m_Func(func),m_o(o)
    {
    };
    void operator()()const
    {
        m_Func(m_o);
    };
};
```

**My question/request is:** how does the declaration of RunFunc need to be in order for the example to compile and run properly? Thank you, much appreciate the help.<issue_comment>username_1: In this constructor

```
RunFunc(Func func,int o):m_Func(func),m_o(o)
{
};
```

the parameter `Func func` is adjusted by the compiler to the type `Func *func`. On the other hand the data member `m_Func` is declared as a reference type.

```
Func& m_Func;
```

And the error message says about the incompatibility of the types.

> C2440: cannot convert from 'void (__cdecl *)(int)' to 'void (__cdecl &)(int)'

Try to declare the constructor like

```
RunFunc(Func &func,int o):m_Func(func),m_o(o)
{
};
```

Or declare the data member like

```
Func *m_Func;
```

without changing the constructor.
Here are two demonstrative programs

```
#include <iostream>

typedef void Func( int );

class RunFunc
{
    Func& m_Func;
    int m_o;
public:
    RunFunc(Func &func,int o):m_Func(func),m_o(o)
    {
    };
    void operator()()const
    {
        m_Func(m_o);
    };
};

int main()
{
    return 0;
}
```

and

```
#include <iostream>

typedef void Func( int );

class RunFunc
{
    Func *m_Func;
    int m_o;
public:
    RunFunc(Func func,int o):m_Func(func),m_o(o)
    {
    };
    void operator()()const
    {
        m_Func(m_o);
    };
};

int main()
{
    return 0;
}
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: In your code you are trying to bind a reference to a temporary, namely to a copy of the argument passed to the constructor. You can try to run the following code snippet to see the difference:

```
#include <iostream>
using namespace std;

struct Func {
    int _i;
    void operator()(int i) {
        cout << i*_i << endl;
    }
};

class RunFunc
{
    Func& m_Func;
    int m_o;
public:
    RunFunc(Func &func, int o) :m_Func(func), m_o(o)
    // RunFunc(Func func, int o) :m_Func(func), m_o(o)
    {
    };
    void operator()()const
    {
        m_Func(m_o);
    };
};

int main()
{
    Func f{ 5 };
    RunFunc rf(f, 2);
    rf();
    return 0;
}
```

Upvotes: 1 <issue_comment>username_3: This is a legacy approach. You can use a standard library [functor and binder instead](https://www.tutorialspoint.com/cpp_standard_library/functional.htm). For example:

```
#include <functional>
#include <iostream>

static void my_callback(int i)
{
    std::cout << i << std::endl;
}

int _tmain(int argc, _TCHAR* argv[])
{
    std::function<void(void)> functor;
    functor = std::bind(my_callback, 1);
    functor();
    return 0;
}
```

Upvotes: 1
2018/03/14
1,093
4,573
<issue_start>username_0: I've configured a [spring cloud config server](https://cloud.spring.io/spring-cloud-config/single/spring-cloud-config.html#_encryption_and_decryption) to use oAuth2 for security. Everything is working well, except the encrypt end point. When I try to access `/encrypt` I get a 403 Forbidden. I am including the Authorization Bearer token in the header. Is there a way to allow the encrypt end point to be called when the server is secured with oAuth, or is it always blocked? Let me know if you would like to see any config files for this server. Just for reference, here are the things that are working. * calling `/encrypt/status` produces `{"status":"OK"}` * The git repository is being pulled because I can access a property file from the server. * oAuth authentication is working with Google because it takes me through the logon process. Here is the spring security settings. ``` security: require-ssl: true auth2: client: clientId: PROVIDED BY GOOGLE clientSecret: PROVIDED BY GOOGLE accessTokenUri: https://www.googleapis.com/oauth2/v4/token userAuthorizationUri: https://accounts.google.com/o/oauth2/v2/auth scope: - openid - email - profile resource: userInfoUri: https://www.googleapis.com/oauth2/v3/userinfo preferTokenInfo: true server: port: 8443 ssl: key-store-type: PKCS12 key-store: /spring-config-server/host/tomcat-keystore.p12 key-alias: tomcat key-store-password: ${KEYSTORE_PASSWORD} ``` Here are my dependencies from the POM file so you can see the version of the libraries I'm using. 
``` org.springframework.boot spring-boot-starter-parent 2.0.0.RELEASE UTF-8 UTF-8 1.8 Finchley.M8 org.springframework.cloud spring-cloud-config-server org.springframework.boot spring-boot-starter-test org.springframework.cloud spring-cloud-security org.springframework.cloud spring-cloud-dependencies ${spring-cloud.version} pom import ```<issue_comment>username_1: To fix this issue, I needed to extend WebSecurityConfigurerAdapter, and in the configure method I disabled the CSRF token.

```
http
    .csrf().disable()
    .antMatcher("/**")
    .authorizeRequests()
    .antMatchers("/", "/login**", "/error**")
    .permitAll()
    .anyRequest().authenticated();
```

Upvotes: 1 <issue_comment>username_2: I solved it by implementing this WebSecurityConfigurer. It disables CSRF and sets basic authentication. In Spring Boot 2.0.0 you cannot disable CSRF using properties; it forces you to implement a Java security config bean.

```
package my.package.config.server;

import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class WebSecurityConfiguration extends WebSecurityConfigurerAdapter {
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf().disable().authorizeRequests()
            .anyRequest().authenticated().and()
            .httpBasic();
    }
}
```

Hope it helps Upvotes: 4 <issue_comment>username_3: We must implement WebSecurityConfigurerAdapter in a configuration-related class so that the encrypt/decrypt services are accessible. Make sure that you have configured **secret.key** in bootstrap.properties or application.properties.
Upvotes: 2 <issue_comment>username_4: `WebSecurityConfigurerAdapter` is deprecated <https://spring.io/blog/2022/02/21/spring-security-without-the-websecurityconfigureradapter> Try the following instead:

```
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfiguration {
    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http.csrf().disable().authorizeRequests()
            .anyRequest().authenticated().and()
            .httpBasic();
        return http.build();
    }
}
```

Upvotes: 2
2018/03/14
618
1,694
<issue_start>username_0: Matching acronyms containing both lower- and upper-case letters (at least one or more lower- and upper-case letters, like reKHS), or all-capital acronyms of length 3 or more (CASE, CAT), in R. The regex should match both reKHS and CASE. This regex takes care of the latter case (matching acronyms of length 3 or more): `regex <- "\\b^[a-zA-Z]*${3,10}\\b";`. I would need to find a way to combine this with a regex matching words containing both lower and upper case.<issue_comment>username_1: A positive look-ahead or two should solve this ``` (.*(?=.*[a-z])(?=.*[A-Z]).*)|([A-Z]{3,}) ``` To explain: ``` Either contain a lower and upper case character somewhere (.*(?=.*[a-z])(?=.*[A-Z]).*) or | have at least 3 upper case characters ([A-Z]{3,}) ``` Upvotes: 1 <issue_comment>username_2: You may use a TRE-compliant pattern like ``` regex <- "\\b(?:[[:upper:]]{3,10}|(?:[[:lower:]]+[[:upper:]]|[[:upper:]][[:lower:]]*[[:upper:]])[[:alpha:]]*)\\b" ``` Or a PCRE regex (use with `perl=TRUE` in base R functions): ``` regex <- "\\b(?:\\p{Lu}{3,10}|(?:\\p{Ll}+\\p{Lu}|\\p{Lu}\\p{Ll}*\\p{Lu})\\p{L}*)\\b" ``` See the [regex demo](https://regex101.com/r/Vkr6lA/1) (and the [PCRE regex demo](https://regex101.com/r/Vkr6lA/3)). **Details**

* `\\b` - a word boundary
* `(?:` - either
  + `[[:upper:]]{3,10}` - 3 to 10 uppercase letters
* `|` - or
  + `(?:` - either
    - `[[:lower:]]+[[:upper:]]` - 1 or more lowercase and 1 uppercase
  + `|` - or
    - `[[:upper:]][[:lower:]]*[[:upper:]]` - an uppercase, then 0+ lowercase and then an uppercase letter
  + `)` - end of the grouping
  + `[[:alpha:]]*` - 0+ letters
* `)` - end of the alternation group
* `\\b` - a word boundary.

Upvotes: 0
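The combined rule from the answers (three or more capitals, or a mix of at least one lower-case and one upper-case letter) can be cross-checked outside R; here is a minimal Python sketch of the same two branches (the `is_acronym` helper is illustrative and not part of the R answers):

```python
import re

def is_acronym(word):
    # branch 1: all-caps acronyms of length 3 or more (CASE, CAT)
    if re.fullmatch(r"[A-Z]{3,}", word):
        return True
    # branch 2: purely alphabetic words mixing lower and upper case (reKHS)
    return (re.fullmatch(r"[A-Za-z]+", word) is not None
            and re.search(r"[a-z]", word) is not None
            and re.search(r"[A-Z]", word) is not None)

print([w for w in ["reKHS", "CASE", "CAT", "cat", "CA"] if is_acronym(w)])
# -> ['reKHS', 'CASE', 'CAT']
```

Splitting the check into two explicit branches mirrors the alternation in the TRE/PCRE patterns, and is often easier to debug than one combined expression.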
2018/03/14
358
1,222
<issue_start>username_0: I ran `mvn clean install` on a big Java project that I work on, but it kept failing because some files did not have the proper license headers. Well, that's not my concern right now; how do I skip that? The actual error I am seeing is: ``` Failed to execute goal org.codehaus.mojo:license-maven-plugin:1.14:add-third-party (default) on project test-project: There are some dependencies with no license, please fill the file /Users/test-project/src/license/THIRD-PARTY.properties ``` I also tried this Maven command, but it didn't work: ``` mvn clean install -Dlicense.skip=true ```<issue_comment>username_1: Try skipping the AddThirdParty mojo with `-Dlicense.skipAddThirdParty=true`. Upvotes: 4 [selected_answer]<issue_comment>username_2: I found this helpful: ``` $ mvn license:help -Ddetail | fgrep skip ``` which gave me ``` -Dlicense.skipDownloadLicenses ``` You don't need to add `=true`, because just defining it is enough. Upvotes: 2 <issue_comment>username_3: This one works for me ``` -Dlicense.skipCheckLicense ``` Upvotes: 0 <issue_comment>username_4: In my case, it worked with just `-Dlicense.skip`, without `=true`. ``` com.mycila license-maven-plugin 3.0 ``` Upvotes: 3
2018/03/14
594
2,262
<issue_start>username_0: I'm new to SQL and ran into a problem. Let's say I have a database for a bank which contains a table named accounts. I have id number 1, who has 1000 dollars, and id number 2, who has 700 dollars. I want to make a transaction between them in one go. I've tried to do the following: ``` update accounts set balance = balance + 100 where id = 2; set balance = balance - 100 where id = 3 ``` What happens is that account number 2 loses 100 dollars but account number 3 gets nothing. How can I do such a thing in one go to ensure there wouldn't be any transaction in between? Thank you<issue_comment>username_1: You can do this in one step: ``` update accounts set balance = (case when id = 2 then balance + 100 when id = 3 then balance - 100 else balance end) where id in (2, 3); ``` This is ANSI-standard syntax and most (if not all) databases should execute it "all-or-nothing" -- that is, either both updates take effect or neither. Upvotes: 3 [selected_answer]<issue_comment>username_2: You need to use transactions. Transactions are not an easy thing to learn and might take some time, so I suggest reading about them in the [documentation](https://learn.microsoft.com/en-us/sql/t-sql/language-elements/transactions-transact-sql). Especially the ACID properties of relational databases. For SQL Server, the basic steps are: ``` BEGIN TRY BEGIN TRANSACTION -- Start your transaction here /* You can do these 2 operations on the same statement in this case but I believe you want to learn the concept */ update accounts set balance = balance + 100 where id = 2 update accounts set balance = balance - 100 where id = 3 -- Do other operations like INSERTS, DELETES, etc. COMMIT -- You apply your changes here. From this point it will be visible to other users and will be persisted. END TRY BEGIN CATCH -- If something went wrong... IF @@TRANCOUNT > 0 -- ...
and the transaction is still open ROLLBACK -- revert all the operations done from the point of "BEGIN TRANSACTION" statement onwards RAISERROR('Something went horribly wrong!', 15, 1) END CATCH ``` Upvotes: 2
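The single-statement `CASE` approach from the first answer can be exercised end to end with SQLite; the starting balances and table layout below are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(2, 700), (3, 1000)])

# one UPDATE: both legs of the transfer are applied together or not at all
conn.execute("""
    UPDATE accounts
    SET balance = CASE WHEN id = 2 THEN balance + 100
                       WHEN id = 3 THEN balance - 100
                       ELSE balance END
    WHERE id IN (2, 3)
""")
conn.commit()
print(dict(conn.execute("SELECT id, balance FROM accounts")))
# -> {2: 800, 3: 900}
```

Because the transfer is one statement, there is no window in which another session can observe one side of it applied without the other.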
2018/03/14
710
2,687
<issue_start>username_0: I have read [this article](https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs), but I am still not sure whether I should store PDFs as page or block blobs in Azure Blob Storage. The documents are just corporate documents for archiving, i.e. they will never be modified but need to be accessed via web and downloaded. The size of each document varies between 50 kB and 5 MB. Any insights would be greatly appreciated.<issue_comment>username_1: You should use **block blobs**, since you don't need random read or write operations. If you really only need to archive files, consider using [Azure Archive storage](https://azure.microsoft.com/en-us/services/storage/archive/), which is the **lowest-priced** storage offer in Azure. Upvotes: 3 [selected_answer]<issue_comment>username_2: @Meneghino Using a **block blob** would be best for objects such as PDFs. Page blobs are suitable for VHDs; basically, by default when you create a VM resource, the VHDs get stored on **page blobs** due to their optimization for random read and write operations. **Page blobs** are a collection of 512-byte pages optimized for random read and write operations. To create a page blob, you initialize the page blob and specify the maximum size the page blob will grow. To add or update the contents of a page blob, you write a page or pages by specifying an offset and a range that align to 512-byte page boundaries. A write to a page blob can overwrite just one page, some pages, or up to 4 MB of the page blob. Writes to page blobs happen in-place and are immediately committed to the blob. The maximum size for a page blob is 8 TB. **Block blobs** let you upload large blobs efficiently. Block blobs are comprised of blocks, each of which is identified by a block ID. You create or modify a block blob by writing a set of blocks and committing them by their block IDs.
Each block can be a different size, up to a maximum of 100 MB (4 MB for requests using REST versions before 2016-05-31), and a block blob can include up to 50,000 blocks. The maximum size of a block blob is therefore slightly more than 4.75 TB (100 MB x 50,000 blocks). For REST versions before 2016-05-31, the maximum size of a block blob is a little more than 195 GB (4 MB x 50,000 blocks). If you are writing a block blob that is no more than 256 MB (64 MB for requests using REST versions before 2016-05-31) in size, you can upload it in its entirety with a single write operation. More information can be found here: <https://learn.microsoft.com/en-us/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs> Upvotes: 2
2018/03/14
624
2,253
<issue_start>username_0: I have a 1TB zpool and a 700GB volume with one clean snapshot, such as: ``` zpool1 zpool1/volume1 zpool1/volume1@snap1 ``` After writing 500GB of data into the volume, its written property has grown to 500GB as well. Then I tried to roll back to the snapshot and I got an error with "out of space". Does the zpool need extra space to roll back a snapshot with a big written value? Or can anyone explain why it fails?<issue_comment>username_1: Rolling back to a snapshot requires a little space (for updating metadata), but this is very small. From what you've described, I would expect nearly anything you write in the same pool / quota group to fail with `ENOSPC` at this point. If you run `zpool status`, I bet you'll see that the entire pool is almost entirely full, or if you are using quotas, perhaps you've eaten up all of whatever quota group it applies to. If this is not what you expected, it could be that you're using mirroring or RAID-Z, which causes duplicate bytes to be written (to allow corruption recovery). You can tell this by looking at the `used` physical bytes (instead of `written` logical bytes) in `zfs list`. Most of the data you added after the snapshot can be deleted once rollback has completed, but not before then (so rollback has to keep that data around until it completes). Upvotes: 1 <issue_comment>username_2: After searching the ZFS source code (dsl_dataset.c), I found that the last part of dsl_dataset_rollback_check() may explain this limit: ``` * When we do the clone swap, we will temporarily use more space * due to the refreservation (the head will no longer have any * unique space, so the entire amount of the refreservation will need * to be free). We will immediately destroy the clone, freeing * this space, but the freeing happens over many txg's.
 */
unused_refres_delta = (int64_t)MIN(ds->ds_reserved,
    dsl_dataset_phys(ds)->ds_unique_bytes);

if (unused_refres_delta > 0 &&
    unused_refres_delta >
    dsl_dir_space_available(ds->ds_dir, NULL, 0, TRUE)) {
        dsl_dataset_rele(ds, FTAG);
        return (SET_ERROR(ENOSPC));
}
```

So the volume's "avail" must be larger than its "refreserv" to perform the rollback. Only a thin volume can pass this check. Upvotes: 3 [selected_answer]
2018/03/14
540
1,998
<issue_start>username_0: Please help me: I'm creating a website using only HTML, and I want to know how to add a picture to the title which is shown in the browser.
2018/03/14
1,045
3,820
<issue_start>username_0: Ok, so my question is: is there a way to loop something until one of a choice of Strings is entered?

```
case "John":
    n = 12;
    break;
case "Jenny":
    n = 6;
    break;
default:
    System.out.print("Wrong Name");
```

Let's say in this I want to loop the user's input of the name until he uses any of the above case values. Now, I know that I can write a while loop and use the OR operator for each, but I have a lot of valid inputs, so is there a simpler way to loop until a correct switch name is entered by the user? If it's an incorrect one, I want to display "Wrong Name" and prompt the user to input again. I am using Java. Any help regarding this is greatly appreciated. Thanks a lot in advance.

```
public static int[] amount(int n) {
    int[] values = new int[6];
    int i;
    i = n + 6 + 6;
    values[a] = i;
    return values;
}

public static void returnarray() {
    int values[] = amount();
    int i = 0;
    if (values[i] % 2 == 0) {
        System.out.println("the value is an even value");
    } else {
        System.out.print("Not so even");
    }
}
```

The issue I am having is that when I try to return the array from the first method, the amount() method requires the parameter. I am not sure how to return the first array to the second method due to it having a parameter (int n). I'm not sure if I am making enough sense to you, and the code is not exactly how I typed it. I'll make it clearer: I need to return the n value from the switch into one method, where I will use that n value to do a certain calculation multiple times and store those values inside an array, and I will return this array into another method, where I will do another calculation and display the output. What I am having the issue with is how to return the array into the second method, because the first method has a parameter, which is (int n), as described by @kaushal28.<issue_comment>username_1: I would suggest a [`Map`](https://docs.oracle.com/javase/8/docs/api/java/util/Map.html) instead of a loop or a `switch` or `if`.
It has the advantage of being `O(1)` and clean to implement. Like,

```
Map<String, Integer> map = new HashMap<>();
map.put("John", 12);
map.put("Jenny", 6);

String key = ""; // <-- your name field.
if (map.containsKey(key)) {
    System.out.println(map.get(key)); // <-- 12 or 6
} else {
    System.out.println("Wrong name");
}
```

Upvotes: 1 <issue_comment>username_2: You can use an infinite loop for that and set a flag when one of the cases is encountered. For example:

```
boolean flag = false;
while (!flag) {
    // take user input here.
    switch (name) {
        case "John":
            n = 12;
            flag = true;
            break;
        case "Jenny":
            n = 6;
            flag = true;
            break;
        default:
            System.out.print("Wrong Name");
    }
}
```

**EDIT:** Instead of keeping a flag, you can use labels. For example:

```
loop:
while (true) {
    // take user input here.
    switch (name) {
        case "John":
            display(12); // passing value of n
            break loop;
        case "Jenny":
            display(6);
            break loop;
        default:
            System.out.print("Wrong Name");
    }
}

private void display(int n) {
    System.out.println(n);
}
```

Upvotes: 2 [selected_answer]<issue_comment>username_3: You can use a labeled break instead of the unlabeled <https://docs.oracle.com/javase/tutorial/java/nutsandbolts/branch.html>

```
getvalidname: // this is a label
while (true) {
    // take user input here.
    switch (nameInput) {
        case "John":
            n = 12;
            break getvalidname; // breaks out of the statement just after the label
        case "Jenny":
            n = 6;
            break getvalidname;
        default:
            System.out.print("Wrong Name");
    }
}
```

Upvotes: 0
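The retry-until-valid idea behind these answers is language-agnostic; here is the same lookup-table pattern as a short Python sketch (the helper name and the `prompts` list, which stands in for interactive user input, are invented):

```python
def lookup_until_valid(prompts, table):
    # consume candidate inputs until one matches a known name
    for name in prompts:
        if name in table:
            return table[name]
        print("Wrong Name")
    raise ValueError("no valid name entered")

table = {"John": 12, "Jenny": 6}
print(lookup_until_valid(["Bob", "Alice", "Jenny"], table))
# prints "Wrong Name" twice, then -> 6
```

As in the Map-based Java answer, membership in the table replaces a long chain of OR comparisons.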
2018/03/14
481
912
<issue_start>username_0: ``` df A 0 503.36 1 509.80 2 612.31 3 614.29 ``` I want to round to nearest 5 in a new *B column*, **using numpy** if possible. Output should be: ``` A B 0 503.36 505.00 1 509.80 510.00 2 612.31 610.00 3 614.29 615.00 ```<issue_comment>username_1: You can use: ``` df['B'] = df.div(5).round(0) * 5 ``` Or as @username_2 states: ``` df['B'] = df['A'].mul(2).round(-1).div(2) ``` Output: ``` A B 0 503.36 505.0 1 509.80 510.0 2 612.31 610.0 3 614.29 615.0 ``` Upvotes: 2 <issue_comment>username_2: ``` df.assign(B=df.mul(2).round(-1).div(2)) A B 0 503.36 505.0 1 509.80 510.0 2 612.31 610.0 3 614.29 615.0 ``` Upvotes: 2 <issue_comment>username_3: Since you mention `numpy` ``` np.around(df.A.values/5, decimals=0)*5 Out[31]: array([505., 510., 610., 615.]) ``` Upvotes: 4 [selected_answer]
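The divide, round, multiply trick behind all three answers is plain arithmetic; here is a library-free Python sketch of the same rounding (the helper name is illustrative):

```python
def round_to_nearest_5(x):
    # same arithmetic as df.div(5).round(0) * 5
    return round(x / 5) * 5

values = [503.36, 509.80, 612.31, 614.29]
print([float(round_to_nearest_5(v)) for v in values])
# -> [505.0, 510.0, 610.0, 615.0]
```

The halve-and-round variant (`mul(2).round(-1).div(2)`) is the same idea expressed through rounding to the nearest 10.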
2018/03/14
572
1,855
<issue_start>username_0: I want to serialise a mongoldb cursor. For this, I want to use bson.json\_util.dumps. Code example that works: ``` >>> from bson.json_util import dumps >>> dumps(values) '[{...}]' ``` However, I am also want to use json.dumps in the same code. For this reason, I would like to explicitly call bson.json\_util.dumps: ``` >>> import bson >>> bson.json_util.dumps(values) Traceback (most recent call last): File "", line 1, in AttributeError: 'module' object has no attribute 'json\_util' ``` This creates an error. I know I can solve my problem by using "import as", but I do not find this a clean solution, and I feel like I am fundamentally missing a point. **Question:** Can anybody explain to me why the second code snippet is not an option? My best guess is that I have some conflicting modules (one of which pymongo that has bson in it?), so here is my pip freeze output: ``` $ pip freeze certifi==2018.1.18 chardet==3.0.4 click==6.7 Flask==0.12.2 idna==2.6 itsdangerous==0.24 Jinja2==2.10 MarkupSafe==1.0 pycrypto==2.6.1 pymongo==3.6.1 requests==2.18.4 urllib3==1.22 Werkzeug==0.14.1 ```<issue_comment>username_1: `bson` is a package. Importing a package does not automatically give you access to its modules; only those modules that are explicitly imported into the package's `__init__.py` are accessible. For everything else, you need to import the module separately. Note, you could import json\_util directly: ``` from bson import json_util json_util.dumps(...) ``` or, as you mentioned, use `as` to alias the function: ``` from bson.json_util import dumps as bson_dumps ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: I do not know how to test this without having bson installed, but I think the following should work: ``` import bson.json_util bson.json_util.dumps(values) ``` Upvotes: 2
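The package-versus-module behaviour described in the accepted answer can be demonstrated with a standard-library package; the sketch below uses `xml` as a stand-in, since `bson` may not be installed:

```python
import importlib

pkg = importlib.import_module("xml")      # like 'import bson'
sub = importlib.import_module("xml.dom")  # like 'import bson.json_util'

# importing the submodule also binds it as an attribute of the parent package,
# which is why 'bson.json_util.dumps' only works after 'import bson.json_util'
print(pkg.dom is sub)  # -> True
print(sub.__name__)    # -> xml.dom
```

Before the submodule import, the bare package import gives no guarantee that `pkg.dom` exists; afterwards, the import system binds it on the parent.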
2018/03/14
666
2,336
<issue_start>username_0: I can't seem to pass certain items into an `item` prop if they relate to the category that I am looping through I have a `JSON` like this: ``` { "Categories": [ { "Name": "Music", }, { "Name": "Comedy", }, { "Name": "Sport", }, { "Name": "Family", }, ], "Items": [ { "Name": "<NAME>", "NameId": "dolly-parton", "Category": "Music", }, { "Name": "<NAME>", "NameId": "cee-lo-green", "Category": "Music", }, { "Name": "<NAME>", "NameId": "take-that", "Category": "Music", }, { "Name": "Football", "NameId": "football", "Category": "Sport", }, { "Name": "Hockey", "NameId": "hockey", "Category": "Sport", } ] } ``` I'm looping through all the categories and then printing them into a list while trying to only pass items that relate to that category in an `items` prop. I have the code below but it is passing all my data to each element and I'm not sure why. ``` class CategoryItems extends Component { constructor(props) { super(props); } state = { items: this.props.items, categories: this.props.categories, }; render() { const items = this.state.items; return ( {this.state.categories.map((category, index) => ( { item.Category === category.Name ? item : ''; })} /> ))} ); } } ``` All the data is there and in the react dev-tools it says each element has 667 items but I know there should only be 7 items on the sports category.<issue_comment>username_1: Apply a filter instead of a map. ``` item.Category === category.Name)} /> ``` Upvotes: 1 <issue_comment>username_2: You can try this , ``` class CategoryItems extends Component { constructor(props) { super(props); } state = { items: this.props.items, categories: this.props.categories, }; render() { const items = this.state.items; const renderList = this.state.categories.reduce((total, category) => { const list = items.filter(item => item.Category === category.Name); if(list.length > 0){ total.push(); } return total },[]) return ( {renderList} ); } ``` } Upvotes: 0
2018/03/14
590
2,227
<issue_start>username_0: I am trying to do a left outer join in Athena and my query looks like the following:

```
SELECT customer.name, orders.price
FROM customer
LEFT OUTER JOIN order ON customer.id = orders.customer_id
WHERE price IS NULL;
```

Each customer could have at most one order in the orders table, and there are customers with no order in the orders table at all. So I am expecting to get some number of records where there is a customer in the customer table with no record in the orders table, which means when I do the `LEFT OUTER JOIN` the price will be NULL. But this query returns 0 rows every time I run it. I have queried both tables separately and am pretty sure there is data in both, but I am not sure why this returns zero, whereas it works if I remove the `price IS NULL`. I have also tried `price = ''` and `price IN ('')` and none of them works. Has anyone here had a similar experience before? Or is there something wrong with my query that I cannot see or identify?<issue_comment>username_1: It seems that your query is correct. To validate, I created two CTEs that should match up with your `customer` and `orders` tables and ran your query against them. When running the query below, it returns a record for customer 3 `<NAME>`, who did not have an order.

```
WITH customer AS (
    SELECT 1 AS id, '<NAME>' AS name UNION
    SELECT 2 AS id, '<NAME>' AS name UNION
    SELECT 3 AS id, '<NAME>' AS name
),
orders AS (
    SELECT 1 AS customer_id, 20 AS price UNION
    SELECT 2 AS customer_id, 15 AS price
)
SELECT customer.name, orders.price
FROM customer
LEFT OUTER JOIN orders ON customer.id = orders.customer_id
WHERE price IS NULL;
```

I'd suggest running the following queries:

* `SELECT COUNT(DISTINCT id) FROM customers;`
* `SELECT COUNT(DISTINCT customer_id) FROM orders;`

Based on the results you are seeing, I would expect those counts to match. Perhaps your system is creating a record in the `orders` table whenever a customer is created, with a `price` of 0.
Upvotes: 3 <issue_comment>username_2: Probably you can't use `where` for the `orders` table; put the condition in the join instead:

```
SELECT customer.name, orders.price
FROM customer
LEFT OUTER JOIN orders ON customer.id = orders.customer_id
    AND orders.price IS NULL;
```

Upvotes: 1
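The anti-join pattern from the question (`LEFT OUTER JOIN` plus `IS NULL`) can be reproduced on a minimal SQLite dataset; the names and prices below are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, price INTEGER);
    INSERT INTO customer VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Carol');
    INSERT INTO orders VALUES (1, 20), (2, 15);
""")

# customers with no matching order row come back with a NULL price
rows = conn.execute("""
    SELECT customer.name, orders.price
    FROM customer
    LEFT OUTER JOIN orders ON customer.id = orders.customer_id
    WHERE orders.price IS NULL
""").fetchall()
print(rows)  # -> [('Carol', None)]
```

If this shape returns rows locally but the real query returns none, the data (rather than the query) is the likely culprit, as the accepted answer suggests.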
2018/03/14
551
2,176
<issue_start>username_0: In Java, if a variable is immutable and final, then should it be a static class variable? I ask because it seems wasteful to create a new object every time an instance of the class uses it (since it is always the same anyway). Example: Variables created in the method each time it is called:

```
public class SomeClass {
    public void someMethod() {
        final String someRegex = "\\d+";
        final Pattern somePattern = Pattern.compile(someRegex);
        ...
    }
}
```

Variables created once:

```
public class SomeClass {
    private final static String someRegex = "\\d+";
    private final static Pattern somePattern = Pattern.compile(someRegex);

    public void someMethod() {
        ...
    }
}
```

Is it always preferable to use the latter code? This answer seems to indicate that it is preferable to use the latter code: [How can I initialize a String array with length 0 in Java?](https://stackoverflow.com/questions/1665834/how-can-i-initialize-a-string-array-with-length-0-in-java/1665899#answer-1665899)<issue_comment>username_1: Ultimately it depends on what you're doing with those variables. If the variable only ever has a lifecycle inside of that *specific* method - that is, nothing else will *ever* need to see it or use those values - then declaring it inside of the method is appropriate and correct. Making it more visible than it needs to be only adds to confusion for future maintainers (including yourself). If the variable has a lifecycle *outside* of the class, it *might* make sense to declare it `static`. This is particularly true in the case of constants or variables that don't store any state themselves. If it isn't a constant or it doesn't have any purpose outside of the class, then keep it non-static and private. Upvotes: 1 <issue_comment>username_2: No, definitely not.
```
class MyIntegerContainer {
    private final int x;

    public MyIntegerContainer(int x) {
        this.x = x;
    }
}
```

If you made the immutable, final `x` static, then all instances of `MyIntegerContainer` would share the same value of `x`, which would not make for a very good data container. Upvotes: 3
2018/03/14
717
2,367
<issue_start>username_0: Consider 3 versions of a code with the same effects: Version 1:

```
int main() {
    std::map<int, int> x = {{0,0}, {1,1}, {2,2}};
    // Do some stuff...
    return 0;
}
```

Version 2:

```
int main() {
    std::map<int, int> x;
    x[0] = 0;
    x[1] = 1;
    x[2] = 2;
    // Do some stuff...
    return 0;
}
```

Version 3:

```
int main() {
    std::map<int, int> x;
    x.insert(std::pair<int, int>(0,0));
    x.insert(std::pair<int, int>(1,1));
    x.insert(std::pair<int, int>(2,2));
    // Do some stuff...
    return 0;
}
```

What is the efficiency of each of these versions? I think that version 1 is a fully static allocation: the space required by `x` is allocated once and the values are set. I also think that version 3 requires dynamic allocation: each call to insert will check if the key is not already used, check where to insert, and allocate more space to the map before assigning the value. For version 2, I am not sure. Could you help me with that?<issue_comment>username_1: The C++ standard makes no requirements on whether the allocation happens at compile-time or run-time. All this means is that implementations are free to make their own optimizations (or not). So the proper thing to do would be to test. Most likely such optimizations have not been implemented. There is no `constexpr` constructor of `std::map`, despite the fact that the `std::initializer_list` you create here may be a compile-time constant (note that no aggregate initialization is being performed here either)

```
std::map<int, int> x = {{0,0}, {1,1}, {2,2}};
```

Upvotes: 3 <issue_comment>username_2: Although it's implementation-defined, `std::map`'s allocation is never static. It uses an [RB tree](https://en.wikipedia.org/wiki/Red%E2%80%93black_tree) as an underlying data structure in 99 cases out of 100. In your case, you have exactly the same time and space complexity in all three cases. Upvotes: 2 <issue_comment>username_3: After all these speculations, I finally profiled the code. For this, I generated 3 versions like in the question, but with 100,000 entries.
Here are the results, averaged over several runs, compiled with g++ with no optimization:

* Version 1: 26 ms (compile time: 2.5 s)
* Version 2: 80 ms (compile time: 10 s)
* Version 3: 73 ms (compile time: 37 s)

So clearly, the best solution is the first one. Versions 2 and 3 are nearly equivalent at execution time, but version 3 is much worse at compile time. Upvotes: 1
2018/03/14
2,554
7,403
<issue_start>username_0: When create using **CloudFormation**, there is no `Scale ECS Instances` button, to scale the instance you need to find the **Auto Scaling Group** to scale the instance which is not I want. [![enter image description here](https://i.stack.imgur.com/3s2MK.png)](https://i.stack.imgur.com/3s2MK.png) When create using **AWS Console**, there is a `Scale ECS Instances` button. [![enter image description here](https://i.stack.imgur.com/L1PSS.png)](https://i.stack.imgur.com/L1PSS.png) I want to have the button when create using CloudFormation. Anything I have missed or did wrong? ``` { "AWSTemplateFormatVersion":"2010-09-09", "Description":"Create ECS Cluster, ECS Task Definitions, Lambdas, CloudWatchs for different country and environment.", "Parameters":{ "CountryName":{ "Type":"String", "Description":"Auto inclusion launch country name.", "AllowedValues":[ "my", "sg" ] }, "EnvironmentName":{ "Type":"String", "Description":"An environment name that will be suffixed to resource names.", "AllowedValues":[ "dev", "stage", "live" ] }, "KeyName":{ "Type":"AWS::EC2::KeyPair::KeyName", "Description":"Name of an existing EC2 KeyPair to enable SSH access to the ECS instance." }, "VpcId":{ "Type":"AWS::EC2::VPC::Id", "Description":"Select a VPC to deploy the ECS instance." }, "SubnetId":{ "Type":"List", "Description":"Select at least two subnets in your selected VPC to deploy the ECS instance." 
}, "InstanceType":{ "Description":"ECS instance type", "Type":"String", "Default":"t2.micro", "AllowedValues":[ "t2.micro", "t2.small", "t2.medium", "t2.large", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "m4.large", "m4.xlarge", "m4.2xlarge", "m4.4xlarge", "m4.10xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge" ], "ConstraintDescription":"Please choose a valid instance type." } }, "Mappings":{ "AWSRegionToAMI":{ "us-east-1":{ "AMIID":"ami-a7a242da" }, "us-east-2":{ "AMIID":"ami-b86a5ddd" }, "us-west-1":{ "AMIID":"ami-9ad4dcfa" }, "us-west-2":{ "AMIID":"ami-92e06fea" }, "eu-west-1":{ "AMIID":"ami-0693ed7f" }, "eu-west-2":{ "AMIID":"ami-f4e20693" }, "eu-west-3":{ "AMIID":"ami-698b3d14" }, "eu-central-1":{ "AMIID":"ami-0799fa68" }, "ap-northeast-1":{ "AMIID":"ami-68ef940e" }, "ap-northeast-2":{ "AMIID":"ami-a5dd70cb" }, "ap-southeast-1":{ "AMIID":"ami-0a622c76" }, "ap-southeast-2":{ "AMIID":"ami-ee884f8c" }, "ca-central-1":{ "AMIID":"ami-5ac94e3e" }, "ap-south-1":{ "AMIID":"ami-2e461a41" }, "sa-east-1":{ "AMIID":"ami-d44008b8" } } }, "Resources":{ "ECSCluster":{ "Type":"AWS::ECS::Cluster", "Properties":{ "ClusterName":{ "Fn::Join":[ "-", [ { "Ref":"AWS::StackName" }, { "Ref":"EnvironmentName" } ] ] } } }, "ECSSecurityGroup":{ "Type":"AWS::EC2::SecurityGroup", "Properties":{ "GroupDescription":"Auto Inclusion Security Group", "VpcId":{ "Ref":"VpcId" } } }, "ECSSecurityGroupSSHinbound":{ "Type":"AWS::EC2::SecurityGroupIngress", "Properties":{ "GroupId":{ "Ref":"ECSSecurityGroup" }, "IpProtocol":"tcp", "FromPort":"22", "ToPort":"22", "CidrIp":"0.0.0.0/0" } }, "ECSAutoScalingGroup":{ "Type":"AWS::AutoScaling::AutoScalingGroup", "Properties":{ "VPCZoneIdentifier":{ "Ref":"SubnetId" }, "LaunchConfigurationName":{ "Ref":"ECSLaunchConfiguration" }, 
"MinSize":"0", "MaxSize":"1", "DesiredCapacity":"1" } }, "ECSLaunchConfiguration":{ "Type":"AWS::AutoScaling::LaunchConfiguration", "Properties":{ "ImageId":{ "Fn::FindInMap":[ "AWSRegionToAMI", { "Ref":"AWS::Region" }, "AMIID" ] }, "InstanceType":{ "Ref":"InstanceType" }, "IamInstanceProfile":{ "Ref":"EC2InstanceProfile" }, "KeyName":{ "Ref":"KeyName" }, "SecurityGroups":[ { "Ref":"ECSSecurityGroup" } ], "UserData":{ "Fn::Base64":{ "Fn::Join":[ "", [ "#!/bin/bash\n", "echo ECS_CLUSTER=", { "Ref":"ECSCluster" }, " >> /etc/ecs/ecs.config" ] ] } } } }, "EC2Role":{ "Type":"AWS::IAM::Role", "Properties":{ "AssumeRolePolicyDocument":{ "Statement":[ { "Effect":"Allow", "Principal":{ "Service":[ "ec2.amazonaws.com" ] }, "Action":[ "sts:AssumeRole" ] } ] }, "Path":"/", "ManagedPolicyArns":[ "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role" ], "Policies":[ { "PolicyName":"auto-inclusion", "PolicyDocument":{ "Version":"2012-10-17", "Statement":[ { "Effect":"Allow", "Action":[ "s3:GetObject", "s3:ListBucket", "s3:PutObject", "s3:DeleteObject" ], "Resource":[ "*" ] }, { "Effect":"Allow", "Action":[ "dynamodb:*" ], "Resource":[ { "Fn::Join":[ "", [ "arn:aws:dynamodb:", { "Ref":"AWS::Region" }, ":", { "Ref":"AWS::AccountId" }, ":table/ai-process-tracking-", { "Ref":"EnvironmentName" } ] ] } ] } ] } } ] } }, "EC2InstanceProfile":{ "Type":"AWS::IAM::InstanceProfile", "Properties":{ "Path":"/", "Roles":[ { "Ref":"EC2Role" } ] } } }, "Outputs":{ "ecscluster":{ "Value":{ "Ref":"ECSCluster" } } } } ```<issue_comment>username_1: Update: I found this in the official [documentation](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scale_cluster.html).
> > If your cluster was created with the [console first-run experience](https://aws.amazon.com/blogs/compute/amazon-ecs-console-first-run-troubleshoot-docker-errors/) after November 24th, 2015, then the Auto Scaling group associated with > the AWS CloudFormation stack created for your cluster can be scaled up > or down to add or remove container instances. You can perform this > scaling operation from within the Amazon ECS console. > > > **If your cluster was not created with the console first-run experience after November 24th, 2015, then you cannot scale your > cluster from the Amazon ECS** console. However, you can still modify > existing Auto Scaling groups associated with your cluster in the Auto > Scaling console. > > > --- Ref: [Scaling Cluster](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/scale_cluster.html) > > If a Scale ECS Instances button appears, then you can scale your > cluster in the next step. If not, you must manually adjust your Auto > Scaling group to scale up or down your instances, or you can manually > launch or terminate your container instances in the Amazon EC2 > console. > > > Upvotes: 2 [selected_answer]<issue_comment>username_2: UPDATE: If your cluster was not created with the console first-run experience after November 24th, 2015, then you cannot scale your cluster from the Amazon ECS console. ORIGINAL: this is just a step by step explanation how to get the button, you'll need to reflect this in your CloudFormation template. * Create an empty ECS cluster, like **my-test-cluster** * Go to CloudFormation, and create stack named **EC2ContainerService-my-test-cluster** using your template * Template might need to have the following ``` Outputs: TemplateVersion: Value: '2.0.0' UsedByECSCreateCluster: Value: 'true' ``` Upvotes: 0
2018/03/14
649
1,906
<issue_start>username_0: Using `sed` how can I match only lines containing the exact count of two tabs in order to remove their `\n`. ### Example: Delete `\n` in line 1 and 3 only: **Input:** ``` foo \t bar \t foo foo \t bar foo foo \t bar \t foo foo \t bar \t foo \t bar ``` **Expected output:** ``` foo \t bar \t foofoo \t bar foo foo \t bar \t foofoo \t bar \t foo \t bar ``` I tried this regex in Vim but it matches the 4th line as well: `/\v\t.*\t.*`<issue_comment>username_1: ``` sed '/^[^\t]*\t[^\t]*\t[^\t]*$/N;s/\n//' ``` ### Explanation: * `/^[^\t]*\t[^\t]*\t[^\t]*$/` matches a line with exactly 2 tab characters on it * `N` appends a line to the input buffer * `s/\n//` removes the newline between the two lines now in the input buffer Upvotes: 0 <issue_comment>username_2: Perl to the rescue: ``` perl -pe 'chomp if 2 == tr/\t//' < input > output ``` * `-p` reads the input line by line * [chomp](http://p3rl.org/chomp) removes the final newline * `tr///` is the transliteration operator similar to `tr`, it returns the number of matching characters Upvotes: 0 <issue_comment>username_3: I'd use awk for this: if the line has 3 tab-separated fields, print the line without a newline ``` awk -F'\t' '{printf "%s%s", $0, (NF==3 ? "" : ORS)}' file ``` However, if the *next* line also has 3 fields, it will join with the following line. Your question is not clear about this edge case. If you just want to join, then ``` awk -F'\t' 'NF==3 {line = $0; getline; $0 = line $0} 1' file ``` These 2 commands produce the same output with your sample input. Upvotes: 1 <issue_comment>username_4: This might work for you (GNU sed): ``` sed ':a;s/\t/&/3;t;s//&/2;T;N;s/\n//;ta' file ``` Match 3 or more tabs, break out. Match 1 or less tabs, break out. Match 2 tabs, append next line and remove the newline between the two lines and go to the start to check for 2 tabs. Upvotes: 0
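For readers who just need the transformation rather than specifically `sed`, the join-on-two-tabs rule is easy to express in a small script. Below is a rough Python sketch of the same logic (the sample lines are the ones from the question):

```python
# Sample input from the question, with literal tab characters.
lines = [
    "foo\tbar\tfoo",
    "foo\tbar",
    "foo\tbar\tfoo",
    "foo\tbar\tfoo\tbar",
]

out = []
for line in lines:
    # A line that currently contains exactly two tabs swallows the next line.
    if out and out[-1].count("\t") == 2:
        out[-1] += line
    else:
        out.append(line)

for joined in out:
    print(joined)
```

This reproduces the expected output above. Note that, like the looping `sed` answer, a merged line that itself ends up with exactly two tabs would also absorb its successor, an edge case the question does not specify.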
2018/03/14
2,585
7,615
<issue_start>username_0: I have a list like this: ``` mylist <- list(PP = c("PP 1", "OMITTED"), IN01 = c("DID NOT PARTICIPATE", "PARTICIPATED", "OMITTED"), RD1 = c("YES", "NO", "NOT REACHED", "INVALID", "OMITTED"), RD2 = c("YES", "NO", "NOT REACHED", "NOT AN OPTION", "OMITTED"), LOS = c("LESS THAN 3", "3 TO 100", "100 TO 500", "MORE THAN 500", "LOGICALLY NOT APPLICABLE", "OMITTED"), COM = c("BAN", "SBAN", "RAL"), VR1 = c("WITHIN 30", "WITHIN 200", "NOT AVAILABLE", "OMITTED"), INF = c("A LOT", "SOME", "LITTLE OR NO", "NOT APPLICABLE", "OMITTED"), IST = c("FULL-TIME", "PART-TIME", "FULL STAFFED", "NOT STAFFED", "LOGICALLY NOT APPLICABLE", "OMITTED"), CMP = c("ALL", "MOST", "SOME", "NONE", "LOGICALLY NOT APPLICABLE", "OMITTED")) ``` I have another list like this: ``` matchlist <- list("INVALID", c("INVALID", "OMITTED OR INVALID"), c("INVALID", "OMITTED"), "OMITTED", c("NOT REACHED", "INVALID", "OMITTED"), c("LOGICALLY NOT APPLICABLE", "INVALID", "OMITTED"), c("LOGICALLY NOT APPLICABLE", "INVALID", "OMITTED OR INVALID"), c("Not applicable", "Not stated"), c("Not reached", "Not administered/missing by design", "Presented but not answered/invalid"), c("Not administered/missing by design", "Presented but not answered/invalid"), "OMITTED OR INVALID", c("LOGICALLY NOT APPLICABLE", "OMITTED OR INVALID"), c("NOT REACHED", "OMITTED"), c("NOT APPLICABLE", "OMITTED"), c("LOGICALLY NOT APPLICABLE", "OMITTED"), c("LOGICALLY NOT APPLICABLE", "NOT REACHED", "OMITTED"), "NOT EXCLUDED", c("Default", "Not applicable", "Not stated"), c("Valid Skip", "Not Reached", "Not Applicable", "Invalid", "No Response"), c("Not administered", "Omitted"), c("NOT REACHED", "INVALID RESPONSE", "OMITTED"), c("INVALID RESPONSE", "OMITTED")) ``` As you can see, some of the vectors in `matchlist` partially match vectors in `mylist`. In some cases the vectors in `matchlist` have exact match with part of vectors in `mylist`. 
For example, the last values of `RD1` in `mylist` match the vector in the fifth component of `matchlist`, but `RD2` does not match it, although common values are present. The values in `RD2` in `mylist` ("NOT REACHED", "NOT AN OPTION", "OMITTED") **together and in this order** do not have a match in any of the vectors in `matchlist`. It is the same for the values of `COM` in `mylist`. What I am trying to achieve is to compare the elements in each vector in `mylist` against each vector in `matchlist`, extract the values that are common and match the values in `matchlist` **in the same order**, and store them in another list. The desired result shall look like this: ``` $PP [1] "OMITTED" $IN01 [1] "OMITTED" $RD1 [1] "NOT REACHED" "INVALID" "OMITTED" $RD2 character(0) $LOS [1] "LOGICALLY NOT APPLICABLE" "OMITTED" $COM character(0) $VR1 [1] "OMITTED" $INF [1] "NOT APPLICABLE" "OMITTED" $IST [1] "LOGICALLY NOT APPLICABLE" "OMITTED" $CMP [1] "LOGICALLY NOT APPLICABLE" "OMITTED" ``` What I tried so far: Using `intersect` ``` lapply(mylist, function(i) { intersect(i, lapply(matchlist, function(i) {i})) }) ``` It returns only the last value in each vector of `matchlist` ("OMITTED"). Using `match` through `%in%`: ``` lapply(mylist, function(i) { i[which(i %in% matchlist)] }) ``` Returns the desired result only for `RD1` ("INVALID", "OMITTED"), for the rest it returns just the last value ("OMITTED"), except for `COM` which is correct. Using `mapply` and `intersect`: ``` mapply(intersect, mylist, matchlist) ``` Returns a long list with mixture of pretty much everything, including combinations that should not be there, plus a warning for the unequal lengths. 
Can someone help, please?<issue_comment>username_1: Here is a simple solution using `unlist` with `matchlist`: ``` lapply(mylist, function(x) x[x %in% unlist(matchlist)]) ``` Output (new list): ``` $PP [1] "OMITTED" $IN01 [1] "OMITTED" $RD1 [1] "NOT REACHED" "INVALID" "OMITTED" $LOS [1] "LOGICALLY NOT APPLICABLE" "OMITTED" $COM character(0) $VR1 [1] "OMITTED" $INF [1] "NOT APPLICABLE" "OMITTED" $IST [1] "LOGICALLY NOT APPLICABLE" "OMITTED" $CMP [1] "LOGICALLY NOT APPLICABLE" "OMITTED" ``` Upvotes: 2 <issue_comment>username_2: Writing simply ``` lapply(mylist, intersect, unlist(matchlist)) ``` also works. Upvotes: 2 <issue_comment>username_3: ``` lapply(mylist, function(i) { unlist(sapply(i,function(x){if(any(grepl(paste0("^",x,"$"),matchlist))){x}})) }) ``` I added the "\b" before and after the string because of the "NO" that can lead to finding "NOT". Using grepl is surely not the best way as the other answer show :) Upvotes: 2 <issue_comment>username_4: There are some really simple/good answers, but they all seem to rely on `unlist`. I'm assuming that you need to preserve the grouping within `matchlist`, so unlisting them does not make sense. Here's a solution that works without that, using a double-`lapply` loop as you started to do: ``` out <- lapply(mylist, function(this) { mtch <- lapply(matchlist, intersect, this) wh <- which.max(lengths(mtch)) if (length(wh)) mtch[[wh]] else character(0) }) str(out) # List of 9 # $ PP : chr "OMITTED" # $ IN01: chr "OMITTED" # $ RD1 : chr [1:3] "NOT REACHED" "INVALID" "OMITTED" # $ LOS : chr [1:2] "LOGICALLY NOT APPLICABLE" "OMITTED" # $ COM : chr(0) # $ VR1 : chr "OMITTED" # $ INF : chr [1:2] "NOT APPLICABLE" "OMITTED" # $ IST : chr [1:2] "LOGICALLY NOT APPLICABLE" "OMITTED" # $ CMP : chr [1:2] "LOGICALLY NOT APPLICABLE" "OMITTED" ``` It always returns a vector with the most number of matches, but if there are (somehow) more than one, I think it will preserve the natural order and return the first of said long-matches. 
(The question there is: *"does `which.max` preserve natural order?"* I think it does but have not verified.) ***UPDATE*** The constraint was added that not only the presence and order of the `matchlist` vectors was required, but also that there are no interloping words. For instance, if as suggested in the comments, `mylist$RD1` has `"BLAH"`, then it will not longer match with `matchlist[[5]]`. Checking for a perfectly-ordered subset of one vector to another is a bit more problematic (and therefore not a code-golf champion), and often scales poorly because we don't have easy subset determination. With that caveat, this implementation does some nested `*apply` functions ... (NB: it was suggested in a comment that `$RD1` should return `character(0)`, but it does have `"INVALID"` which matches one of the single-length components of `matchlist`, so it should match, just not the longer one.) ``` out <- lapply(mylist, function(this) { ind <- lapply(matchlist, function(a) which(this == a[1])) perfectmatches <- mapply(function(ml, allis, this) { length(ml) * any(sapply(allis, function(i) all(ml == this[ i + seq_along(ml) - 1 ]))) }, matchlist, ind, MoreArgs = list(this=this)) if (any(perfectmatches) > 0) { wh <- which.max(perfectmatches) return(matchlist[[wh]]) } else return(character(0)) }) str(out) # List of 9 # $ PP : chr "OMITTED" # $ IN01: chr "OMITTED" # $ RD1 : chr "INVALID" # $ LOS : chr [1:2] "LOGICALLY NOT APPLICABLE" "OMITTED" # $ COM : chr(0) # $ VR1 : chr "OMITTED" # $ INF : chr [1:2] "NOT APPLICABLE" "OMITTED" # $ IST : chr [1:2] "LOGICALLY NOT APPLICABLE" "OMITTED" # $ CMP : chr [1:2] "LOGICALLY NOT APPLICABLE" "OMITTED" ``` Upvotes: 2 [selected_answer]
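The accepted answer's updated requirement, that a candidate's values must appear contiguously and in order with no interloping words, can also be restated outside of R. Below is a loose Python sketch of that idea using a small subset of the question's data; it is an illustration of the logic, not a translation of the exact R code:

```python
mylist = {
    "RD1": ["YES", "NO", "NOT REACHED", "INVALID", "OMITTED"],
    "RD2": ["YES", "NO", "NOT REACHED", "NOT AN OPTION", "OMITTED"],
}
matchlist = [
    ["INVALID"],
    ["NOT REACHED", "INVALID", "OMITTED"],
]

def best_match(values, candidates):
    """Longest candidate appearing in `values` as a contiguous, ordered run."""
    best = []
    for cand in candidates:
        n = len(cand)
        if n <= len(best):
            continue  # cannot improve on the current best
        for i in range(len(values) - n + 1):
            if values[i:i + n] == cand:
                best = cand
                break
    return best

result = {k: best_match(v, matchlist) for k, v in mylist.items()}
print(result)
```

`RD1` matches the three-element candidate (its last three values are exactly `"NOT REACHED", "INVALID", "OMITTED"`), while `RD2` matches nothing, mirroring the desired output in the question.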
2018/03/14
1,312
4,589
<issue_start>username_0: When a thread calls `pthread_cond_signal()`, *Unix Network Programming* says it notifies just one thread, because it isn't `pthread_cond_broadcast()`. That means there is no race condition. However, the book does not say which thread would be notified, or how. Does the function wake a thread randomly?<issue_comment>username_1: Yes, it will wake one thread seemingly randomly. It's up to the operating system to decide which one will be woken. Upvotes: 0 <issue_comment>username_2: [Straight from the man](https://linux.die.net/man/3/pthread_cond_signal): > > If more than one thread is blocked on a condition variable, the scheduling policy shall determine the order in which threads are unblocked. > > > The "scheduling policy" is the order the operating system decided on. It's one of the four listed in the link below, but you don't really know (without some impressive hackery at least) which one is "first" anyway. It shouldn't matter either - all threads waiting on the condition should be equally ready to continue - otherwise you have a design problem. [Scheduling policies in Linux Kernel](https://stackoverflow.com/questions/9775010/scheduling-policies-in-linux-kernel) has a bit of discussion on some Linux policies, and you can google from there if it's important. Upvotes: 3 <issue_comment>username_3: See below my example, which will help you understand this very clearly. I am using three mutexes and three condition variables. With the example below you can synchronize or prioritize any number of threads in C. The first thread locks mutex lock1 and waits on cond1; likewise, the second thread locks mutex lock2 and waits on condition cond2, and the third thread locks mutex lock3 and waits on condition cond3. This is the situation of all the threads after they are created: each is waiting for a signal on its condition variable before it can execute further. In the main thread (i.e.
main function; every program has one main thread, and in C/C++ this main thread is created automatically by the operating system once control is passed to `main` by the kernel), we call `pthread_cond_signal(&cond1);`. Once this call is done, thread1, which was waiting on cond1, is released and starts executing. Once it has finished its task it calls `pthread_cond_signal(&cond3);`; now the thread that was waiting on condition cond3, i.e. thread3, is released, starts executing, and calls `pthread_cond_signal(&cond2);`, which releases the thread waiting on condition cond2, in this case thread2.

```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>

pthread_cond_t cond1 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cond2 = PTHREAD_COND_INITIALIZER;
pthread_cond_t cond3 = PTHREAD_COND_INITIALIZER;

pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock3 = PTHREAD_MUTEX_INITIALIZER;

int TRUE = 1;

void print(char *p)
{
    printf("%s", p);
}

void *threadMethod1(void *arg)
{
    printf("In thread1\n");
    do {
        pthread_mutex_lock(&lock1);
        pthread_cond_wait(&cond1, &lock1);
        print("I am thread 1st\n");
        pthread_cond_signal(&cond3); /* Now allow 3rd thread to process */
        pthread_mutex_unlock(&lock1);
    } while (TRUE);
    pthread_exit(NULL);
}

void *threadMethod2(void *arg)
{
    printf("In thread2\n");
    do {
        pthread_mutex_lock(&lock2);
        pthread_cond_wait(&cond2, &lock2);
        print("I am thread 2nd\n");
        pthread_cond_signal(&cond1);
        pthread_mutex_unlock(&lock2);
    } while (TRUE);
    pthread_exit(NULL);
}

void *threadMethod3(void *arg)
{
    printf("In thread3\n");
    do {
        pthread_mutex_lock(&lock3);
        pthread_cond_wait(&cond3, &lock3);
        print("I am thread 3rd\n");
        pthread_cond_signal(&cond2);
        pthread_mutex_unlock(&lock3);
    } while (TRUE);
    pthread_exit(NULL);
}

int main(void)
{
    pthread_t tid1, tid2, tid3;

    printf("Before creating the threads\n");
    if (pthread_create(&tid1, NULL, threadMethod1, NULL) != 0)
        printf("Failed to create thread1\n");
    if (pthread_create(&tid2, NULL, threadMethod2, NULL) != 0)
        printf("Failed to create thread2\n");
    if (pthread_create(&tid3, NULL, threadMethod3, NULL) != 0)
        printf("Failed to create thread3\n");
    pthread_cond_signal(&cond1); /* Now allow the first thread to process first */
    sleep(1);
    TRUE = 0; /* Stop all the threads */
    sleep(3);
    /* This is how we would join the threads before exiting:
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
    pthread_join(tid3, NULL); */
    exit(0);
}
```

Upvotes: 2
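The wake-one behaviour is easy to observe outside C as well. Here is an illustrative Python sketch using `threading.Condition`, whose `notify()`/`notify_all()` split mirrors the `pthread_cond_signal()`/`pthread_cond_broadcast()` distinction; the `sleep()` calls are a deliberate simplification to let all waiters block before the signal is sent:

```python
import threading
import time

cond = threading.Condition()
woken = []

def waiter(name):
    with cond:
        cond.wait()              # blocks until some thread calls notify()
        woken.append(name)

threads = [threading.Thread(target=waiter, args=(i,), daemon=True)
           for i in range(3)]
for t in threads:
    t.start()

time.sleep(0.5)                  # crude: give all three waiters time to block
with cond:
    cond.notify()                # releases exactly one waiter; which one is unspecified
time.sleep(0.5)                  # give the woken thread time to run

print(len(woken))                # only a single thread was released
```

As with the C version, which of the three waiters gets released is left to the implementation and scheduler.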
2018/03/14
636
1,976
<issue_start>username_0: Following this RailsCast: <http://railscasts.com/episodes/256-i18n-backends> but using Rails 5.2, I get this error:

```
Redis::CommandError in Pages#home
ERR unknown command '[]'
```

**In config/initializers/i18n_backend.rb** `TRANSLATION_STORE = Redis.new` seems to be causing this problem, whereas `TRANSLATION_STORE = {}` works like a charm. But without Redis! Any hint?<issue_comment>username_1: Not a full answer I know, but I had a similar problem after upgrading the redis gem from 3.3.1 to 4.0.2. I set it back to 3.3.1 and it got fixed. The odd thing for me was that the problem only occurred in the production environment. I am using a chained backend: `I18n.backend = I18n::Backend::Chain.new(I18n::Backend::KeyValue.new(Redis.current), I18n.backend)` Upvotes: 0 <issue_comment>username_2: The problem is defined here: <https://github.com/ruby-i18n/i18n/blob/master/lib/i18n/backend/key_value.rb#L25-L30> I haven't investigated redis 4, but it seems that these methods have been removed. The solution is to wrap the Redis client and provide these methods yourself:

```
# config/initializers/redis.rb
class RedisHash
  def initialize(redis)
    @redis = redis
  end

  def [](key)
    @redis.get(key)
  end

  def []=(key, value)
    @redis.set(key, value)
  end
end

Redis.current = Redis.new(host: '127.0.0.1', port: 6379, db: 0, thread_safe: true)

# config/initializers/i18n.rb
I18n::Backend::KeyValue.new(RedisHash.new(Redis.current))
```

The code above is a sample initializer. It works with the newest version of redis, 4.3.5. I've also tested redis-store/redis-i18n and it works with the newest redis versions too, but in my opinion that implementation puts a huge load on Redis. **EDIT**: Due to the redis contributors' [answer][1] [1]: <https://github.com/redis/redis-rb/issues/997#issuecomment-871302883> I've updated my solution. Upvotes: 1
2018/03/14
1,044
3,635
<issue_start>username_0: I am trying to format a cell based on multiple conditions. I am creating a spreadsheet to keep track of items borrowed. Let's say I am lending books. I want to have a list of books, one name in each cell. Then below that I want to have 3 columns: One column to enter the name of the book borrowed, the borrowing date, and the return date. I want to turn the cell with the book name RED, if the book has been borrowed AND if the return date is BLANK, meaning book is out. In my example screenshot, cell A2, and B2 should be red. The conditional formula I have come up with is `=AND($A6=A2, $C6="")` for Book1 conditions, but it only works if C6 if empty, not if C8 is empty or other cells in column C where Book1 is found AND the return date is blank. There is no specific deadline to return items, just that if book has been borrowed and the return date in the same row is empty then the book name at the top should turn red. [![Example](https://i.stack.imgur.com/Y0pgQ.jpg)](https://i.stack.imgur.com/Y0pgQ.jpg)<issue_comment>username_1: Compare the result of COUNTA applied to the in and out ranges. E.g. `COUNTA(FILTER($B6:$B,$A6:$A=A2))` will count how many times a specific book is checked out, while `COUNTA(FILTER($C6:$C, $A6:$A=A2))` will count how many times it is checked back in Upvotes: 1 <issue_comment>username_2: Your question title asks about "multiple conditions", but very specifically you're looking to match based on *any row* that itself *matches multiple conditions*. That goes beyond the common `AND` operator and into a function that can process a range. You also need to be prepared for a book to be checked out and returned many times, which means there's no single row that manages the status of a given book; `VLOOKUP` and `INDEX`/`MATCH` are off the table too. 
Instead, you're effectively looking to generate a list of `0` or `1` values that match *whether that book was checked out without being returned*, and then coloring the cell based on *whether there are any rows that match that condition*. To operate on multiple values at a time, you can use `ARRAYFORMULA` and then combine the output array with `OR`. However, one of the tricks about `ARRAYFORMULA` is that, to preserve the invariant about making single-value functions into array-valued functions, you can't use functions that can take arrays. This means that `AND` and `ISBLANK` don't work the way you'd like them to, but you can resolve that by using `*` instead of `AND` and `= ""` for `ISBLANK`. One such solution ([working example](https://docs.google.com/spreadsheets/d/1-aOwcCNTrFAfDetfXh7XWOMuQ-8Kkkx-gHn39exkuv0/edit?usp=sharing#gid=0)): ``` =OR(ARRAYFORMULA((A1 = $A$5:$A) * ($C$5:$C = ""))) ``` `ARRAYFORMULA` isn't the only function to operate on a list of values, though; you could also use `FILTER` directly to only return matching rows. Here, you're checking whether any row has a matching book name and a blank return value, and then confirming that the value is not the `#N/A` that `FILTER` returns when nothing matches. One such solution ([working example](https://docs.google.com/spreadsheets/d/1-aOwcCNTrFAfDetfXh7XWOMuQ-8Kkkx-gHn39exkuv0/edit?usp=sharing#gid=2107007392)): ``` =NOT(ISNA(FILTER($A$8:$C, $A$8:$A = A1, $C$8:$C = ""))) ``` Of course, you can also take advantage of the fact that you're only checking blanks to use [username_1's solution with `COUNTA` and `FILTER`](https://stackoverflow.com/a/49285461/1426891) above. However, since that solution won't work for arbitrary expressions, you can use `ARRAYFORMULA` or `FILTER` if your needs become more complex. Upvotes: 0
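Stripped of spreadsheet syntax, every formula above answers the same question: is there any loan row for this book whose return date is blank? A small Python sketch of that predicate, using made-up rows in place of the sheet's loan range:

```python
loans = [
    # (book, borrowed, returned) -- returned == "" means the book is still out
    ("Book1", "2018-03-01", ""),
    ("Book2", "2018-03-02", "2018-03-10"),
    ("Book1", "2018-02-01", "2018-02-15"),
]

def is_out(book, rows):
    """True if any loan of `book` has no return date (the ARRAYFORMULA/FILTER test)."""
    return any(b == book and returned == "" for b, _, returned in rows)

print(is_out("Book1", loans))  # the conditional format would turn this cell red
print(is_out("Book2", loans))
```

`Book1` has an open loan, so its cell would be formatted; `Book2` was returned, so it would not. A book checked out and returned many times is handled naturally, since any single blank return date makes the predicate true.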
2018/03/14
432
1,542
<issue_start>username_0: I have a PHP file that saves data into a database, and I want it to keep running even if the browser is closed. This is what I tried: js.php file:

```
setInterval(function() {
    $.get('http://localhost/cryptopiamodelp/tst.php/', function(data) {
        //do something with the data
        alert('Load was performed.');
    });
}, 5000);
```

tst.php file:

```
<?php
$con = mysqli_connect('localhost','root','','cryptopiamodel') or die(mysql_error());
$sqli = "INSERT INTO test VALUES ('')" or die(mysqli_error());
mysqli_query($con,$sqli) or die(mysqli_error($con));
?>
```

but this only works while the browser is open. Is it possible in PHP to keep executing js.php after the browser is closed, so that the data is saved into the database continuously? Thanks in advance.<issue_comment>username_1: You can use ignore_user_abort to stop PHP from aborting the execution upon browser disconnection. Then, you can use set_time_limit to change the default time limit (30 sec) to no limit or any other value that you need:

```
ignore_user_abort(true);
set_time_limit(0);
```

Read <http://php.net/manual/en/function.ignore-user-abort.php> Upvotes: -1 <issue_comment>username_2: You can set up a cron job. A cron job allows you to set up almost any kind of schedule. Depending on where you are hosting your web application, a cron job can be set up differently. For example, if your site hosting comes with cPanel, there is an option to define a cron job. You can set up a schedule and give the path to your PHP file. Upvotes: 0
2018/03/14
533
1,881
<issue_start>username_0: I'm trying to grab the NS and A records for a list of domains I have in a table. I've started to write this:

```
$domains = GetDomainsForDNS();

foreach ($domains as $domain){
    $domain_id = $domain[0];
    $domain = $domain[1];
    $dns_records = dns_get_record($domain, DNS_NS + DNS_A);
    echo $domain;
    foreach($dns_records as $dns_record){
        if (!$dns_record){
            //var_dump($dns_record);
            echo "empty";
        }
    }
}
```

$domains holds the ids and domains from the table that I want to check. The warnings I am getting are:

> Warning: Invalid argument supplied for foreach() for the later foreach

And

> Warning: dns_get_record(): DNS Query failed for dns_get_record

By the look of it, I am getting these errors when dns_get_record() does not find anything. I am trying to mark these domains as having an issue in the database, so I need a method to detect them. I've tried empty() and other methods to detect them, but everything I do brings up the PHP warnings above. Is this because it's a multi-dimensional array? How do I go about doing this properly?

Thanks
2018/03/14
443
1,435
<issue_start>username_0: I managed to get notified when an event's conditions are met.

```
Private Sub Worksheet_Change(ByVal Target As Range)
    Dim c As Range
    For Each c In Range("H2:H7")
        If Format$(c.Value, "HH:MM:SS") = "00:15:00" Then
            MsgBox "Block ends in 15 mins"
        End If
    Next c
End Sub
```

My current problem is that when one of the events is triggered, I want the MsgBox to tell me which block was triggered.

```
Block
1   15:00
2   17:00
3   19:00
4   21:00
5   23:00
6   01:00
```

For example, as above: when Block 2 hits the 15-minute mark, I want to be notified by a MsgBox saying "Block 2 ends in 15 mins". Thank you for the help, and I hope this isn't confusing.
2018/03/14
1,042
3,708
<issue_start>username_0: New Node & React user here. I'm following the [React tutorial](https://reactjs.org/tutorial/tutorial.html) but run into a problem on my Windows 10 machine: ``` C:\Users\Wout>create-react-app my-app Creating a new React app in C:\Users\Wout\my-app. Installing packages. This might take a couple of minutes. Installing react, react-dom, and react-scripts... npm ERR! path C:\Users\Wout\my-app\node_modules\abab npm ERR! code ENOENT npm ERR! errno -4058 npm ERR! syscall rename npm ERR! enoent ENOENT: no such file or directory, rename 'C:\Users\Wout\my-app\node_modules\abab' -> 'C:\Users\Wout\my-app\node_modules\.abab.DELETE' npm ERR! enoent This is related to npm not being able to find a file. npm ERR! enoent npm ERR! A complete log of this run can be found in: npm ERR! C:\Users\Wout\AppData\Roaming\npm-cache\_logs\2018-03-14T15_21_11_867Z-debug.log Aborting installation. npm install --save --save-exact --loglevel error react react-dom react-scripts has failed. Deleting generated file... node_modules Deleting generated file... package.json Deleting my-app / from C:\Users\Wout Done. ``` Things I've tried so far: * Reinstall Node.js (v8.10.0, npm 5.6.0) * Disabling Adobe creative cloud sync & related processes (these were spawning node.exe processes) * Running CMD in admin * Run the command from VS Code Powershell * Closing Visual Studio Code before executing the command * Run the command with npx * Rebooted the system several times * Running the command from the user folder as well as other drives * Running the Typescript version: `create-react-app my-app --scripts-version=react-scripts-ts` It's all quite strange to me, since on Mac OS X the command executes without issues. I also can't seem to find other people with the same problem. For what it's worth, it always stops after this "finalizing abab" package step. I have an installation of XAMPP running an Apache and MySQL service, don't know if that has anything to do with it. 
I don't think so since I'm not even running the app yet, plus the server runs on port 3000 anyway.<issue_comment>username_1: I eventually solved it by closing as many extra processes as possible. Will try to find out which process was interfering with the command. **Edit: Ding ding ding! It was MalwareBytes! The "realtime protection 30-day trial" had restarted after an update and it was screwing with the filesystem.** Upvotes: 3 <issue_comment>username_2: I also just installed MalwareBytes, and I get the same error. I tried running create-react-app from the CLI as administrator, and while it was running I read your solution, so I shut down MalwareBytes while the installation was in progress. It worked, but I don't know if that is because I ran as administrator, or because I shut down MalwareBytes. But for anyone having this problem, you could also try running your command prompt/PowerShell with administrator rights. Upvotes: 0 <issue_comment>username_3: Try running the command from inside the project directory... that worked for me. I had been running the command from the parent of the project directory; for example, with reactApp/helloworld, run it from the helloworld directory instead... Upvotes: 0 <issue_comment>username_4: In case you have just installed the create-react-app command, try running the command from a new terminal (re-open another tab). Upvotes: 0 <issue_comment>username_5: Instead of `create-react-app my-app`, run it like `npx create-react-app my-app`. If the error still exists, run `npm install -g npm`. Then run `npx create-react-app my-app` again. Upvotes: 3 <issue_comment>username_6: Try running `npm fund`, then try to rerun `npm create-react-app my-app`. This worked for me. Upvotes: 0
2018/03/14
414
1,584
<issue_start>username_0: I have an SSIS project where a Flat File Source reads a CSV file. It contains a field Order Item Id that is formatted as a string like "347262171", surrounded by quotes. I want to convert that to a numeric value so I can use it as an index, but everything I try gives me this result:

> Data conversion failed. The data conversion for column "Order Item ID" returned status value 2 and status text "The value could not be converted because of a potential loss of data."

What would be the easiest workaround for this?<issue_comment>username_1: You can add a Derived Column Transformation (DCT) to the data flow where you add an expression that removes quotes from the value:

```
REPLACE( [ID FIELD], "\"", "" )
```

where `ID FIELD` is the column with the ID value in your data. Add this column as a new NVARCHAR column to your data flow (ie `STRIPPED_ID_FIELD`). Then, add a second DCT, where you cast this value to number `(DB_NUMERIC(10,0))[STRIPPED_ID_FIELD]`, and name it `NUM_ID_FIELD`. The reason I'd do this in a second, separate DCT is that you can add an error output to this second one, and redirect that to a Recordset Destination. Then add a Data Viewer to the error output to see what sort of records are wrong. For instance, ID fields that have a letter that you're not expecting. Upvotes: 1 <issue_comment>username_2: If you are using a Flat File connection, you can remove the double quotes by setting the Text Qualifier to `"` in the connection manager. [Image description of where to insert Qualifier](https://i.stack.imgur.com/tnr63.jpg) Upvotes: 0
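Outside SSIS, the two-step idea from the first answer (strip the qualifier, then cast, and treat rows that still fail as errors) can be sketched in a few lines of Python; the values here are made up for illustration:

```python
raw = '"347262171"'

# Step 1: what the Derived Column REPLACE(..., "\"", "") expression does.
stripped = raw.replace('"', '')

# Step 2: the numeric cast; a bad value raises instead of silently truncating.
order_item_id = int(stripped)
print(order_item_id)

# Rows that still fail the cast are the ones you would redirect to the
# error output / Recordset Destination for inspection.
for value in ('"347262171"', '"34726X171"'):
    try:
        int(value.replace('"', ''))
    except ValueError:
        print("bad row:", value)
```

The same split applies in the package: strip first, convert second, and route conversion failures somewhere you can look at them.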
2018/03/14
905
3,197
<issue_start>username_0: I am new to the CakePHP framework. I have been stuck on an issue for some time now. I am trying to make a JSON response work. I have read the tutorial on how to here: [JSON and XML views in CakePHP](https://book.cakephp.org/3.0/en/views/json-and-xml-views.html). However, it is still not working on my end.

**Here is my code:**

Inside my controller - `App\Controller\ExpensesController`

```
public function getMonthlyExpenses($month = null) {
    $expenses = $this->Expenses->find('all');
    $this->set(compact($expenses, $month));
    $this->set('_serialize', array("expenses", "month"));
}
```

I am calling `getMonthlyExpenses` inside a View Element - `/src/Template/Element/Chart/expenseChart.ctp`

```
$.ajax({
    url: '/finance/expenses/getMonthlyExpenses/3.json',
    accepts: 'application/json',
    async: false,
    success: function(data) {
        console.log("data" + data);
    },
    error: function() {
        console.log("there was an error");
    }
});
```

In my configuration - `routes.php` - I have

```php
use Cake\Core\Plugin;
use Cake\Routing\RouteBuilder;
use Cake\Routing\Router;
use Cake\Routing\Route\DashedRoute;

Router::defaultRouteClass(DashedRoute::class);

Router::scope('/', function (RouteBuilder $routes) {
    $routes->extensions(['json']);
    $routes->connect('/', ['controller' => 'Pages', 'action' => 'display', 'home']);
    $routes->connect('/pages/*', ['controller' => 'Pages', 'action' => 'display']);
    $routes->fallbacks(DashedRoute::class);
});

Plugin::routes();
```

**Results**

1. When I access the controller action directly via `/expenses/getMonthlyExpenses/3.json`, I simply get a `null` response with an `application/json` content type.
2. When I access the template where the View Element is called - `/finance/expenses/index` - it returns `null` as well.

Thank you, any help would be appreciated.
I am sure there is something really simple that I am overlooking.<issue_comment>username_1: I have a similar application running CakePHP 3.5; here is a sample of my routes file, perhaps you missed something. Mine is slightly different since my REST API sits in a plugin.

```
Router::plugin('Api', ['path' => '/publisher/'], function (RouteBuilder $routes) {
        $routes->extensions(['json']);
        $routes->fallbacks('DashedRoute');
    }
);
```

Try changing your AJAX request endpoint from getMonthlyExpenses to either get-monthly-expenses or get\_monthly\_expenses; I believe that's where your problem may be. Also try debugging a bit more by simply doing a die('hello world') at the beginning of your controller functions.

Additionally, try a different way of setting the results:

```
public function getMonthlyExpenses($month = null) {
    $expenses = $this->Expenses->find('all');
    $this->set(compact($expenses, $month));
    $this->set('_serialize', true);
}
```

Upvotes: 0 <issue_comment>username_2: Try to change:

```
$this->set(compact($expenses, $month));
```

to

```
$this->set(compact("expenses", "month"));
```

It looks like the [compact function](http://php.net/manual/en/function.compact.php) accepts either variable names or an array of variable names.

Upvotes: 3 [selected_answer]
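The selected answer hinges on `compact()` taking variable *names* rather than values. A rough Python analogue (the `compact_like` helper below is hypothetical, purely for illustration) shows why passing the values produces an empty, and hence null-serialized, result:

```python
# Hypothetical helper mimicking PHP's compact(): it receives variable
# *names* and looks their values up in a scope mapping.
def compact_like(scope, *names):
    return {name: scope[name] for name in names if name in scope}

scope = {"expenses": ("rent", "food"), "month": 3}

# compact("expenses", "month"): the names are found, both keys serialize.
good = compact_like(scope, "expenses", "month")

# compact($expenses, $month): the *values* are passed, the lookups miss,
# and the view has nothing to serialize, which is why the response is null.
bad = compact_like(scope, ("rent", "food"), 3)

print(good)  # {'expenses': ('rent', 'food'), 'month': 3}
print(bad)   # {}
```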
2018/03/14
936
3,071
<issue_start>username_0: I want to install Carthage on my Mac using the `brew install carthage` command. However, I get the following error:

```
touch: /usr/local/Homebrew/.git/FETCH_HEAD: Permission denied
touch: /usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask/.git/FETCH_HEAD: Permission denied
touch: /usr/local/Homebrew/Library/Taps/dart-lang/homebrew-dart/.git/FETCH_HEAD: Permission denied
touch: /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/.git/FETCH_HEAD: Permission denied
fatal: Unable to create '/usr/local/Homebrew/.git/index.lock': Permission denied
error: could not lock config file .git/config: Permission denied
Warning: carthage 0.26.2 is already installed, it's just not linked.
You can use `brew link carthage` to link this version.
```

I also get the following error when I use `sudo brew install carthage`:

```
Error: Running Homebrew as root is extremely dangerous and no longer supported.
As Homebrew does not drop privileges on installation you would be giving all
build scripts full access to your system.
```

Can you let me know what the problem is? Thanks in advance.<issue_comment>username_1: Check the permissions on these files.
```
ls -l /usr/local/Homebrew/.git/FETCH_HEAD
ls -l /usr/local/Homebrew/Library/Taps/caskroom/homebrew-cask/.git/FETCH_HEAD
ls -l /usr/local/Homebrew/Library/Taps/dart-lang/homebrew-dart/.git/FETCH_HEAD
ls -l /usr/local/Homebrew/Library/Taps/homebrew/homebrew-core/.git/FETCH_HEAD
```

If you don't have the permissions, run

```
sudo chown -R $(whoami):admin /usr/local/* && sudo chmod -R g+rwx /usr/local/*
```

On High Sierra and above, run this command instead:

```
sudo chown -R $(whoami) $(brew --prefix)/*
```

You can also see the related GitHub issues [here](https://github.com/Homebrew/legacy-homebrew/issues/43471)

Upvotes: 8 [selected_answer]<issue_comment>username_2: In High Sierra, run the command:

```
sudo chown -R $(whoami) $(brew --prefix)/*
```

Upvotes: 6 <issue_comment>username_3: In my case this command worked:

```
sudo chown -R $(whoami) $(brew --prefix)/*
```

However, there is also an easier way of installing Carthage than the command-line way. It is enough to download the latest package from this link and install it wizard-style on your Mac: <https://github.com/Carthage/Carthage/releases>

Upvotes: 4 <issue_comment>username_4: I have High Sierra and only this worked for me:

```
sudo chown -R $(whoami):admin /usr/local/* && sudo chmod -R g+rwx /usr/local/*
```

You should not write sudo before brew; the right command is

```
brew install mysql
```

Upvotes: 1 <issue_comment>username_5: You can also use this instead: `sudo chown -R $USER $(brew --prefix)/*` Upvotes: 2 <issue_comment>username_6: This worked for me on macOS Catalina 10.15.1:

```
sudo chown -R $(whoami):admin /usr/local/* && sudo chmod -R g+rwx /usr/local/*
```

Upvotes: 4 <issue_comment>username_7: I have macOS Catalina 10.15.1; after hours (like always) this worked. `sudo chown -R $(whoami):admin /usr/local/* && sudo chmod -R g+rwx /usr/local/*` Upvotes: 2
2018/03/14
925
3,064
<issue_start>username_0: I am using the following code to integrate TinyMCE:

```
tinyMCE.init({
    mode : "textareas",
    theme : "advanced",
    plugins : "emotions,spellchecker,advhr,insertdatetime,preview",
    // Theme options - button# indicated the row# only
    theme_advanced_buttons1 : "newdocument,|,bold,italic,underline,|,justifyleft,justifycenter,justifyright,fontselect,fontsizeselect,formatselect",
    theme_advanced_buttons2 : "cut,copy,paste,|,bullist,numlist,|,outdent,indent,|,undo,redo,|,link,unlink,anchor,image,|,code,preview,|,forecolor,backcolor",
    theme_advanced_buttons3 : "insertdate,inserttime,|,spellchecker,advhr,,removeformat,|,sub,sup,|,charmap,emotions",
    theme_advanced_toolbar_location : "top",
    theme_advanced_toolbar_align : "left",
    theme_advanced_statusbar_location : "bottom",
    theme_advanced_resizing : true
});
```

Following is the error I get:

> 
> Uncaught ReferenceError: tinyMCE is not defined
> 
> 

Please help me with this.
2018/03/14
407
1,388
<issue_start>username_0: How can I check if a user is logged in to Umbraco from the view (.cshtml)? I would also like to know how to check the user's role. `User.Identity.IsAuthenticated` always returns false.

```
if( User has role = "someRole" )
{
   do stuff
}
```

I'm using Umbraco version 7.8.1<issue_comment>username_1: If `User.Identity.IsAuthenticated` is false, you are probably not calling `FormsAuthentication.SetAuthCookie(username, true);` after the user is successfully validated.

To check the authentication and roles, use:

```
var userIsAuthenticated = Request.IsAuthenticated;
var userIsAdmin = User.IsInRole(role: "admin");
```

Upvotes: 1 <issue_comment>username_2: To check if an Umbraco user is logged in on the front end, I'm using this:

```
var isLoggedInUmbraco = userTicket.GetUmbracoAuthTicket() != null;

//To add an Umbraco edit link on a page:
if(isLoggedInUmbraco){
    [Edit Page](/umbraco/#/content/content/edit/@Model.Content.Id)
}
```

Upvotes: 0 <issue_comment>username_3: I found this code to be working for my application:

```
var auth = new HttpContextWrapper(HttpContext.Current).GetUmbracoAuthTicket();
if (auth != null)
{
    Name: @auth.Name
    ID: @auth.UserData
}
```

Resource: ["Umbraco.Web.UmbracoContext.Current.Security.CurrentUser" returns "null" for front-end requests](http://issues.umbraco.org/issue/U4-6496)

Upvotes: 3 [selected_answer]