date (stringlengths 10 to 10) | nb_tokens (int64 60 to 629k) | text_size (int64 234 to 1.02M) | content (stringlengths 234 to 1.02M) |
---|---|---|---|
2018/03/16
| 799 | 2,954 |
<issue_start>username_0: I'm calling `getUserMedia()` to get a video stream and simply setting the `stream` as the `srcObject` of a video element.
Specifically, on Chrome on 2 different Windows Tablets in Portrait mode, the video is sideways.
I can't find any orientation info in the stream or video track objects and the width and height track info match the video element and are accurate to the aspect ratio track info.
You can duplicate with <https://camera.stackblitz.io>
How do I get the orientation info from the stream or logically rotate the video?
Edit:
I do not want the orientation of the device or screen, but of the video stream. Maybe "rotation" is the right verbiage.
In other words how do I know when to rotate the video without a human looking at it?
Edit 2:
"Chrome on Windows Tablet in Portrait mode" is just what I experienced I don't know if the issue is isolated to that or the issue is with every Windows tablet but the main question is how do I tell if the video is sideways or rotated?<issue_comment>username_1: You can get the orientation info via javascript with `screen.orientation`.
See on the [MDN Docs](https://developer.mozilla.org/en-US/docs/Web/API/ScreenOrientation)
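As a purely illustrative sketch (not part of the original answer), reading the current orientation and reacting to changes looks roughly like this:
```js
// Hypothetical sketch: read the current screen orientation and react to changes.
console.log(screen.orientation.type, screen.orientation.angle); // e.g. "portrait-primary", 0

screen.orientation.addEventListener('change', () => {
  console.log('Screen orientation is now', screen.orientation.type);
});
```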
Upvotes: -1 <issue_comment>username_2: As far as I know,
>
> Orientation is in the field of **screen** not stream.
>
>
>
The [Screen Orientation API](https://learn.microsoft.com/en-us/microsoft-edge/dev-guide/device/screen-orientation-api) provides the ability to read the screen orientation type and angle, to be informed when the screen orientation state changes, and be able to lock the screen orientation to a specific state.
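For illustration (not part of the original answer), locking the orientation with that API is a short call, though browsers typically only honour it in certain contexts such as fullscreen:
```js
// Hypothetical sketch: request a portrait lock; the returned Promise rejects
// where locking is not permitted.
screen.orientation.lock('portrait')
  .then(() => console.log('locked to portrait'))
  .catch(err => console.error('lock failed:', err));
```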
In my view, one great method for managing orientation is using [media query](https://developer.mozilla.org/en-US/docs/Web/CSS/@media/orientation) like below:
```
/*
##Device = Tablets, Ipads (portrait)
##Screen = B/w 768px to 1024px
*/
@media (min-width: 768px) and (max-width: 1024px) {
  /* CSS */
}
```
For using other methods like JavaScript or manifest please see this [article](https://developer.mozilla.org/en-US/docs/Web/API/CSS_Object_Model/Managing_screen_orientation).
**Edit**
You mentioned *Windows Tablet* in the title of the question, and you also mentioned
that you are looking for a way to set orientation options *without a human looking at it*. The second condition contradicts the first.
>
> How do we detect that the device is a Windows Tablet without a human looking at
> the video?
>
>
>
It's not possible to answer this question. I think your view about stream is wrong.
You can choose one of them at a time:
1. If you need to detect some devices like a *Windows Tablet*, you can
 use something like a *media query* or a JavaScript method to detect the
 device.
2. If you need to set options *without a human looking at it*, you can
 use a method like [MediaStreamTrack.applyConstraints()](https://developer.mozilla.org/en-US/docs/Web/API/MediaStreamTrack/applyConstraints) (see the sketch below).
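A rough sketch of that second option (assuming `stream` is the MediaStream returned by `getUserMedia()`; the constraint values are illustrative, not from the original answer):
```js
// Hypothetical sketch: ask the camera for a portrait-shaped capture instead of
// rotating the rendered video afterwards.
const [videoTrack] = stream.getVideoTracks();
videoTrack.applyConstraints({ width: { ideal: 720 }, height: { ideal: 1280 } })
  .then(() => console.log('settings now:', videoTrack.getSettings()))
  .catch(err => console.error('constraints not satisfiable:', err));
```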
Upvotes: 1
|
2018/03/16
| 271 | 865 |
<issue_start>username_0: There are a lot of great apps (like [Pusher](https://github.com/noodlewerk/NWPusher) or [APNs Provider](https://itunes.apple.com/us/app/easy-apns-provider-push-notification-service-testing-tool/id989622350?mt=12)) to test push notifications, but they only support `.p12` or `.pem` certificates. How do I test keys with the `.p8` extension?
[](https://i.stack.imgur.com/AhI12.png)<issue_comment>username_1: You can use <https://github.com/onmyway133/PushNotifications> if you want an existing application.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can test pushes with .p8 JWT Key via [OwnProvider](https://www.ownprovider.net)
Here is a macOS application which can be used to do this work too: <https://www.angularcorp.com/post/macos-ownprovider>
Upvotes: 1
|
2018/03/16
| 699 | 2,475 |
<issue_start>username_0: How can I convert an HTML to PDF with [OpenPDF](https://github.com/LibrePDF/OpenPDF)?
From what I know, OpenPDF is a fork of iText 4. Unfortunately I can't find iText 4 documentation.<issue_comment>username_1: OK, it seems you can't do it directly with only OpenPDF; you have to use [Flying Saucer](https://github.com/flyingsaucerproject/flyingsaucer): get [flying-saucer-pdf-openpdf](https://mvnrepository.com/artifact/org.xhtmlrenderer/flying-saucer-pdf-openpdf) and then use it. An example:
```
String inputFile = "my.xhtml";
String outputFile = "generated.pdf";
String url = new File(inputFile).toURI().toURL().toString();
ITextRenderer renderer = new ITextRenderer();
renderer.setDocument(url);
renderer.layout();
try (OutputStream os = Files.newOutputStream(Paths.get(outputFile))) {
renderer.createPDF(os);
}
```
[Source](https://gist.github.com/sreeprasad/3233907).
PS: Flying Saucer expects XHTML syntax. If you have problems with your HTML file, you could use [Jsoup](https://mvnrepository.com/artifact/org.jsoup/jsoup):
```
String inputFile = "my.html";
String outputFile = "generated.pdf";
String html = new String(Files.readAllBytes(Paths.get(inputFile)));
final Document document = Jsoup.parse(html);
document.outputSettings().syntax(Document.OutputSettings.Syntax.xml);
ITextRenderer renderer = new ITextRenderer();
renderer.setDocumentFromString(document.html());
renderer.layout();
try (OutputStream os = Files.newOutputStream(Paths.get(outputFile))) {
renderer.createPDF(os);
}
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: Here's a simple Kotlin HTML to PDF. Jsoup is not required.
```kotlin
fun pdfFromHtml(ostream: OutputStream, html: String) {
val renderer = ITextRenderer()
val sharedContext = renderer.sharedContext
sharedContext.isPrint = true
sharedContext.isInteractive = false
renderer.setDocumentFromString(html)
renderer.layout()
renderer.createPDF(ostream)
}
```
Here's one with Jsoup.
```kotlin
fun pdfFromHtml(ostream:OutputStream, html: String) {
val renderer = ITextRenderer()
val sharedContext = renderer.sharedContext
sharedContext.isPrint = true
sharedContext.isInteractive = false
val document = Jsoup.parse(html)
document.outputSettings().syntax(org.jsoup.nodes.Document.OutputSettings.Syntax.xml)
renderer.setDocumentFromString(document.html())
renderer.layout()
renderer.createPDF(ostream)
}
```
Upvotes: 0
|
2018/03/16
| 957 | 2,780 |
<issue_start>username_0: I am learning the pandas library in Python 3, but I have a big problem. When I use the `read_excel` command I get an error.
```
import pandas as pd
df = pd.read_excel(r'D:\PythonProjects\stocks.xlsx',sheetname=0)
```
The error looks like this:
```
C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\util\_decorators.py:118: FutureWarning: The `sheetname` keyword is deprecated, use `sheet_name` instead
return func(*args, **kwargs)
Traceback (most recent call last):
File "D:/MyPythonProjects/urlib.py", line 4, in
df = pd.read\_excel(r'D:\MyPythonProjects\NewL\stocksa.xlsx',sheetname=0 )
File "C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\util\\_decorators.py", line 118, in wrapper
return func(\*args, \*\*kwargs)
File "C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\io\excel.py", line 230, in read\_excel
io = ExcelFile(io, engine=engine)
File "C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\pandas\io\excel.py", line 294, in \_\_init\_\_
self.book = xlrd.open\_workbook(self.\_io)
File "C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\xlrd\\_\_init\_\_.py", line 141, in open\_workbook
ragged\_rows=ragged\_rows,
File "C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\xlrd\xlsx.py", line 808, in open\_workbook\_2007\_xml
x12book.process\_stream(zflo, 'Workbook')
File "C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\xlrd\xlsx.py", line 265, in process\_stream
meth(self, elem)
File "C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\xlrd\xlsx.py", line 392, in do\_sheet
sheet = Sheet(bk, position=None, name=name, number=sheetx)
File "C:\Users\Kuba\AppData\Local\Programs\Python\Python36\lib\site-packages\xlrd\sheet.py", line 326, in \_\_init\_\_
self.extract\_formulas = book.extract\_formulas
AttributeError: 'Book' object has no attribute 'extract\_formulas'
```
I don't know how to fix it. I've tried reinstalling pandas and xlrd and I still get the same error. Can you give me advice on how to fix this problem?<issue_comment>username_1: What happens when you do
```
pd.ExcelFile(filename)
```
If that throws the same error, then it might be due to the version of `xlrd`. I couldn't find `extract_formulas` on `xlrd`'s `Book` object in their latest release. [[source]](https://github.com/python-excel/xlrd/blob/master/xlrd/book.py)
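A hypothetical way to narrow this down (not part of the original answer), reusing the file path from the question:
```
import pandas as pd

# Open the workbook explicitly; if this raises the same AttributeError,
# the problem is in the xlrd/pandas combination rather than in read_excel.
xls = pd.ExcelFile(r'D:\PythonProjects\stocks.xlsx')
print(xls.sheet_names)
df = xls.parse(sheet_name=0)  # note: sheet_name, not the deprecated sheetname
```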
Upvotes: 0 <issue_comment>username_2: Try installing the
>
> xlrd version '0.9.4'
>
>
>
Worked for me
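For example, pinning that version could look like this (command assumed, not part of the original answer):
```
pip install xlrd==0.9.4
```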
Upvotes: 0 <issue_comment>username_3: Just add this at the beginning of your program:
```python
import xlrd
```
>
> in case it's not found, > pip install xlrd
>
>
>
Upvotes: -1
|
2018/03/16
| 690 | 2,455 |
<issue_start>username_0: I am doing some automated testing with Python + Selenium. Is there any way to check the suggestion box in Google, for example, using Selenium? Something like: I would like to know that the suggestion table is revealed when the auto test puts "google" in the search bar.
[](https://i.stack.imgur.com/RGBoH.png)<issue_comment>username_1: try the following code:
```
suggestions = driver.find_elements_by_css_selector("li[class='sbsb_c gsfs']")
for element in suggestions:
print(element.text)
```
Iterate through all elements using for loop, and call *text* on WebElement.
Upvotes: 3 [selected_answer]<issue_comment>username_2: To extract the *Auto Suggestions* from *Search Box* on *Google Home Page* you have to induce [WebDriverWait](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.wait.html#module-selenium.webdriver.support.wait) with [expected\_conditions](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.expected_conditions.html#module-selenium.webdriver.support.expected_conditions) as [visibility\_of\_all\_elements\_located](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.expected_conditions.html#selenium.webdriver.support.expected_conditions.visibility_of_all_elements_located) as follows :
```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(executable_path="C:\\Utility\\BrowserDrivers\\chromedriver.exe")
driver.get("http://www.google.com")
search_field = driver.find_element_by_name("q")
search_field.send_keys("google")
searchText_google_suggestion = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.XPATH, "//form[@action='/search' and @role='search']//ul[@role='listbox']//li//span")))
for item in searchText_google_suggestion :
print(item.text)
```
Console Output :
```
google
google translate
google maps
google drive
google pixel 2
google earth
google news
google scholar
google play store
google photos
```
Here you can find a relevant discussion on [How to automate Google Home Page auto suggestion?](https://stackoverflow.com/questions/53612605/how-to-automate-google-home-page-auto-suggestion/53613491#53613491)
Upvotes: 0
|
2018/03/16
| 776 | 2,756 |
<issue_start>username_0: I am trying to mount a connection to map a folder from a cluster (running Ubuntu 16.04) to my local machine (Mac Book Pro). Therefore I am typing the following:
`sshfs username@host:/home/username/path_to_mount /Users/local_user/Desktop/existing_dir`
I get the following error:
`fuse: bad mount point '/Users/local_user/Desktop/existing_dir': No such file or directory`
That sounds really weird to me because the directory exists, I am sure of that. Any clue on why this happens and what might be a possible solution?
Maybe it's just a stupid question, but I am not getting anywhere!
Thanks for your help in advance!
Best,
Kim<issue_comment>username_1: try the following code:
```
suggestions = driver.find_elements_by_css_selector("li[class='sbsb_c gsfs']")
for element in suggestions:
print(element.text)
```
Iterate through all elements using for loop, and call *text* on WebElement.
Upvotes: 3 [selected_answer]<issue_comment>username_2: To extract the *Auto Suggestions* from *Search Box* on *Google Home Page* you have to induce [WebDriverWait](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.wait.html#module-selenium.webdriver.support.wait) with [expected\_conditions](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.expected_conditions.html#module-selenium.webdriver.support.expected_conditions) as [visibility\_of\_all\_elements\_located](https://seleniumhq.github.io/selenium/docs/api/py/webdriver_support/selenium.webdriver.support.expected_conditions.html#selenium.webdriver.support.expected_conditions.visibility_of_all_elements_located) as follows :
```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
driver = webdriver.Chrome(executable_path="C:\\Utility\\BrowserDrivers\\chromedriver.exe")
driver.get("http://www.google.com")
search_field = driver.find_element_by_name("q")
search_field.send_keys("google")
searchText_google_suggestion = WebDriverWait(driver, 10).until(EC.visibility_of_all_elements_located((By.XPATH, "//form[@action='/search' and @role='search']//ul[@role='listbox']//li//span")))
for item in searchText_google_suggestion :
print(item.text)
```
Console Output :
```
google
google translate
google maps
google drive
google pixel 2
google earth
google news
google scholar
google play store
google photos
```
Here you can find a relevant discussion on [How to automate Google Home Page auto suggestion?](https://stackoverflow.com/questions/53612605/how-to-automate-google-home-page-auto-suggestion/53613491#53613491)
Upvotes: 0
|
2018/03/16
| 338 | 1,398 |
<issue_start>username_0: If there is a singleton instance of SomeClass and some other class repeatedly calls the createMap() method on that instance, would there ever be a memory leak? I tested this with a 50,000-iteration loop and monitored the heap, and I don't see a leak, but I would like an opinion.
```
class SomeClass {
Map someMap;
public void createMap() {
someMap = new HashMap();
System.out.println("Created map");
try {
Thread.sleep(10);
} catch(InterruptedException ie){
}
}
}
```<issue_comment>username_1: Short answer - NO.
Long answer - depends on complete code under `try` and how `createMap()` is called.
I am assuming from your question that you are concerned about the functionality of your app and overall performance. Memory is just one aspect of it. Your singleton class would break right away from a logic perspective, as you do not have concurrency control on `createMap()` and multiple parallel calls may overwrite `someMap` (see the sketch below).
It also depends on what you do inside the `try` block. If you are passing `someMap` out to another method or similar, there may be a memory leak.
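As a purely illustrative sketch (not part of the original answer), adding that missing concurrency control could look like this:
```
import java.util.HashMap;
import java.util.Map;

class SomeClass {
    private Map someMap;

    // synchronized so parallel callers cannot overwrite each other's map mid-use
    public synchronized void createMap() {
        someMap = new HashMap();
    }
}
```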
Upvotes: 2 [selected_answer]<issue_comment>username_2: This would not create a memory leak.
You have one variable that is a reference to a `HashMap` and can only reference one `HashMap`.
In a loop, the previous hashmap would lose its reference and then become a candidate for garbage collection.
Upvotes: 0
|
2018/03/16
| 602 | 2,561 |
<issue_start>username_0: I have the task of writing a WebApp with angular, where a prototype exists. As I have no experience with Typescript and asynchronous programming, I am not sure how to approach this in the best way.
In the prototype a handshake is performed, which is a sequence of calculations and HTTP requests. All functions (even those that just calculate something) return a Promise containing the function and resolve or reject as a result.
These functions are combined in a long chain of `.then`, which also handles the errors. This looks rather strange and I am not sure what really happens there under the hood:
```js
this.startHandshake()
.then((buf) => this.calculateSomthing(buf)
.then((buf2) => http.sendData(buf2)
..., // this goes on many levels
(e:Error) => this.handleError(e)),
(e:Error) => this.handleError(e))
.catch((e: Error) => { /* Error Handling */ });
```
My Questions are:
1. Is this Code pattern common for this kind of problem?
2. I read that Promises are handled in a single threaded way. What does that mean for Promises that just calculate something (no http request, no timer) ? When is the function in the Promise executed? Is it instantly, put in a queue for a time when the original code is finished, or is the javascript engine scheduling those two lines of execution?
3. I would prefer to write some of the functions synchronously, but then I need a way to handle errors. The most obvious - exceptions - does not seem to play well with promises.
4. Is maybe async / await applicable in my case to make the code more readable? How would errors be handled? Is it combinable with exceptions?<issue_comment>username_1: Short answer - NO.
Long answer - depends on complete code under `try` and how `createMap()` is called.
I am assuming from your question that you are concerned about the functionality of your app and overall performance. Memory is just one aspect of it. Your singleton class would break right away from a logic perspective, as you do not have concurrency control on `createMap()` and multiple parallel calls may overwrite `someMap`.
It also depends on what you do inside the `try` block. If you are passing `someMap` out to another method or similar, there may be a memory leak.
Upvotes: 2 [selected_answer]<issue_comment>username_2: This would not create a memory leak.
You have one variable that is a reference to a `HashMap` and can only reference one `HashMap`.
In a loop, the previous hashmap would lose its reference and then become a candidate for garbage collection.
Upvotes: 0
|
2018/03/16
| 1,187 | 4,079 |
<issue_start>username_0: I am taking over a Cordova/Ionic project. I've never worked with Cordova or Ionic before, so I am complete beginner in that area. However, I have worked with Node, on and off, for a few years, so I mostly know about that.
A simple starting task, I need to add Appsee:
<https://www.appsee.com/docs/ios/ionic>
This part was easy:
```
In case you're using TypeScript (default in ionic 2 and ionic 3) place the following line under the imports:
declare var Appsee:any;
```
Which I put in this file:
./src/app/app.component.ts
But this part is less obvious:
```
Call the following method when your app starts, preferably when the deviceready event fires:
Appsee.start("YOUR API KEY");
```
So I ran grep to find out where deviceready is:
```
grep -iR "deviceready" * | grep -v node_modules
www/build/vendor.js: * resolve when Cordova triggers the `deviceready` event.
www/build/vendor.js: // prepare a custom "ready" for cordova "deviceready"
www/build/vendor.js: doc.addEventListener('deviceready', function () {
www/build/vendor.js: // 3) cordova deviceready event triggered
www/build/vendor.js: var deviceReady = new Promise(function (resolve, reject) {
www/build/vendor.js: document.addEventListener("deviceready", function () {
www/build/vendor.js: var deviceReadyDone = deviceReady.catch(function () {
www/build/vendor.js: return deviceReadyDone.then(function () {
www/build/vendor.js: document.addEventListener('deviceready', function () {
www/build/vendor.js: console.log("Ionic Native: deviceready event fired after " + (Date.now() - before) + " ms");
www/build/vendor.js: console.warn("Ionic Native: deviceready did not fire within " + DEVICE_READY_TIMEOUT + "ms. This can happen when plugins are in an inconsistent state. Try removing plugins from plugins/ and reinstalling them.");
```
So I only see "deviceready" inside the `build` folder. I think I'm supposed to avoid editing anything inside of `build`? Isn't that full of stuff that's generated by Ionic/Cordova?
Where do I register something with deviceready?
If I run:
```
ionic info
```
I get:
```
[WARN] Detected locally installed Ionic CLI, but it's too old--using global CLI.
cli packages: (/usr/local/lib/node_modules)
@ionic/cli-utils : 1.19.2
ionic (Ionic CLI) : 3.20.0
global packages:
cordova (Cordova CLI) : 8.0.0
local packages:
@ionic/app-scripts : 2.1.4
Cordova Platforms : none
Ionic Framework : ionic-angular 3.6.0
System:
ios-deploy : 1.9.2
Node : v6.5.0
npm : 3.10.3
OS : OS X El Capitan
Xcode : Xcode 7.3.1 Build version 7D1014
Environment Variables:
ANDROID_HOME : not set
Misc:
backend : pro
```
I'm happy to follow directions from elsewhere.<issue_comment>username_1: You should use **Platform** to get the device ready event in Ionic. In your app.component.ts:
>
> 1. import { Platform } from 'ionic-angular';
> 2. Add platform.ready() method inside the constructor as shown below
>
>
>
```
constructor(platform: Platform, statusBar: StatusBar, splashScreen: SplashScreen) {
platform.ready().then(() => {
// Okay, so the platform is ready.
// Here you can do any higher level native things you might need.
Appsee.start("YOUR API KEY");
statusBar.styleDefault();
splashScreen.hide();
  });
}
```
It will be triggered when the device/platform is ready.
[Here is the documentation](https://ionicframework.com/docs/api/platform/Platform/)
Upvotes: 4 [selected_answer]<issue_comment>username_2: Add this to your app.module and nothing will occur before the platform is ready. That way you don't have to worry about what the entry point of your Angular app will be.
```
providers: [
{
provide: APP_INITIALIZER,
useFactory: (platform: Platform) => {
return () => platform.ready()
},
deps: [Platform],
multi: true
}]
```
Upvotes: 0
|
2018/03/16
| 572 | 2,035 |
<issue_start>username_0: This is my PHP code. When I upload any file with a .jpg or .png extension, I store the file extension in `$lower_img_extension`, and I checked that it stores `jpg`.
However, I am getting the result `NOT RIGHT` while the result should be `RIGHT`.
What am I missing?
```
<?php
if(isset($_POST["submit"]))
{
print_r($_FILES["myfile"]);
echo "<br";
$name=$_FILES["myfile"]["name"]."
";
$tem_image=$_FILES["myfile"]["tmp_name"];
$store="upload/".$name;
$arr_img=explode(".",$name);
$lower_img_extension=end($arr_img);
if(empty($lower_img_extension))
{
echo "yes, is empty";
}
else
{
if($lower_img_extension=='jpg' || $lower_img_extension=='JPG')
{
echo "RIGHT";
}
else{
echo "NOT RIGHT";
}
}
}
?>
This is my HTML code:
" enctype="multipart/form-data">
```<issue_comment>username_1: You should use **Platform** to get the device ready event in Ionic. In your app.component.ts:
>
> 1. import { Platform } from 'ionic-angular';
> 2. Add platform.ready() method inside the constructor as shown below
>
>
>
```
constructor(platform: Platform, statusBar: StatusBar, splashScreen: SplashScreen) {
platform.ready().then(() => {
// Okay, so the platform is ready.
// Here you can do any higher level native things you might need.
Appsee.start("YOUR API KEY");
statusBar.styleDefault();
splashScreen.hide();
});
```
}
it will be triggered when device/platform is ready.
[Here is the documentation](https://ionicframework.com/docs/api/platform/Platform/)
Upvotes: 4 [selected_answer]<issue_comment>username_2: Add this to your app.module and nothing will occur before the platform is ready. That way you don't have to worry about what the entry point of your Angular app will be.
```
providers: [
{
provide: APP_INITIALIZER,
useFactory: (platform: Platform) => {
return () => platform.ready()
},
deps: [Platform],
multi: true
}]
```
Upvotes: 0
|
2018/03/16
| 2,100 | 5,934 |
<issue_start>username_0: How can I write this using typescript?
So I have a type Status which has three properties: name, actionName and style. I also have a constant statusTypes which has two properties of type Status.
This is what I have done so far, but it is not working. I am getting this error: `[ts] Type '{ Active: any; Inactive: any; }' is not assignable to type 'Status[]'. Object literal may only specify known properties, and 'Active' does not exist in type 'Status[]'`
```
export interface Status {
name: string;
actionName: string;
style: string;
}
export const statusTypes: Status[] = {
Active : {
name: "Active",
actionName: "Deactivate",
style: "success"
},
Inactive : {
name: "Inactive",
actionName: "Activate",
style: "warning"
}
};
```<issue_comment>username_1: The Problem
===========
You are attempting to assign an object to a `Status` array.
Solutions
=========
Below is a list of some of the options you have. It is up to you to decide what works best in your design:
Option 1: Array
---------------
```
export const statusTypes: Status[] = [{
name: "Active",
actionName: "Deactivate",
style: "success"
}, {
name: "Inactive",
actionName: "Activate",
style: "warning"
}
];
```
Option 2: Index Signature
-------------------------
```
export const statusTypes: {[key: string]: Status} = {
active: {
name: "Active",
actionName: "Deactivate",
style: "success"
},
inactive: {
name: "Inactive",
actionName: "Activate",
style: "warning"
}
};
```
Option 3: Interface
-------------------
```
interface StatusTypes {
active: Status;
inactive: Status;
}
export const statusTypes: StatusTypes = {
active: {
name: "Active",
actionName: "Deactivate",
style: "success"
},
inactive: {
name: "Inactive",
actionName: "Activate",
style: "warning"
}
};
```
Upvotes: 2 <issue_comment>username_2: After playing around a bit I figured out what I needed:
```
export interface Status {
name: string;
actionName: string;
style: string;
}
export const statusTypes = {
Active : {
name: "Active",
actionName: "Deactivate",
style: "success"
},
Inactive : {
name: "Inactive",
actionName: "Activate",
style: "warning"
}
};
```
Upvotes: 1 <issue_comment>username_3: Items in an array can't have names.
You can either create a simple array.
```
export const statusTypes: Status[] = [
{
name: "Active",
actionName: "Deactivate",
style: "success"
},
{
name: "Inactive",
actionName: "Activate",
style: "warning"
}
];
```
Or create an object and let the compiler infer the type for it :
```
export const statusTypes = {
Active: {
name: "Active",
actionName: "Deactivate",
style: "success"
},
Inactive: {
name: "Inactive",
actionName: "Activate",
style: "warning"
}
};
```
Or if you want to ensure the correct type for the properties you can define a helper function:
```
function defineStatuses<T extends { [name: string]: Status }>(o: T) {
return o
}
export const statusTypes = defineStatuses({
Active: {
name: "Active",
actionName: "Deactivate",
style: "success"
},
Inactive: {
name: "Inactive",
actionName: "Activate",
style: "warning"
}
});
```
This last approach is the one I would recommend, you have full type safety as the compiler will warn you if you forget or add properties of `Status`, no extra interfaces to maintain and you can use defined properties (such as `Active`/`Inactive`) instead of `[string]`
Upvotes: 2 <issue_comment>username_4: As others have mentioned, the problem is that it's an array:
If that's the schema you want, you should achieve it like this:
```
interface StatusTypes {
Active: Status;
Inactive: Status
}
export interface Status {
name: string;
actionName: string;
style: string;
}
export const statusTypes: StatusTypes = {
Active : {
name: "Active",
actionName: "Deactivate",
style: "success"
},
Inactive : {
name: "Inactive",
actionName: "Activate",
style: "warning"
}
};
```
Check it out at the [TS Playground](https://www.typescriptlang.org/play/#src=%2F%2FAs%20others%20have%20mentioned%2C%20the%20problem%20is%20that%20it's%20an%20array%3A%2F%2F%20If%20that's%20the%20schema%20you%20want%2C%20you%20should%20achieve%20it%20like%20this%3A%20%20%20%20interface%20StatusTypes%20%7B%20%20%20%20%20Active%3A%20Status%3B%20%20%20%20%20Inactive%3A%20Status%20%20%20%20%7D%20%20%20%20%20%20%20%20export%20interface%20Status%20%7B%20%20%20%20%20%20name%3A%20string%3B%20%20%20%20%20%20actionName%3A%20string%3B%20%20%20%20%20%20style%3A%20string%3B%20%20%20%20%7D%20%20%20%20%20%20%20%20export%20const%20statusTypes%3A%20StatusTypes%20%3D%20%7B%20%20%20%20%20%20Active%20%3A%20%7B%20%20%20%20%20%20%20%20name%3A%20%22Active%22%2C%20%20%20%20%20%20%20%20actionName%3A%20%22Deactivate%22%2C%20%20%20%20%20%20%20%20style%3A%20%22success%22%20%20%20%20%20%20%7D%2C%20%20%20%20%20%20Inactive%20%3A%20%7B%20%20%20%20%20%20%20%20name%3A%20%22Inactive%22%2C%20%20%20%20%20%20%20%20actionName%3A%20%22Activate%22%2C%20%20%20%20%20%20%20%20style%3A%20%22warning%22%20%20%20%20%20%20%7D%20%20%20%20%7D%3B)
Whenever you want to have objects with keys, the correct way is to create interfaces, and each interface should have a property, which will be the object's key.
In English:
By assigning the interface `StatusTypes` to an object, you are telling it that it's an object with `Active` and `Inactive` keys whose shapes conform to the `Status` interface. If you want to add another key to the `const statusTypes`, you'd add it to the `StatusTypes` interface. If there is a key that you don't want to be required, append a "?" to the property name:
```
interface StatusTypes {
Active: Status;
Inactive: Status
// This won't be required.
optional?: Status
}
```
Upvotes: 2
|
2018/03/16
| 676 | 2,215 |
<issue_start>username_0: When using raw_input in Python, a user has to type an input and then press enter.
Is there a way for me to code something where the code will prompt for several user inputs at the same time and THEN press enter for the code to run?
For example:
instead of ...
```
>>> Name:
>>> Age:
>>> Gender:
```
it'll have ...
```
>>>
Name:
Age:
Gender:
```<issue_comment>username_1: One way is to do like this:
```
x=raw_input("Enter values: ")
a=x.split(' ')
```
And now you have the separate values in `a`.
Demonstration:
```
>>> x=raw_input("Enter values: ")
Enter values: 12 65 hello
>>> a=x.split(' ')
>>> a
['12', '65', 'hello']
```
Then you could do `age=a[0]` and so forth.
You could use the split directly like this:
```
a,b,c=raw_input("Enter values: ").split(' ')
```
One downside is that if the user does not give three values, you will get an error. If you split afterwards you can perform checks on the input first.
Note:
This method may cause problems if you want spaces in the data, since space is used as the delimiter. Switch to another token in `split()` to allow spaces if needed.
Upvotes: 0 <issue_comment>username_2: No, [`input()`](https://docs.python.org/3/library/functions.html#input) only allows one string input to be supplied.
What you can do is this:
```
name, age, gender = input('Enter Name|Age|Gender:').split('|')
# user inputs "ABC|30|M"
print(name, age, gender)
# ABC 30 M
```
Now you just rely on the user not having a `|` character in their name.
Or, of course, you can ask separate questions.
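That last option would simply be (a sketch, not part of the original answer, using `input()` as above):
```
name = input('Name: ')
age = input('Age: ')
gender = input('Gender: ')
```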
Upvotes: 2 <issue_comment>username_3: If `name` will always be letters, `age` will always be numbers and `gender` will always be letters, and they are in that order, this should work:
```
info=input("what is your name, age and gender?")
name=""
age=""
gender=""
var="n"
for i in info:
    if var=="n":
        try:
            int(i)
            var="a"
        except:
            name+=i
    if var=="a":
        try:
            int(i)
            age+=i
        except:
            var="g"
    if var=="g":
        gender+=i
print(name, age, gender)
```
It means that you won't have to worry about the user inputting a separator.
Upvotes: 0
|
2018/03/16
| 968 | 3,055 |
<issue_start>username_0: I posted a question similar to this and the solution people provided worked, but only when I commented out all my other CSS code. For some reason, the CSS is interfering with the JavaScript function working properly. I want to change the background color to black when I click the button, but I want to change it back to white when I click it again.
```
Light Switch
```
```
#switch {
position: relative;
display: inline-block;
width: 60px;
height: 34px;
margin: 0 auto;
}
#switch input {
display: none;
}
.slider {
position: absolute;
cursor: pointer;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-color: grey;
-webkit-transition: .4s;
transition: .4s;
border-radius: 34px;
}
.slider:before {
position: absolute;
content: "";
height: 26px;
width: 26px;
left: 4px;
background-color: white;
bottom: 4px;
transition: .4s;
border-radius: 50%;
}
input:checked + .slider {
background-color: #1583ea;
}
input:checked + .slider:before {
transform: translateX(26px);
}
```
```
function start() {
var click = document.getElementById("switch");
click.addEventListener("click", toggle);
};
function toggle() {
var color = document.getElementById("body");
var backColor = color.style.backgroundColor;
color.style.backgroundColor = backColor === "black" ? "white" : "black";
};
start();
```<issue_comment>username_1: You want to use a `.toggle`, specifically, `classList.toggle("my-black");`
```js
var button = document.getElementsByTagName("button")[0];
var body = document.body;
button.addEventListener("click", function(evt){
body.classList.toggle("my-black");
});
```
```css
.my-black{
background:#424242;
}
html,body{
height:100%;
width:100%;
}
```
```html
asdfasdf
```
Upvotes: 1 <issue_comment>username_2: Your function was triggering twice due to the event listener on the switch div as opposed to the slider itself. I refactored your code a bit.
```js
function start() {
var toggleSwitch = document.getElementById("slider");
toggleSwitch.addEventListener("click", toggle);
};
function toggle() {
var backColor = document.body.style.backgroundColor;
document.body.style.backgroundColor = backColor === "black" ? "white" : "black";
};
start();
```
```css
#switch {
position: relative;
display: inline-block;
width: 60px;
height: 34px;
margin: 0 auto;
}
#switch input {
display: none;
}
.slider {
position: absolute;
cursor: pointer;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-color: grey;
-webkit-transition: .4s;
transition: .4s;
border-radius: 34px;
}
.slider:before {
position: absolute;
content: "";
height: 26px;
width: 26px;
left: 4px;
background-color: white;
bottom: 4px;
transition: .4s;
border-radius: 50%;
}
input:checked + .slider {
background-color: #1583ea;
}
input:checked + .slider:before {
transform: translateX(26px);
}
```
```html
Light Switch
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 744 | 2,300 |
<issue_start>username_0: It's pretty simple, we have this line. I'd like to grab the info from:
`console.log($('#fourth-step tr:nth-child(3) td:nth-child(3)'));`
So the output is `#fourth-step tr:nth-child(3) td:nth-child(3)`.
This outputs a very long object but **I need only the selector information**.
I'm almost sure there is already a thread somewhere about this problem, but I couldn't find one, sorry.
To make things easier, here is a list of things that I've tried that didn't work\*.
1. `console.log('z' + {obj});`
2. `console.log(JSON.stringify({obj}));`
3. `console.log(String({obj});`
*`{` and `}` - shortcut.*
Any ideas? :)<issue_comment>username_1: You want to use a `.toggle`, specifically, `classList.toggle("my-black");`
```js
var button = document.getElementsByTagName("button")[0];
var body = document.body;
button.addEventListener("click", function(evt){
body.classList.toggle("my-black");
});
```
```css
.my-black{
background:#424242;
}
html,body{
height:100%;
width:100%;
}
```
```html
asdfasdf
```
Upvotes: 1 <issue_comment>username_2: Your function was triggering twice due to the event listener on the switch div as opposed to the slider itself. I refactored your code a bit.
```js
function start() {
var toggleSwitch = document.getElementById("slider");
toggleSwitch.addEventListener("click", toggle);
};
function toggle() {
var backColor = document.body.style.backgroundColor;
document.body.style.backgroundColor = backColor === "black" ? "white" : "black";
};
start();
```
```css
#switch {
position: relative;
display: inline-block;
width: 60px;
height: 34px;
margin: 0 auto;
}
#switch input {
display: none;
}
.slider {
position: absolute;
cursor: pointer;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-color: grey;
-webkit-transition: .4s;
transition: .4s;
border-radius: 34px;
}
.slider:before {
position: absolute;
content: "";
height: 26px;
width: 26px;
left: 4px;
background-color: white;
bottom: 4px;
transition: .4s;
border-radius: 50%;
}
input:checked + .slider {
background-color: #1583ea;
}
input:checked + .slider:before {
transform: translateX(26px);
}
```
```html
Light Switch
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 883 | 3,008 |
<issue_start>username_0: I'm building an online gallery in React, and it requires an external script in order to work properly.
There are 2 main components, namely Home.js and Single.js. Home.js displays some categories the images are organized in, and the Single.js is the detail view of a category (it contains all the photos under a specific category). The Router looks like this:
```
} />
} />
} />
```
I am loading the script 'main.js' using this function:
```
appendScripts() {
const main = document.createElement('script');
main.src = '/js/main.js';
main.async = true;
main.id = 'main';
document.body.appendChild(main);
}
```
Now the thing is that the script loads on the Home.js component, but it won't load on the Single.js component only the second time I access it through the home page (for the same category), even though it is appended in the DOM. And the same thing goes for accessing the home page through Single.js. If Single.js loads first, I need to access the Home.js 2 times through the Single.js for the script to load properly.
The components both have these functions called:
```
componentDidMount() {
this.appendScripts();
}
componentWillMount() {
this.props.getImages(this.state.id);
this.props.getCategory(this.state.id);
}
```
Any thoughts?<issue_comment>username_1: You want to use a `.toggle`, specifically, `classList.toggle("my-black");`
```js
var button = document.getElementsByTagName("button")[0];
var body = document.body;
button.addEventListener("click", function(evt){
body.classList.toggle("my-black");
});
```
```css
.my-black{
background:#424242;
}
html,body{
height:100%;
width:100%;
}
```
```html
asdfasdf
```
Upvotes: 1 <issue_comment>username_2: Your function was triggering twice due to the event listener on the switch div as opposed to the slider itself. I refactored your code a bit.
```js
function start() {
var toggleSwitch = document.getElementById("slider");
toggleSwitch.addEventListener("click", toggle);
};
function toggle() {
var backColor = document.body.style.backgroundColor;
document.body.style.backgroundColor = backColor === "black" ? "white" : "black";
};
start();
```
```css
#switch {
position: relative;
display: inline-block;
width: 60px;
height: 34px;
margin: 0 auto;
}
#switch input {
display: none;
}
.slider {
position: absolute;
cursor: pointer;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-color: grey;
-webkit-transition: .4s;
transition: .4s;
border-radius: 34px;
}
.slider:before {
position: absolute;
content: "";
height: 26px;
width: 26px;
left: 4px;
background-color: white;
bottom: 4px;
transition: .4s;
border-radius: 50%;
}
input:checked + .slider {
background-color: #1583ea;
}
input:checked + .slider:before {
transform: translateX(26px);
}
```
```html
Light Switch
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 901 | 3,061 |
<issue_start>username_0: The code below is the view controller that is navigated to with a navigationItem (`navigationItem.leftBarButtonItem = UIBarButtonItem(title:`) from a previous tableViewController, and I am trying to display a UIView in my view controller. However, nothing is showing.
```
import UIKit
class SecondpgController: UIViewController {
var inputContainerView: UIView!
override func viewDidLoad() {
super.viewDidLoad()
view.backgroundColor = UIColor .gray
let inputContainerView = UIView()
self.view.addSubview(inputContainerView)
//inputContainerView.backgroundColor = UIColor(red: 162/255, green: 20/255, blue: 35/255, alpha: 1)
inputContainerView.backgroundColor = .white
inputContainerView.translatesAutoresizingMaskIntoConstraints = false
//inputContainerView.topAnchor.constraint(equalTo: self.view.topAnchor, constant: 300).isActive = true
//nputContainerView.leftAnchor.constraint(equalTo: self.view.leftAnchor, constant: 10).isActive = true
inputContainerView.centerXAnchor.constraint(equalTo: self.view.centerXAnchor, constant: 200).isActive = true
inputContainerView.centerYAnchor.constraint(equalTo: self.view.centerYAnchor, constant: 200).isActive = true
```
The app runs well; however, after reaching the view controller, nothing is shown but a grey screen.
Why is this happening and how can I solve it?<issue_comment>username_1: You want to use a `.toggle`, specifically, `classList.toggle("my-black");`
```js
var button = document.getElementsByTagName("button")[0];
var body = document.body;
button.addEventListener("click", function(evt){
body.classList.toggle("my-black");
});
```
```css
.my-black{
background:#424242;
}
html,body{
height:100%;
width:100%;
}
```
```html
asdfasdf
```
Upvotes: 1 <issue_comment>username_2: Your function was triggering twice due to the event listener on the switch div as opposed to the slider itself. I refactored your code a bit.
```js
function start() {
var toggleSwitch = document.getElementById("slider");
toggleSwitch.addEventListener("click", toggle);
};
function toggle() {
var backColor = document.body.style.backgroundColor;
document.body.style.backgroundColor = backColor === "black" ? "white" : "black";
};
start();
```
```css
#switch {
position: relative;
display: inline-block;
width: 60px;
height: 34px;
margin: 0 auto;
}
#switch input {
display: none;
}
.slider {
position: absolute;
cursor: pointer;
top: 0;
left: 0;
right: 0;
bottom: 0;
background-color: grey;
-webkit-transition: .4s;
transition: .4s;
border-radius: 34px;
}
.slider:before {
position: absolute;
content: "";
height: 26px;
width: 26px;
left: 4px;
background-color: white;
bottom: 4px;
transition: .4s;
border-radius: 50%;
}
input:checked + .slider {
background-color: #1583ea;
}
input:checked + .slider:before {
transform: translateX(26px);
}
```
```html
Light Switch
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,268 | 3,893 |
<issue_start>username_0: Given a flexbox container, how could I make sure that, if children have lots of content, they don't overflow the container?
```css
.container {
display: flex;
flex-direction: column;
background-color: #ccc;
height: 210px;
width: 200px;
}
.child {
display: flex;
width: 100px;
}
.first {
background-color: rgba(255, 0, 0, 0.2);
}
.second {
background-color: rgba(0, 255, 0, 0.2);
}
.third {
background-color: rgba(0, 0, 255, 0.2);
}
```
```html
first first first first first first first
second second second second second second second second second second second second second second second second second second second
third
```
[](https://i.stack.imgur.com/VjIeK.png)<issue_comment>username_1: Using `min-height: 210px;` instead of explicitly setting a height on your container gives you the desired effect by letting an extra-long child extend its height.
Also, great question writing! Nice to see diagrams of what's happening vs. what's expected, etc.
Upvotes: -1 <issue_comment>username_2: You could use `overflow: auto` on the `child` element. If you want to hide the overflow only on the largest item, you could use `flex: 1` on that item.
```css
.container {
display: flex;
flex-direction: column;
background-color: #ccc;
height: 210px;
width: 200px;
}
.child {
display: flex;
width: 100px;
overflow: auto;
}
.first {
background-color: rgba(255, 0, 0, 0.2);
}
.second {
background-color: rgba(0, 255, 0, 0.2);
flex: 1;
}
.third {
background-color: rgba(0, 0, 255, 0.2);
}
```
```html
first first first first first first first
second second second second second second second second second second second second second second second second second second second
third
```
Upvotes: -1 <issue_comment>username_3: The children overflow their parent element because their intrinsic height (the height of their contents) is larger than the parent's height. You can advise the browser to ignore the intrinsic height by setting `min-height: 0` on the child elements. If you add `overflow: hidden` the result should be what you seem to expect:
```css
.container {
display: flex;
flex-direction: column;
background-color: #ccc;
height: 210px;
width: 200px;
}
.child {
width: 100px;
min-height: 0;
overflow: hidden;
}
.first {
background-color: rgba(255, 0, 0, 0.2);
}
.second {
background-color: rgba(0, 255, 0, 0.2);
}
.third {
background-color: rgba(0, 0, 255, 0.2);
}
```
```html
first first first first first first first
second second second second second second second second second second second second second second second second second second second
third
```
The children get height distributed among them proportionally to their content height. Overflowing content is hidden.
Upvotes: 6 [selected_answer]<issue_comment>username_4: flex has
Three values: flex-grow | flex-shrink | flex-basis;
you should apply particular max-height for first child.
```css
.first {
background-color: rgba(255, 0, 0, 0.2);
flex: 0 1 1;
max-height:70px;
}
```
Otherwise, if the first div's height increases, it will dominate the other divs.
```css
.container {
display: flex;
flex-direction: column;
background-color: #ccc;
height: 210px;
width: 200px;
}
.child {
width: 100px;
overflow:hidden;
}
.first {
background-color: rgba(255, 0, 0, 0.2);
flex: 0 1 auto;
}
.second {
background-color: rgba(0, 255, 0, 0.2);
flex: 0 1 1;
}
.third {
background-color: rgba(0, 0, 255, 0.2);
flex: 0 1 1;
}
```
```html
first first first first first first first
second second second second second second second second second second second second second second second second second second second
third
```
Upvotes: 0
|
2018/03/16
| 745 | 2,437 |
<issue_start>username_0: I'm trying to create an image carousel style system, which displays multiple images at once; except instead of sliding between those images, the images fade.
So for example, 5 images might be displayed, then after a set period of time (say 3 seconds), they all fade at once to the next 5 images.
I was trying to use Slick Carousel to achieve this, but the behaviour isn't available by default - you can fade one element easily, but as soon as you want to display multiple images, it doesn't work.
Some people have had similar issues, and have experimented with solutions using Slick Carousel, however none of them are quite right.
Here's a jsfiddle - <http://jsfiddle.net/22e6q2rt/> - showing my progress so far. This sort of works, but the transitions aren't very nice. I'd like a nice crossfade, where one set of images fades out while the next set fades in. Here's the code:
```
$('.multipleslider').slick({
dots: false,
infinite: true,
speed: 0,
slidesToShow: 3,
autoplay: true,
autoplaySpeed: 1400,
slidesToScroll: 3,
cssEase: 'linear'
});
.slick-slide {
opacity: .5;
transition: opacity .5s ease-in-out;
}
div.slick-current {
opacity: 1;
transition: opacity .5s ease-in-out;
}
div.slick-active{
opacity: 1;
transition: opacity .5s ease-in-out;
}
```
I don't mind if this system is a standalone jQuery / JavaScript solution, or uses an existing JavaScript plugin, such as Slick or Owl Carousel... but none of them seem suitable, and I've hit a wall! Any help would be amazing, thank you.<issue_comment>username_1: Try playing with the timing functions in the slick-slider object. I've changed them for you here:
```
$('.multipleslider').slick({
dots: false,
infinite: true,
speed: 1000,
slidesToShow: 3,
autoplay: false,
autoplaySpeed: 1400,
slidesToScroll: 3,
cssEase: 'ease-in'
});
```
I changed the autoplay to false and bumped the speed up to 1000; I'm guessing this is in milliseconds. On top of that, you can set cssEase to ease-in, ease-out, ease, a cubic-bezier, etc.
Upvotes: -1 <issue_comment>username_2: This is an example of react-slick with multiple elements almost fading that looks similar:
[carousal](https://codepen.io/arshanting/pen/pOKgZz)
```
$('.fade').slick({
autoplay: true,
arrows: false,
speed: 0,
dots: false,
infinite: true,
slidesToShow: 4,
slidesToScroll: 4,
cssEase: 'linear'
});
```
Upvotes: 0
|
2018/03/16
| 622 | 2,026 |
<issue_start>username_0: I'm looking to write a vim macro (not that I know how yet) eventually, as I'm sure I can do this in vim, but I don't know the basics yet.
I want to grab the branch name (or part of it anyway - that part of the regex should be straightforward) from a git commit message, and paste it, with a colon, to the top of the file, and hopefully leave the cursor at the end of that.
So the file looks a bit like this when I start:
```
$ <-- cursor here
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# On branch JIRA-1234-some-description-here
# Your branch is up to date with ...
```
I want to capture the `/On branch \([A-Z]*-[0-9]*\)/`, move back up to the top, paste it, and a colon, leaving the cursor after the colon, e.g.:
```
JIRA-1234: $ <-- cursor here
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# On branch JIRA-1234-some-description-here
# Your branch is up to date with ...
```
Ideally, also leave me in insert mode there.<issue_comment>username_1: Try playing with the timing functions in the slick-slider object. I've changed them for you here:
```
$('.multipleslider').slick({
dots: false,
infinite: true,
speed: 1000,
slidesToShow: 3,
autoplay: false,
autoplaySpeed: 1400,
slidesToScroll: 3,
cssEase: 'ease-in'
});
```
I changed the autoplay to false and bumped the speed up to 1000; I'm guessing this is in milliseconds. On top of that, you can set cssEase to ease-in, ease-out, ease, a cubic-bezier, etc.
Upvotes: -1 <issue_comment>username_2: This is an example of react-slick with multiple elements almost fading that looks similar:
[carousal](https://codepen.io/arshanting/pen/pOKgZz)
```
$('.fade').slick({
autoplay: true,
arrows: false,
speed: 0,
dots: false,
infinite: true,
slidesToShow: 4,
slidesToScroll: 4,
cssEase: 'linear'
});
```
Upvotes: 0
|
2018/03/16
| 800 | 2,648 |
<issue_start>username_0: I am testing in Safari 11.0.3
Site: <https://3milychu.github.io/met-erials>
I have the following function to create a header that pops out the selected item upon scroll
```
function scrollState () {
var elmnt = document.getElementById("title");
var rep = elmnt.offsetTop;
if (window.pageYOffset >= elmnt.offsetHeight) {
// $('input:not(:checked').parent().hide();
$('input:not(:checked').parent().css("display","none");
$("input:checked").css("display", "inline");
$("label").css("marginLeft", "35%");
$("label" ).css("fontSize", "4em");
$("label" ).css("textAlign", "center");
$("input:checked").css("float", "none");
$("input:checked").css("verticalAlign", "top");
$("input[type=radio]").css("width", "3em");
$("input[type=radio]").css("height", "3em");
$("input:checked").css("fontSize", "0.5em");
} else {
$("input:checked").css("display", "inline")
$("label").css("marginLeft", "0%");
$("label" ).css("textAlign", "none");
$("input:checked").css("float", "right");
$("input[type=radio]").css("width", "2em");
$("input[type=radio]").css("height", "2em");
$("input:checked").css("fontSize", "11px");
// $('input:not(:checked').parent().show();
$('input:not(:checked').parent().css("display","inline-block");
$("label").css("fontSize", "1.5em");
};
};
```
I call it by:
```
window.onscroll = function() {scrollState()};
```
Why is this not working in Safari? I commented out the .hide() method after seeing that Safari needs it to be replaced with .css("display","none").
This works in Chrome and Firefox as desired (when you use the .hide() and .show() methods).<issue_comment>username_1: Try playing with the timing functions in the slick-slider object. I've changed them for you here:
```
$('.multipleslider').slick({
dots: false,
infinite: true,
speed: 1000,
slidesToShow: 3,
autoplay: false,
autoplaySpeed: 1400,
slidesToScroll: 3,
cssEase: 'ease-in'
});
```
I changed the autoplay to false and bumped the speed up to 1000; I'm guessing this is in milliseconds. On top of that, you can set cssEase to ease-in, ease-out, ease, a cubic-bezier, etc.
Upvotes: -1 <issue_comment>username_2: This is an example of react-slick with multiple elements almost fading that looks similar:
[carousal](https://codepen.io/arshanting/pen/pOKgZz)
```
$('.fade').slick({
autoplay: true,
arrows: false,
speed: 0,
dots: false,
infinite: true,
slidesToShow: 4,
slidesToScroll: 4,
cssEase: 'linear'
});
```
Upvotes: 0
|
2018/03/16
| 1,550 | 5,530 |
<issue_start>username_0: I have a standalone application using Spring to connect to TIBCO (queues). Occasionally, for various reasons, TIBCO connection is closed by the server. Most of the things are recovering from this. However, sometimes JmsTemplate is not able to send response because of the error below. I have a retry in place but the same error keeps coming (see trace below).
Details that may be important:
I am using DefaultMessageListenerContainer to get the request and send the response in that receiving thread. Also, I am using the same connection factory for both DefaultMessageListenerContainer and JmsTemplate.
```
Caused by: org.springframework.jms.IllegalStateException: Session is closed; nested exception is javax.jms.IllegalStateException: Session is closed
at org.springframework.jms.support.JmsUtils.convertJmsAccessException(JmsUtils.java:279)
at org.springframework.jms.support.JmsAccessor.convertJmsAccessException(JmsAccessor.java:169)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:487)
at org.springframework.jms.core.JmsTemplate.send(JmsTemplate.java:559)
at org.springframework.jms.core.JmsTemplate.convertAndSend(JmsTemplate.java:682)
at org.springframework.jms.core.JmsTemplate.convertAndSend(JmsTemplate.java:670)
at org.springframework.integration.jms.JmsSendingMessageHandler.send(JmsSendingMessageHandler.java:149)
at org.springframework.integration.jms.JmsSendingMessageHandler.handleMessageInternal(JmsSendingMessageHandler.java:116)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
... 83 more
Caused by: javax.jms.IllegalStateException: Session is closed
at com.tibco.tibjms.TibjmsxSessionImp._createProducer(TibjmsxSessionImp.java:1067)
at com.tibco.tibjms.TibjmsxSessionImp.createProducer(TibjmsxSessionImp.java:5080)
at org.springframework.jms.core.JmsTemplate.doCreateProducer(JmsTemplate.java:1114)
at org.springframework.jms.core.JmsTemplate.createProducer(JmsTemplate.java:1095)
at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:591)
at org.springframework.jms.core.JmsTemplate$3.doInJms(JmsTemplate.java:562)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:484)
... 89 more
```
Communication with the TIBCO queue is done using the Spring framework. Here is the configuration. A message is received by DefaultMessageListenerContainer, processed, and JmsTemplate is used to send back the response. The connection factory is shared between receiver and sender (can this be an issue?).
```
```
One more detail that might help: I just noticed that the first exception is actually different. There is a "Connection is closed" exception first, followed by multiple "Session is closed" exceptions (on retry).
```
Caused by: javax.jms.JMSException: Connection is closed
at com.tibco.tibjms.TibjmsxLink.sendRequest(TibjmsxLink.java:322)
at com.tibco.tibjms.TibjmsxLink.sendRequest(TibjmsxLink.java:286)
at com.tibco.tibjms.TibjmsxLink.sendRequestMsg(TibjmsxLink.java:261)
at com.tibco.tibjms.TibjmsxSessionImp._createProducer(TibjmsxSessionImp.java:1075)
at com.tibco.tibjms.TibjmsxSessionImp.createProducer(TibjmsxSessionImp.java:5080)
at org.springframework.jms.core.JmsTemplate.doCreateProducer(JmsTemplate.java:1114)
at org.springframework.jms.core.JmsTemplate.createProducer(JmsTemplate.java:1095)
at org.springframework.jms.core.JmsTemplate.doSend(JmsTemplate.java:591)
at org.springframework.jms.core.JmsTemplate$3.doInJms(JmsTemplate.java:562)
at org.springframework.jms.core.JmsTemplate.execute(JmsTemplate.java:484)
... 89 more
```<issue_comment>username_1: This answer might help <https://stackoverflow.com/a/24494739/208934> In particular:
>
> When using JMS you shouldn't really cache the JMS Session (and
> anything hanging of that such as a Producer). The reason being is that
> the JMS Session is the unit of work within JMS and so should be a
> short lived object. In the Java EE world that JMS Session might also
> be enlisted with a global transaction for example and so needs to be
> scoped correctly.
>
>
>
It sounds like you're reusing the same session for multiple operations.
I have seen this problem in some of my tests, and telling Spring to use a pool of connections usually fixes it. If you were using ActiveMQ it can be done with the property `spring.activemq.pooled=true`; I'm not 100% sure how TIBCO accomplishes the same thing.
You might get the same results by defining a `PooledConnectionFactory` bean that overrides the default.
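As a rough illustration, here is a minimal sketch of wrapping the TIBCO factory in Spring's `CachingConnectionFactory` (the class and setters are standard Spring JMS; the cache size and bean wiring are assumptions, not taken from this thread):

```
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.CachingConnectionFactory;

@Configuration
public class JmsConfig {

    // Wrap the vendor (TIBCO) factory so JmsTemplate reuses cached sessions
    // and producers instead of creating and closing them on every send.
    @Bean
    public ConnectionFactory cachingConnectionFactory(ConnectionFactory tibcoConnectionFactory) {
        CachingConnectionFactory ccf = new CachingConnectionFactory(tibcoConnectionFactory);
        ccf.setSessionCacheSize(10);   // illustrative value
        ccf.setCacheProducers(true);
        return ccf;
    }
}
```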
Upvotes: 0 <issue_comment>username_2: It seems that Spring does not handle this case properly. I solved the issue by overriding JmsTemplate (code below) and handling the exception myself (cleaning up the session and the connection). I hope this helps.
```
@Override
public <T> T execute(SessionCallback<T> action, boolean startConnection) throws JmsException {
    try {
        return super.execute(action, startConnection);
    } catch (JmsException jmse) {
        logger.error("Exception while executing in JmsTemplate (will cleanup session & connection): ", jmse);
        // clean up the stale session/connection bound to the current resource holder
        Object resourceHolder =
                TransactionSynchronizationManager.getResource(getConnectionFactory());
        if (resourceHolder != null && resourceHolder instanceof JmsResourceHolder) {
            ((JmsResourceHolder) resourceHolder).closeAll();
        }
        throw jmse;
    }
}
```
Upvotes: 1
|
2018/03/16
| 830 | 2,916 |
<issue_start>username_0: I'm trying to make a function called loadFixtures available to all Jest tests.
I have the following line within the jest config object inside package.json:
```
"globalSetup": "/src/test/js/config/setup-globals.js"
```
setup-globals.js contains:
```
module.exports = function() {
function loadFixtures(filename) {
console.info('loadFixtures is working');
}
}
```
Within my tests I have, for example:
```
beforeEach(() => {
loadFixtures('tooltip-fixture.html');
});
```
However when I run Jest I get the following for each test:
```
ReferenceError: loadFixtures is not defined
```
I verified that the setup-globals.js file is definitely being found and loaded in by Jest before the tests execute.
Can anyone assist in identifying where I've gone wrong here? I've spent pretty much an entire day trying to debug without luck.<issue_comment>username_1: You should be using `setupFiles` and not `globalSetup`.
```
// jest config
"setupFiles": [
"/src/test/js/config/setup-globals.js"
]
```
then `src/test/js/config/setup-globals.js`:
```
global.loadFixtures = function (filename) {
  console.info('loadFixtures is working');
};
```
references: <https://medium.com/@justintulk/how-to-mock-an-external-library-in-jest-140ac7b210c2>
Upvotes: 5 [selected_answer]<issue_comment>username_2: If you bootstrapped your application using `npx create-react-app` (CRA), you do not need to add the `setupFiles` key under your `jest` key in the `package.json` file (CRA prevents overriding that key).
What you simply need to do is add a file `setupTests.js` in the root of your `src` folder and populate it with the snippet below:
```js
import { configure } from 'enzyme';
import Adapter from 'enzyme-adapter-react-16';
configure({
adapter: new Adapter(),
});
```
>
> remember you must have earlier installed the right versions of `enzyme` and `enzyme-adapter-react`
>
>
>
CRA has been wired to automatically load the `setupTests.js` file in the `src` folder if it exists. Hence, after adding these, you can go over to your test and do `import { shallow } from 'enzyme'` without triggering an error.
If you are not using `create-react-app`, all you need to do, in addition to adding the file above to your `src` folder, is to add the key `setupFiles` under the `jest` key in your package.json. It should look like this:
```
"jest": {
"setupFiles": ['/src/setupTests.js'],
}
```
and you are good to go.
Cheers!
Upvotes: 3 <issue_comment>username_3: You're defining a function in a different scope. How about you create a separate module and import it directly in your test files. Or if you really want to define it in the global scope, try using the following code in your `setup-globals.js` file.
```js
module.exports = function() {
global.loadFixtures = function(filename) {
console.info('loadFixtures is working');
}
}
```
Upvotes: 0
|
2018/03/16
| 1,258 | 4,716 |
<issue_start>username_0: I think it is a little ridiculous but it's hard to find information about what is this file. I've found a lot info how to get this `Apple Push Notification Authentication Key`, but i also want to know exactly what is it.
Here is some info i have found:
**Benefits:**
* No need to re-generate the push certificate every year;
* One auth key
can be used for all your apps;
* Same for sandbox and Production.
From [Apple Docs](https://developer.apple.com/library/content/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/APNSOverview.html):
>
> Token-based provider connection trust: A provider using the
> HTTP/2-based API can use JSON web tokens (JWT) to provide validation
> credentials for connection with APNs. In this scheme, you provision a
> public key to be retained by Apple, and a private key which you retain
> and protect. Your providers then use your private key to generate and
> sign JWT provider authentication tokens. Each of your push
> notification requests must include a provider authentication token.
>
>
> You can use a single, token-based connection between a provider and
> APNs can to send push notification requests to all the apps whose
> bundle IDs are listed in your online developer account.
>
>
> Every push notification request results in an HTTP/2 response from
> APNs, returning details on success or failure to your provider.
> Further check **Token-Based Provider-to-APNs Trust** section.
>
>
>
**Questions:**
* What is actually the .p8 file?
* What programm can open it? (Keychain didn't work for me)
* Is there a way to convert it to `.pem` or `.p12`?
* A little flow-out question in order to not create a new topic: Does the server side operate with .p8 the same way as .p12 or it should be additional tools added?<issue_comment>username_1: File extensions are just a convention, but most likely the `.p8` extension is used to indicate that it is a PKCS#8 PrivateKeyInfo (or EncryptedPrivateKeyInfo).
I'd expect the Keychain program to be able to open it as "a key", but not having a mac at hand I can't say. It should open with [SecItemImport](https://developer.apple.com/documentation/security/1395728-secitemimport?language=objc) (`kSecFormatOpenSSL`, `kSecItemTypePrivateKey`).
>
> Is there a way to convert it to .pem or .p12?
>
>
>
Assuming you mean "certificate" by `.pem`, no. If you mean PEM encoded, sure. It's either "BEGIN PRIVATE KEY" or "BEGIN ENCRYPTED PRIVATE KEY", depending.
It can also, technically, be converted into a PKCS#12. But Apple's PKCS#12 importer won't import (last I saw) private keys that it can't figure out what certificate they belong with (from the same PKCS#12).
This is just a private key, there's no certificate (thus no expiration). So certificate-based approaches don't make sense.
>
> Does the server side can operate with .p8 the same way as .p12 or it should be additional tools added?
>
>
>
This depends entirely on the details of the protocol, which I don't know. If the protocol transported the certificate then different machinery is involved with the conversion. If it just transported a signature and the server looked up the public key for verification then nothing changed server side.
Upvotes: 3 <issue_comment>username_2: It's a text file! The .p8 extension signifies a simple text file containing public/private key. You can open it with any text editor (TextEdit, vim, Sublime Text) to see your key.
Upvotes: 2 <issue_comment>username_3: The following is the state of my research:
The APNS .p8 file contains the **PRIVATE KEY** that is used to **SIGN** the JWT content for APNS messages.
The file itself is a pure text file, the KEY inside is formatted in PEM format.
The part between the -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY----- is a base64 formatted ASN.1 PKCS#8 representation of the key itself. You can use the following web service to inspect its contents ([ASN1JS](https://lapo.it/asn1js/)).
The KEY itself is 32 bytes long and is used to create the required ECDSA P-256 SHA-256 signature for the JWT. The resulting JWT looks like this '*{JWT header base64 encoded}.{JWT payload base64 encoded}.Signature (64 bytes) base64 encoded*'.
There are a lot of web services to decode such tokens, but they cannot verify the signature, as the corresponding PUBLIC KEY isn't known (Apple keeps it secret when providing the PRIVATE KEY).
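For illustration only, here is a minimal sketch of signing such a provider token from a .p8 file with the PyJWT library (the library choice, file name and identifiers are assumptions, not something this thread prescribes):

```
import time
import jwt  # PyJWT, with the 'cryptography' package installed for ES256 support

# Hypothetical file name and identifiers; replace with your own values.
with open("AuthKey_XXXXXXXXXX.p8") as key_file:
    private_key = key_file.read()  # the PEM text described above

provider_token = jwt.encode(
    {"iss": "YOUR_TEAM_ID", "iat": int(time.time())},  # JWT payload
    private_key,
    algorithm="ES256",               # ECDSA P-256 / SHA-256, as described above
    headers={"kid": "YOUR_KEY_ID"},  # JWT header
)
print(provider_token)
```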
EDIT: It seems, that the PUBLIC KEY is also included in the .p8 file, it can be extracted via OpenSSL (and is visible when decoding the ASN.1 content: the 520 bit stream).
>
> openssl ec -in AuthKey_1<KEY>.p8 -pubout -out
> AuthKey_1<KEY>_Public.p8
>
>
>
Upvotes: 5
|
2018/03/16
| 1,339 | 4,241 |
<issue_start>username_0: I need something like this
```
<?php
if http://growtopiajaw.my.vg
else if https://growtopiajaw.my.vg
then
header('Location: https://growtopiajaw.my.vg/homepage.html', true, 301);
exit();
?>
```
Basically, if I type in the URL growtopiajaw.my.vg, it will automatically redirect to <https://growtopiajaw.my.vg/homepage.html>. But when I type in the URL <https://growtopiajaw.my.vg>, it keeps refreshing in an infinite loop.
I know that some people will try to access the <https://growtopiajaw.my.vg> page.
I don't want my site to be problematic for users. Also, you can try visiting the site <https://www.growtopiajaw.my.vg>. You can see that it keeps refreshing the page non-stop.
So, I am seeking help from anyone who can help me. Thank you!
>
> EDIT:
>
>
>
Okay, so my question is not quite clear. What I actually meant was something like this. Redirect <http://growtopiajaw.my.vg> and <https://growtopiajaw.my.vg> (if the user went to this URL) to a specific page on https which makes the link <https://growtopiajaw.my.vg/homepage.html>. I currently have the code below in <http://growtopiajaw.my.vg/index.html>.
```
<?php
header('Location: https://growtopiajaw.my.vg/homepage.html', true, 301);
exit();
?>
```
The server will automatically load the index.html, so it will redirect to the page https://growtopiajaw.my.vg/homepage.html. (I cannot post more than 8 links. sorry)<issue_comment>username_1: Place a php file named `index.php` in your root directory. Write your code in it.
```
<?php
header('Location: https://growtopiajaw.my.vg/homepage.html');
exit;
```
Now all requests to `http://growtopiajaw.my.vg` and `https://growtopiajaw.my.vg` will be redirected to `https://growtopiajaw.my.vg/homepage.html`
Upvotes: 1 <issue_comment>username_2: So there are two possible scenarios that may apply here...
### Scenario 1 - Redirect all `HTTP` traffic to `HTTPS`
You can achieve this by simply adding an `.htaccess` to the root of your project with the following contents...
```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```
Notice the type of redirect here... it is `301` which translates to **permanent** redirect (vs. `302`, which translates to **temporary** redirection)
[Source: GoDaddy](https://www.godaddy.com/help/redirect-http-to-https-automatically-8828)
### Scenario 2 - Redirect (only) the *landing page* to `HTTPS`
A landing page is basically `PAGE_NAME.EXT`. This can have many forms... for example, consider the following excerpt from a hosting provider--
>
> Our Web servers look for index files in this order:
>
>
> index.html index.htm index.shtml index.php index.php5 index.php4
> index.php3 index.cgi default.html default.htm home.html home.htm
> Index.html Index.htm Index.shtml Index.php Index.cgi Default.html
> Default.htm Home.html Home.htm placeholder.html
>
>
> If the top level of
> your website contains a file with any of those names, that file will
> be shown when visitors don't specify a file name.
>
>
>
[Source: TigerTech](https://support.tigertech.net/index-file-names)
For simplicity sake, let's say your **default** landing page is `index.html`.
In which case, simply create `index.html` in the root of your project and add the following--
```
<?php
header("HTTP/1.1 301 Moved Permanently");
header("Location: https://growtopiajaw.my.vg/homepage.html");
exit();
?>
```
Now, any attempts to load `//growtopiajaw.my.vg` should take the user to `https://growtopiajaw.my.vg/homepage.html`
Notice though, this will redirect ONLY IF the user enters the `growtopiajaw.my.vg` URL --- if they go to `growtopiajaw.my.vg/about.html` then it will undoubtedly take them to a `HTTP` version of such a page.
Upvotes: 3 [selected_answer]<issue_comment>username_3: I suggest you rename homepage.html to index.php and then add the following code above the `doctype` tag
```
<?php
if( !isset( $_SERVER['HTTPS'] ) ) {
header( "location: https://$_SERVER[HTTP_HOST]$_SERVER[REQUEST_URI]" );
exit();
}
?>
```
This is all based on not being able to setup 301 rules on the server itself. Otherwise use what @username_2 has suggested.
Upvotes: 1
|
2018/03/16
| 2,640 | 7,593 |
<issue_start>username_0: I am trying to work with jupyter notebook, but when I open a file I receive the following error:
The kernel has died, and the automatic restart has failed. It is possible the kernel cannot be restarted. If you are not able to restart the kernel, you will still be able to save the notebook, but running code will no longer work until the notebook is reopened.
In the CMD I see the following:
```
(base) C:\Users\<NAME>>jupyter notebook
[W 19:05:33.006 NotebookApp] Error loading server extension jupyterlab
Traceback (most recent call last):
File "C:\Users\<NAME>\AppData\Roaming\Python\Python36\site-packages\notebook\notebookapp.py", line 1451, in init_server_extensions
mod = importlib.import_module(modulename)
File "C:\Anaconda3\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 994, in \_gcd\_import
File "", line 971, in \_find\_and\_load
File "", line 953, in \_find\_and\_load\_unlocked
ModuleNotFoundError: No module named 'jupyterlab'
[I 19:05:33.122 NotebookApp] Serving notebooks from local directory: C:\Users\<NAME>
[I 19:05:33.122 NotebookApp] 0 active kernels
[I 19:05:33.122 NotebookApp] The Jupyter Notebook is running at:
[I 19:05:33.122 NotebookApp] http://localhost:8888/?token=99a355c23c6617857e387f53d0af607ae26b89c20598336e
[I 19:05:33.122 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 19:05:33.122 NotebookApp]
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://localhost:8888/?token=<PASSWORD>
[I 19:05:33.247 NotebookApp] Accepting one-time-token-authenticated connection from ::1
[I 19:05:42.699 NotebookApp] Creating new notebook in
[I 19:05:43.563 NotebookApp] Kernel started: 1433cbbf-f4b9-4dd3-be19-e91d7ee3d82f
Traceback (most recent call last):
  File "C:\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 15, in <module>
    from ipykernel import kernelapp as app
  File "C:\Anaconda3\lib\site-packages\ipykernel\__init__.py", line 2, in <module>
    from .connect import *
  File "C:\Anaconda3\lib\site-packages\ipykernel\connect.py", line 13, in <module>
    from IPython.core.profiledir import ProfileDir
ModuleNotFoundError: No module named 'IPython'
[I 19:05:46.555 NotebookApp] KernelRestarter: restarting kernel (1/5), new random ports
Traceback (most recent call last):
  File "C:\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 15, in <module>
    from ipykernel import kernelapp as app
  File "C:\Anaconda3\lib\site-packages\ipykernel\__init__.py", line 2, in <module>
    from .connect import *
  File "C:\Anaconda3\lib\site-packages\ipykernel\connect.py", line 13, in <module>
    from IPython.core.profiledir import ProfileDir
ModuleNotFoundError: No module named 'IPython'
[I 19:05:49.591 NotebookApp] KernelRestarter: restarting kernel (2/5), new random ports
Traceback (most recent call last):
  File "C:\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 15, in <module>
    from ipykernel import kernelapp as app
  File "C:\Anaconda3\lib\site-packages\ipykernel\__init__.py", line 2, in <module>
    from .connect import *
  File "C:\Anaconda3\lib\site-packages\ipykernel\connect.py", line 13, in <module>
    from IPython.core.profiledir import ProfileDir
ModuleNotFoundError: No module named 'IPython'
[I 19:05:52.620 NotebookApp] KernelRestarter: restarting kernel (3/5), new random ports
Traceback (most recent call last):
  File "C:\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 15, in <module>
    from ipykernel import kernelapp as app
  File "C:\Anaconda3\lib\site-packages\ipykernel\__init__.py", line 2, in <module>
    from .connect import *
  File "C:\Anaconda3\lib\site-packages\ipykernel\connect.py", line 13, in <module>
    from IPython.core.profiledir import ProfileDir
ModuleNotFoundError: No module named 'IPython'
[W 19:05:53.595 NotebookApp] Timeout waiting for kernel_info reply from 1433cbbf-f4b9-4dd3-be19-e91d7ee3d82f
[I 19:05:55.632 NotebookApp] KernelRestarter: restarting kernel (4/5), new random ports
WARNING:root:kernel 1433cbbf-f4b9-4dd3-be19-e91d7ee3d82f restarted
Traceback (most recent call last):
  File "C:\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 15, in <module>
    from ipykernel import kernelapp as app
  File "C:\Anaconda3\lib\site-packages\ipykernel\__init__.py", line 2, in <module>
    from .connect import *
  File "C:\Anaconda3\lib\site-packages\ipykernel\connect.py", line 13, in <module>
    from IPython.core.profiledir import ProfileDir
ModuleNotFoundError: No module named 'IPython'
[W 19:05:58.671 NotebookApp] KernelRestarter: restart failed
[W 19:05:58.671 NotebookApp] Kernel 1433cbbf-f4b9-4dd3-be19-e91d7ee3d82f died, removing from map.
ERROR:root:kernel 1433cbbf-f4b9-4dd3-be19-e91d7ee3d82f restarted failed!
[W 19:05:58.705 NotebookApp] 410 DELETE /api/sessions/fd456273-adb3-48cd-92f8-d531c9b8f7a8 (::1): Kernel deleted before session
[W 19:05:58.709 NotebookApp] Kernel deleted before session
[W 19:05:58.709 NotebookApp] 410 DELETE /api/sessions/fd456273-adb3-48cd-92f8-d531c9b8f7a8 (::1) 4.00ms referer=http://localhost:8888/notebooks/Untitled11.ipynb?kernel_name=python3
```
I have tried to uninstall and then reinstall some modules. However, I was not able to solve the problem. Any ideas? THANKS!!<issue_comment>username_1: It seems like there is something wrong with the Jupyter installation. I recommend upgrading Anaconda to v5, which should install Jupyter by default (not sure if that is the case for v3, which is what seems to be used here). Try installing the missing modules using conda install if you want to stick with conda v3.
```
conda install -c conda-forge jupyterlab
conda install -c anaconda ipython
```
[IPython](https://anaconda.org/anaconda/ipython), [Jupyterlab](https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html)
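If you want to check first whether the kernel is simply starting against a Python installation that lacks IPython (a guess based on the mixed `AppData\Roaming` and `C:\Anaconda3` paths in the traceback), a sketch like this can help before reinstalling anything:

```
:: run from the Anaconda Prompt / cmd (sketch only)
jupyter kernelspec list
C:\Anaconda3\python.exe -c "import IPython; print(IPython.__version__)"
C:\Anaconda3\python.exe -m pip install --upgrade ipython ipykernel
```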
Upvotes: 3 <issue_comment>username_2: I got the same error. It's because we haven't been given administrator privileges. You have to install Anaconda as administrator, or run the app as administrator.
Just right-click on Jupyter or Spyder, then select "Run as administrator".
Or, while installing Anaconda, select install as administrator.
Upvotes: 1 <issue_comment>username_3: I have tried different methods to solve this issue but none of them worked for me; I finally solved it by re-installing Jupyter.
Open your Anaconda console and run the following command.
To install JupyterLab on Anaconda, type:
pip install jupyterlab
To run Jupyter, simply type:
jupyter-lab
Now the kernel-dead issue should be gone with the new Jupyter.
Upvotes: 0
|
2018/03/16
| 888 | 3,037 |
<issue_start>username_0: I have three components:
```
const Comp0 = () => 1;
const Comp1 = () => 2;
const Comp2 = () => 3;
```
I also have a class, with state:
`state = { activeComponent: 0 }`
This `activeComponent` can be changed by user to `1`, `2` or `0`.
In render, I have:
```
return (
  <div>
    {React.createElement(`Comp${this.state.activeComponent}`)}
  </div>
);
```
It should work... theoretically. However, I'm getting a really weird error. Two errors, in fact.
1. `Warning: <Comp0 /> is using uppercase HTML. Always use lowercase HTML tags in React.`
2. `Warning: The tag <Comp0> is unrecognized in this browser. If you meant to render a React component, start its name with an uppercase letter.`
How is that possible that they appear simultaneously?<issue_comment>username_1: You can just do a function with a mapping like this:
```
const stateArray = [Comp0, Comp1, Comp2];
const getComp = (Comp) => <Comp />;
const getCompFromArray = (i) => getComp(stateArray[i]);
```
Upvotes: 1 <issue_comment>username_2: You could simply render the dynamic tag like
```
const Tag = `Comp${this.state.activeComponent}`;
return (
  <Tag />
);
```
According to the **[docs](https://reactjs.org/docs/jsx-in-depth.html#choosing-the-type-at-runtime)**:
>
> You cannot use a general expression as the React element type. If you
> do want to use a general expression to indicate the type of the
> element, just assign it to a `capitalized` variable first.
>
>
>
In your case it doesn't work because you are passing the string name to React.createElement, whereas for a React component you need to pass the component itself, like
```
React.createElement(Comp0);
```
and for a normal DOM element you would pass a string like
```
React.createElement('div');
```
and since you write
```
`Comp${this.state.activeComponent}`
```
what you get is
```
React.createElement('Comp0')
```
which isn't quite understandable to react and it throws a warning
>
> Warning: is using uppercase HTML. Always use lowercase HTML
> tags in React.
>
>
>
Upvotes: 3 <issue_comment>username_3: If you were to create a custom component element with `React.createElement`, you have to pass the direct class/function, instead of its name (that's only for DOM elements), to it, e.g. `React.createElement(Shoot0)` instead of `React.createElement('Shoot0')`;
You can circumvent the issue by putting the components you intend for in array and index them
```js
const Shoot0 = () => 1;
const Shoot1 = () => 2;
const Shoot2 = () => 3;
const Shoots = [Shoot0, Shoot1, Shoot2];
class App extends React.Component {
constructor(props) {
super(props);
this.state = {
activeComponent: 0
};
}
componentDidMount() {
setInterval(() => {
this.setState((prevState) => {
return {
activeComponent: (prevState.activeComponent + 1) % 3
}
})
}, 1000)
}
render() {
return React.createElement(Shoots[this.state.activeComponent])
}
}
ReactDOM.render(<App />, document.getElementById('app'))
```
```html
<div id="app"></div>
```
Upvotes: 2
|
2018/03/16
| 1,113 | 3,720 |
<issue_start>username_0: When I run my project index, it shows this error. I have googled it but I have not found a proper solution for this error. So, please someone help me.
The Error message:
>
> "The parameters dictionary contains a null entry for parameter
> 'chapterIdS' of non-nullable type 'System.Int32' for method
> 'System.Web.Mvc.ActionResult Index(Int32)' in
> 'opr.Controllers.QuizeController'. An optional parameter must be a
> reference type, a nullable type, or be declared as an optional
> parameter.
> Parameter name: parameters"
>
>
>
This is my Index code:
```
@model List
@{
ViewBag.Title = "Index";
}
#### Select Chapter
@using (Html.BeginForm("Index", "Quize", FormMethod.Get))
{
#### Quizes
---
foreach (var std in Model)
{
@Html.RadioButton("searchBy",@std.Chapter\_Name, true)
@std.Chapter\_Name
---
### Questions
---
@foreach (var quesion in std.C\_QuestionTable)
{
| @quesion.QuestionText |
foreach (var answer in quesion.C\_AnswerTable)
{
| @answer.Options |
}
}
}
}
```

This is my controller:

```
public class QuizeController : Controller
{
examsEntities db = new examsEntities();
public ActionResult Index(int chapterIdS)
{
List ques = new List();
ViewBag.ques = db.C_QuestionTable.Where(w => w.Id == chapterIdS).ToList();
List model = new List();
model = db.Chapters.Where(w=>w.Id==chapterIdS).ToList();
return View(model);
}
}
```
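The error message itself points at the usual remedy: make the action parameter nullable (or optional) so model binding does not fail when no `chapterIdS` value arrives, and make sure the form actually posts a value under that name (the view above posts `searchBy`). A minimal, hedged sketch of the signature change only (the fallback behaviour and type names here are assumptions):

```
public ActionResult Index(int? chapterIdS)
{
    if (chapterIdS == null)
    {
        // No chapter supplied (e.g. first visit, or the form posted a different field name).
        return View(new List<Chapter>());   // placeholder: model type name is an assumption
    }
    int id = chapterIdS.Value;
    // ... existing query logic using id ...
    return View();
}
```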
|
2018/03/16
| 1,063 | 2,981 |
<issue_start>username_0: I have a huge `.txt` file (15 GB) with almost 30 million lines.
I want to split its lines into different files based on the `4th column`, and the number of `unique` values in the `4th` column is around `2 million`.
```
file1.txt
1 10 ABC KK-LK
1 33 23 KK-LK
2 34 32 CK-LK,LK
11 332 2 JK@
11 23 2 JK2
```
Right now, I can separate these lines into different files in the **same folder** as follows:
```
awk '{ print $0 >> $4"_sep.txt" }' file1.txt
```
And it results in `4` different files as:
```
KK-LK_sep.txt
1 10 ABC KK-LK
1 33 23 KK-LK
```
and
```
CK-LK,LK_sep.txt
2 34 32 CK-LK,LK
```
and
```
JK@_sep.txt
11 332 2 JK@
```
and finally,
```
JK2_sep.txt
11 23 2 JK2
```
What I want is to not put 2 million files in one folder, but to separate them into 20 different folders. I can make the folders as folder1, 2, 3, ...:
```
mkdir folder{1..20}
```
With the answers below, I suppose something like the (broken) code below would work:
```
#!/bin/env bash
shopt -s nullglob
numfiles=(*)
numfiles=${#numfiles[@]}
numdirs=(*/)
numdirs=${#numdirs[@]}
(( numfiles -= numdirs ))
echo $numfiles
var1=$numfiles
awk -v V1=var1 '{
if(V1 <= 100000)
{
awk '{ print $0 >> $4"_sep.txt" }' file1.txt
}
else if(V1 => 100000)
{
cd ../folder(cnt+1)
awk '{ print $0 >> $4"_sep.txt" }' file1.txt
}
}'
```
But then, how can I make this a loop and stop adding up to the `folder1` once it has `100.000` files in it, and start adding files to `folder2` and so on?<issue_comment>username_1: something like this?
count the unique keys and increment bucket after threshold.
```
count += !keys[$4]++;
bucket=count/100000;
ibucket=int(bucket);
ibucket=ibucket==bucket?ibucket:ibucket+1;
folder="folder"ibucket
```
Upvotes: 0 <issue_comment>username_2: Maybe this is what you want (untested since your question doesn't include an example we can test against):
```
awk '
!($4 in key2out) {
if ( (++numKeys % 100000) == 1 ) {
dir = "dir" ++numDirs
system("mkdir -p " dir)
}
key2out[$4] = dir "/" $4 "_sep.txt"
}
{ print > key2out[$4] }
' file1.txt
```
That relies on GNU awk for managing the number of open files internally. With other awks you'd need to change that last line to `{ print >> key2out[$4]; close(key2out[$4]) }` or otherwise handle how many concurrently open files you have to avoid getting a "too many open files" error, e.g. if your $4 values are usually grouped together then more efficiently than opening and closing the output file on every single write, you could just do it when the $4 value changes:
```
awk '
$4 != prevKey { close(key2out[prevKey]) }
!($4 in key2out) {
if ( (++numKeys % 100000) == 1 ) {
dir = "dir" ++numDirs
system("mkdir -p " dir)
}
key2out[$4] = dir "/" $4 "_sep.txt"
}
{ print >> key2out[$4]; prevKey=$4 }
' file1.txt
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 2,879 | 11,546 |
<issue_start>username_0: The following comes from a qsort function implementation given as a solution to one of the K&R challenge questions. The challenge is to read a list of words and sort them according to the number of occurrences. The struct Word is also shown. Link to the full code: <http://clc-wiki.net/wiki/K%26R2_solutions:Chapter_6:Exercise_4>
```
typedef struct WORD
{
char *Word;
size_t Count;
struct WORD *Left;
struct WORD *Right;
} WORD;
...
CompareCounts(const void *vWord1, const void *vWord2)
{
int Result = 0;
WORD * const *Word1 = vWord1;
WORD * const *Word2 = vWord2;
assert(NULL != vWord1);
assert(NULL != vWord2);
/* ensure the result is either 1, 0 or -1 */
if((*Word1)->Count < (*Word2)->Count)
{
Result = 1;
}
else if((*Word1)->Count > (*Word2)->Count)
{
Result = -1;
}
else
{
Result = 0;
}
return Result;
}
```
My question is about these lines:
```
WORD * const *Word1 = vWord1;
WORD * const *Word2 = vWord2;
```
Is this a declaration of a constant pointer to a constant variable? Or something else? And why does it have to be defined this way for the sort to work?<issue_comment>username_1: >
> Is this a declaration of a constant pointer to a constant variable?
>
>
>
No, it's just a pointer to a constant variable, **not** a constant pointer.
To make it clear, you are sorting an array of `WORD*`, for this specific problem, this `WORD*` pointer is a whole:
```
typedef WORD *wordptr;
```
Then the following declaration is more clear:
```
wordptr const *Word1 = vWord1;
```
>
> And why does it have to be defined this way for the sort to work?
>
>
>
This is to make sure that a comparator won't modify the content of the pointer.
Upvotes: 1 <issue_comment>username_2: The source code you linked to is a small application that reads in a text sample, generates a tree data structure that contains word frequency (how many times each word appears in the text sample), and then prints out the list of words from most frequent to least frequent.
```
/*
Chapter 6. Structures
Write a program that prints out the distinct words in its
input sorted into decreasing order of frequency of occurrence.
Precede each word by its count.
Author: <NAME>
*/
```
The pattern used in this application has a classical and elegant K&R feel about it. The first step is to process the text sample generating a tree structure in which each node contains a piece of text (a word from the text sample) along with a frequency count of how many times the piece of text is found. The second step is to then sort the tree nodes by the frequency counts. The third step is to print the sorted tree nodes in order of frequency to provide a list of the text pieces found along with how many times the text piece was found in the text sample.
The tree used is a [binary tree](https://en.wikipedia.org/wiki/Binary_tree) and the tree nodes have the following structure:
```
typedef struct WORD
{
char *Word; // pointer to the text piece, a word of text
size_t Count; // frequency count for this word
struct WORD *Left; // pointer to tree node child left
struct WORD *Right; // pointer to tree node child right
} WORD;
```
The tree structure is used in order to be efficient about either determining if a text piece has already been found and just incrementing the count or adding the text piece with a count of one to our data storage if it does not.
However the sorting step uses a different criteria for the order of items in the tree than for the processing of the text sample. The text sample uses the text pieces as the way to order the tree nodes but for the actual output we need an order based on the frequency counts. So we need to sort the nodes of the tree on the frequency counts.
Since this is an in memory tree, the first thought for the program would be to create a list of the tree nodes in an array and then sort the list. However sorting an array usually requires moving array elements around except for the special case of the array already being sorted. This approach would also double the amount of memory used for the tree nodes since a copy of the nodes is being made.
Rather than making a copy of the tree nodes and then sorting that list, instead the program creates a list of pointers to the tree nodes and then sorts the list of pointers by referencing the tree nodes the pointers point to.
**A Bit About the `qsort()` interface**
The function definition of `CompareCounts(const void *vWord1, const void *vWord2)` means that `vWord1` is a pointer to a `const` variable whose type is unknown or could be anything.
If we look at the `qsort()` function declaration it looks like:
`void qsort (void* base, size_t num, size_t size,
int (*comparator)(const void*,const void*));`
So the comparison function used with `qsort()` must have a compatible argument list or interface description or a modern C compiler will issue an error.
Traditionally with the comparison function used with `qsort()` as well as `bsearch()`, we would have a comparison function that would look like:
```
CompareCounts(const void *vWord1, const void *vWord2)
```
In the comparison function we would then take the `void *` arguments and cast them to a pointer of the actual type that is to be used in the comparison. After that we then use the local, properly typed variables to do the comparison operation.
What `qsort()` does is to take two elements of the array that it wants to compare and calls the comparison function with pointers to those two elements. The use of `void *` is to work around the type checking of the C compiler.
The interface specified that the `void *` pointer is pointing to something that is `const` because `qsort()` doesn't want you to change the data that it is providing. It is asking you to test the two data items provided and indicate which is greater or lesser or equal in the collating sequence you are using to sort the data.
The reason for the `void *` pointer is because the `qsort()` function does not know what the data is or how the data is structured. `qsort()` only knows the number of bytes, the size, of each data element so that it can iterate through the array of items, element by element. This allows the array to be any size of `struct` or other type of data.
**The Specific Example Comparison Function**
The way the arguments are cast in `CompareCounts()` looked strange to me until I reviewed the source code you linked to. That program generates a tree structure, then generates an array of pointers which point to the actual nodes in the tree. It is this array of pointers to nodes that is passed to `qsort()` to sort.
So the array of data provided to `qsort()` is an array each element of which points to a tree node which is a `WORD` element stored in the tree data structure. The array of pointers are sorted based on the data the pointers point to.
In order to access a particular node by using the array passed to the `qsort()` function we have to take the array element and dereference it to get the actual tree node.
Since `qsort()` passes a pointer to the array element, the `void *vWord1` is a pointer to an array element and we have to dereference that pointer to get the actual array element, a pointer to a tree element. However it is not the pointer values we want to use as the sorting criteria but rather what the pointers point to. This requires us to dereference the pointer of the array element in order to access the data of the `WORD` element in the tree we want to compare.
`WORD * const *Word1 = vWord1;` does a cast of the `void *` pointer `vWord1` to be a pointer to a const pointer to a `WORD`. What this means is that `Word1` is a pointer, which `qsort()` uses to point to the item to be sorted, that is a pointer which is `const` (the array element itself which `qsort()` does not want you to change) and the `const` pointer that `Word1` points to, points to a `WORD` (the pointer to the tree node which is the data that the array contains).
What each node of the tree contains is a text word along with a count as to how many times that word is found in a sample of text. The `qsort()` function is being used to sort the nodes of the tree which results from examining the text sample input from most frequent to least frequent. The list of node pointers is what is provided to `qsort()`.
So the sort is not sorting the array based on the array values but rather sorting based on what the array values, tree node pointers, point to.
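To make the data layout concrete, here is a small sketch (identifiers are illustrative, not copied from the linked program) of how such a comparator gets wired up:

```
#include <stdlib.h>

/* Sketch: sort an array of pointers to tree nodes. qsort() rearranges the
   pointers; the WORD nodes themselves stay where they are in the tree. */
static void sort_by_count(WORD *words[], size_t count)
{
    /* Each array element is a WORD *, so qsort() hands CompareCounts() a
       pointer TO that element, effectively a WORD **. That is why the
       comparator casts to 'WORD * const *' and dereferences once more. */
    qsort(words, count, sizeof words[0], CompareCounts);
}
```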
By the way, when I try a sample compile, I see the warning `warning C4090: 'initializing': different 'const' qualifiers` with Visual Studio 2015 for the lines:
```
WORD * const *Word1 = vWord1;
WORD * const *Word2 = vWord2;
```
However with the following change the compiler warning goes away:
```
const WORD * const * Word1 = vWord1;
const WORD * const * Word2 = vWord2;
```
This change is actually in line with what `qsort()` is asking, that none of the data should be changed whether the pointer from the array element or the data that the pointer from the array element points to.
Anyway, the expression `(*Word1)->Count` is taking the pointer to the array element provided by `qsort()`, dereferencing it to get the pointer to the tree node, and then dereferencing the pointer to the tree node in order to get the member of the `WORD` struct we want to sort against. The sort uses the frequency count of the words, stored in `Count` as the sorting criteria.
**Addendum topic: Playing with const**
A question raised is that with this kind of complex definition, `const WORD * const * Word1 = vWord1;` what are various ways to generate compiler errors by breaking `const` and seeing when using `const` can provide an additional safeguard against inadvertently modifying something that should not be modified.
If we use this definition without the `const` modifier we would have `WORD * * Word1 = vWord1;` which means that we have a pointer variable `Word1` that has a meaning of something like:
Word1 -> ptr -> WORD
where `Word1` is our pointer variable which points to some unknown pointer which in turn points to a variable of type `WORD`.
Lets look at several different variations of definitions.
```
WORD * * Worda = vWord1; // no const
WORD * * const Wordb = vWord1; // Wordb is const pointer which points to a non-const pointer which points to a non-const WORD
WORD * const * Wordc = vWord1; // WordC is a non-const pointer which points to a const pointer which points to a non-const WORD
const WORD * const * Wordd = vWord1; // Wordd is a non-const pointer which points to a const pointer which points to a const WORD
const WORD * const * const Worde = vWord1; // Worde is a const pointer which points to a const pointer which points to a const WORD
```
In the source code of the question with a definition of `WORD * const * Word1 = vWord1;` then `Word1` is a non-const pointer which points to a const pointer which points to a non-const `WORD`. So lets look at several different kinds of assignments:
```
Word1 = vWord2; // replace the value of the pointer Word1, allowed since non-const
(*Word1)++; // Error: increment value of pointer pointed to by Word1, not allowed since pointer pointed to is const so compiler error
(*Word1)->Count++; // increment value of variable pointed to by the pointer pointed to by Word1, allowed since non-const
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 2,684 | 11,018 |
<issue_start>username_0: I would like to filter the data so that only those rows are kept that show the sequence of locations for each ID. Any idea how I can accomplish this with SQL? Note that the same location can occur multiple times, but I want to map each subsequent change in location (i.e. the trajectories).
I do not know how to generate the sequencing column (based on Location) as depicted here, nor how to obtain the desired result below. See:
[](https://i.stack.imgur.com/uEPGx.jpg)
Does anyone know how to accomplish this in SQL Server?
Thanx
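Since the screenshot is not reproduced here, the column names below (`ID`, `Location`, `reportedDate`) and the table name are assumptions. A common approach in SQL Server 2012+ is to compare each row's location with the previous one per ID using `LAG()` and keep only the rows where it changes:

```
-- Sketch: keep only the rows where the location differs from the previous row for that ID
SELECT ID, Location, reportedDate
FROM (
    SELECT ID,
           Location,
           reportedDate,
           LAG(Location) OVER (PARTITION BY ID ORDER BY reportedDate) AS prev_location
    FROM dbo.Locations          -- placeholder table name
) t
WHERE prev_location IS NULL          -- first row per ID
   OR prev_location <> Location      -- location changed
ORDER BY ID, reportedDate;
```

A `ROW_NUMBER() OVER (PARTITION BY ID ORDER BY reportedDate)` on that filtered result then gives the sequencing column.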
|
2018/03/16
| 1,072 | 4,168 |
<issue_start>username_0: I imported the modules fs, lodash, yargs and express and then used typeof on the variables in which I required them. All of them except fs are shown as functions.
1) I can understand fs being an object, because we use functions residing in that module, but the other modules being functions does not make sense to me (sorry if it sounds stupid)
2)
```
const express = require('express');
let app = express();
```
Can you please explain the above code snippet (2nd line)? Can we store an executing function inside a variable? Are we storing the return value of express inside app, or the entire function? And can we then use functions like app.get() later, just like on an object?
The second point is somewhat related to the first, and it would be really helpful if someone could explain it to me.
Thank You<issue_comment>username_1: JavaScript is a programming language different from others such as Java; maybe that is what is confusing you. It is instead inspired by Scheme.
In JavaScript, functions are first-class citizens, meaning that they can have properties and methods just like any other object, and that they can be passed and assigned just as any object. See for example <https://developer.mozilla.org/en-US/docs/Glossary/First-class_Function> for an introduction to first-class functions or <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions> for an intro to JavaScript functions.
This is a core concept in JavaScript, thus you maybe want to have a look to some introduction book to grasp properly the language basics.
Regarding the code:
```
const express = require('express');
let app = express();
```
First line is storing a function into a "variable", in fact a constant, express: `express=require('express');`
Then, we can invoke such function using the name of the "variable" (or constant) it is stored: `express()`
Obviously, the previous steps can only be done because of JavaScript allowing functions being treated as "objects"
Finally, we can store the result of the call to the function in a variable as usual: `app = express();`
Upvotes: 2 [selected_answer]<issue_comment>username_2: To answer you first question, it depends on what you are exporting from the module, either you are exporting multiple values or just one value.
For example I'm creating a module Math.js to have sum and sub methods and I want this module to return these methods.
```
const sum = function(a, b) {
return a + b;
};
const sub = function (a, b) {
return a -b;
};
module.exports = {
sum,
sub
};
```
In a code file, say app.js, I'll require and use it as follows:
```
const Math = require('./Math');
const sum = Math.sum(10, 20);
const sub = Math.sub(20, 10);
console.log('Sum: ', sum);
console.log('Sub: ', sub);
```
You see, the purpose of Math.js is to return an object containing multiple values.
On the other hand, let's say I want to create a separate module for each Math function. Thus Sum.js as follows:
```
module.exports = function(a, b) {
return a + b;
};
```
and Sub.js as follows:
```
module.exports = function(a ,b) {
return a - b;
};
```
Now in my app.js file I'll use these two modules as follows:
```
const sum = require('./Sum');
const sub = require('./Sub');
console.log('Sum: ', sum(10, 20));
console.log('Sub: ', sub(20, 10));
```
So in the above example, I'm only returning one method per module.
In the case of the fs module, it exports multiple values; in the case of express, it returns a function that is used to initialize the express application.
To answer your second question: first, understand the concept of functions being first-class objects. It means you can do with a function whatever you can do with an object:
* Pass it to other functions as an argument
* Return a function from a function, etc.
* Store it in a variable and then call it like variableName(), etc.
That's why in the example code the express module returns a function, which is stored in the constant named express. Then, on the second line, the app variable is initialized with the return value of the express function after its invocation.
```
const express = require('express');
let app = express();
```
Upvotes: 0
|
2018/03/16
| 625 | 2,414 |
<issue_start>username_0: Is there any way, within a Laravel console command, to clear the session and/or the remember me cookie?
The console command is designed to deactivate users, and so if a user is deactivated while they are logged in, I want them to automatically be logged out.
But I'm not sure this is possible. Ideas?<issue_comment>username_1: Instead of that why don't you try this approach? <https://laracasts.com/discuss/channels/laravel/middleware-where-user-status-is-not-active?page=1>
basicly, have a middleware that checks if the current user is active, in case it is not, redirect him to login
Upvotes: 0 <issue_comment>username_2: Following process may help you. You can create command of following process.
You need to delete session files from "sessions" folder.
Normally Laravel stores sessions in "storage\framework\sessions" folder.
If you delete all files from session folder , all active user will automatically logged out.
You can find out session file using id from session folder and delete those files who are deactivated.
==========================================================
I have worked out the following solution.
Update the config/filesystems.php file as follows:
```
'disks' => [
'local' => [
'driver' => 'local',
'root' => storage_path('app'),
],
'adminfile' => [
'driver' => 'local',
'root' => storage_path('framework'),
],
```
Then you can use the following code to delete the session file of a particular user.
In my case, the logged-in user id is stored under the session key "login_web_59ba36addc2b2f9401580f014c7f58ea4e30989d", so I match it with `preg_match('/login_web_(.*)/i', $key)` in the code below.
```
$files = Storage::disk('adminfile')->allFiles("sessions");
$users = user::where('is_deactivate',true);
foreach($users as $user){
foreach($files as $file){
if($file!="sessions/.gitignore" and Storage::disk('adminfile')->exists($file)){
$contents = unserialize(Storage::disk('adminfile')->get($file));
foreach($contents as $key=>$value){
if(!is_array($value) and !is_object($value)){
if(preg_match('/login_web_(.*)/i',$key))
{
if($value==$user->id){
Storage::disk('adminfile')->delete($file);
}
}
}
}
}
}
}
```
Upvotes: -1
|
2018/03/16
| 606 | 2,290 |
<issue_start>username_0: I've the following get\_or\_create method.
```
class LocationView(views.APIView):
def get_or_create(self, request):
try:
location = Location.objects.get(country=request.data.get("country"), city=request.data.get("city"))
print(location)
return Response(location, status=status.HTTP_200_OK)
except Location.DoesNotExist:
serializer = LocationSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
def get(self, request):
return self.get_or_create(request)
def post(self, request):
return self.get_or_create(request)
```
This works fine for creating a new location,
However if the location exists, I get the following error,
```
TypeError: Object of type 'Location' is not JSON serializable
[16/Mar/2018 10:10:08] "POST /api/v1/bouncer/location/ HTTP/1.1" 500 96971
```
This is my model serializer,
```
class LocationSerializer(serializers.ModelSerializer):
id = serializers.IntegerField(read_only=True)
class Meta:
model = models.Location
fields = ('id', 'country', 'city', 'longitude', 'latitude')
```
What am I doing wrong here<issue_comment>username_1: JSON dumps only works with basic types (str, int, float, bool, None). You're trying to dump an object, which is not 'dump-able'. Convert the object to a dictionary, as for example:
```
location_dict = {
'id': location.id,
'country': location.country,
'city': location.city,
'longitude': location.longitude,
'latitude': location.latitude
}
return Response(location_dict, status=status.HTTP_200_OK)
```
Upvotes: -1 <issue_comment>username_2: For some reason you have bypassed all the logic that DRF does for you, so that you never use your serializer; you are passing your Location object directly to the JSON response in the `try` block.
Instead of doing that, you should instantiate your serializer from the model instance object, then pass that serializer data to the response, like you do in the `except` block.
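Concretely, the `try` branch would look something like this sketch (same model and serializer names as in the question):
```
location = Location.objects.get(
    country=request.data.get("country"),
    city=request.data.get("city"),
)
# Serialize the existing instance instead of returning the model object itself
serializer = LocationSerializer(location)
return Response(serializer.data, status=status.HTTP_200_OK)
```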
Upvotes: 1
|
2018/03/16
| 306 | 1,090 |
<issue_start>username_0: Is it possible to compute the average time of a user in a query?
Like:
```
reportedDate userID
------------ ------
2018-03-17 00:27:15 1
2018-03-17 00:32:28 1
```
|
2018/03/16
| 382 | 1,350 |
<issue_start>username_0: I am using indexed\_search and I have the problem that the WHOLE SITE is crawled - including the navigation. So if I search for a word which is part of the main navigation all sites are displayed in the result.
I have added `<--TYPO3SEARCH_begin-->` and `<--TYPO3SEARCH_end-->` in my template - and these markers are included in the HTML output correctly. The markers do not surround the navigation, of course.
I am using:
```
Typo3: 8.7.8
tx_indexed_search 8.7.9
site_crawler 6.1.1
```<issue_comment>username_1: Try adding `<--TYPO3SEARCH_end-->` at the very beginning of the document. That way everything is ignored until the begin marker.
Upvotes: 1 <issue_comment>username_2: I have found out what cost me several hours... It's unbelievable what happened to me. I copied the `<--TYPO3SEARCH_begin-->` pattern from a tutorial.
Today I looked at the code again - and now I noticed, that the pattern was not colored in green in my IDE - like all the other HTML comments. Hmmm...
Finally I found out that these are not the same:
```
<–-TYPO3SEARCH_begin-–>
<--TYPO3SEARCH_begin-->
```
The first line has a dash (minus) symbol which is not the standard character, but some odd UTF-8 sign. (Hex 93).
Don't know where I copied the pattern from, but that guy must have a strange kind of humor
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,421 | 5,753 |
<issue_start>username_0: Does the following code have an undefined behaviour?
```
[[ gnu::pure ]]
static const MyClass &myClass() noexcept
{
static const MyClass s_myClass;
return s_myClass;
}
```
According to [gcc docs](https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#Common-Function-Attributes), the `pure` attribute is for functions which have no effects except the return value and this return value depends only on the parameters and/or global variables.
On the one hand, this function does not have any observable effects other than its return value and it always returns the same value. So it is completely safe to optimise away multiple calls to this function. This is I think what the `pure` attribute is for.
On the other hand, this function needs to construct the `MyClass` object on the first invocation. This includes calling a `MyClass` constructor and setting an implicit *is-initialised* flag to true. This could count as an effect besides the return value (although it is not visible from the outside).
---
This code works on `gcc`, but `clang` optimises away the `MyClass` construction part and makes `myClass()` return an uninitialised object. A `clang` developer insists it is because of the undefined behaviour.
See this bug report: <https://bugs.llvm.org/show_bug.cgi?id=36750> (note it says `gnu::const`, but using `gnu::pure` produces the same result).<issue_comment>username_1: The documentation contains this note:
>
> Interesting non-pure functions are functions with infinite loops or those depending on volatile memory or other system resource, that may change between two consecutive calls
>
>
>
Since the static variable constructor and invisible *is-initialized* flag are both changing memory between function calls, it would indicate that the function isn't pure.
The documentation for `pure` also notes that it's similar to `const`. It doesn't explicitly say this under `pure`, but under `const` it says:
>
> Likewise, a function that calls a non-`const` function usually must not be `const`.
>
>
>
Since your function is calling a constructor which is not `pure` or `const`, it seems like it might violate the rules of the attribute. But see the comment below.
Upvotes: -1 <issue_comment>username_2: I think there are two potential issues with initialization of a static local variable.
[First](http://eel.is/c++draft/stmt.dcl#4.sentence-2):
>
> If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration
>
>
>
This implies that consecutive calls to this function could very much have different behavior - the first throws and the second doesn't. That seems to violate the spirit and intent of `pure`.
[Second](http://eel.is/c++draft/stmt.dcl#4.sentence-3):
>
> If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.
>
>
>
This implies that the interpretation of the body must be based on intrinsic state of the function - there needs to be locking, etc. That also seems to violate the spirit and intent of `pure`.
Upvotes: 3 <issue_comment>username_3: All we have to go on is the text of the gcc attribute `pure`, from [here](https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#Common-Function-Attributes):
>
> pure
>
>
> Many functions have no effects except the return value and their return value depends only on the parameters and/or global variables. Calls to such functions can be subject to common subexpression elimination and loop optimization just as an arithmetic operator would be. These functions should be declared with the attribute pure. For example,
>
>
>
> ```
> int square (int) __attribute__ ((pure));
>
> ```
>
> says that the hypothetical function square is safe to call fewer times than the program says.
>
>
> Some common examples of pure functions are strlen or memcmp. Interesting non-pure functions are functions with infinite loops or those depending on volatile memory or other system resource, that may change between two consecutive calls (such as feof in a multithreading environment).
>
>
> The pure attribute imposes similar but looser restrictions on a function’s defintion than the const attribute: it allows the function to read global variables. Decorating the same function with both the pure and the const attribute is diagnosed.
>
>
>
This is less technical than a typical [c++](/questions/tagged/c%2b%2b "show questions tagged 'c++'") standard text would be, but this is what we have to work with.
I'll lay out the tests I read:
* Has no effects except the return value
* The return value depends only on the parameters and/or global variables
* Calls can be subject to common subexpression elimination/loop optimization like an arithmetic operator
* The hypothetical function is safe to call fewer times than the program says.
The core of this is eliminating *duplicate* calls, not *all* calls.
Examples of things that aren't pure:
* Infinite loops
* Depend on volatile memory or other system resource
* May change between two consecutive calls
Nowhere in this description does it say "you can eliminate the first call to this function" -- it says you can eliminate duplicate calls to the function.
Clang's "optimization" results in the body of the function never running. The purpose of `[[ gnu:pure ]]` is to remove *duplicate* calls, not to eliminate *all* calls. As such, clang is clearly in the wrong.
There are probably attributes you could call `pure` that would permit the optimization clang is doing, but `[[gnu:pure]]` is not that attribute.
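For illustration, the kind of transformation `pure` licenses is common-subexpression elimination of a *repeated* call, e.g.:
```
[[gnu::pure]] int f(int x);

int g(int x)
{
    // The compiler may compute f(x) once and reuse the result for both operands,
    // because f has no side effects and its result depends only on its argument...
    return f(x) + f(x);
    // ...but nothing in the documentation says the one remaining call may be skipped entirely.
}
```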
Upvotes: 3
|
2018/03/16
| 545 | 2,031 |
<issue_start>username_0: Kotlin redefines a number of primitive constants that are already defined in Java, such as `Long.MAX_VALUE` or `Double.NaN`.
What is the a difference between them and which should be preferred when coding in Kotlin?
To be clear, I'm referring, for example, to:
```
kotlin.Long.Companion.MAX_VALUE
java.lang.Long.MAX_VALUE
```<issue_comment>username_1: The Kotlin `Long` class (and all others which have a counterpart in Java) are wrappers, which depending if the type is nullable or not represent the value internally (on the JVM) with `int` (primitive, not nullable) or `Integer` (object, nullable).
Take a look [here](https://github.com/JetBrains/kotlin/blob/1.2.30/core/builtins/native/kotlin/Primitives.kt#L578):
>
> Represents a 64-bit signed integer.
>
> On the JVM, non-nullable values of this type are represented as values of the primitive type `long`.
>
>
>
Both the Kotlin and the Java version of `MAX_VALUE` are exactly `9223372036854775807L`, but since `kotlin.Long` is a wrapper, its implementation can change which may change `MAX_VALUE`, so it will be better to stick to `kotlin.Long.Companion.MAX_VALUE`.
Upvotes: 1 <issue_comment>username_2: This is a general phenomenon: a large part of Kotlin standard library is basically the same as in Java. This is so that
1. Kotlin can provide more precise types: nullable vs non-nullable or `List` vs `MutableList`;
2. This part of the library can be accessed from Kotlin-JS or Kotlin-Native, or other future Kotlin implementations.
For both reasons, stick with the Kotlin standard library types and methods unless you have some specific need to use the Java ones.
`Long` (and `Int`, etc.) are even something of a special case, because they correspond to more than one Java type: in some contexts they end up as the primitive (`long`), in others as its boxed version (`java.lang.Long`). You mostly don't care about this difference in Kotlin, so there's even better reason to keep using `kotlin.Long`.
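In practice that just means writing, for example:
```
// Both constants have the same numeric value; prefer the Kotlin one.
val max: Long = Long.MAX_VALUE               // resolves to kotlin.Long.Companion.MAX_VALUE
val maxFromJava = java.lang.Long.MAX_VALUE   // also works on the JVM, but ties the code to it
```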
Upvotes: 3 [selected_answer]
|
2018/03/16
| 3,087 | 11,817 |
<issue_start>username_0: I am currently working on an app and one of the features is to locate the nearest pharmacy. However the one of the classes PharmacyMapFragment is a Fragment class and when I try to call it, the app crashes and I get this error message:
>
> 'android.content.ActivityNotFoundException: Unable to find explicit
> activity class
> {com.example.junai.testapp2/com.example.junai.testapp2.PharmacyMapFragment};
> have you declared this activity in your AndroidManifest.xml?'
>
>
>
I thought that your not supposed to declare a Fragment class in the manifest?
I have included the code for the 3 parts that are relevant to this problem. Can someone please help with this?
Main Activity:
```
package com.example.junai.testapp2;
import android.content.Intent;
import android.os.Bundle;
import android.support.v4.app.FragmentActivity;
import android.view.View;
import android.widget.Button;
public class MainActivity extends FragmentActivity {
private Button btn_nearest_pharmacy;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
btn_nearest_pharmacy = (Button) findViewById(R.id.btn_nearest_pharmacy);
btn_nearest_pharmacy.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
startActivity(new Intent(MainActivity.this, PharmacyMapFragment.class));
}
});
}
}
```
PharmacyMapFragment Class:
```
package com.example.junai.testapp2;
import android.app.DialogFragment;
import android.app.Fragment;
import android.app.FragmentManager;
import android.app.FragmentTransaction;
import android.content.pm.PackageManager;
import android.location.Location;
import android.os.Bundle;
import android.support.annotation.NonNull;
import android.support.v4.app.ActivityCompat;
import android.support.v4.app.FragmentActivity;
import android.support.v4.content.ContextCompat;
import android.util.Log;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Toast;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.JsonObjectRequest;
import com.android.volley.toolbox.Volley;
import com.google.android.gms.common.ConnectionResult;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.location.LocationServices;
import com.google.android.gms.location.places.Places;
import com.google.android.gms.maps.CameraUpdateFactory;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.MapFragment;
import com.google.android.gms.maps.OnMapReadyCallback;
import com.google.android.gms.maps.model.BitmapDescriptorFactory;
import com.google.android.gms.maps.model.CameraPosition;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.MarkerOptions;
import org.json.JSONArray;
import org.json.JSONObject;
import java.util.ArrayList;
/**
* Fragment used to display a map of the current location with the nearest pharmacies and other
* nearby pharmacies
*
* References -------------------------------------------------------------------------------------/
* https://github.com/googlemaps/android-samples/blob/master/tutorials/CurrentPlaceDetailsOnMap/app
* /src/main/java/com/example/currentplacedetailsonmap/MapsActivityCurrentPlace.java#
* https://developers.google.com/maps/documentation/android-api/current-place-tutorial
* https://developers.google.com/maps/documentation/android-api/hiding-features
*/
public class PharmacyMapFragment extends Fragment implements OnMapReadyCallback,
GoogleApiClient.OnConnectionFailedListener, GoogleApiClient.ConnectionCallbacks {
//static/default values for settings
private static final String TAG = LogTag.pharmacyLogFragment;
private static final int PERMISSIONS_REQUEST_ACCESS_FINE_LOCATION = 1;
private static final int DEFAULT_ZOOM = 15;
private static final String KEY_CAMERA_POSITION = "camera_position";
private static final String KEY_LOCATION = "location";
private final LatLng mDefaultLocation = new LatLng(56.463190, -3.038596 );
//used objects
private GoogleMap mMap;
private CameraPosition mCameraPosition;
private GoogleApiClient mGoogleApiClient;
private boolean mLocationPermissionGranted;
private Location mLastKnownLocation;
private ArrayList pharmacies = new ArrayList<>();
public PharmacyMapFragment() {}
@Override //generates layout
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
return inflater.inflate(R.layout.fragment\_pharmacy\_map, container, false);
}
@Override //setup method
public void onViewCreated(View view, Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//used for smoother transitions if this is not the first state
if (savedInstanceState != null) {
mLastKnownLocation = savedInstanceState.getParcelable(KEY\_LOCATION);
mCameraPosition = savedInstanceState.getParcelable(KEY\_CAMERA\_POSITION);
}
//set up api client for api calls
mGoogleApiClient = new GoogleApiClient.Builder(getActivity())
.enableAutoManage((FragmentActivity) getActivity(), this)
.addConnectionCallbacks(this)
.addApi(LocationServices.API)
.addApi(Places.GEO\_DATA\_API)
.addApi(Places.PLACE\_DETECTION\_API)
.build();
mGoogleApiClient.connect();
}
@Override
public void onMapReady(GoogleMap map) {
mMap = map;
updateLocationUI(); //updates the UI, called first in case of previous sessions
getDeviceLocation(); //get current device location
getPharmaciesFromAPI(); //gets pharmacies from google places search api
}
//Used to update the UI
private void updateLocationUI() {
if (mMap == null) { //if map is null exit method
return;
}
//if we have locations permission set permission to true
if (ContextCompat.checkSelfPermission(getActivity().getApplicationContext(),
android.Manifest.permission.ACCESS\_FINE\_LOCATION)
== PackageManager.PERMISSION\_GRANTED) {
mLocationPermissionGranted = true;
} else {
ActivityCompat.requestPermissions(getActivity(),
new String[]{android.Manifest.permission.ACCESS\_FINE\_LOCATION},
PERMISSIONS\_REQUEST\_ACCESS\_FINE\_LOCATION);
}
//if we have location permissions update the ui
if (mLocationPermissionGranted) {
mMap.setMyLocationEnabled(true);
mMap.getUiSettings().setMyLocationButtonEnabled(true);
} else {
mMap.setMyLocationEnabled(false);
mMap.getUiSettings().setMyLocationButtonEnabled(false);
mLastKnownLocation = null;
}
}
private void getDeviceLocation() {
//Get location permissions and try to locate current position
if (ContextCompat.checkSelfPermission(getActivity().getApplicationContext(),
android.Manifest.permission.ACCESS\_FINE\_LOCATION)
== PackageManager.PERMISSION\_GRANTED) {
mLocationPermissionGranted = true;
} else {
ActivityCompat.requestPermissions(getActivity(),
new String[]{android.Manifest.permission.ACCESS\_FINE\_LOCATION},
PERMISSIONS\_REQUEST\_ACCESS\_FINE\_LOCATION);
}
if (mLocationPermissionGranted) {
mLastKnownLocation = LocationServices.FusedLocationApi
.getLastLocation(mGoogleApiClient);
}
// Set the map's camera position to the current location of the device.
if (mCameraPosition != null) {
mMap.moveCamera(CameraUpdateFactory.newCameraPosition(mCameraPosition));
} else if (mLastKnownLocation != null) {
mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(
new LatLng(mLastKnownLocation.getLatitude(),
mLastKnownLocation.getLongitude()), DEFAULT\_ZOOM));
} else {
Toast.makeText(getActivity(), "Cannot get location", Toast.LENGTH\_SHORT).show();
Log.d(TAG, "Current location is null. Using defaults.");
mMap.moveCamera(CameraUpdateFactory.newLatLngZoom(mDefaultLocation, DEFAULT\_ZOOM));
mMap.getUiSettings().setMyLocationButtonEnabled(false);
}
}
@Override //used when the premission result comes back
public void onRequestPermissionsResult(int requestCode, @NonNull String permissions[],
@NonNull int[] grantResults) {
mLocationPermissionGranted = false;
switch (requestCode) {
case PERMISSIONS\_REQUEST\_ACCESS\_FINE\_LOCATION: {
// If request is cancelled, the result arrays are empty.
if (grantResults.length > 0
&& grantResults[0] == PackageManager.PERMISSION\_GRANTED) {
mLocationPermissionGranted = true;
}
}
}
updateLocationUI();
}
//gets list of pharmacies as a json object from the api
public void getPharmaciesFromAPI(){
double lat,lng;
if(mLastKnownLocation != null) {
lat = mLastKnownLocation.getLatitude();
lng = mLastKnownLocation.getLongitude();
} else {
lat = mDefaultLocation.latitude;
lng = mDefaultLocation.longitude;
}
// Instantiate the RequestQueue.
RequestQueue queue = Volley.newRequestQueue(getActivity().getApplicationContext());
String API\_KEY = getString(R.string.API\_KEY);
String url ="https://maps.googleapis.com/maps/api/place/nearbysearch/json?"
+ "location=" + lat + "," + lng + "&rankby=distance&type=pharmacy&key=" + API\_KEY;
// Request a string response from the provided URL.
JsonObjectRequest jsObjRequest = new JsonObjectRequest(Request.Method.GET, url, null,
new Response.Listener() {
@Override
public void onResponse(JSONObject response) {
parseJSON(response); //parse the json response into pharmacy objects
}
}, new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
Log.d(TAG, error.toString());
Toast.makeText(getActivity(), "Cannot connect to the internet", Toast.LENGTH\_SHORT)
.show();
}
});
// Add the request to the RequestQueue.
queue.add(jsObjRequest);
}
//parses the json
private void parseJSON(JSONObject response){
try {
JSONArray results = response.getJSONArray("results");
pharmacies.clear();
for(int i=0; i pharmacies) {
for(int i=0;i
```
fragment\_pharmacy\_map.xml
```
```
AndroidManifest.xml:
```
xml version="1.0" encoding="utf-8"?
```<issue_comment>username_1: Dude you are trying to start fragment as an Activity by using intent. That is wrong. You should use FragmentManager as described [here](https://stackoverflow.com/questions/5159982/how-do-i-add-a-fragment-to-an-activity-with-a-programmatically-created-content-v).
Upvotes: 1 <issue_comment>username_2: An activity can be represented as a single screen and a fragment as a subview inside that.
You cannot directly show a fragment, you need a host to attach it on, that is, you need an activity which will host your fragment. You can put single or multiple fragments in an activity and the fragments will be shown in that.
Here you called startActivity, this method is used to launch an activity, you passed a PharmacyMapFragment,it is a fragment and not an activity. Thereby, it failed to look a activity with that type and threw that error.
TL;DR
Create an activity, put the PharmacyMapFragment inside it (either by placing it in the activity's layout or via a fragment transaction), and then call startActivity with the class name of that activity; a sketch follows the tutorial link below.
Tutorial: <https://developer.android.com/guide/components/fragments.html#Adding>
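A minimal sketch of such a host activity (the layout file and container id are assumptions for illustration; since PharmacyMapFragment extends `android.app.Fragment`, the platform `getFragmentManager()` is the matching manager):
```
public class PharmacyMapActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // activity_pharmacy_map.xml would just contain an empty FrameLayout with id fragment_container
        setContentView(R.layout.activity_pharmacy_map);

        if (savedInstanceState == null) {
            getFragmentManager().beginTransaction()
                    .add(R.id.fragment_container, new PharmacyMapFragment())
                    .commit();
        }
    }
}
```
This activity is what gets declared in the manifest and passed to the Intent in MainActivity.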
Upvotes: 2 <issue_comment>username_3: `Intent`s are meant to start a new *`activity`* but you are using that to start a *`Fragment`*, So that is the error.
But Don't worry, not a big deal.
In **MainActivity.java**,
Replace this code
```
startActivity(new Intent(MainActivity.this, PharmacyMapFragment.class));
```
with this one
```
getSupportFragmentManager().beginTransaction().replace(R.id.container, new PharmacyMapFragment()).commit();
```
Make sure that the `auto-import` is enabled.
This will do the trick.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,379 | 4,656 |
<issue_start>username_0: I am trying to print out which nodes are being accessed via the getter by overriding my objects getter with a proxy. I am trying to basically test which parts of this large object are not being used by my application. The problem I am having is being able to add some way of identifying what a getters parents are. Here is what I have so far
```
function tracePropAccess(obj) {
return new Proxy(obj, {
get(target, propKey, receiver) {
console.log("Get fired on ", propKey);
return Reflect.get(target, propKey, receiver);
}
});
}
const testObj = {
a: 1,
b: {
a: "no",
b: "yes"
},
c: {
a: "green",
b: {
a: "blue",
b: "orange"
}
}
};
const tracer = tracePropAccess(testObj);
tracer.b.a;
tracer.a;
tracer.c.c.a;
```
This works great in showing me the prop key - however it only the key from the *first level*. I am not sure how to approach this with this proxy as this single function overrides all the proxies in the provided object. Is there any way to possibly pass in an objects parents/children? Possibly I am approaching this incorrectly as well - so I am looking for any input. Thanks!<issue_comment>username_1: You could use the reflection and check if it is an object. If so, return a proxy, if not, return the value.
**It does not work on unfinished objects, because the proxy does not know when an object is returned as result or a proxy is to use**
Example:
```
{ foo: { bar: { baz: 42 } } }
```
and
```
tracer.foo.bar
```
does not work, because it should return
```
{ baz: 42 }
```
but it returns a new proxy instead, which leads to strange results. The main problem is to know which more keys are coming and with this notation, it is impossible to know what the next or no key is.
```js
function tracePropAccess(obj) {
return new Proxy(obj, {
get(target, propKey, receiver) {
console.log("Get fired on ", propKey);
var temp = Reflect.get(target, propKey, receiver);
return temp && typeof temp === 'object'
? tracePropAccess(temp)
: temp;
}
});
}
const testObj = { a: 1, b: { a: "no", b: "yes" }, c: { a: "green", b: { a: "blue", b: "orange" } } };
const tracer = tracePropAccess(testObj);
console.log(tracer.b.a);
console.log(tracer.a);
console.log(tracer.c.c.a);
```
With path
```js
function tracePropAccess(obj, path) {
path = path || [];
return new Proxy(obj, {
get(target, propKey, receiver) {
var newPath = path.concat(propKey);
console.log("Get fired on ", newPath);
var temp = Reflect.get(target, propKey, receiver);
return temp && typeof temp === 'object'
? tracePropAccess(temp, newPath)
: temp;
}
});
}
const testObj = { a: 1, b: { a: "no", b: "yes" }, c: { a: "green", b: { a: "blue", b: "orange" } } };
const tracer = tracePropAccess(testObj);
console.log(tracer.b.a);
console.log(tracer.a);
console.log(tracer.c.c.a);
```
With path at the end.
```js
function tracePropAccess(obj, path) {
path = path || [];
return new Proxy(obj, {
get(target, propKey, receiver) {
var temp = Reflect.get(target, propKey, receiver),
newPath = path.concat(propKey);
if (temp && typeof temp === 'object') {
return tracePropAccess(temp, newPath);
} else {
console.log("Get fired on ", newPath);
return temp;
}
}
});
}
const testObj = { a: 1, b: { a: "no", b: "yes" }, c: { a: "green", b: { a: "blue", b: "orange" } } };
const tracer = tracePropAccess(testObj);
console.log(tracer.b.a);
console.log(tracer.a);
console.log(tracer.c.c.a);
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: I encountered the same requirement and came across this question while researching.
This is solvable by creating a factory function for the proxy traps which recursively prepends the current path within the object:
```js
const reporter = (path = []) => ({
get(target, property) {
console.log(`getting ${path.concat(property).join('.')}`)
const value = Reflect.get(target, property)
if (typeof value === 'object') {
return new Proxy(value, reporter(path.concat(property)))
} else {
return value
}
},
})
const o = new Proxy({
a: 1,
b: {
c: 2,
d: [ 3 ],
},
}, reporter())
let tmp = o.a // 1, logs a
tmp = o.b.c // 2, logs b, b.c
tmp = o.b.d[0] // 3 logs b, b.d, b.d.0
```
Upvotes: 0
|
2018/03/16
| 346 | 1,462 |
<issue_start>username_0: I am importing some python functions into a Jupyter Lab notebook and then using them within the notebook. But I am going back and forth between making alterations to the functions and then rerunning them in the Jupyter Lab notebook. The only way I have found to get Jupyter Lab to use the updated code is to restart the kernel and then rerun everything. While this works fine, it is a bit cumbersome because I need to run everything in the notebook again.
Is there a better way to allow Jupyter Lab to see the new changes in an imported function while still retaining all previously set variables?<issue_comment>username_1: You can reload the module you are importing you function from.
Suppose in your notebook you have:
```
from mymodule import myfunction
myfunction()
# execute old version of myfunction
```
Then you go and change `myfunction` in `mymodule.py`. Reload the module:
```
import importlib
importlib.reload(mymodule)
```
If you call `myfunction()` now, the new version will be executed.
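One gotcha worth noting: if the function was imported with `from mymodule import myfunction`, the old name binding survives the reload, so either call it through the module or re-import it afterwards. A small sketch:
```
import importlib
import mymodule                 # the module object is what gets reloaded

importlib.reload(mymodule)

mymodule.myfunction()           # always sees the reloaded code

from mymodule import myfunction # or re-bind the bare name after reloading
myfunction()
```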
Upvotes: 2 <issue_comment>username_2: You can also use the reload magic by placing this in your notebook. It will automatically reload code.
```
%reload_ext autoreload
%autoreload 2
```
The only time this may cause confusion is, if you instantiated an object, change the code and then wonder, why the already instantiated object does not have the new functions. Besides this case, it works well.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 299 | 1,110 |
<issue_start>username_0: Trying to get the value of an element to a tcl variable
sample stats.xml file
```
<name>abc</name>
<number>1224.3414</number>
```
In the above case trying to extract only one element value either name or number
```
set value
regexp "<$number>(.*?)" stats.xml value
```
|
2018/03/16
| 1,368 | 4,447 |
<issue_start>username_0: I am trying to loop through a calendar web table and if the calendar date is greater than or equal to current date click the radio button the corresponds to it. The rest of the code works but that.
updated code
```
dowhileloop: do {
// holds dates from 5th column in list
List < WebElement > payDates = driver.findElements(By.xpath("//table[@id='changeStartWeekGrid_rows_table']//tr[position()>1]/td[position()=5]"));
// prints out dates from 5th column in list
List < String > texts = payDates.stream().map(WebElement::getText).collect(Collectors.toList());
System.out.println("date ->" + texts);
//Begin for-loop
for (WebElement pd: payDates) {
System.out.println("sample1-> " + pd.getText());
SimpleDateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy");
Date payDate = dateFormat.parse(pd.getText());
System.out.println("sample2-> " + dateFormat.format(payDate));
if (payDate.after(new Date())) {
System.out.println("inside for loop");
String radiobutton = "//TBODY[@id='changeStartWeekGrid_rows_tbody']/TR[7]/TD[1]/DIV[1]/DIV[1]/DIV[1]";
WebElement calrow = driver.findElement(By.xpath(pd + radiobutton));
calrow.click();
Thread.sleep(10000);
PS_OBJ_CycleData.donebtn(driver).click();
break dowhileloop;
}
} //** END third inner for-loop****
```
---- Requested Radio Button HTML code ----
[](https://i.stack.imgur.com/9tCuc.png)
```
< div id = "changeStartWeekGrid.store.rows[5].cells[0]_widget"
tabindex = "0"class = "revitRadioButton dijitRadio dijitRadioChecked"
onfocus = "var registry = require('dijit/registry'); registry.byId('changeStartWeekGrid').changeActiveCell('changeStartWeekGrid_row_5_cell_0', event);"
onblur = "var registry = require('dijit/registry'); registry.byId('changeStartWeekGrid').blurActiveCell('changeStartWeekGrid.store.rows[5]', 'changeStartWeekGrid.store.rows[5].cells[0]');" >
< div class = "revitRadioButtonIcon" > < /div> < /div>
```
----- Additional radio button HTML code ----
[](https://i.stack.imgur.com/nvldC.png)
```
```
------------console view ----
[](https://i.stack.imgur.com/TnoLw.png)<issue_comment>username_1: Added the pseudo code... Hope it helps
```
List<WebElement> payDates = driver.findElements(By.xpath("//tr[starts-with(@id,'changeStartWeekGrid_row_')]/td[contains(@id,'_cell_4')]/span"));
int reqIndex = 0;
for (WebElement pd : payDates) {
    // Use java.time.LocalDate to parse pd.getText() and compare it with the current date.
    if (/* date is on or after today */) {
        break;
    } else {
        reqIndex++;
    }
}
driver.findElement(By.xpath("//tr[@id='changeStartWeekGrid_row_" + reqIndex + "']" /* + path to radio button */)).click();
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: ```
/* The below code doesn't loop through the table. The logic is , The Calendar Webtable starts from Sunday and the First Element that is clickable is the first day of the month. So based on which is the first clickable element and the date to select, the getRowColumnIndex function tells which row and column to select. This way you never traverse through the Webtable and the operation is faster*/
public static void selectCalendarDate(WebDriver driver, String tableId, int dateToSelect) {
Map<String, Integer> daysMap = new HashMap<>();
daysMap.put("SUNDAY", 0);
daysMap.put("MONDAY", 1);
daysMap.put("TUESDAY", 2);
daysMap.put("WEDNESDAY", 3);
daysMap.put("THURSDAY", 4);
daysMap.put("FRIDAY", 5);
daysMap.put("SATURDAY", 6);
int firstElement = daysMap.get(LocalDate.now().minusDays(LocalDate.now().getDayOfMonth()-1).getDayOfWeek().toString());
int rowColumn[] = getRowColumnIndex(dateToSelect, firstElement);
rowColumn[0]= rowColumn[0]+2;
WebElement elementtoClick= driver.findElement(By.xpath(tableId+"/tbody[1]/tr["+rowColumn[0]+"]/td["+rowColumn[1]+"]"));
elementtoClick.click();
}
public static int[] getRowColumnIndex(int date, int firstElement) {
int row = 0, column = 0;
if (firstElement > 0) {
date = date + firstElement;
row = (date % 7) > 0 ? (date / 7) + 1 : (date / 7);
column = ((date % 7) == 0) ? 7 : (date % 7);
} else {
row = (date / 7) ;
column = ((date % 7) == 0) ? 7 : (date % 7);
}
int rowcolumn[] = { row, column };
return (rowcolumn);
}
```
Upvotes: 0
|
2018/03/16
| 647 | 1,935 |
<issue_start>username_0: In the following code (taken from cpp reference with an extra cout added) why don't we see `...finished waiting.` after the first `cv.notify_all`?
```
#include <iostream>
#include <condition_variable>
#include <thread>
#include <chrono>
std::condition_variable cv;
std::mutex cv_m; // This mutex is used for three purposes:
                 // 1) to synchronize accesses to i
                 // 2) to synchronize accesses to std::cerr
                 // 3) for the condition variable cv
int i = 0;
void waits()
{
    std::unique_lock<std::mutex> lk(cv_m);
    std::cerr << "Waiting... \n";
    cv.wait(lk, []{return i == 1;});
    std::cerr << "...finished waiting. i == 1\n";
}
void signals()
{
    std::this_thread::sleep_for(std::chrono::seconds(1));
    {
        std::lock_guard<std::mutex> lk(cv_m);
        std::cerr << "Notifying...\n";
    }
    cv.notify_all();
    std::cerr << "I should see i here...\n";
    std::this_thread::sleep_for(std::chrono::seconds(1));
    {
        std::lock_guard<std::mutex> lk(cv_m);
        i = 1;
        std::cerr << "Notifying again...\n";
    }
    cv.notify_all();
}
int main()
{
    std::thread t1(waits), t2(waits), t3(waits), t4(signals);
    t1.join();
    t2.join();
    t3.join();
    t4.join();
}
```
Output:
```
Waiting...
Waiting...
Waiting...
Notifying...
I should see i here...
Notifying again...
...finished waiting. i == 1
...finished waiting. i == 1
...finished waiting. i == 1
```<issue_comment>username_1: Because this line:
```
cv.wait(lk, []{return i == 1;});
```
you passed a predicate, thus it will wake up only if the predicate returns `true`. After first `notify_all()`, the predicate is not satisfied.
Upvotes: 2 <issue_comment>username_2: Because it *hasn't* finished waiting.
You told it to wait until `i` is `1`, but it is still `0`. So it will keep waiting.
This is the case even though you "notified" it — you notified it that a change happened, but actually a change *didn't* happen, so it went back to waiting.
`notify()` is not `ignore_the_predicate_i_gave_you_and_stop_waiting()`.
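The predicate overload is specified as being equivalent to a simple loop, which makes the behaviour obvious:
```
// cv.wait(lk, pred); behaves like:
while (!pred()) {    // here: while (i != 1)
    cv.wait(lk);     // a notify wakes the thread, the predicate is re-checked,
}                    // and since i is still 0 it simply goes back to waiting
```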
Upvotes: 2 [selected_answer]
|
2018/03/16
| 744 | 2,715 |
<issue_start>username_0: I have a lot of action creators that I am trying to organize into a few different files. Each separate file has exported functions like:
```
export function addData(data) {
return {
type: 'ADD_DATA',
data
}
}
```
Before they were separated into distinct files, they were imported like:
```
import { addData } from './actions/actionCreators'
```
However with the structure now like
```
├─┬ actions
│ ├── actionsCreators1.js
│ ├── actionsCreators2.js
│ └── index.js
```
I would like to consolidate them in the index file and export them as they were initially named. What I have tried is:
*actions/index.js*
```
import * as actions1 from './actionCreators1'
import * as actions2 from './actionCreators2'
export default {
...actions1,
...actions2
}
```
However the named functions are undefined when imported. I can bring them in individually with `import { action1, action2 } from './actionCreators1'` and it will work, however it's not ideal to have to write everything in two places.
### How can consolidate these imports and export them as one object?<issue_comment>username_1: I believe the problem is in your *actions/index.js*
```
import * as actions1 from './actionCreators1'
import * as actions2 from './actionCreators2'
export default {
...actions1,
...actions2
}
```
Since you're exporting as `default`, you would have to import like this:
```
import actions from '../actions'
actions.action1(...)
```
This will **not** work:
```
import { action1 } from '../actions'
actions.action1(...) // Error: action1 is undefined
```
The destructuring syntax can only grab *named* exports, but you're using a *default* export.
I'd like to be able to create named exports with object spread, but alas, it is not valid syntax:
```
import * as actions1 from './actionCreators1'
import * as actions2 from './actionCreators2'
export {
...actions1, // object spread cannot be used in an export like this
...actions2
}
```
Upvotes: 0 <issue_comment>username_2: Rather than relying on spread, you can use the module system itself to pass the names through, e.g.
```
// actions/index.js
export * from './actionCreators1'
export * from './actionCreators2'
```
so as long as the names in the files don't collide, they will just be passed through nicely.
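With that in place, consumers can keep importing by name from the folder, since `'./actions'` resolves to `actions/index.js` (the `dispatch` call is just for illustration):
```
import { addData } from './actions'

dispatch(addData(someData))
```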
Upvotes: 5 [selected_answer]<issue_comment>username_3: In searching for a consistent export / import pattern, this is one possible solution.
using this for all imports:
`import alpha from './alpha'` (no star)
For the exports:
```
export default {
...module.exports,
...alpha
}
```
This groups everything with a `export` before it, as well as everything exported from `alpha`.
Upvotes: 1
|
2018/03/16
| 549 | 1,972 |
<issue_start>username_0: I have a DigitalOcean cloud server running Ubuntu 16.04.2 x64 with Apache 2.4, PHP 7.0.3 and a Symantec SSL certificate. I developed a web application with Laravel 5.2 on my local host.
What are the steps for uploading the files to the cloud server?
How do I upload the files from my local machine to the server?
How do I deploy a Laravel application on my cloud server?
|
2018/03/16
| 400 | 1,421 |
<issue_start>username_0: I have this:
```
getReadableStream() {
const readableStream = new Readable({
read(size) {
return false;
}
});
this.readableStreams.push(readableStream);
return readableStream;
}
```
however it would be nice if I could push to the array and return the item in the same call, I am looking for this:
```
getReadableStream() {
return this.readableStreams.push(new Readable({
read(size) {
return false;
}
}));
}
```
but of course `Array.prototype.push` doesn't return the item that was pushed. Any way to do this with JavaScript? Ideally I don't want to create a new array, keep the original array.<issue_comment>username_1: An ugly solution would be to override the Arrays prototype:
```
Array.prototype.pushAndReturn = function(el){
this.push(el);
return el;
};
```
So you can do
```
return readableStreams.pushAndReturn(new Readable())
```
---
Alternatively just create a helper:
```
const push = (arr, el) => arr.push(el) && el;
```
So you can do:
```
return push(readableStreams, new Readable())
```
Upvotes: 1 <issue_comment>username_2: If you want to do it in one line you could use the [comma operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Comma_Operator):
```
return this.readableStreams.push(readableStream), readableStream;
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 2,154 | 6,349 |
<issue_start>username_0: I was experimenting with different integer types in Visual Studio project in Windows using a simple exchange sort algorithm below. The processor is Intel. The code was compiled in Release x64. The optimization setting is "Maximize Speed (/O2)". The command line corresponding to the compilation settings is
```
/permissive- /GS /GL /W3 /Gy /Zc:wchar_t /Zi /Gm- /O2 /sdl /Fd"x64\Release\vc141.pdb" /Zc:inline /fp:precise /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /errorReport:prompt /WX- /Zc:forScope /Gd /Oi /MD /Fa"x64\Release\" /EHsc /nologo /Fo"x64\Release\" /Fp"x64\Release\SpeedTestForIntegerTypes.pch" /diagnostics:classic
```
The code itself:
```
#include <iostream>
#include <vector>
#include <ctime>
void sort(int N, int A[], int WorkArray[]) // exchange sort
{
    int i, j, index, val_min;
    for (j = 0; j < N; j++)
    {
        val_min = 500000;
        for (i = j; i < N; i++)
        {
            if (A[i] < val_min)
            {
                val_min = A[i];
                index = i;
            }
        }
        WorkArray[j] = A[j];
        A[j] = val_min;
        A[index] = WorkArray[j];
    }
}
int main()
{
    std::vector<int> A(400000), WorkArray(400000);
    for(size_t k = 0; k < 400000; k++)
        A[k] = 400000 - (k+1);
    clock_t begin = clock();
    sort(400000, &A[0], &WorkArray[0]);
    clock_t end = clock();
    double sortTime = double(end - begin) / CLOCKS_PER_SEC;
    std::cout << "Sort time: " << sortTime << std::endl;
    return 0;
}
```
The `WorkArray` is only needed to save the vector before sorting.
The point is, this sorting took me 22.3 seconds to complete. The interesting part is that if I change type `int` to `size_t` for arrays `A`, `WorkArray` (both in `std::vector` and in the argument list of function `sort`), as well as for `val_min`, the time increases to 67.4! This is **threefold** slower! The new code is below:
```
#include <iostream>
#include <vector>
#include <ctime>
void sort(int N, size_t A[], size_t WorkArray[]) // exchange sort
{
    int i, j, index;
    size_t val_min;
    for (j = 0; j < N; j++)
    {
        val_min = 500000U;
        for (i = j; i < N; i++)
        {
            if (A[i] < val_min)
            {
                val_min = A[i];
                index = i;
            }
        }
        WorkArray[j] = A[j];
        A[j] = val_min;
        A[index] = WorkArray[j];
    }
}
int main()
{
    std::vector<size_t> A(400000), WorkArray(400000);
    for(size_t k = 0; k < 400000; k++)
        A[k] = 400000 - (k+1);
    clock_t begin = clock();
    sort(400000, &A[0], &WorkArray[0]);
    clock_t end = clock();
    double sortTime = double(end - begin) / CLOCKS_PER_SEC;
    std::cout << "Sort time: " << sortTime << std::endl;
    return 0;
}
```
Note that I still keep type `int` for function local variables `i`, `j`, `index`, `N`, and so the only two arithmetical operations that are `i++` and `j++` should take the same amount of time to perform in both cases. Therefore, this slowdown has to do with other reasons. Is it related to the memory alignment issue or register sizes or something else?
But **the most outrageous** part was when I changed `int` to `unsigned int`. Both `unsigned int` and `int` occupy the same number of bytes which is 4 (`sizeof` showed that). But the runtime for `unsigned int` was 65.8 s! While the first outcome was somewhat ok to accept, the second one totally confuses me! Why is there such a significant difference in time it takes to run such a simple algorithm that does not even involve sign checks?
Thanks to all addressing both of these questions. Where can I start reading more about these hardware-level optimization peculiarities? I don't care about the sorting algorithm itself, it's here for illustration of the problem only.
UPDATE: once again, I stress the fact that **I use ints for array indices in all three cases**.<issue_comment>username_1: I tried this code in VS2017. I succeeded in reproducing.
I modified the code as follows so that the time is almost the same.
The cause seems to be due to the implicit casting of the array index.
```
#include <iostream>
#include <vector>
#include <ctime>
using namespace std;
// exchange sort
template <typename index_t, typename elem_t>
void sort(index_t size, elem_t* a, elem_t* b)
{
    index_t index = 0, i, j;
    elem_t min;
    for (j = 0; j < size; j++)
    {
        min = 500000;
        for (i = j; i < size; i++)
        {
            if (a[i] < min)
            {
                min = a[i];
                index = i;
            }
        }
        b[j] = a[j];
        a[j] = min;
        a[index] = b[j];
    }
}
template <typename index_t, typename elem_t, int size>
void test() {
    //vector<elem_t> a(size);
    //vector<elem_t> b(size);
    elem_t a[size];
    elem_t b[size];
    for (index_t k = 0; k < size; k++)
        a[k] = (elem_t)(size - (k + 1));
    clock_t begin = clock();
    sort(size, &a[0], &b[0]);
    clock_t end = clock();
    double sortTime = double(end - begin) / CLOCKS_PER_SEC;
    cout << "Sort time: " << sortTime << endl;
}
int main()
{
const int size = 40000;
cout << "" << endl;
test();
cout << endl;
cout << "" << endl;
test();
cout << endl;
cout << "" << endl;
test();
cout << endl;
cout << "" << endl;
test();
cout << endl;
cout << "" << endl;
test();
cout << endl;
cout << "" << endl;
test();
cout << endl;
cout << "" << endl;
test();
cout << endl;
}
```
Personally, I do not like implicit casting.
For troubleshooting this kind of problem, increase the warning level to the maximum, and resolve all warnings, and then convert to generic code. This will help you identify the problem.
The result of this code appears as a result of various combinations.
* signed vs unsigned:
[In C, why is "signed int" faster than "unsigned int"?](https://stackoverflow.com/questions/34165099/in-c-why-is-signed-int-faster-than-unsigned-int)
* type size (int32 vs int64)
* array index assembly code
* vc++ optimization: /O2 (Maximum Optimization (Favor Speed))
+ this make fast for (int / int).
Upvotes: 3 <issue_comment>username_2: Inspecting the generated assembly for all 3 variants (`int`, `unsigned`, `size_t`), the big difference is that in the `int` case the loop in the `sort` function is unrolled and uses SSE instructions (working on 8 ints at a time), while in the other 2 cases it does neither. Interestingly enough, the `sort` function is called in the `int` case, while it is inlined into `main` in the other two (likely due to the increased size of the function due to the loop unrolling).
I'm compiling from the command line using `cl /nologo /W4 /MD /EHsc /Zi /Ox`, using `dumpbin` to get the disassembly, with toolset `Microsoft (R) C/C++ Optimizing Compiler Version 19.12.25830.2 for x64`.
I get execution times of around 30 seconds for `int` and 100 seconds for the other two.
Upvotes: 4 [selected_answer]
|
2018/03/16
| 732 | 2,994 |
<issue_start>username_0: i'm abstracting my firebase api calls out of my react native components to a service layer. This is working well for calls that return a promise, but this one call, onAuthStateChanged, doesn't return a promise.
Without the service layer i would just do:
```
firebase.auth().onAuthStateChanged(user => {
if (user) {
    // logged in
  } else {
    // not logged in
  }
});
```
Now i want to put everything in my services/api.js, i tried several things already but the latest was this:
```
export const userLoggedin = () => {
firebase.auth().onAuthStateChanged(user => {
if (user) {
return true
} else {
return false
}
})
}
```
Then in my app.js i want to check if userLoggedin returns true or false, so i can navigate the user depending if he is already logged in.
```
if (userLoggedin()) {
// logged in
} else {
// logged out
}
```
now this last part is always going to be in the else, because the userLoggedin return true way later, and it doesn't wait for it.
What is a good solution for this problem?<issue_comment>username_1: You can create a promise around a call that otherwise doesn't support promises:
```
export const userLoggedIn = () => {
return new Promise((resolve, reject) => {
firebase.auth().onAuthStateChanged(user => {
resolve(!!user);
})
});
}
```
Upvotes: 2 <issue_comment>username_2: Though a promise *works* for a one-off check, the auth state has the potential to change multiple times over your application's lifetime, if the user decides to sign out later, for example. That, and your app might actually render before firebase is done checking whether the user is logged in, resulting in a false auth check even if they're actually logged in.
Here's my suggested approach: use component state to keep track of the current user, as well as whether the auth state is currently being checked.
Note: this is reliable because the `onAuthStateChanged` callback will *always* fire at least once. If the auth check finishes first, the callback will get called immediately after it's attached.
```
import React from 'react'
import firebase from 'firebase'
class App extends React.Component {
state = {
authenticating: true,
user: null,
}
componentDidMount() {
firebase.auth().onAuthStateChanged(user => {
this.setState({ user, authenticating: false })
})
}
render() {
if (this.state.authenticating) {
// we're still checking the auth state,
// maybe render a "Logging in..." loading screen here for example
return
}
if (this.state.user === null) {
// not logged in
return
}
// here, auth check is finished and user is logged in, render normally
return
}
}
```
Also, side note, if you ever do just want to check somewhere in your app whether or not the user is currently logged in, `firebase.auth().currentUser !== null` works as well.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 534 | 1,668 |
<issue_start>username_0: I'm working on a react application. I'm stuck on image animation. When the mouse is hover the image it scale to 1.2 times original, but it should be come with transition time of 2s.i I'm able to scale the image but how should I add transition time on. My code:
```
<img
  onMouseOut={() => this.setState({hovered: false})}
  onMouseOver={() => this.setState({hovered: true})}
  style={{transform: `${this.state.hovered ? 'scale(1.2,1.2)' : 'scale(1,1)'}`}}
/>
```
I don't know where to put the transition thing.<issue_comment>username_1: Have you tried using a transition yet?
Sorry normally would have put this in a comment, but I'm not points rich yet.
```
style={{transform: `${this.state.hovered ? 'scale(1.2,1.2)' : 'scale(1,1)'}`, transition: `${this.state.hovered ? '0.5s' : '0.5s'}`}}
```
You will likely need to adjust it a bit, but this would work on a standard css styled element.
<https://css-tricks.com/almanac/properties/t/transition/>
Upvotes: 2 [selected_answer]<issue_comment>username_2: [React animation](https://reactjs.org/docs/animation.html)
Quick example, this should work:
```
import ReactCSSTransitionGroup from 'react-addons-css-transition-group';

<ReactCSSTransitionGroup transitionName="example">
  <img
    onMouseOut={() => this.setState({hovered: false})}
    onMouseOver={() => this.setState({hovered: true})}
    style={{transform: `${this.state.hovered ? 'scale(1.2,1.2)' : 'scale(1,1)'}`}}
  />
</ReactCSSTransitionGroup>
```
css code
```
.example-enter {
opacity: 0.01;
}
.example-enter.example-enter-active {
opacity: 1;
transition: opacity 500ms ease-in;
}
.example-leave {
opacity: 1;
}
.example-leave.example-leave-active {
opacity: 0.01;
transition: opacity 300ms ease-in;
}
```
Upvotes: 0
|
2018/03/16
| 890 | 3,166 |
<issue_start>username_0: I am trying to catch errors instead of throwing them. I attempted try/catch but realised that didn't work so im at a bit of a loose end. The two errors are mongodb related, however i think node/express errors are still thrown.
```
app.get( "/test/:lastName", function ( reqt, resp ) {
var driverName = reqt.params.lastName
mongoClient.connect( "mongodb://localhost/enteprise",
function( err, db ) {
if ( err ) {
console.error("1 mess " + err)
}
var database = db.db( "enteprise" )
var collection = database.collection( "driversCol" )
collection.findOne( { lastName : driverName },
function( err, res ) {
if ( err ) {
console.error("2 mess " + err)
}
resp.json( res.rate )
db.close()
})
})
})
```
**CASES**
if I `curl -X GET localhost:3000/test/Owen` then '43' should be returned as thats the rate.
currently if i `curl -X GET localhost:3000/test/Cressey` it throws a type error as its not in the database.
How do I catch errors given the above code and example?<issue_comment>username_1: First, I think you should use `next()` to send back the errors you are getting in your callbacks:
```
app.get( "/test/:lastName", function ( req, resp, next ) {
// some code...
if ( err ) {
return next("1 mess " + err)
}
```
About the TypeError, it's happening because you are getting `undefined` from MongoDB when there's not any result, but that's not an error so it will not enter the `if`. You should check the existence of objects in these cases, for example:
```
resp.json( res && res.rate || {} )
```
[Here](http://expressjs.com/en/api.html#app.param) you have another example about how to handle error in these cases:
```
app.param('user', function(req, res, next, id) {
// try to get the user details from the User model and attach it to the request object
User.find(id, function(err, user) {
if (err) {
next(err);
} else if (user) {
req.user = user;
next();
} else {
next(new Error('failed to load user'));
}
});
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: MongoDB will return an empty object if the requested query cannot be found. So you need to consider how to handle those responses, as username_1 posits in his answer.
However, errors can be handled differently than using `next()`, such as passing the `resp` to a function for handling errors that calls `resp.json()` with an error object or other custom message and adding a statusCode so that the callee of the endpoint can also handle errors.
For instance, change wherever you call this:
```
if ( err ) {
console.error("1 mess " + err)
}
```
To this:
```
if ( err ) {
console.error("1 mess " + err)
handleErrors(resp, err)
}
```
Then create the function `handleErrors()`:
```
function handleErrors(resp, err) {
// it's up to you how you might parse the types of errors and give them codes,
// you can do that inside the function
// or even pass a code to the function based on where it is called.
resp.statusCode = 503;
const error = new Error(err);
resp.send({error})
}
```
Upvotes: 1
|
2018/03/16
| 176 | 518 |
<issue_start>username_0: My JSON looks like this:
```
{Sean/Projet: 6, EC2: 1},{EC3:5,Weekend:5}
```
How do I dynamically access only the key, for example: "Sean/Projet" or "EC2".
Thank you.<issue_comment>username_1: Found out a solution:
```
var keys = Object.keys(myJson);
console.log(keys[0]);
```
But how do I find out which key has the greater number?
Upvotes: 0 <issue_comment>username_2: You can find that out by sorting the object by values.
var sortedKeys = keys.sort((a, b) => myJson[b] - myJson[a]); // sortedKeys[0] is the key with the greatest value
Upvotes: 1
|
2018/03/16
| 506 | 1,631 |
<issue_start>username_0: I have a div with an image on the left and some text on the right. I'm using flexbox so that as the screen narrows the text pops below the image.
What I want to happen on narrowing the screen is for not only the text to narrow but also for the image to shrink and maintain its position. Instead, what is happening is that the text shrinks and the image moves down the page (staying the same size) to stay centered. I suspect this has something to do with `align-items: center` trying to maintain centering, but when I removed that, the image just stretched.
How can I resolve it?
```css
.bio {
display: flex;
flex-wrap: wrap;
justify-content: space-around;
flex-direction: row;
width: 80%;
margin: auto;
margin-bottom: 30px;
align-items: center;
}
.bio-text {
flex: 200px;
}
.bio-text p {
line-height: 25px;
font-family: "Nunito sans", sans-serif;
}
.bio-h3-container {
text-align: center;
}
.headshot {
max-width: 100%;
height: auto;
margin-left: 40px;
margin-right: 40px;
}
```
```html

### Bio
...bunch of text here...
```<issue_comment>username_1: You need to change
```
.bio {flex-wrap: wrap;}
```
to
```
.bio {flex-wrap: nowrap;}
```
You might also need to put the image in a div and define a width other than `max-width: 100%`.
Upvotes: 0 <issue_comment>username_2: You could make the image a `background-image` by using inline styling i.e. `style={{backgroundImage: Headshot}}`
Then apply `background-size: contain` to it. This should make the image resize to fit the div.
Upvotes: 1
|
2018/03/16
| 633 | 2,914 |
<issue_start>username_0: I've just found some EJB behaviour which looks rather surprising to me.
Here is the code sample (for sure MyBean, beanA, beanB are EJBs using CMT):
```
@Stateless
public class MyBean {
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public void myMethod(){
try {
beanA.methodA(); /* annotated as REQUIRED */
} catch (Exception e) {
beanB.methodB(); /* annotated as NOT_SUPPORTED */
}
}
}
```
Let's say methodA takes more than the transaction timeout to execute, so once it returns, myMethod receives a TransactionRolledbackException, which is then successfully caught in "myMethod".
I would expect "methodB" to be called at that point, as according to the EJB spec it must be called without any transaction context.
But actually, the "beanB" proxy simply returns another TransactionRolledbackException, and "methodB" is not executed.
Looking through the EJB spec I do not see anything to prove that the container should or even might behave that way.
Am I missing something? Any hint would be appreciated.
UPDATE
At least for Websphere, this behaviour appears to be timeout-specific. The "rollbackOnly" flag which for example is set when "methodA" throws a RuntimeException, does not prevent "methodB" from execution. Only timeout flag does.<issue_comment>username_1: The EJB specification does not specifically address this scenario, other than to indicate that once a transaction has been marked for rollback, then "*Continuing transaction is fruitless*", and for the handling of `NOT_SUPPORTED`, the specification indicates it "*does not prescribe how the container should manage the execution of a method with an unspecified transaction context*".
All versions of WebSphere Application Server have taken the approach that the best way to handle the scenario where an EJB method has been marked for rollback only is to prevent all further actions the container has control over, so that the transaction may be rolled back as quickly as possible, ensuring the timely release of resources (such as database locks). Allowing a call to a `NOT_SUPPORTED` EJB method would result in the marked for rollback transaction being suspended; and thus continue to hold onto resources that could block or already be blocking other transactions. For this reason, WebSphere prevents such activity.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I faced a similar issue a while back. When the container marks the transaction for rollback, you will not be able to make any other EJB calls after that. You can think about a solution of annotating **@RequiresNew** on beanA.methodA(), so that it will not share the global transaction of myMethod() and will always use a new transaction. Therefore anything that happens to this new transaction will not affect the global transaction, and you can proceed with your further EJB calls.
Upvotes: 0
|
2018/03/16
| 670 | 2,720 |
<issue_start>username_0: I have created a Spring Boot application with Spring Cloud Sleuth. For POC purposes, I used Zipkin on my local machine, and I am able to instrument an external service which is not instrumented by creating manual spans. I referred to the link below.
<https://cloud.spring.io/spring-cloud-sleuth/1.2.x/multi/multi__customizations.html>
Now, when I move to the PCF environment, I am unable to collect proper custom spans.
PCF Metrics always shows the parent span and service with the total time taken.
Could anyone please let me know where I am going wrong.
Zipkin Output:-
[](https://i.stack.imgur.com/ldSAM.png)
PCF Metrics:-
[](https://i.stack.imgur.com/mY7dH.png)
**UPDATE**
screen shot for Zipkin with @NewSpan.
[](https://i.stack.imgur.com/VH2zX.png)
PCF Metrics screenshot without call hierarchy
[](https://i.stack.imgur.com/qLg5Y.png)
|
2018/03/16
| 639 | 1,526 |
<issue_start>username_0: My question is similar to [this question](https://stackoverflow.com/questions/12329853/how-to-rearrange-pandas-column-sequence). In the answers to that question I saw a few ways to do what I am trying to do below. But those answers are old, and I feel that there might be a faster and more efficient way to do what I am looking for.
```
import pandas as pd
df =pd.DataFrame({'a':[1,2,3,4],'b':[2,4,6,8]})
df['x']=df.a + df.b
df['y']=df.a - df.b
df
a b x y
0 1 2 3 -1
1 2 4 6 -2
2 3 6 9 -3
3 4 8 12 -4
```
Now I want to move the x and y columns to the beginning. I can always do the following.
```
df = df[['x','y','a','b']]
df
x y a b
0 3 -1 1 2
1 6 -2 2 4
2 9 -3 3 6
3 12 -4 4 8
```
But I don't want that solution. I want an efficient solution that would move columns x and y to the beginning without mentioning the names of the other columns, as my dataframe is going to change and I might not know the names of all the columns.
```
df.insert(loc=0, column='x', value=df.a + df.b)
df.insert(loc=1, column='y', value=df.a - df.b)
df
Out[325]:
x y a b
0 3 -1 1 2
1 6 -2 2 4
2 9 -3 3 6
3 12 -4 4 8
```
Upvotes: 1 <issue_comment>username_2: ```
import pandas as pd
df =pd.DataFrame({'a':[1,2,3,4],'b':[2,4,6,8]})
a = list(df.columns)
df['x']=df.a + df.b
df['y']=df.a - df.b
b = list(df.columns)
diff = list(set(b) - set(a))
df = df[diff + a]  # note: set() does not guarantee the order of the new columns
df
y x a b
-1 3 1 2
-2 6 2 4
-3 9 3 6
-4 12 4 8
```
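If preserving the order of the remaining columns matters (note that `set()` does not guarantee any particular order), a small order-preserving sketch is shown below; the `cols_to_front` name is just illustrative:
```
cols_to_front = ['x', 'y']
df = df[cols_to_front + [c for c in df.columns if c not in cols_to_front]]
```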
Upvotes: 0
|
2018/03/16
| 849 | 2,789 |
<issue_start>username_0: I want to show dates translated to "de" and in this format:
```
Thu, Mar 8, 2018 - 6:30
```
I have in "app.php":
```
'locale' => 'de',
```
Then in a view I'm using:
```
{{$post->date->toDayDateTimeString()}}
```
But the result still appears in "en":
```
Thu, Mar 8, 2018 6:30 AM
```
But, for example, using another Carbon method, `{{$post->date->diffForHumans()}}`, the result appears in "de"; however, I don't want that format, I want the `toDayDateTimeString()` format.
Do you know how to also have the toDayDateTimeString() format translated to another language? Is it possible to show the date like "Thu, Mar 8, 2018 - 6:30" in another language?<issue_comment>username_1: You can use the `setLocale('de')` method.
Here is an example
```
<?php
\Carbon\Carbon::setLocale('de');
?>
{{$post->date->toDayDateTimeString()}}
```
Hope this helps
Upvotes: 1 <issue_comment>username_2: Laravel comes with the `Carbon` package; if you set the locale in your app, Carbon will parse the date automatically with the chosen language.
[Check this](http://carbon.nesbot.com/docs/#api-localization)
```
Carbon::setLocale('de');
echo Carbon::getLocale(); // de
echo Carbon::now()->addYear()->diffForHumans(); // in 1 Jahr
Carbon::setLocale('en');
echo Carbon::getLocale(); // en
```
Upvotes: 1 <issue_comment>username_3: In order to get the date translated to German, you'll have to set the appropriate [locale](https://secure.php.net/manual/en/function.setlocale.php), before using `Carbon`.
As a test in your view, this should be enough:
```
php setlocale(LC_TIME, 'de_DE'); ?
{{ $post->date->formatLocalized('%a, %b %d, %Y %H:%M') }}
```
For other format options, refer to the [strftime()](https://secure.php.net/manual/en/function.strftime.php) function manual.
You will have to make sure that the locale you want is enabled in your system.
I'm gonna assume you're in a Linux environment, so you'll have to edit the `locale.gen` file (Arch Linux and Ubuntu have it in `/etc/locale.gen`, other distros might have it elsewhere), and uncomment the appropriate lines:
```
# de_DE ISO-8859-1
# de_DE.UTF-8 UTF-8
```
will have to be changed to
```
de_DE ISO-8859-1
de_DE.UTF-8 UTF-8
```
After updating the `locale.gen` file, you'll need to run `locale-gen`, in order for those new locales you've just uncommented to be available from PHP.
`$ locale-gen`
Now confirm that the new locales are available, with:
`$ locale -a`
Now, go back to your browser and you'll successfully see the date/time translated.
**Edited:**
As we later found out, the `toDayDateTimeString()` method from the `Carbon` class, does not work with localisation, so the `formatLocalized()` has to be used.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 899 | 3,075 |
<issue_start>username_0: I am trying to get to grips with the basics of Python, running basic functions and playing around with its capabilities; however, I am having some trouble understanding just how to invoke the other functions. Below is an example of me playing around with functions. Which function call would I write to invoke the other 3, and how would I write it? I know I must also pass string arguments through.
```
def hotel_cost(nights):
return 140 * nights
def plane_ride_cost(city):
if city == "Charlotte":
return 183
elif city == "Tampa":
return 220
elif city == "Pittsburgh":
return 222
elif city == "Los Angeles":
return 475
def rental_car_cost(days):
total_car = days * 40
if days >= 7:
total_car -= 50
elif days >= 3:
total_car -= 20
return total_car
def trip_cost(city, days):
return rental_car_cost(days) + plane_ride_cost(city) + hotel_cost(days)
```
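For illustration, a minimal sketch of invoking the wrapper defined above (the city and number of days are arbitrary choices):
```
# trip_cost() calls the other three functions internally
total = trip_cost("Tampa", 5)
print(total)  # 220 (flight) + 180 (rental car) + 700 (hotel) = 1100
```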
|
2018/03/16
| 838 | 2,556 |
<issue_start>username_0: I have a csv file with two columns: date string in ISO8601 and a linux timestamp. How do I use `awk` to get the output in the following format: col-1: original ISO; col-2: convert timestamp (2) to ISO8601; col-3: diff between the two times (say, in ms)
Example:
**Input**:
```
2018-01-09T16:55:22.545+0000,1515508979185
```
**Output**:
```
2018-01-09T16:55:22.545+0000,2018-01-09T14:42:59.185+0000,36743360
```
|
2018/03/16
| 241 | 895 |
<issue_start>username_0: As per the documentation on the website, Istio-Auth provides strong service-to-service and end-user authentication.
I am not able to find more info on end-user authentication. I see all the good stuff about service-to-service authentication, but not end-user authentication to the services. Is it supported? Where can I find some information on it?<issue_comment>username_1: This end-user auth doc is shared with the Istio community: <https://docs.google.com/document/d/1rU0OgZ0vGNXVlm_WjA-dnfQdS3BsyqmqXnu254pFnZg/edit#heading=h.kt4x5aalmyhf>. I'm not sure whether you can view it. We are actively working on this feature and will have the basics soon. In the meantime, we will update the website soon.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Note you might need to belong to istio-dev google group to view the doc. The group is public and free to join.
Upvotes: 0
|
2018/03/16
| 1,438 | 4,948 |
<issue_start>username_0: A recent Visual Studio Studio upgrade to version 15.6.2 included a Visual C++ compiler update which causes warnings for the following code at the `push_back` line due to `[[nodiscard]]`:
```
#include <vector>
struct [[nodiscard]] S
{
int i;
};
int main()
{
std::vector<S> v;
v.push_back({ 1 }); // causes warning C4834
}
```
The compiler is invoked like this (note that no high warning level needs to be specified to reproduce, but `/std:c++latest` is required, and `/permissive-` is optional):
```none
cl /nologo /EHsc /permissive- /std:c++latest test.cpp
```
The warning comes from within Visual C++'s own `std::vector`-implementation code and says:
```none
warning C4834: discarding return value of function with 'nodiscard' attribute
```
*(see below for complete warning output)*
Compiler version:
```none
Microsoft (R) C/C++ Optimizing Compiler Version 19.13.26128 for x64
```
My theory is that this warning is caused by:
1. Visual C++ implementing `push_back` in terms of `emplace_back`,
2. `emplace_back` returning a reference in C++17, and
3. the somewhat curious (to me) implementation of the latter function in Visual C++ if the `_HAS_CXX17` macro is set.
However, regardless of any internal library code, doesn't Visual C++ violate the standard in producing those diagnostic messages? The latest Clang and GCC versions over at wandbox.org do not produce any warnings for the same code.
I maintain that library-internal usages of `nodiscard` user types like `S` should never cause warnings, because this would render the feature unusable in practice, but the short description of `nodiscard` in §10.6.7/2 [dcl.attr.nodiscard] is a bit vague on that topic.
Or does the standard simply "discourage" such warnings, and it is all really a QoI issue, albeit a rather serious one in this case, so serious that it should probably be filed as a bug report at Microsoft?
---
Here's the complete warning:
```none
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.13.26128\include\vector(996): warning C4834: discarding return value of function with 'nodiscard' attribute
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.13.26128\include\vector(995): note: while compiling class template member function 'void std::vector>::push\_back(\_Ty &&)'
with
[
\_Ty=S
]
test.cpp(11): note: see reference to function template instantiation 'void std::vector>::push\_back(\_Ty &&)' being compiled
with
[
\_Ty=S
]
test.cpp(10): note: see reference to class template instantiation 'std::vector>' being compiled
with
[
\_Ty=S
]
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.13.26128\include\vector(1934): warning C4834: discarding return value of function with 'nodiscard' attribute
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.13.26128\include\vector(1933): note: while compiling class template member function 'void std::vector>::\_Umove\_if\_noexcept1(S \*,S \*,S \*,std::true\_type)'
with
[
\_Ty=S
]
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.13.26128\include\vector(1944): note: see reference to function template instantiation 'void std::vector>::\_Umove\_if\_noexcept1(S \*,S \*,S \*,std::true\_type)' being compiled
with
[
\_Ty=S
]
```<issue_comment>username_1: >
> However, regardless of any internal library code, doesn't Visual C++ violate the standard in producing those diagnostic messages?
>
>
>
This question doesn't really make sense. The standard mandates that if a program violates a diagnosable rule, an implementation must issue a diagnostic. The standard doesn't say anything about disallowing diagnostics for otherwise well-formed programs.
There are many, many common compiler warnings that are diagnostics on well-formed code. Which is great! It's really a quality of implementation issue to ensure that the warnings you get are valuable. In this case, this obviously is not a valuable warning, so you should submit a bug report to Microsoft. But not because this warning violates the standard - simply because it's not a useful warning.
Upvotes: 4 [selected_answer]<issue_comment>username_2: This is not technically a MSVC bug, however warnings for `[[nodiscard]]` on reference types are *discouraged* in the standard, not prohibited.
Per our conversation, this is a reproducing example:
```
struct [[nodiscard]] S{ int i;};
template <typename T>
T& return_self(T& _in){
return _in;
}
int main() {
S s{1};
return_self(s);
}
```
The C++ standard has a very similar example where they discourage the warning to be elicited ([dcl.attr.nodiscard]):
```
struct [[nodiscard]] error_info { /* ... */ };
error_info &foo();
void f() { foo(); } // warning not encouraged: not a nodiscard call, because neither
// the (reference) return type nor the function is declared nodiscard
```
Upvotes: 2
|
2018/03/16
| 971 | 3,894 |
<issue_start>username_0: There's some third party code that looks like the following. Assume there are two classes, classA and classB, their definitions are correct and they work as expected.
```
// this function does not have a *throws Exception* in its definition
public Collection<classA> doSomething(String someStr) {
List<classA> classAList = new ArrayList<>();
int a = 0;
while (a < 5) {
classAList.add(
new classA(
someStr,
new Callable<classB>() {
public classB call() throws Exception {
// do some work here
try {
// try something new
} catch (Exception e) { // <---- THIS EXCEPTION
// do something exceptional
throw e;
}
// do more work here, because why not?
}
}
)
);
a++;
}
return classAList;
}
```
This is a contrived example, so it may not be completely sensible. However, it accurately reflects the code I have.
What I'm trying to figure out is, what happens if the exception is thrown? Does the exception break through, and the entire *doSomething* function fails? (I suspect this is the case, but the upper level function doesn't have a "throws Exception", so maybe I'm wrong?)
I'm relatively new to Java, and have definitely not seen this style of programming before (also, if you know what it's called, so I can research it more, please let me know) - I have a background in C and Python, so -\_\_(o.O)\_\_/-
*Note*: this is an extension to something, and therefore cannot be run directly for debugging.
*Note 2*: This question was marked as a possible duplicate of [this](https://stackoverflow.com/questions/19913834/rethrowing-an-exception-why-does-the-method-compile-without-a-throws-clause). I am not asking how this compiles without the exception thrown clause. I'm trying to figure out where/when the exception is thrown.<issue_comment>username_1: This can't happen, because `Exception` is a checked exception. You don't have a throws clause in your method signature. My guess is that the compiler will complain about your example code.
You will either have to switch that to a `RuntimeException`, which is unchecked, or add the throws clause to your method signature.
Even if you do that, I'd say this is a bad design.
Catching an exception suggests that you can do something sensible to recover from it. It might be next to nothing and logging the fact, or something more substantial like cleaning up resources and ending the task gracefully.
But catching and re-throwing seems a waste to me. I'd rather add the throws clause and not catch the exception so somebody up the line can do something sensible to recover from the problem.
Upvotes: 1 <issue_comment>username_2: The code you provided is just defining `Callable`s. They are not executed in the `doSomething` method, so they won't throw exceptions there. Exceptions can be thrown where you actually execute the `Callable` (via `call()` or by submitting it to an `ExecutorService`).
Check out this example: <https://www.journaldev.com/1090/java-callable-future-example>
Upvotes: 1 <issue_comment>username_3: I assume the `Callable` type here is `java.util.concurrent.Callable`.
The exception will actually never be thrown on the stack frame of `doSomething`. This is because it is defined in an anonymous inner class that implements `Callable`. It is not called in `doSomething` at all. The instance of anonymous class is then passed into `classA`'s constructor, creating a new instance of `classA` and putting it into a list. This explains why `doSomething` does not need `throws`
Now, if you were to access the list, get the `Callable` from the instance of `classA`, and call `Callable.call()`, you *do* need to handle the checked exception, either by adding a `try...catch` or by adding a `throws` clause in the surrounding method, like this:
```
public void doSomethingElse() throws Exception{
ClassAList.get(0).getCallable().call(); // exception might be thrown here!
}
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 675 | 2,628 |
<issue_start>username_0: I have a drop-down list in A1 with "Y" and "N". I want to write a code in cell B1 to get value "True" if A1 is "Y" and "False" if A1 is "N".
```
Sub YCheck()
Dim CellCheck As Boolean
Set Worksheets("Sheet1").Range("B1").Value = CellCheck
If Worksheets("Sheet1").Range("A1").Value = Y Then
CellCheck = True
ElseIf Worksheets("Sheet1").Range("A1").Value = N Then
CellCheck = False
End If
End Sub
```
It doesn't seem to work. Could anyone help me? Thank you!
|
2018/03/16
| 1,092 | 3,444 |
<issue_start>username_0: Upgraded to PHP 5.5 (I KNOW IT'S OLD) and updated my phpMyAdmin version, as well as MySQL to 5.7, and now MySQL won't start...
Looking in the mysqld log:
`2018-03-16T16:35:54.553247Z 0 [ERROR] InnoDB: The Auto-extending innodb_system data file './ibdata1' is of a different size 640 pages (rounded down to MB) than specified in the .cnf file: initial 768 pages, max 0 (relevant if non-zero) pages!
2018-03-16T16:35:54.553310Z 0 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2018-03-16T16:35:55.153859Z 0 [ERROR] Plugin 'InnoDB' init function returned error.
2018-03-16T16:35:55.153924Z 0 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2018-03-16T16:35:55.153944Z 0 [ERROR] Failed to initialize builtin plugins.
2018-03-16T16:35:55.153964Z 0 [ERROR] Aborting`
Can someone please make me aware of why this may be happening?<issue_comment>username_1: The configuration file /etc/my.cnf contains an innodb_xxxx setting that refers to an auto-extending data file with a size of 768 pages. This can include parameters like
innodb_data_home_dir
innodb_data_file_path
The full syntax describes this value
```
file_name:file_size[:autoextend[:max:max_file_size]]
```
Example
```
innodb_data_file_path=/var/lib/mysql/ibdata1:768M:autoextend
```
If you have access to the previous working MySQL (like another server of yours or your friend's), log in to the mysql client and type the following to view the running configuration:
```
> SELECT @@GLOBAL.innodb_data_file_path;
+--------------------------------+
| @@GLOBAL.innodb_data_file_path |
+--------------------------------+
| ibdata1:10M:autoextend |
+--------------------------------+
```
Why is this happening?
======================
You have an existing InnoDB data file that is smaller than the initial size specified in your my.cnf configuration: the log reports 640 pages (roughly 10 MB with the default 16 KB page size) while the config expects 768 pages (roughly 12 MB). One option is therefore to point `innodb_data_file_path` at the existing size, e.g. `ibdata1:10M:autoextend`.
Hope this clarifies matters.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I followed your advice and worked out a solution, about 2 lines in bash, and there you are. I saw the following message upon starting up a mysql/mariadb server or docker image:
>
> The Auto-extending innodb\_system data file './ibdata1' is of a
> different size 0 pages than specified in the .cnf file: initial 768
> pages, max 0 (relevant if non-zero) pages!
>
>
>
This means a previous installation has been detected and the data file size is incompatible. A simple, straightforward solution is to remove existing container images that may be kept on the host. The Docker `rmi` command achieves a clean but irreversible removal; you're supposed to know the name of your docker image, e.g. mariadb:
```
docker rmi `docker images | grep mariadb | awk '{print $3}'`
```
Retry run the new mysql image :
```
docker run -it mypersonalrepo/mariadb:latest &
```
The database will be initialized with InnoDB 768M data of space.
Upvotes: 0 <issue_comment>username_3: For future users this is my experience:
I faced the same error and the problem was my local volume didn't exist.
```
version: "3.7"
services:
mysql-db:
image: mariadb:10.8
container_name: mysql-db
restart: unless-stopped
tty: true
ports:
- "4407:3306"
environment:
MYSQL_ROOT_PASSWORD: <PASSWORD>
volumes:
- ./db:/var/lib/mysql/
```
The folder `./db` did not exist.
After I created the folder in the current path, it worked correctly.
Upvotes: 2
|
2018/03/16
| 3,981 | 12,661 |
<issue_start>username_0: I have multiple .txt files which look like this:
```
header
header
header
header
header
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\AAA AAAAAAAA\AAAAA\BBBB BBBB & BBBBB BBBBB\CAM_07-0008\Farther Downg Gray Fox
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\AAA AAAAAAAA\AAAAA\BBBB BBBB & BBBBB BBBBB\CAM_07-0008\Farther Downg Direct Register Walk, Gait, Gray Fox, Stop
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\AAA AAAAAAAA\AAAAA\BBBB BBBB & BBBBB BBBBB\CAM_07-0008\Farther Downg Gray Fox
```
The width of the last 2 columns varies, but there are always 3 spaces between all the columns (the 3rd column is empty in this case).
I'm using this code to read in the example .txt:
```
read.fwf(filename.txt,skip=5,widths=c(12,16,19,76,83),fill=T,fileEncoding = "UTF-16")
```
But this code won't work properly on this .txt:
```
header
header
header
header
header
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\AAA AAAAAAAA\AAAAA AA\BBBB BBBB & BBBBB BBBBB\CAM_07-0008\Farther DowngBBB Gray Fox
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\AAA AAAAAAAA\AAAAA AA\BBBB BBBB & BBBBB BBBBB\CAM_07-0008\Farther DowngBBB Direct Register Walk, Gait, Gray Fox, Stop
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\AAA AAAAAAAA\AAAAA AA\BBBB BBBB & BBBBB BBBBB\CAM_07-0008\Farther DowngBBB Gray Fox
```
Is there a way to read in a .txt file with a fixed delimiter (3 spaces) instead of having to define the width of each column, since the column widths vary between files?
The files also have some issues with encoding, so [here](http://dropbox.com/s/5st9k602bcfhzwk/first%20one%20-%20Copy.txt?dl=0) is the example file I'm using<issue_comment>username_1: One can read the file skipping header rows, then use `gsub` function to replace 3 spaces with a convenient separator (vertical bar used here):
```
> mytext = "01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg Gray Fox
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg Direct Register Walk, Gait, Gray Fox, Stop
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg Gray Fox"
> ddf = read.table(text=gsub(" ", "|", mytext), header=F, sep="|")
> ddf
V1 V2 V3 V4 V5 V6
1 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
2 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
3 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
V7
1 Gray Fox
2 Direct Register Walk, Gait, Gray Fox, Stop
3 Gray Fox
```
Edit: As suggested by @username_2 in comments below, text has to be trimmed to remove trailing spaces using `gsub(" *$", "", ...)`. Alternatively, following function is from [How to trim leading and trailing whitespace in R?](https://stackoverflow.com/questions/2261079/how-to-trim-leading-and-trailing-whitespace-in-r):
```
trim.trailing <- function (x) sub("\\s+$", "", x)
```
For text files, one can use readLines to read the text file:
```
> mytext = readLines(file('testfile.txt')) # read file text
> mytext = mytext[-c(1:5)] # remove first 5 rows ('header')
> mytext = gsub("\\s+$", "", mytext) # remove trailing spaces
> mytext = gsub(" ", "|", mytext) # change separator
> ddf = read.table(text=mytext, header=F, sep='|') # read columns from text
> ddf
V1 V2 V3 V4 V5 V6
1 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
2 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
3 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
V7
1 Gray Fox
2 Direct Register Walk, Gait, Gray Fox, Stop
3 Gray Fox
```
Alternatively, one can first read them to a data.frame of one variable, then manipulate the rows to get desired result:
```
> ddf1 = read.table(file='testfile.txt', sep = '\n', skip=5)
> mytext = gsub("\\s+$", "", unlist(ddf1$V1))
> ddf2 = read.table(text=gsub(" ", "|", mytext), header=F, sep='|')
> ddf2
V1 V2 V3 V4 V5 V6
1 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
2 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
3 01130009.JPG JPEG NA NA 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
V7
1 Gray Fox
2 Direct Register Walk, Gait, Gray Fox, Stop
3 Gray Fox
```
Upvotes: 2 <issue_comment>username_2: I don't know if there are good tools that look for multi-char delimiters, and you aren't the first to ask about it. Most (incl `read.table`, `read.delim`, and `readr::read_delim`) require a single-byte separator.
One method, though certainly not efficient for large files, is to load them in line-wise and do the splitting yourself.
(Consumable data at the bottom.)
```
x <- readLines(textConnection(file1))
x <- x[x != 'header'] # or x <- x[-(1:5)]
```
(I'm guessing it isn't always the literal `header`, so I'm assuming it's either a fixed count or you can easily "know" which is which.)
```
spl <- strsplit(x, ' ')
str(spl)
# List of 3
# $ : chr [1:31] "01130009.JPG" "JPEG" "" "" ...
# $ : chr [1:20] "01130009.JPG" "JPEG" "" "" ...
# $ : chr [1:7] "01130009.JPG" "JPEG" "" "" ...
```
This seems ok, except that in your examples, there are lots of blanks on the right ...
```
spl[[1]]
# [1] "01130009.JPG"
# [2] "JPEG"
# [3] ""
# [4] ""
# [5] "2/5/2018 3:53:44 PM"
# [6] "G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg"
# [7] "Gray Fox"
# [8] ""
# [9] ""
# [10] ""
# [11] ""
# [12] ""
# [13] ""
# [14] ""
# [15] ""
# [16] ""
# [17] ""
# [18] ""
# [19] ""
# [20] ""
# [21] ""
# [22] ""
# [23] ""
# [24] ""
# [25] ""
# [26] ""
# [27] ""
# [28] ""
# [29] ""
# [30] ""
# [31] ""
```
So if you know how many columns there are, then you can easily remove extras:
```
spl <- lapply(spl, `[`, 1:7)
```
and then check the output:
```
as.data.frame(do.call(rbind, spl), stringsAsFactors = FALSE)
# V1 V2 V3 V4 V5
# 1 01130009.JPG JPEG 2/5/2018 3:53:44 PM
# 2 01130009.JPG JPEG 2/5/2018 3:53:44 PM
# 3 01130009.JPG JPEG 2/5/2018 3:53:44 PM
# V6
# 1 G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
# 2 G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
# 3 G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg
# V7
# 1 Gray Fox
# 2 Direct Register Walk, Gait, Gray Fox, Stop
# 3 Gray Fox
```
This works equally well with your second example:
```
x <- readLines(textConnection(file2))
x <- x[x != 'header'] # or x <- x[-(1:5)]
spl <- lapply(strsplit(x, ' '), `[`, 1:7)
as.data.frame(do.call(rbind, spl), stringsAsFactors = FALSE)
# V1 V2 V3 V4 V5
# 1 01130009.JPG JPEG 2/5/2018 3:53:44 PM
# 2 01130009.JPG JPEG 2/5/2018 3:53:44 PM
# 3 01130009.JPG JPEG 2/5/2018 3:53:44 PM
# V6
# 1 G:\\AAA AAAAAAAA\\AAAAA AA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther DowngBBB
# 2 G:\\AAA AAAAAAAA\\AAAAA AA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther DowngBBB
# 3 G:\\AAA AAAAAAAA\\AAAAA AA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther DowngBBB
# V7
# 1 Gray Fox
# 2 Direct Register Walk, Gait, Gray Fox, Stop
# 3 Gray Fox
```
Consumable data:
```
# note: replaced single '\' with double '\\' for R string-handling only
file1 <- 'header
header
header
header
header
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg Gray Fox
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg Direct Register Walk, Gait, Gray Fox, Stop
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther Downg Gray Fox '
file2 <- 'header
header
header
header
header
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA AA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther DowngBBB Gray Fox
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA AA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther DowngBBB Direct Register Walk, Gait, Gray Fox, Stop
01130009.JPG JPEG 2/5/2018 3:53:44 PM G:\\AAA AAAAAAAA\\AAAAA AA\\BBBB BBBB & BBBBB BBBBB\\CAM_07-0008\\Farther DowngBBB Gray Fox '
```
Upvotes: 3 [selected_answer]
|
2018/03/16
| 861 | 2,713 |
<issue_start>username_0: I know that I can use:
```
S = pymc.MCMC(model1)
from pymc import Matplot as mcplt
mcplt.plot(S)
```
and that will give me a figure with three plots, but all I want is just a single plot of the histogram. Then I want to normalise the histogram and plot a smooth curve of the distribution rather than the bars of the histogram. Could anyone help me to code this so I can get a final plot of the distribution?<issue_comment>username_1: If you have `matplotlib` installed for plotting, and `scipy` for doing a kernel density estimate ([many KDE functions exist](https://jakevdp.github.io/blog/2013/12/01/kernel-density-estimation/)), then you could do something similar to the following (based on [this example](https://pymc-devs.github.io/pymc/tutorial.html#fitting-the-model-with-mcmc), where `'late_mean'` is one of the names of the sampled parameters in that case):
```
from pymc.examples import disaster_model
from pymc import MCMC
import numpy as np
M = MCMC(disaster_model) # you could substitute your own model
# perform sampling of model
M.sample(iter=10000, burn=1000, thin=10)
# get numpy array containing the MCMC chain of the parameter you want: 'late_mean' in this case
chain = M.trace('late_mean')[:]
# import matplotlib plotting functions
from matplotlib import pyplot as pl
# plot histogram (using 15 bins, but you can choose whatever you want) - density=True returns a normalised histogram
pl.hist(chain, bins=15, histtype='stepfilled', density=True)
ax = pl.gca() # get current axis
# import scipy gaussian KDE function
from scipy.stats import gaussian_kde
# create KDE of samples
kde = gaussian_kde(chain)
# calculate KDE at a range of points (in this case based on the current plot, but you could choose a range based on the chain)
vals = np.linspace(ax.get_xlim()[0], ax.get_xlim()[1], 100)
# overplot KDE
pl.plot(vals, kde(vals), 'b')
pl.xlabel('late mean')
pl.ylabel('PDF')
# show the plot
pl.show()
# save the plot
pl.savefig('hist.png', dpi=200)
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Nowadays I believe [`arviz.plot_kde()`](https://python.arviz.org/en/stable/examples/plot_kde.html) is the way forward. Quoting the example directly from the docs page:
```
import matplotlib.pyplot as plt
import numpy as np
import arviz as az
az.style.use("arviz-doc")
data = az.load_arviz_data("centered_eight")
# Combine posterior draws for from xarray of (4,500) to ndarray (2000,)
y_hat = np.concatenate(data.posterior_predictive["obs"].values)
az.plot_kde(
y_hat,
label="Estimated Effect\n of SAT Prep",
rug=True,
plot_kwargs={"linewidth": 2},
rug_kwargs={"alpha": 0.05},
)
plt.show()
```
Upvotes: 0
|
2018/03/16
| 1,202 | 4,876 |
<issue_start>username_0: I am trying to get suggestions from the community in order to follow best practices.
Please bear with me, with the following example:
Suppose that you work with **half open intervals**, that is, *something that you know when it starts.*
For example
* There can be `HalfOpenInterval` restricted to a day. Example: you say "from 1:00 pm afterwards" (until the end of the day). Let's call it **ClockInterval**
* There can be `HalfOpenInterval` restricted to the existence of universe. Example: you say "from 9 of july of 1810 we declare the indepedency" (until the end of the cosmos.. hypothetically). Let's call it **Period**
For both entity types you work with a collection of them, so you usually have `slices` of **clocks** and **periods** in your code.
So now comes the problem: *you must find the enclosing interval for a given time* (`func FindEnclosingHalfOpenInterval`) for *both* clocks and periods, so you start writing the code...
And well, I run into this matter: how should I organize the code so that I write the common func (`func FindEnclosingHalfOpenInterval`) only once?
So I end up with this code: <https://play.golang.org/p/Cy7fFaFzYJR>
But I keep wondering if there is a better way to define common behaviour for collections of slices.
Please note that I need to do an "element by element" **conversion** for each type of slice (and I have a type of slice for each concrete HalfOpenInterval type I define). So I wonder: is there any way that allows me to introduce new types of `HalfOpenInterval` without having to make adjustments, and that "automatically" gets the ability to use `func FindEnclosingHalfOpenInterval`? Perhaps my rich-OO-Java-based mindset is not the correct way to face problems in the simple, straight-ahead Go world. I'm all ears for any suggestion.
Further reading:
* There is a good reading [here](https://go101.org/article/interface.html). Search for "Values of []T can't be directly converted to []I, even if type T implements interface type I."
* There is also [this article](https://github.com/golang/go/wiki/InterfaceSlice).<issue_comment>username_1: I think maybe this is overcomplicated. Define FindEnclosingHalfOpenInterval as a standalone function, and then for functions which need to work with HalfOpenInterval structs, make them accept an interface that has an EnclosingHalfOpenInterval method which usually just calls FindEnclosingHalfOpenInterval. That way you can do any kind of edge case data encapsulation without changing the signature.
And that's really all there is to it. FindEnclosingHalfOpenInterval is already shared because it's not explicitly part of any struct.
Upvotes: 1 <issue_comment>username_2: The key Problem here is you need to convert a slice from one type to another type.
Converting Slices
-----------------
The correct way to do this is to create a new slice and loop over it converting every single item. You can do this fast(er) if you create the array beforehand:
```
func ToIntervalsFromClockIntervals(clockIntervals []ClockInterval) HalfOpenIntervals {
intervals := make(HalfOpenIntervals, 0, len(clockIntervals))
for _, clockInterval := range clockIntervals {
intervals = append(intervals, clockInterval)
}
return intervals
}
```
Composition
-----------
Apart from that composition is another way you could solve the problem that you want to write the `GetEnclosingInterval` function only once. I'm not saying it is better: it has other advantages and disadvantages. What fits better depends on how else you use the slices apart from what you posted here.
Here is my refactored (and fixed) suggestion: <https://play.golang.org/p/Ko43hJUMpyT> (TehSṕhinX, you forgot to give the **mutable method** `baseIntervals.add` a `pointer receiver` instead of a `value receiver` (or whatever the name for `non-pointer receivers` is))
The `HalfOpenIntervals` type does not exist any more. Instead you have two different types, `ClockIntervals` and `PeriodIntervals`, and both have the sorting and `GetEnclosingInterval` function implemented via a common base struct.
For convenience I added an `Add` function and a `New...` function for each of them. This is where the disadvantages come in: since `ClockIntervals` (and `PeriodIntervals`) is not a slice any more but a struct, you will need convenience functions to work with the inner slice from the outside.
-- edit --
Over generalising
-----------------
Coming from a oo-oriented background myself I know the drive to avoid duplicated code at all costs.
Writing go code for more than 2 years now on a full time basis I learned that this is not always the best approach in go. Duplicating code in go for different types is a common thing I do these days. Not sure if all would agree with this statement, though.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 712 | 2,359 |
<issue_start>username_0: I'm trying to fit a multiple linear regression model with 10 predictor variables, but when I include all the variables the model doesn't work.
```
regresion<-lm(DO~NO3+NO2+SO4+NH4+Mg+Ca+PO4+pH+CI+CE, data=s1)
```
the output is:
```
Call:
lm(formula = DO ~ NO3 + NO2 + SO4 + NH4 + Mg + Ca + PO4 + pH +
CI + CE, data = s1)
```
Residuals:
```
ALL 7 residuals are 0: no residual degrees of freedom!
```
Coefficients: (4 not defined because of singularities)
```
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.20979 NA NA NA
NO3 0.27132 NA NA NA
NO2 -128.83424 NA NA NA
SO4 0.04334 NA NA NA
NH4 0.12088 NA NA NA
Mg 3.59376 NA NA NA
Ca 5.37956 NA NA NA
PO4 NA NA NA NA
pH NA NA NA NA
CI NA NA NA NA
CE NA NA NA NA
```
Residual standard error:
```
NaN on 0 degrees of freedom
```
Multiple R-squared:
```
1, Adjusted R-squared: NaN
```
F-statistic:
```
NaN on 6 and 0 DF, p-value: NA
```
The model seems to tolerate up to 5 variables; with more than this, is it an error?<issue_comment>username_1: You are not getting an error. In R errors are called "Errors" while situations that might or might not be a serious problem are called "warnings". You are being warned that it only takes the data from 6 of your variables to construct a "perfect" model, one that has 100% accuracy for predicting your outcome. How this might come about is a matter of speculation at the moment. To clear up that question you should post the output of `str(s1)` and `summary(s1)` and probably the top 20 lines of your dataset as it would be viewed in a text editor.
Upvotes: 1 <issue_comment>username_2: >
> *ALL 7 residuals are 0: no residual degrees of freedom!*
>
>
>
This implies you only have 7 observations in the data frame `s1`, but you are providing 10 predictors. You need to at least have more observations than predictors, otherwise all the variance will be explained perfectly, and your "model" won't be statistical (hence the NA's for all p-vals).
Upvotes: 0
|
2018/03/16
| 1,061 | 3,898 |
<issue_start>username_0: Basically the route /users is not working; when I navigate there, it brings up the AppComponent instead of the UsersComponent.
Why does the route /users not load the correct component?
app.module.ts
```
import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { RouterModule, Routes } from '@angular/router';
import { AppComponent } from './app.component';
import { UsersComponent } from './users/users.component';
import { AppRouting } from './app-routing.component'
@NgModule({
declarations: [
AppComponent,
UsersComponent
],
imports: [
BrowserModule,
AppRouting,
FormsModule
// other imports here
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
```
app.component.ts
```
import { Component } from '@angular/core';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent {
title = 'app';
}
```
user.component.ts
```
import { Component, OnInit } from '@angular/core';
@Component({
selector: 'app-users',
templateUrl: './users.component.html',
styleUrls: ['./users.component.css']
})
export class UsersComponent implements OnInit {
constructor() { }
ngOnInit() {
}
}
```
app-routing.component.ts
```
import { UsersComponent } from './users/users.component';
import { NgModule } from '@angular/core';
import {RouterModule,Routes} from '@angular/router';
const routes:Routes =[
{path:'',redirectTo:'/', pathMatch:'full'},
//{path:'appcomponent',component:AppComponent},
{ path: 'users', component: UsersComponent }
];
@NgModule({
imports: [RouterModule.forRoot(routes)],
exports:[RouterModule],
})
export class AppRouting{
}
```
In the html code I just want to show this:
user.component.html
```
users works!
```<issue_comment>username_1: You need to provide `router-outlet` in the `AppComponent` template.
```
...
...
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Ah. I now understand. If you want to display routed components to take up the full display area, you need to define your app.component.html to ONLY have the router outlet. No other elements.
I've done something similar whereby I have a component with a menu. I want to show some components without that menu, such as the login component.
I did it using multiple levels of router outlets.
**App Component**
I defined the root application component with only a router outlet. Then I route into that component when I want a component to appear without the menu.
```
```
**Shell Component**
Then I defined a "shell" component with a second router outlet. Here is where I defined my menu. Anything I want to appear with the menu, I route to this router outlet.
```
```
**Routing Module**
The routes are then configured using the `children` property to define the routes that are routed into the ShellComponent.
That way none of the components need to know if the menu should be on or off. It's all determined by the route configuration.
```
RouterModule.forRoot([
{
path: '',
component: ShellComponent,
children: [
{ path: 'welcome', component: WelcomeComponent },
{ path: 'movies', component: MovieListComponent },
{ path: '', redirectTo: 'welcome', pathMatch: 'full' }
]
},
{ path: 'login', component: LoginComponent }
{ path: '**', component: PageNotFoundComponent }
])
```
[](https://i.stack.imgur.com/8Kqxr.png)
Shows the area of the app router outlet.
[](https://i.stack.imgur.com/i2U0r.png)
Shows the area of the shell router outlet.
Upvotes: 2
|
2018/03/16
| 258 | 944 |
<issue_start>username_0: I have a variable I'm getting as input:
```
@Input()
public isNotReplay: boolean | false;
```
if this variable is true I want to apply a class with a style:
```
.main-chat {
overflow: auto;
}
```
I've gotten this to work with the ternary operator like this:
```
```
however, I only want to apply my main-chat class if it's true and not have to use some-other-class. Is there a better way of doing this?<issue_comment>username_1: You have a few options, but I think what you are looking for is the following:
`ngClass` can take an object. The object's property names are the classes that will be applied to the HTML element conditionally, based on each property's value; for instance, something like `[ngClass]="{'main-chat': isNotReplay}"` adds `main-chat` only when `isNotReplay` is truthy.
For whatever it's worth, your code `@Input() public isNotReplay: boolean | false;` looks like it should probably be `@Input() public isNotReplay: boolean = false;`
Upvotes: 4 [selected_answer]<issue_comment>username_2: ```
```
Upvotes: 1
|
2018/03/16
| 503 | 1,632 |
<issue_start>username_0: ```js
$scope.identityProof = ["karnataka","Tamilnadu","Andhra","Delhi","Kerala","Bihar"]
```
```html
Yes
No
{{message}}
Default Otions:
--Choose a Document--
```
I have two radio buttons, "Yes" and "No". The dropdown values are coming from a service response, so I'm displaying the values as they are.
1) When the page loads, the dropdown should show "--Choose a Document--" at first.
2) If I select "Yes", I don't want to do anything and it should work as usual.
3) In case I select "No", I want to set "Delhi" in the dropdown. It should be visible in the dropdown.<issue_comment>username_1: Your radio button should also be inside tag for angular to recognize the change in value. Then you can use `ng-change` to achieve what you want.
```
Yes
No
```
Here is the [jsfiddle username_2k](https://jsfiddle.net/gilango/r1b0foaj/). Hope this helps!
Upvotes: 2 [selected_answer]<issue_comment>username_2: It's an easy `ng-change` logic what you are looking for:
### View
```
Yes
No
--Choose a Document--
```
### AngularJS application
```
var myApp = angular.module('myApp', []);
myApp.controller('MyCtrl', function($scope) {
$scope.radioValue = false;
$scope.identityProof = ["karnataka", "Tamilnadu", "Andhra", "Delhi", "Kerala", "Bihar"];
$scope.completiondetail = {
IDProofType: null
};
$scope.validateRadio = function() {
if ($scope.radioValue) {
$scope.completiondetail.IDProofType = 'Delhi';
}
}
});
```
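Since the view markup above did not survive formatting, here is a sketch of what it could look like, wired to the names used in this controller (the original fiddle may differ in details):
```html
<div ng-app="myApp" ng-controller="MyCtrl">
  <label><input type="radio" ng-model="radioValue" ng-value="false" ng-change="validateRadio()"> Yes</label>
  <label><input type="radio" ng-model="radioValue" ng-value="true" ng-change="validateRadio()"> No</label>
  <select ng-model="completiondetail.IDProofType"
          ng-options="doc as doc for doc in identityProof">
    <option value="">--Choose a Document--</option>
  </select>
</div>
```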
**> [demo fiddle](http://jsfiddle.net/bvbL1k3z/)**
Upvotes: 0
|
2018/03/16
| 842 | 2,396 |
<issue_start>username_0: My intent is to get a list of all day of the month/year that I choose. Besides that, I'll add extra html into the final result.
I created this php function, but noticed that it's better, because I'll edit it in the html, if I do it on my tag.
The code lists all days and the names of the week days of the month/year that I send (I also convert the language of the result).
I tried but couldn't even produce a nice base to start; I'm having trouble finding a way to replicate the same idea using JavaScript.
my php function
```
function getDays($month = 0, $year = 0){
$list = array();
$month = $month == 0 ? (int)date('m') : $month;
$year = $year == 0 ? (int)date('Y') : $year;
$weekName = ['Mon' => 'Seg', 'Tue' => 'Ter', 'Wed' => 'Qua', 'Thu' => 'Qui', 'Fri' => 'Sex', 'Sat' => 'Sáb', 'Sun' => 'Dom'];
for($d=1; $d<=31; $d++)
{
$time=mktime(12, 0, 0, $month, $d, $year);
if (date('m', $time)==$month) {
$list[]= date('d', $time) . ' | ' . $weekName[date('D', $time)].'.';
}
}
echo json_encode($list);
}
```<issue_comment>username_1: With a little twist, the following get the job done...
```
function getDays(year, month)
{
var dias = ['Dom', 'Seg', 'Ter', 'Qua', 'Qui', 'Sex', 'Sab'];
var dSta = new Date(year,month,1);
var dEnd = new Date(year,month+1,0);
var ret = [];
for(var i = dSta.getDate(); i <= dEnd.getDate(); i++)
{
ret.push(i + ' - ' + dias[new Date(year, month, i).getDay()]);
}
return ret;
}
```
After that, just call
```
console.log(getDays(2018,2));
```
Upvotes: 0 <issue_comment>username_2: Seeing you did come up with an answer, here is an optimised version since there is no need to create an array each time and no need for a new date inside the loop. Also months start at 0:
```js
var dias = ['Sun-Dom', 'Mon-Seg', 'Tue-Ter', 'Wed-Qua', 'Thu-Qui', 'Fri-Sex', 'Sat-Sab'];
function getDays(year, month) {
month--; // JS months start at 0
var dSta = new Date(year, month, 1);
var dEnd = new Date(year, month + 1, 0);
var ret = [];
var idx = dSta.getDay()-1; // 0th day
for (var i = dSta.getDate(), end = dEnd.getDate(); i <= end; i++) {
ret.push(i + ' - ' + dias[(i+idx)%7]);
}
return ret;
}
console.log(getDays(2018,2)); // 2 = Feb which is month 1 in JS
console.log(getDays(2020,2));
```
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,124 | 2,654 |
<issue_start>username_0: We have a txt file which contains 3 columns and 100000 rows. We would like to find the row of specific number for example: `1.232422567` which is in first column but no idea about the number of its row.
for instance look at following example:
```
...
5.98735973963 4.3453 1.09877345
6.21376876 5.78789 2.11255
1.232422567 0.009044 9.886778893
0.1213445 0.938763 8.9978444
...
```
We want to call `1.232422567` and see its complete row
```
1.232422567 0.009044 9.886778893
```
to use in calculations for example in `h=a+b` where `h= 0.009044 + 9.886778893`.
Please note that: We do not know in what row the mentioned number is. We want to search the file, find that number and its complete row, and use the components of the found row in the calculation.<issue_comment>username_1: This is one way. Output is a `numpy` array.
```
import pandas as pd
from io import StringIO
mystr = """5.98735973963 4.3453 1.09877345
6.21376876 5.78789 2.11255
1.232422567 0.009044 9.886778893
0.1213445 0.938763 8.9978444"""
# Replace below with pd.read_table('my_file.txt', header=None, sep='\s+')
df = pd.read_csv(StringIO(mystr), delim_whitespace=True, header=None)
res = df[df[0] == 1.232422567].values[0]
# array([ 1.23242257e+00, 9.04400000e-03, 9.88677889e+00])
```
Then to apply `h = a + b`, you can use `numpy` conveniently:
```
h = np.sum(res[1:])
```
Or if you want a list:
```
res = df[df[0] == 1.232422567].values[0].tolist()
# [1.232422567, 0.009044, 9.886778892999999]
```
Conversion to floats is handled by `pandas`.
Upvotes: 3 [selected_answer]<issue_comment>username_2: This is a way in pure python
```
file_path = "numbers.txt"
number = 1.232422567
number_str = str(number)
with open(file_path) as f:
for i, line in enumerate(f):
numbers = line.split()
if number_str in numbers:
print("{} is in line {}: {}".format(number_str, i, line))
break
```
Upvotes: 0 <issue_comment>username_3: This you can achieve through `pandas` library in python. Here is the sample code for you.
```
import pandas as pd
data = pd.read_csv('filename.txt', sep=" ", header=None)
data.columns = ["a", "b", "c"]
foundColmn = data[data['a']==1.232422567][['b','c']]
print(foundColmn)
```
Upvotes: 0 <issue_comment>username_4: I see you are talking about `any function` instead of summation. You should call the components of the output array.
```
a=res[1] #the first row is 0
b=res[2]
```
Now you can use them in any function
`h= a+b`
***This solution is based on the chosen answer of this question.***
Upvotes: 0
|
2018/03/16
| 848 | 2,036 |
<issue_start>username_0: How to order numbers stored as string in db by OrderBy function
```
mytable.OrderBy(s => s.Year);
```
for ex =>
* `string Year= 2013/2014`
* `string Year= 2017/2018`
|
2018/03/16
| 1,202 | 3,745 |
<issue_start>username_0: I will try to explain this.
Say I have multiple items in a list:
```
var products = new List<JObject>();
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 100, variant: '0' }"));
products.Add(JObject.Parse("{ colour: 'Blacks', importance: 200, variant: '0' }"));
products.Add(JObject.Parse("{ colour: 'Purples', importance: 150, variant: '0' }"));
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 100, variant: '1' }"));
products.Add(JObject.Parse("{ colour: 'Yellows', importance: 200, variant: '1' }"));
products.Add(JObject.Parse("{ colour: 'Oranges', importance: 500, variant: '1' }"));
products.Add(JObject.Parse("{ colour: 'Blues', importance: 100, variant: '2' }"));
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 400, variant: '2' }"));
products.Add(JObject.Parse("{ colour: 'Greys', importance: 120, variant: '2' }"));
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 100, variant: '2' }"));
```
Now I would like to get the last product of each variant in the list.
In this case,it would be:
```
products.Add(JObject.Parse("{ colour: 'Purples', importance: 150, variant: '0' }"));
products.Add(JObject.Parse("{ colour: 'Oranges', importance: 500, variant: '1' }"));
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 100, variant: '2' }"));
```
Can this be done with linq?
---
Without LINQ, it can be done like this:
```
var variant = products[0].SelectToken("variant").ToString();
var productList = new List<JObject>();
var count = products.Count - 1;
for (var i = 0; i < count; i++)
{
var product = products[i];
var nextProduct = products[i + 1];
var productVariant = nextProduct.SelectToken("variant").ToString();
if (productVariant.Equals(variant)) continue;
productList.Add(product);
variant = productVariant;
}
productList.Add(products.Last());
```<issue_comment>username_1: So, because I am using `List();` I have to create an AnonymousType and then do my grouping.
I did this:
```
var lastProducts = products
.Select(m => new
{
Colour = m.SelectToken("colour").ToString(),
Variant = m.SelectToken("variant").ToString(),
Importance = Convert.ToInt32(m.SelectToken("importance").ToString())
})
.GroupBy(m => m.Variant)
.Select(m => m.Last());
```
Which I believe is the same as my method above.
Upvotes: 0 <issue_comment>username_2: You can group your `products` by `variant` and then select last item in each group:
```
var products = new List<JObject>();
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 100, variant: '0' }"));
products.Add(JObject.Parse("{ colour: 'Blacks', importance: 200, variant: '0' }"));
products.Add(JObject.Parse("{ colour: 'Purples', importance: 150, variant: '0' }"));
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 100, variant: '1' }"));
products.Add(JObject.Parse("{ colour: 'Yellows', importance: 200, variant: '1' }"));
products.Add(JObject.Parse("{ colour: 'Oranges', importance: 500, variant: '1' }"));
products.Add(JObject.Parse("{ colour: 'Blues', importance: 100, variant: '2' }"));
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 400, variant: '2' }"));
products.Add(JObject.Parse("{ colour: 'Greys', importance: 120, variant: '2' }"));
products.Add(JObject.Parse("{ colour: 'Pinks', importance: 100, variant: '2' }"));
var query = from p in products
group p by p.Property("variant").Value into varGroup
select varGroup.Last();
```
Upvotes: 0 <issue_comment>username_3: You can `GroupBy` the "variant" property and keep the last of each group:
```
var ans = products.GroupBy(p => p.Property("variant").Value).Select(pg => pg.Last()).ToList();
```
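Not part of the original answer, but a quick way to check the result (it should print the three rows the question asks for):
```
foreach (var p in ans)
    Console.WriteLine(p["colour"] + " | " + p["variant"]);
// Purples | 0
// Oranges | 1
// Pinks | 2
```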
Upvotes: 2 [selected_answer]
|
2018/03/16
| 538 | 2,034 |
<issue_start>username_0: I know that PHP is server-side and JavaScript is client-side, but I'm wondering if it's possible to do the following.
I have a javascript function that calls an Ajax call to a PHP file , Here is the code:
```
function reloadData(fileName){
$.ajax({
//I want to insert the fileName parameter before .php
url: '<?php echo $filePath ."/fileName.php";?>',
type: 'GET',
async: true,
success: function( data ){
//Do Some Thing With Returned Data.
}
});
}
```
I want to pass PHP file name to the reloadData function
```
reloadData('get_data');
```
So that the url within ajax will be:
```
url: 'get_data.php',
```
Is it possible?<issue_comment>username_1: Set your filepath as a global variable in your js so that you can make use of it throughout your code. Then make use of this variable like this in your function:
```js
var filePath = "php echo $filePath ?/";
function reloadData(fileName){
$.ajax({
//I want to insert the fileName parameter before .php
url: filePath + fileName.trim() + '.php',
type: 'GET',
async: true,
success: function( data ){
//Do Some Thing With Returned Data.
}
});
}
```
Upvotes: 1 <issue_comment>username_2: Just change
```
url: 'fileName.php',
```
To:
```
url: `${fileName}.php`,
```
Upvotes: 0 <issue_comment>username_3: Do like this, where you use the *filename* variable and split the url string like this `' + fileName + '`
```
function reloadData(fileName){
$.ajax({
//I want to insert the fileName parameter before .php
url: '<?php echo $filePath ."/' + fileName + '.php";?>',
type: 'GET',
async: true,
success: function( data ){
//Do Some Thing With Returned Data.
}
});
}
```
---
As a note though, when PHP echo this server side you might need to do something like this
```
'<?php echo $filePath ."/"?>' + filename + '.php'
```
Upvotes: 2
|
2018/03/16
| 442 | 1,634 |
<issue_start>username_0: I'm a student and have a question. I'm not getting the correct output in our textbook.
```
first = 'I'
second = 'love'
third = 'Python'
sentence = first + '' + second + '' + third + '.'
```
Output:
```
I love Python.
```
When I run it, nothing happens. Can someone explain why? Thanks in advance!
|
2018/03/16
| 685 | 2,219 |
<issue_start>username_0: **What I want to do:**
Render a list of items as a component inside of another component
```
...
...
```
**What happens:**
I get this error:
```
Syntax Error: Adjacent JSX elements must be wrapped in an enclosing tag
```
What my code looks like:
```
const RenderBuildings = (props) => {
return (
props.buildings.allComps.map((building, index) => {
let id = index + 1
{id}
{building.name}
{building.address}
{building.status}
)}
)
}
```
**What I suspect is happening:**
Seems pretty clear that the whole thing should somehow be wrapped in a div, but I'm not sure how to make that work with a map function. How do you wrap the whole response in that?<issue_comment>username_1: Try the below; `.map` requires a `return` statement when its body is in curly braces.
```js
const RenderBuildings = (props) => {
return (
{props.buildings.allComps.map((building, index) => {
let id = index + 1;
return (
{id}
{building.name}
{building.address}
{building.status}
);
});}
);
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: I am assuming you are building a table
```
import React, { Fragment } from 'react'
const RenderBuildings = (props) => {
return (
{
props.buildings.allComps.map((building, index) => {
let id = index + 1
return (
{id}
{building.name}
{building.address}
{building.status}
)
})
}
)
}
```
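The JSX markup was stripped from the snippets above, so here is a sketch of the shape both answers describe: each mapped item returns one enclosing element (a `<tr>` here, assuming a table as this answer does), which is what avoids the "Adjacent JSX elements" error:
```js
const RenderBuildings = (props) => (
  <tbody>
    {props.buildings.allComps.map((building, index) => {
      const id = index + 1;
      return (
        <tr key={id}>
          <td>{id}</td>
          <td>{building.name}</td>
          <td>{building.address}</td>
          <td>{building.status}</td>
        </tr>
      );
    })}
  </tbody>
);
```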
Upvotes: 2 <issue_comment>username_3: I believe this is what you're trying to achieve:
```js
const RenderBuildings = (props) => {
return (
props.buildings.allComps.map((building, index) => {
let id = index + 1
{id}
{building.name}
{building.address}
{building.status}
)}
)
```
Upvotes: 0 <issue_comment>username_4: This is correct code for you..
```
const RenderBuildings = (props) => {
return (
{props.buildings.allComps.map((building, index) => {
let id = index + 1;
return (
{id}
{building.name}
{building.address}
{building.status}
);
});}
);
}
```
As you mention, you tried this, but I think you forgot to add a return in the map function. I made a similar mistake when I started coding in ES6.
Upvotes: 1
|
2018/03/16
| 415 | 1,459 |
<issue_start>username_0: I have an issue where I'm unable to start any docker machines whilst connected to the WiFi network in my local Starbucks, receiving the following output;
>
> $ docker-machine start
>
> Starting "default"...
>
> (default) Check network to re-create if needed...
>
> **Error setting up host only network on machine start: host-only cidr conflicts with the network address of a host interface**
>
>
>
This does not happen when connected to my home network, or whilst using my mobile hotspot. Is there any workaround for this?<issue_comment>username_1: There is a collision between the docker machine and the network that is being set up for wifi. Try creating a new docker machine with other ip:
```
docker-machine create --driver virtualbox --virtualbox-hostonly-cidr "192.168.123.99/24" mymachine
```
Use it:
```
docker-machine env mymachine
```
This is a new machine in addition to the 'default' one. You might see that it won't have your previous work (images, etc).
Upvotes: 4 [selected_answer]<issue_comment>username_2: Your docker-machine start failed.
So either create a new VM or repair the existing one.
1. For the new VM option, use --virtualbox-hostonly-cidr "10.10.10.1/24"
(replace 10.10.10.1/24 with whatever subnet you want the VM to use.)
2. For an already created VM (whose start failed), bring up the VirtualBox UI and change it in the network preferences.
Use docker-machine ls to list existing VMs.
Upvotes: 1
|
2018/03/16
| 412 | 1,505 |
<issue_start>username_0: I'm doing a small script with jQuery. I'd like that, by clicking on the checkbox, I take the first textarea and pass its content to the second textarea without the numbers, so that what is in textarea 1 appears in textarea 2.
```js
$(document).ready(function() {
function validate() {
if (document.getElementById('cheker').checked) {
results = document.getElementById("all").value;
final = results.string.replace(/\d+/g, '');
document.getElementById("filtrado").value(final);
} else {
}
}
});
```
```html
```
|
2018/03/16
| 384 | 1,325 |
<issue_start>username_0: I am loading my android `WebView` using
```
mywebview.loadDataWithBaseURL("file:///android_asset/", new String(result), "text/html", "UTF-8", null);
```
The HTML rendered successfully in the `WebView`. Now I want to open another HTML file from that HTML using a button; for that I am using the below code in the HTML.
```
Start
```
but it does not work. my asset html file directory is - `assets\data\1\htmlfile.html`
|
2018/03/16
| 811 | 2,612 |
<issue_start>username_0: I was trying to use regexp\_substr to get each correct column name from a column list string.
The query is like:
```
select regexp_substr(v_keep, '(^|[(,) ]*)' || r.column_name || '($|[(,) ]+)', 1, 1, 'i')
from dual;
```
But the result is not correct.
The v\_keep can be any column name list like abc, abc\_abc, abc1 or (abc, abc\_abc, abc1).
The r.column\_name can be like abc or ab.
**- If the input v\_keep is (abc, abc\_abc, abc1) and the r.column\_name is
ab, it will return null.**
**- If the input v\_keep is (abc, abc\_abc, abc1) and the r.column\_name is
abc, it will return the column name just abc.**
Can anyone help me to fix it by just modify the pattern inside the regexp\_substr ?<issue_comment>username_1: Why not just use a `case` and `like`?
```
select (case when replace(replace(v_keep, '(', ','), ')', ',') like '%,' || r.column_name || ',%'
then r.column_name
end)
```
I don't recommend storing lists in a comma-delimited string, but if you are, this is one way to identify individual elements of the list.
Upvotes: 2 <issue_comment>username_2: It's pretty simple, you just need to add a subexpression so you can pull out the part of the string you want. (A subexpression is a section of the regexp in parentheses.) In this case the last argument is 2, because you want the part of the match that corresponds to the second group of parentheses.
```
regexp_substr(v_keep, '(^|[(,) ]*)(' || r.column_name || ')($|[(,) ]+)', 1, 1, 'i', 2)
```
Gordon's solution will have better performance, though.
Edit: working example -
```
with testdata as (select '(abc, abc_abc, abc1)' as v_keep, 'abc' as column_name from dual)
select regexp_substr(v_keep, '(^|[(,) ]*)(' || r.column_name || ')($|[(,) ]+)', 1, 1, 'i', 2)
from testdata r;
```
Upvotes: 0 <issue_comment>username_3: Since this is PL/SQL code to see if a value is in a string, try this which avoids the overhead of hitting the database, and calling REGEXP. Just keep it straight SQL. I hate the nested REPLACE calls but I was trying to avoid using REGEXP\_REPLACE although it could be done in one call if you did use it.
```
set serveroutput on;
set feedback off;
declare
v_keep varchar2(50) := '(abc, abc_abc, abc1)';
compare varchar2(10) := 'abc_';
begin
if instr(',' || replace(replace(replace(v_keep, ' '), '('), ')') || ',', ',' || compare || ',') > 0 then
dbms_output.put_line('Column ''' || compare ||''' IS in the keep list');
else
dbms_output.put_line('Column ''' || compare ||''' IS NOT in the keep list');
end if;
end;
```
Upvotes: 0
|
2018/03/16
| 660 | 2,229 |
<issue_start>username_0: I am a beginner to coding. Is it possible to join two queries into one? I have googled but I didn't get what exactly I am looking for. My first query is:
```
SELECT filename
, course.cname
, DATE_FORMAT(pd_course_file.creationDate,"%D %M %Y") as date
, type
FROM pd_course_file
JOIN course
ON pd_course_file.course_id = course.id
```
Now, whatever the result of the type column is (for example, if the type I got is pdf), my next query will be
```
Select icons from fileType where type = 'pdf'
```
How can I join both queries?<issue_comment>username_1: If the type column holds the file type, you could use a join:
```
SELECT filename
, course.cname
,DATE_FORMAT(p.creationDate,"%D %M %Y") as date
, type
, ft.icon
FROM pd_course_file p
INNER JOIN course ON p.course_id = course.id
INNER JOIN fileType ft ON ft.type = p.type and ft.type = 'pdf'
```
Otherwise, if it is always a pdf but you do not have a relation, you could use a cross join:
```
SELECT filename
, course.cname
,DATE_FORMAT(p.creationDate,"%D %M %Y") as date
, type
, ft.icon
FROM pd_course_file p
INNER JOIN course ON p.course_id = course.id
CROSS JOIN fileType ft
where ft.type = 'pdf'
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: You could add a subquery as part of the select. Like so...
```
SELECT filename
, course.cname
, DATE_FORMAT(pd_course_file.creationDate,"%D %M %Y") as date
, type
, (select icons from filetype where type = 'PDF') as icons
FROM pd_course_file
JOIN course
ON pd_course_file.course_id = course.id
```
But that will get you a list of all the icons from the fileType table and put them in the icons column. If there's something else to key off of in the fileType table, you'd want to add that to the selecting subquery. For example:
```
SELECT filename
, course.cname
, DATE_FORMAT(pd_course_file.creationDate,"%D %M %Y") as date
, type
, (select icon from filetype ft where ft.type = 'PDF' and ft.file_course_id = pd_course_file.course_id) as icon
FROM pd_course_file
JOIN course
ON pd_course_file.course_id = course.id
```
Upvotes: -1
|
2018/03/16
| 914 | 2,688 |
<issue_start>username_0: I wrote a program which displays the points between points which the user types in.
For example: User types points `A(1|1)` and `B(10|10)`, so the program returns all the points which make a line between `A` and `B` (therefore `(2|2) (3|3)(4|4)(5|5)(6|6)` etc..(imagine a 2 dimensional array).
For calculating the points in between I've used a recursive function which looks like this:
```
void line(struct point A, struct point B) {
struct point M;
if ((A.x - B.x >= -1 && A.x - B.x <= 1) && (A.y - B.y >= -1 && A.y - B.y <= 1)) {
printf("P(%i|%i) P(%i|%i)\n", A.x, A.y, B.x, B.y);
}
else {
M.x = (A.x + B.x) / 2;
M.y = (A.y + B.y) / 2;
line(A, M);
line(M, B);
}
}
```
Now I have to visualize this, meaning create a 2 dimensional array which shows point A and B, and the points in between can be any character (\* or 0 for example). How do I do that, since I don't know how to save the values of the recursive function? I tried a lot of thinking, but did not find a solution.<issue_comment>username_1: Declare a static two dimensional array inside your recursive function. And finally return that array.
This is just an example but not the final solution. Hence you have to apply this example in your codes:-
```
int** line(struct point A, struct point B)
{
static int** array2D = NULL; /* static: allocated once, kept across recursive calls */
static int i = 0;
if (array2D == NULL) {
    /* a static initializer must be a constant, so allocate on the first call;
       somesize = the maximum number of points you expect */
    array2D = malloc(somesize * sizeof(int *));
    for (int r = 0; r < somesize; r++)
        array2D[r] = malloc(2 * sizeof(int)); /* one (x, y) pair per row */
}
.
.
M.x = (A.x + B.x) / 2;
M.y = (A.y + B.y) / 2;
array2D[i][0] = M.x; /* store the midpoint's x ... */
array2D[i][1] = M.y; /* ... and y, then move to the next row */
i++;
.
.
return array2D;
```
Upvotes: 1 <issue_comment>username_2: You can pass a pointer to a structure and store the points into an array in that structure:
```
struct curve {
size_t pos, size;
struct point *points;
};
void line(struct curve *cp, struct point A, struct point B) {
if ((A.x - B.x >= -1 && A.x - B.x <= 1)
&& (A.y - B.y >= -1 && A.y - B.y <= 1)) {
if (cp->pos >= cp->size) {
int new_size = (cp->size + cp->size / 2 + 6);
struct point *np = realloc(cp->points, sizeof(*np) * new_size);
if (np == NULL) {
fprintf(stderr, "out of memory\n");
exit(1);
}
cp->points = np;
cp->size = new_size;
}
if (cp->pos == 0) {
cp->points[cp->pos++] = A;
}
cp->points[cp->pos++] = B;
} else {
struct point M;
M.x = (A.x + B.x) / 2;
M.y = (A.y + B.y) / 2;
line(cp, A, M);
line(cp, M, B);
}
}
```
Initial call:
```
struct curve c = { 0, 0, NULL };
line(&c, A, B);
```
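From there, visualising is just a loop over the collected points. A minimal sketch (the grid size and the `*` character are arbitrary choices that fit the 10x10 example from the question; it needs `<stdio.h>` and `<string.h>`):
```
/* mark every collected point on a small character grid and print it */
char grid[12][12];
memset(grid, '.', sizeof grid);
for (size_t k = 0; k < c.pos; k++)
    grid[c.points[k].y][c.points[k].x] = '*';
for (int y = 0; y < 12; y++)
    printf("%.12s\n", grid[y]);
```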
Upvotes: 0
|
2018/03/16
| 828 | 3,160 |
<issue_start>username_0: I want my `QListWidget` to update with the new item as it is added, but it only updates with all of the items once the function has ended. I have tried using `update()` and `repaint()`, but neither work. I actually had to use `repaint()` on the Widget itself just to get it to show up before the end, but none of the items do. Here is a brief view of the first item to add:
```
def runPW10(self):
self.PWList.setVisible(True)
self.PWList.setEnabled(True)
# This repaint() has to be here for even the List to show up
self.PWList.repaint()
....
self.PWList.addItem('Executing Change Password Tool')
# This does not help
self.PWList.repaint()
....
```
There is more to the function, but it is long and this should include what it needed. Please let me know if more is required.
What am I doing wrong that makes this List not update as the item is added?<issue_comment>username_1: Add [`QApplication.processEvents()`](http://pyqt.sourceforge.net/Docs/PyQt4/qcoreapplication.html#processEvents).
>
> `QCoreApplication.processEvents (QEventLoop.ProcessEventsFlags flags = QEventLoop.AllEvents)`
>
>
> Processes all pending events for the calling thread according to the specified flags until there are no more events to process.
>
>
> You can call this function occasionally when your program is busy performing a long operation (e.g. copying a file).
>
>
>
Your widget will originally be shown but unresponsive. To make the application responsive, add `processEvents()` calls at suitable points, e.g. whenever you add an item.
Do keep in mind that this can affect performance *a lot*. This lets the whole application loop execute including any queued events. ***Don't*** add this to performance sensitive loops.
Also consider that this allows your user to interact with the application, so make sure that any interactions that can happen either are not allowed, such as `somebutton.enabled(False)`, or are handled gracefully, like a `Cancel` button to stop a long task.
See the [original C++ docs](http://doc.qt.io/archives/qt-4.8/qcoreapplication.html#processEvents) for further information, since `pyqt` is a direct port.
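In the context of the posted `runPW10` method, that boils down to something like this sketch (PyQt5-style import shown; with PyQt4 it would come from `PyQt4.QtGui`):
```
from PyQt5.QtWidgets import QApplication

def runPW10(self):
    self.PWList.setVisible(True)
    self.PWList.setEnabled(True)
    # ...
    self.PWList.addItem('Executing Change Password Tool')
    QApplication.processEvents()  # let the event loop repaint the list now
    # ... continue with the long-running work
```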
Upvotes: 3 [selected_answer]<issue_comment>username_2: To complete username_1's answer on this point:
>
> Also consider that this allows your user to interact with the application, so make sure that any interactions that can happen either are not allowed, such as `somebutton.enabled(False)`, or are handled gracefully, like a `Cancel` button to stop a long task.
>
>
>
You may want to use the [`QEventLoop.ExcludeUserInputEvents`](http://pyqt.sourceforge.net/Docs/PyQt4/qeventloop.html#ProcessEventsFlag-enum) flag this way: `QCoreApplication.processEvents(QEventLoop.ExcludeUserInputEvents)` to refresh the GUI while preventing the user to activate any widgets.
>
> `QEventLoop.ExcludeUserInputEvents`
>
>
> `0x01`
>
>
> Do not process user input events, such as `ButtonPress` and `KeyPress`. Note that the events are not discarded; they will be delivered the next time `processEvents()` is called without the `ExcludeUserInputEvents` flag.
>
>
>
Upvotes: 2
|
2018/03/16
| 548 | 1,913 |
<issue_start>username_0: I'm building a service backend that is being sent a "delivery report" after successfully sending a SMS to a user.
The report itself is XML **POST**ed to our "endpoint" with the content-type *application/xml*.
I'm using Postman to make sure that everything is working correctly. I've made a test using regular JSON and can return the data without issues, however, no matter what I try with XML I basically get no indication that anything is being sent to the server.
*(Test with JSON)*
[](https://i.stack.imgur.com/hiMPG.png)
*(Test with XML)*
[](https://i.stack.imgur.com/Hz8BX.png)
Here's my simple PHP script:
```
<?php
header('Content-Type: application/xml; charset=utf-8');
print_r(json_decode(file_get_contents("php://input"), true));
print_r($HTTP_RAW_POST_DATA);
?>
```
I feel like I've tried everything. Been looking at past issues posted to SO and other places, it simply won't work for me. I'm hoping for some answers that at least points me in the right direction here.
Cheers.<issue_comment>username_1: You're trying to json\_decode XML data. You should use something like SimpleXML.
Instead of...
```
print_r(json_decode(file_get_contents("php://input"), true));
```
You should use ...
```
$xml = new SimpleXMLElement(file_get_contents("php://input"));
echo $xml->asXML();
```
You should be able to get the information by (for example)...
```
echo (string)$xml->id;
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: json\_decode can't read XML; it looks like you're trying to parse XML with json\_decode. If you want to output the received XML, use echo (or, if it's for debugging purposes, use var\_dump, e.g. `var_dump(file_get_contents("php://input"));`). If you want to parse the XML, use DOMDocument.
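For example, a minimal DOMDocument version of the endpoint could look like this sketch (the `reference` element name is only illustrative; use whatever tags the delivery report actually contains):
```
<?php
header('Content-Type: application/xml; charset=utf-8');

$dom = new DOMDocument();
$dom->loadXML(file_get_contents("php://input"));

// read a single value from the report, e.g. the first <reference> element
$reference = $dom->getElementsByTagName('reference')->item(0)->nodeValue;

// echo the whole document back out
echo $dom->saveXML();
?>
```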
Upvotes: 0
|
2018/03/16
| 757 | 2,777 |
<issue_start>username_0: So, I love the idea of VueX modules and separating my data out, as it makes it far easier to reason when there are large sets of data... but I hate having to refer to them as nested objects in the store's state.
This is how the module currently works:
contactData.js:
```
export const contactData = {
state: {
contactInfo: null,
hasExpiredContacts: false
},
mutations: {
updateContactInfo(state, data) {
state.contactInfo = data;
},
updateExpired(state, data) {
state.hasExpiredContacts = data;
}
}
}
```
store.js:
```
import Vue from 'vue';
import Vuex from 'vuex';
import { contactData } from './contactData.js';
Vue.use(Vuex);
export default new Vuex.Store({
modules: { contactData },
state: {
otherData: null
}
});
```
Which would return as:
```
store: {
state: {
contactData: {
contactInfo: null,
hasExpiredContacts: false
},
otherData: null
}
}
```
Is there any way to, instead, display it as the following, while still using a module?
```
store: {
state: {
contactInfo: null,
hasExpiredContacts: false,
otherData: null
}
}
```<issue_comment>username_1: Since there's [no deep merge possible still in ES6/ES7](https://stackoverflow.com/questions/27936772/how-to-deep-merge-instead-of-shallow-merge), you can't do it like the way you want.
---
You need to make your own function or find a suitable library to deep merge the objects to make it work.
Here's a possible solution using [lodash](https://lodash.com/docs/4.17.2#merge):
```
modules: { contactData: _.merge(contactData, { state: { otherData: null } }) }
```
Upvotes: 0 <issue_comment>username_2: I'm not sure that flattening out all your state would necessarily be a great idea if the project grew larger, as you'd have to be wary of property name clashes.
However, ignoring that, you could perhaps create flat getters for all module state automatically. Since this just provides alternative access all actions and mutations will work in the normal way.
```js
const modules = {
contactData,
user,
search,
...
}
const flatStateGetters = (modules) => {
const result = {}
Object.keys(modules).forEach(moduleName => {
const moduleGetters = Object.keys(modules[moduleName].getters || {});
Object.keys(modules[moduleName].state).forEach(propName => {
if (!moduleGetters.includes(propName)) {
result[propName] = (state) => state[moduleName][propName];
}
})
})
return result;
}
export const store = new Vuex.Store({
modules,
getters: flatStateGetters(modules),
state: {
otherData: null
}
})
```
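With the getters flattened like this, a component can then consume the module state as if it were flat, e.g. (a usage sketch, not part of the original answer):
```js
import { mapGetters } from 'vuex'

export default {
  computed: {
    // exposes this.contactInfo / this.hasExpiredContacts without the contactData prefix
    ...mapGetters(['contactInfo', 'hasExpiredContacts'])
  }
}
```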
Upvotes: 2 [selected_answer]
|
2018/03/16
| 612 | 2,343 |
<issue_start>username_0: I would like to mock `io.vertx.ext.jdbc.JDBCClient` to unit-test the below `verticle` code:
```java
class A {
private final Helper help = new Helper();
public JsonObject checkEmailAvailability(String email, JDBCClient jdbc) throws SignUpException {
JsonObject result = new JsonObject();
jdbc.getConnection(conn -> help.startTx(conn.result(), beginTans -> {
JsonArray emailParams = null;
emailParams = new JsonArray().add(email);
System.out.println(email);
help.queryWithParams(conn.result(), SQL_SELECT_USER_EMAIL, emailParams, res -> {
if (res.getNumRows() >= 1) {
result.put("message", "Email already registered");
}
});
}));
return result;
}
}
```
|
2018/03/16
| 738 | 2,366 |
<issue_start>username_0: This is probably a very basic question but I haven't been able to find the answer so here goes...
Question:
Is there any way to sort the values alphabetically while also removing any duplicate instances?
Here's what I have:
```
data = ['Car | Book | Apple','','Book | Car | Apple | Apple']
df = pd.DataFrame(data,columns=['Labels'])
print(df)
Labels
0 Car | Book | Apple
1
2 Book | Car | Apple | Apple
```
Desired Output:
```
Labels
0 Apple | Book | Car
1
2 Apple | Book | Car
```
Thanks!<issue_comment>username_1: `df['Labels'].str.split('|')` will split the string on `|` and return a list
```
#0 [Car , Book , Apple]
#1 []
#2 [Book , Car , Apple , Apple]
#Name: Labels, dtype: object
```
See that there are extra spaces in the resulting list elements. One way to remove those is by applying `str.strip()` to each element in the list:
```
df['Labels'].str.split('|').apply(lambda x: map(str.strip, x))
#0 [Car, Book, Apple]
#1 []
#2 [Book, Car, Apple, Apple]
#Name: Labels, dtype: object
```
Finally we apply the `set` constructor to remove duplicates, sort the values, and join them back together using `" | "` as a separator:
```
df['Labels'] = df['Labels'].str.split('|').apply(
lambda x: " | ".join(sorted(set(map(str.strip, x))))
)
print(df)
# Labels
#0 Apple | Book | Car
#1
#2 Apple | Book | Car
```
Upvotes: 2 <issue_comment>username_2: `str.join` after `str.split`
```
df=df.replace({' ':''},regex=True)
df.Labels.str.split('|').apply(set).str.join('|')
Out[339]:
0 Apple|Book|Car
1
2 Apple|Book|Car
Name: Labels, dtype: object
```
Base on the comment adding `sorted`
```
df.Labels.str.split('|').apply(lambda x : sorted(set(x),reverse=False)).str.join(' | ')
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: One way is to use `pd.Series.map` with `sorted` & `set` after splitting by `|`:
```
import pandas as pd
data = ['Car | Book | Apple','','Book | Car | Apple | Apple']
df = pd.DataFrame(data,columns=['Labels'])
df['Labels'] = df['Labels'].map(lambda x: ' | '.join(sorted(set(x.split(' | ')))))
# Labels
# 0 Apple | Book | Car
# 1
# 2 Apple | Book | Car
```
Upvotes: 2
|
2018/03/16
| 915 | 3,715 |
<issue_start>username_0: I have a general paradigm from which I'd like to create objects. Currently, I have it written in such a way that the parent class builds the paradigm from a configuration object given to it by the subclass. The configuration object defines all of the class' characteristics, and the class is also able to overload any methods that need to be slightly different from one implementation of the paradigm to the next.
This approach is convenient as it allows me to:
* avoid a long list of parameters for the constructor,
* prevent the subclass from needing to define the attributes in any particular order, and
* prevent the subclass from needing to explicitly create instances that the parent constructor can imply from the configuration.
As much as I liked the cleanliness and flexibility of sending this configuration object to `super()`, I've found a glaring limitation to this approach:
I cannot make a reference to `this` until `super` has completed.
```
class Paradigm {
constructor(config) {
this.name = config.name;
this.relationships = config.relationships;
}
}
class Instance extends Paradigm {
constructor(name) {
super({
name: name,
relationships: {
children: {
foo: new Foo(this) // Error: `this` referenced before `super`
}
}
});
}
}
class Foo {
constructor(parent) {
this.parent = parent;
}
}
```
*NOTE: The configuration object is typically significantly larger than this, which is why I'm using it at all.*
I can imagine there are many ways to approach this, but I'm most interested in what would be the correct way.
* Should I be using functions in place of statements that require a reference to `this` and only execute them when they are needed?
* Should I keep the same approach, but simply replace `super({...})` with `this.build({...})`?
* Should I abandon the idea of the configuration object altogether and instead take a different approach?<issue_comment>username_1: A possible way would be to change your Paradigma to this:
```
class Paradigm {
constructor(config) {
Object.assign(this, config(this));
}
}
```
So that you can do:
```
class Instance extends Paradigm {
constructor(name) {
super((context) => ({
name: name,
relationships: {
children: {
foo: new Foo(context)
}
}
}));
}
}
```
But actually this is a bad way of solving it...
---
Another way could be to do it the other way round:
```
const Paradigma = cls => class Paradigma extends cls {
constructor(...args){
super(config => setTimeout(() => Object.assign(this, config), 1), ...args);
}
};
```
So you can do:
```
const Instance = Paradigma(class {
constructor(build, name){
build({name});
}
});
new Instance("Test");
```
Upvotes: 0 <issue_comment>username_2: The fact that `this` is needed before `super` often suggests that a constructor contains logic that should be moved to a separate method.
In some cases some workarounds are possible, as long as they don't cause problems. E.g., a part of configuration that requires `this` can be instantiated later:
```
constructor(name) {
const config = {
name: name,
relationships: {
children: {};
}
};
super(config);
config.relationships.children.foo = new Foo(this);
}
```
Of course, this presumes that `relationships` is just stored as a reference and isn't processed on construction.
In more severe cases class hierarchy (`Instance` and `Paradigm`) may be required to be converted to ES5 classes, since ES6 classes can extend regular functions but not vice versa.
Upvotes: 2 [selected_answer]
|
2018/03/16
| 1,321 | 5,458 |
<issue_start>username_0: I am trying to create an xml file from my data class using JaxB and download it.
I have created a marshaller object from the java object. However I am unable to return this object as an xml file. When I hit that method, an empty xml file is downloaded. I am a newbie to File IO coding, please help me in this regard.
Following is my Controller method
```
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import javax.annotation.Resource;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.validation.constraints.NotNull;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import org.apache.commons.collections.CollectionUtils;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
@Controller
public class MyController{
@RequestMapping(value = PRODUCT_CODE_PATH_VARIABLE_PATTERN, method = RequestMethod.GET)
public void downloadProduct(@PathVariable("productCode") @NotNull final String productCode, final HttpServletRequest request, final HttpServletResponse response) throws Exception
{
response.setContentType("application/xml");
response.setHeader("Content-Disposition", "attachment; filename=" + productCode + ".xml");
final ProductDataJaxb productDataJaxb = getProductJaxObj(productCode);
final JAXBContext jaxbContext = JAXBContext.newInstance(ProductDataJaxb.class);
final Marshaller marshaller = jaxbContext.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
final OutputStream outputStream = new FileOutputStream(productCode + ".xml");
marshaller.marshal(productDataJaxb, outputStream);
}
}
```
My data class whose xml representation is what I want in downloaded xml file.
```
import java.util.List;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlType;
@XmlRootElement(name = "Product")
@XmlType(propOrder ={ "code", "categories", "description" })
public class ProductDataJaxb
{
private String code;
private String description;
private List categories;
/\*\*
\* @return the code
\*/
@XmlElement
public String getCode()
{
return code;
}
/\*\*
\* @param code
\* the code to set
\*/
public void setCode(final String code)
{
this.code = code;
}
/\*\*
\* @return the description
\*/
@XmlElement
public String getDescription()
{
return description;
}
/\*\*
\* @param description
\* the description to set
\*/
public void setDescription(final String description)
{
this.description = description;
}
/\*\*
\* @return the categories
\*/
@XmlElement
public List getCategories()
{
return categories;
}
/\*\*
\* @param categories
\* the categories to set
\*/
public void setCategories(final List categories)
{
this.categories = categories;
}
}
```<issue_comment>username_1: In your `downloadProduct` method you should add/change/remove a few things:
* Change the return-type to `ProductDataJaxb`
* Add the [`@ResponseBody`](https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/bind/annotation/ResponseBody.html) annotation to indicate the return-value
should go into the HTTP response
* Specify output media-type `"application/xml"` in the `@RequestMapping` annotation to indicate which `Content-Type` should be set in the HTTP response
* Remove all the JAXB marshalling stuff, because Spring will do it for you.
* You don't need the `HttpServletRequest` and `HttpServletResponse` method parameters anymore
Your method will look like this:
```
@RequestMapping(value = PRODUCT_CODE_PATH_VARIABLE_PATTERN,
method = RequestMethod.GET,
produces = "application/xml")
@ResponseBody
public ProductDataJaxb downloadProduct(@PathVariable("productCode") @NotNull final String productCode) throws Exception
{
final ProductDataJaxb productDataJaxb = getProductJaxObj(productCode);
return productDataJaxb;
}
```
Upvotes: 0 <issue_comment>username_2: After some trial and error I got solution to implement my requirement as follows.
```
@RequestMapping(value = PRODUCT_CODE_PATH_VARIABLE_PATTERN, method = RequestMethod.GET)
public void downloadProduct(@PathVariable("productCode") @NotNull final String productCode, final HttpServletRequest request,
final HttpServletResponse response) throws Exception
{
response.setContentType("application/xml");
response.setHeader("Content-Disposition", "attachment; filename=" + productCode + ".xml");
final ProductDataJaxb productDataJaxb = getProductJaxObj(productCode);
final JAXBContext jaxbContext = JAXBContext.newInstance(ProductDataJaxb.class);
final Marshaller marshaller = jaxbContext.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
marshaller.marshal(productDataJaxb, response.getOutputStream());
}
```
I only had to pass the output stream of the HttpServletResponse object to the marshal method to get this done. Gosh! I seriously need to study IO concepts now.... :)
Upvotes: 3 [selected_answer]
|
2018/03/16
| 797 | 2,946 |
<issue_start>username_0: How can I refactor this redux reducer code with ES6? Concat I can replace with `...` right? But what about Object.assign?
```
import { FETCH_REQUEST, FETCH_SUCCESS, FETCH_FAILURE } from '../actions';
const initialState = {
isFetching: false,
beersArray: [],
errorMessage: '',
skip: 0,
limit: 0,
hasMore: true,
};
export default function(state = initialState, action) {
switch (action.type) {
case FETCH_REQUEST:
return Object.assign({}, state, {
isFetching: true,
skip: action.skip,
limit: action.limit,
hasMore: true,
});
case FETCH_SUCCESS:
return Object.assign({}, state, {
isFetching: false,
beersArray: state.beersArray.concat(action.beers),
hasMore: action.hasMore,
});
case FETCH_FAILURE:
return Object.assign({}, state, {
isFetching: false,
errorMessage: action.errorMessage,
hasMore: false,
});
default:
return state;
}
}
```<issue_comment>username_1: You can replace `Object.assign` with the object spread syntax. Be aware that `...state` (spread properties) is still a proposal, so your environment needs to support it.
```
return { ...state, isFetching: true, skip: action.skip, limit: action.limit, hasMore: true };
```
As for `concat`, you can write
```
{ beersArray: [...state.beersArray, ...action.beers] }
```
This will just destruct the items from the `state.beersArray` and `action.beers` , put them into the new array and assign to the `beersArray`.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can use Spread operator on `key-value` objects and `arrays` as well.
```
export default function(state = initialState, action) {
switch (action.type) {
case FETCH_REQUEST:
return { ...state, isFetching: true, skip: action.skip, limit: action.limit, hasMore: true };
case FETCH_SUCCESS:
return { ...state, isFetching: false, beersArray: [...state.beersArray, ...action.beers], hasMore: action.hasMore };
case FETCH_FAILURE:
return { ...state, isFetching: false, errorMessage: action.errorMessage, hasMore: false };
default:
return state;
}
}
```
Transitions
===========
`state.beersArray.concat(action.beers)` to `[...state.beersArray, ...action.beers]`
`Object.assign({}, state, {other props})` to `{ ...state, ...{other props} }`
---
### [`Spread syntax (...)`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax)
>
> [`Spread syntax`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax) allows an iterable such as an array expression or string to be expanded in places where zero or more arguments (for function calls) or elements (for array literals) are expected, or an object expression to be expanded in places where zero or more key-value pairs (for object literals) are expected.
>
>
>
Upvotes: 0
|
2018/03/16
| 1,638 | 3,556 |
<issue_start>username_0: I am struggling to convert an array of floats (numbers with decimal places) to datetime. What I have is a huge array with non-integers (like those produced with Microsoft Excel) that denotes days after a certain date.
If I do it for only 1 float number, say 28.79167, starting for an initial date 01/01/2014, I would do it like below:
```
date = datetime(2014,01,01) + timedelta(days=28.79167)
print date
Out[142]: datetime.datetime(2014, 1, 29, 19, 0, 0, 288000)
```
which looks correct!
But, when I have an array, say the one below:
```
dcc = np.arange(0,10,0.5)
print dcc
array([ 0. , 0.5, 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. ,
5.5, 6. , 6.5, 7. , 7.5, 8. , 8.5, 9. , 9.5])
```
Then I would do it like:
```
date = [datetime(2014,01,01) + timedelta(days=dcc[i]) for i in dcc]
```
which gives:
```
/usr/bin/ipython:1: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future #! /usr/bin/python
print date
Out[138]:
[datetime.datetime(2014, 1, 1, 0, 0),
datetime.datetime(2014, 1, 1, 0, 0),
datetime.datetime(2014, 1, 1, 12, 0),
datetime.datetime(2014, 1, 1, 12, 0),
datetime.datetime(2014, 1, 2, 0, 0),
datetime.datetime(2014, 1, 2, 0, 0),
datetime.datetime(2014, 1, 2, 12, 0),
datetime.datetime(2014, 1, 2, 12, 0),
datetime.datetime(2014, 1, 3, 0, 0),
datetime.datetime(2014, 1, 3, 0, 0),
datetime.datetime(2014, 1, 3, 12, 0),
datetime.datetime(2014, 1, 3, 12, 0),
datetime.datetime(2014, 1, 4, 0, 0),
datetime.datetime(2014, 1, 4, 0, 0),
datetime.datetime(2014, 1, 4, 12, 0),
datetime.datetime(2014, 1, 4, 12, 0),
datetime.datetime(2014, 1, 5, 0, 0),
datetime.datetime(2014, 1, 5, 0, 0),
datetime.datetime(2014, 1, 5, 12, 0),
datetime.datetime(2014, 1, 5, 12, 0)]
```
which obviously it's not what I wanted.
Another idea to get integers was to simply convert dates to seconds and then use timedelta like before, but look what happens:
```
date = [datetime(2014,01,01) + timedelta(seconds=dcc[i]*86400) for i in dcc*86400]
```
Note the use of 'seconds' in the timedelta now instead of 'days'. This gives the following:
```
IndexError: index 43200 is out of bounds for axis 0 with size 20
```
I tried many web searches, but either nobody else has come up with the same problem or I am making a huge mistake somewhere...
Could anyone help? Thanks in advance!<issue_comment>username_1: Try making the datetime objects first, then add the corresponding timedelta to each element.
Upvotes: -1 <issue_comment>username_2: As @Kasramvd said, your list comprehension is wrong. You are already accessing the items in `dcc`, therefore you do not need the indices anymore.
I get:
```
In [33]: date = [datetime(2014,1,1) + timedelta(days=i) for i in dcc]
In [34]: date
Out[34]:
[datetime.datetime(2014, 1, 1, 0, 0),
datetime.datetime(2014, 1, 1, 12, 0),
datetime.datetime(2014, 1, 2, 0, 0),
datetime.datetime(2014, 1, 2, 12, 0),
datetime.datetime(2014, 1, 3, 0, 0),
datetime.datetime(2014, 1, 3, 12, 0),
datetime.datetime(2014, 1, 4, 0, 0),
datetime.datetime(2014, 1, 4, 12, 0),
datetime.datetime(2014, 1, 5, 0, 0),
datetime.datetime(2014, 1, 5, 12, 0),
datetime.datetime(2014, 1, 6, 0, 0),
datetime.datetime(2014, 1, 6, 12, 0),
datetime.datetime(2014, 1, 7, 0, 0),
datetime.datetime(2014, 1, 7, 12, 0),
datetime.datetime(2014, 1, 8, 0, 0),
datetime.datetime(2014, 1, 8, 12, 0),
datetime.datetime(2014, 1, 9, 0, 0),
datetime.datetime(2014, 1, 9, 12, 0),
datetime.datetime(2014, 1, 10, 0, 0),
datetime.datetime(2014, 1, 10, 12, 0)]
```
Upvotes: 0
|
2018/03/16
| 1,678 | 2,999 |
<issue_start>username_0: Trying to merge files of the same format of an entire folder to a merged file.
```
head File1.txt
11 116701285 204
11 116701286 209
11 116701287 209
11 116701288 208
11 116701289 209
11 116701290 208
11 116701291 208
11 116701292 210
11 116701293 209
11 116701294 213
head File2.txt
11 116701285 188
11 116701286 192
11 116701287 191
11 116701288 191
11 116701289 191
11 116701291 191
11 116701292 194
11 116701293 194
11 116701294 199
```
.........
```
head FileN.txt
11 116701285 190
11 116701286 192
11 116701287 191
11 116701288 189
11 116701289 191
11 116701290 192
11 116701291 193
11 116701292 197
11 116701293 196
11 116701294 199
```
The desired output (the number of columns after the first two columns will correspond to the number of files. The first two columns are the same in all files. There is no header in the files)
```
11 116701285 188 204 190
11 116701286 192 209 192
11 116701287 191 209 191
11 116701288 191 208 189
11 116701289 191 209 191
11 116701290 191 0 192
11 116701291 191 208 193
11 116701292 194 210 197
11 116701293 194 209 196
11 116701294 199 213 199
```
If an entry is not present, fill in 0. I used join but could only do two files.<issue_comment>username_1: Could you please try following `awk` and let me know if this helps you.
```
awk 'FNR==NR{a[$1,$2]=$3;next} {a[$1,$2]=a[$1,$2]?a[$1,$2] OFS $3:$3} END{for(i in a){print i,a[i]}}' File*.txt
```
Upvotes: 1 <issue_comment>username_2: This will work for you :
```
#!/bin/bash
## Assuming all your file are named as *FileN.txt* and so on
endresult="$( awk '{print $1 "\t" $2}' File1.txt )"
for file in F*.txt
do
endresult=$( paste <( echo "$endresult" ) <( awk '{ if(length($3) != 0) { print $3 }else{print 0}}' $file ) )
done
#Replacing empty values for zeroes
endresult=$( echo "$endresult" | awk -v max_rows=$( echo "$endresult" | awk '{print NF}' | sort -r | head -1 ) 'BEGIN{ OFS="\t" }{ for(i=1;i < (max_rows + 1);i++) { if(length($i) == 0 ){ $i = 0 } } ; print $0}' )
```
### Edit : Im grabbing the first two columns from the 1st file as you stated these two were equal in all files.
Regards!
Upvotes: 0 <issue_comment>username_3: You may use this `awk`:
```
awk '{
k=$1 OFS $2
}
FNR == NR {
v[++n] = k
}
{
a[ARGIND,k] = $3
}
END {
for(j=1; j<=n; j++) {
printf "%s", v[j]
for (i=1; i<=ARGIND; i++)
printf "%s%s", OFS, ((i,v[j]) in a ? a[i,v[j]] : 0)
print ""
}
}' File1.txt File2.txt FileN.txt | column -t
```
```
11 116701285 204 188 190
11 116701286 209 192 192
11 116701287 209 191 191
11 116701288 208 191 189
11 116701289 209 191 191
11 116701290 208 0 192
11 116701291 208 191 193
11 116701292 210 194 197
11 116701293 209 194 196
11 116701294 213 199 199
```
If you want one liner then use:
```
awk '{k=$1 OFS $2} FNR==NR{v[++n]=k} {a[ARGIND,k] = $3} END{for(j=1; j<=n; j++) {printf "%s", v[j]; for (i=1; i<ARGC; i++) printf "%s%s", OFS, (((i,v[j]) in a) ? a[i,v[j]] : 0); print ""}}' File*.txt | column -t
```
`column -t` is used for tabular output.
Upvotes: 3 [selected_answer]
|
2018/03/16
| 1,044 | 2,838 |
<issue_start>username_0: A common way to keep clients in sync with the server in real time is to open a WebSocket/SSE connection and push all updates over it. This is obviously very efficient, but it also requires us to set up a server to handle all those persistent connections and to communicate with the rest of our infrastructure.
While I was looking into video streaming solutions, I learned that the current way to go there is to put your data in the form of static files, let clients request whatever they need whenever they need it, and let highly optimized servers like nginx do the rest for you.
So I started wondering whether this could also be the way to go for message communication: just put all the data you want your clients to have fresh and synced into static files and set up nginx to serve them. Taking advantage of things like HTTP/2, memcached, Last-Modified headers and request limiting would reduce the overhead of clients polling the same files over and over again to an absolute minimum. Not only could we get away without maintaining an additional communication protocol, we could avoid invoking our backend code at all.
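Roughly the kind of nginx setup I have in mind (just a sketch; the paths, zone name and numbers are made up):

```
# Serve pre-generated state files written by the backend as plain static files.
limit_req_zone $binary_remote_addr zone=poll:10m rate=10r/s;

server {
    listen 443 ssl http2;

    location /state/ {
        root /var/www/app;                    # static JSON snapshots written by the backend
        etag on;                              # clients revalidate with If-None-Match
        add_header Cache-Control "no-cache";  # always revalidate, answer 304 if unchanged
        limit_req zone=poll burst=20 nodelay; # cap repeated polling per client
    }
}
```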
Am I missing something here?
|
2018/03/16
| 1,058 | 3,754 |
<issue_start>username_0: I have the following code to display a certain string according to which checkboxes are checked in my Android app:
```
public String createOrderSummary(){
CheckBox whippedCheckBox = findViewById(R.id.whipped);
boolean whippedCream = whippedCheckBox.isChecked();
CheckBox chocoBox = findViewById(R.id.chocolate);
boolean chocolate = chocoBox.isChecked();
if (whippedCream && chocolate){
return "both selected";
}else if(whippedCream || chocolate){
if (whippedCream){
return "whippedcream";
}else if (chocolate){
return "chocolate";
}
}else{
return "none checked";
}
return "";
}
```
I receive a warning at `line 14` saying `condition chocolate is always true`. Why is that?
Also when I switch lines to be:
```
if (whippedCream){
return "whipped cream";
}else if(whippedCream && chocolate){
return "both selected";
}else{
return "none selected";
}
```
I receive a warning at `line 3` saying `condition always false`.<issue_comment>username_1: Let's consider part of your condition:
```
if (whippedCream || chocolate) { // either whippedCream is true or chocolate is true
if (whippedCream) { // whippedCream is true
return "whippedcream";
} else if (chocolate) { // whippedCream is not true, so chocolate must be true
return "chocolate";
}
}
```
therefore this condition can be simplified:
```
if (whippedCream || chocolate) { // either whippedCream is true or chocolate is true
if (whippedCream) { // whippedCream is true
return "whippedcream";
} else { // chocolate must be true
return "chocolate";
}
}
```
Of course, the full condition can be further simplified by eliminating the inner condition:
```
if (whippedCream && chocolate) { // both true
return "both selected";
} else if (whippedCream) { // only whippedCream is true
return "whippedcream";
} else if (chocolate) { // only chocolate is true
return "chocolate";
} else {
return "none checked";
}
```
Your alternative condition:
```
if (whippedCream){
return "whipped cream";
}else if(whippedCream && chocolate){
return "both selected";
}else{
return "none selected";
}
```
is simply wrong. If `whippedCream` is true, you'll never check if both `whippedCream` and `chocolate` are true, since the condition of `else if` is only evaluated if all the preceding conditions are false.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Let's consider the code
```
}else if(whippedCream || chocolate){
if (whippedCream){
return "whippedcream";
}else if (chocolate){
return "chocolate";
}
```
We'll only check `if (whippedCream)` if `whippedCream` or `chocolate` are `true`. So, when we get to the `else if (chocolate)`, then we know `whippedCream` is `false`, or we would not be in the 2nd `else`. We also know `chocolate` is true or we wouldn't be in this block.
Upvotes: 0 <issue_comment>username_3: The message is because of how you have nested your ifs. You are entering `if (whippedCream || chocolate)` only if one of them is true. You then check each one individually when a simple else would suffice.
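For instance, a minimal sketch of that restructuring (using the same variables as in the question):

```
if (whippedCream && chocolate) {
    return "both selected";
} else if (whippedCream || chocolate) {
    // we only get here when exactly one of the two is checked,
    // so a plain else covers the remaining case
    if (whippedCream) {
        return "whippedcream";
    } else {
        return "chocolate";
    }
} else {
    return "none checked";
}
```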
Upvotes: 0 <issue_comment>username_4: It is a logical consequence:
If this condition is true: `if(whippedCream || chocolate)`, it means at least one of `whippedCream` or `chocolate` is `true`.
So inside it, `if (whippedCream) else if (chocolate)` makes the second check unnecessary: since one of them is `true`, if `whippedCream` is `false` then the `else` branch already implies `chocolate` is `true`, so the condition `if (chocolate)` is redundant.
Upvotes: 0
|
2018/03/16
| 1,552 | 5,569 |
<issue_start>username_0: I'm adding a body class to my page based on browser support for the Google Speech API. The idea is to then check for that class and serve the appropriate search form based on browser support.
Javascript/Jquery is inlined in the document head:
```
$.noConflict();
jQuery(document).ready(function($) {
$(function() {
if (Modernizr.speechrecognition) {
$("body").addClass("voice-searchable");
}
});
});
```
Checking the page source, I can see that the class `voice-searchable` is being added in the appropriate browsers.
PHP:
```
add_filter( 'genesis_search_form', 'my_search_form', 10, 4);
function my_search_form( $form ) {
$classes = get_body_class();
if (in_array('voice-searchable',$classes)) {
// NEXT 2 LINES FOR DEBUGGING
print_r ($classes);
print (' FUBAR');
$form = 'CODE FOR VOICE SEARCH FORM HERE';
} else {
// NEXT 2 LINES FOR DEBUGGING
print_r ($classes);
print (' BARFOO');
$form = 'CODE FOR TEXT ONLY FORM HERE';
}
return $form;
}
```
And finally this in my `speech-input.js`:
```
jQuery(document).ready(function ($) {
$(function () {
if (Modernizr.speechrecognition) {
$("body").addClass("voice-searchable");
}
$.ajax({
type: "POST",
url: "/wp-content/themes/my-child-theme/include/search.php",
dataType: "json",
async: false,
data: {
'action': 'my_search_ajax_request',
},
success: function (data) {
//If the success function is execute,
//then the Ajax request was successful.
//Add the data we received in our Ajax
//request to the "breadcrumb_search" div.
$(".breadcrumb_search").html(data);
},
error: function (xhr, ajaxOptions, thrownError) {
var errorMsg = "Ajax request failed: " + xhr.responseText;
$(".breadcrumb_search").html(errorMsg);
}
});
});
});
(window, jQuery, window.Window_Ready);
```
Despite the existence of the desired body class, the 2nd form is being returned, and the printed array of body classes does not include `voice-searchable`. I've tried changing the priority from `10` to a higher number, but without success.
Why does the PHP filter not parse the added body class? How can I get it to see it? Do I need to wrap it in a function that fires later?
Thanks for looking.
EDIT:
@loganbertram Thank you. I'm totally new to using Ajax, but I tried this:
```
/*
* Ajax for speech recognition; place in functions.php
*/
function my_search_ajax_enqueue() {
// Enqueue javascript on the frontend.
wp_enqueue_script(
'my-search-ajax-script',
get_stylesheet_directory_uri() . '/js/speech-input.js',
array('jquery')
);
// The wp_localize_script allows us to output the ajax_url path for our script to use.
wp_localize_script(
'my-search-ajax-script',
'my_search_ajax_obj',
array( 'ajaxurl' => admin_url( 'admin-ajax.php' ) )
);
}
add_action( 'wp_enqueue_scripts', 'my_search_ajax_enqueue' );
```
plus this (in search.php):
```
function my_search_ajax_request() {
add_filter( 'genesis_search_form', 'my_search_form', 99, 4);
function my_search_form( $form ) {
$classes = get_body_class();
if (in_array('voice-searchable',$classes)) {
$form = 'CODE FOR FORM HERE';
} else {
$form = 'CODE FOR TEXT ONLY FORM HERE';
}
return $form;
}
die();
}
add_action( 'wp_ajax_my_search_ajax_request', 'my_search_ajax_request' );
add_action( 'wp_ajax_nopriv_my_search_ajax_request', 'my_search_ajax_request' );
```
And finally, this in `speech-input.js`:
```
jQuery(document).ready(function ($) {
$(function () {
if (Modernizr.speechrecognition) {
$("body").addClass("voice-searchable");
}
$.ajax({
type: "POST",
url: "/wp-content/themes/child-theme-name/include/search.php",
dataType: "json",
async: false,
data: {
'action': 'my_search_ajax_request',
},
success: function (data) {
//If the success function is execute,
//then the Ajax request was successful.
//Add the data we received in our Ajax
//request to the "breadcrumb_search" div.
$(".breadcrumb_search").html(data);
},
error: function (xhr, ajaxOptions, thrownError) {
var errorMsg = "Ajax request failed: " + xhr.responseText;
$(".breadcrumb_search").html(errorMsg);
}
});
});
});
(window, jQuery, window.Window_Ready);
```
Which gives a 500 error & displays the error message. Obviously not the right syntax. Most of the examples I could find were for retrieving data from variables or getting form values, nothing so simple as just rendering some HTML from a PHP file. If anyone out there could point me to an example of how to do the latter, I'd appreciate it.
EDIT: I solved the AJAX question & will post the answer in a separate topic.<issue_comment>username_1: PHP is server-side and fires first. Then the browser receives the page and processes the JS that adds the class. In short, when the PHP runs to render the page, the target class hasn't been added yet. After the JS adds the class, you could make an Ajax request to the relevant bits of the PHP instead.
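A rough sketch of that idea (the action name and handler below are placeholders, not code from your theme):

```
// functions.php: register an admin-ajax endpoint that returns the right form.
// The action name 'my_voice_form' is made up for illustration.
add_action( 'wp_ajax_my_voice_form', 'my_voice_form_handler' );
add_action( 'wp_ajax_nopriv_my_voice_form', 'my_voice_form_handler' );

function my_voice_form_handler() {
    // The JS sends voice=1 only when Modernizr.speechrecognition is true.
    $voice = isset( $_POST['voice'] ) && '1' === $_POST['voice'];
    echo $voice ? 'CODE FOR VOICE SEARCH FORM HERE' : 'CODE FOR TEXT ONLY FORM HERE';
    wp_die(); // admin-ajax callbacks must terminate explicitly
}
```

On the client you would POST `action: 'my_voice_form'` plus the detection result to `admin-ajax.php` (the URL you already expose via `wp_localize_script`) and inject the returned markup, instead of requesting a theme PHP file directly; that direct request is likely what produces the 500 error.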
Upvotes: 2 [selected_answer]<issue_comment>username_2: Put this code in your functions.php, then check.
Please remove all the JS that you are using to add the class to the body.
```
add_filter( 'body_class','my_body_classes' );
function my_body_classes( $classes ) {
$classes[] = 'voice-searchable';
return $classes;
}
```
Upvotes: 0
|
2018/03/16
| 826 | 3,103 |
<issue_start>username_0: I am wondering if someone can help me out with this since I am new to react-native and redux.
I want to populate a FlatList with just a name, and I am using these classes (along with others, obviously):
actions.js
```
const LOAD_CLIENT = 'LOAD_CLIENT';
export async function loadClient() {
const client = await Parse.Query("Client");
return {
type: LOAD_CLIENT,
client: client,
};
}
```
reducers.js
```
import { LOAD_CLIENT } from '../actions'
const initialState = {
objectId: null,
name: null,
};
function client(state: State = initialState, action: Action): State {
if (action.type === LOAD_CLIENT) {
let {objectId, name} = action.client; // de-structuring action data
return {
objectId,
name,
};
}
return state;
}
module.exports = client;
```
listScreen.js
```
import React, { Component } from 'react'
import { FlatList, Text } from 'react-native'
import {
  loadClient
} from './actions'

class SettingsScreen extends Component {
  render() {
    return (
      <FlatList
        renderItem={({ item }) => <Text>{item.name}</Text>}
      />
    )
  }
}
export default SettingsScreen
```
What do I need to be able to populate the list with client.name?
I followed Redux basics to get here but now I am stuck.<issue_comment>username_1: You are forgetting to connect your component.
```
import { connect } from 'react-redux';
{...}
const mapStateToProps = ({ reducerName }) => {
  const { name, objectId } = reducerName;
  return { name, objectId };
};

// then connect the component so these values land on its props:
export default connect(mapStateToProps)(SettingsScreen);
```
After this, `name` and `objectId` will be available on the component's props.
But remember that the FlatList `data` prop needs an array, so your reducer should have a `clients: []` field that gets populated with all the data on fetch.
FlatList won't work if you pass only an object; you need to pass an array of objects.
Upvotes: 0 <issue_comment>username_2: Please change your files to something like this.
reducer.js
```
import { LOAD_CLIENT } from '../actions'
const initialState = {
data: null,
};
function client(state: State = initialState, action: Action): State {
if (action.type === LOAD_CLIENT) {
return Object.assign({}, state, {
data: action.client
});
}
return state;
}
module.exports = client;
```
listScreen.js
```
import React, { Component } from 'react';
import { FlatList, Text } from 'react-native';
import { loadClient } from './actions';
import { connect } from 'react-redux';
class SettingsScreen extends Component {
constructor(props){
super(props);
this.state={
client: null,
}
}
componentDidMount(){
this.props.loadClientData();
}
componentWillReceiveProps(nextProps) {
if (nextProps.data != null) {
this.setState({
client: nextProps.data
});
}
}
render() {
return (
  <FlatList
    data={this.state.client}
    renderItem={({ item }) => <Text>{item.name}</Text>}
  />
)
}
}
const mapStateToProps = (state) => ({
data: state.reducers.data
});
const mapDispatchToProps = (dispatch) => ({
loadClientData: () => dispatch(loadClient())
})
export default connect(mapStateToProps, mapDispatchToProps)(SettingsScreen)
```
or you can refer to this link for your practice.
<https://medium.com/@imranhishaam/advanced-redux-with-react-native-b6e95a686234>
Upvotes: 1
|
2018/03/16
| 982 | 3,077 |
<issue_start>username_0: I am trying to strip the string `{"$outer":{},` (it starts with a curly brace and ends with a comma) from my input, but I could not do it.
My input is like below
```
{"$outer":{},"size":"10","query":{"$outer":{},"match":{"$outer":{},"_all":{"$outer":{},"query":"VALLE","operator":"and"}}}}
```
I tried the two approaches below, but neither helped me.
First Approach:
```
val dropString = "\"$outer\":{},"
val payLoadTrim = payLoadLif.dropWhile(_ == dropString).reverse.dropWhile(_ == dropString).reverse
```
This one did not do anything. Here is the output:
```
{"$outer":{},"size":"10","query":{"$outer":{},"match":{"$outer":{},"_all":{"$outer":{},"query":"VALLE","operator":"and"}}}}
```
Second Approach:
```
def stripAll(s: String, bad: String): String = {
@scala.annotation.tailrec def start(n: Int): String =
if (n == s.length) ""
else if (bad.indexOf(s.charAt(n)) < 0) end(n, s.length)
else start(1 + n)
@scala.annotation.tailrec def end(a: Int, n: Int): String =
if (n <= a) s.substring(a, n)
else if (bad.indexOf(s.charAt(n - 1)) < 0) s.substring(a, n)
else end(a, n - 1)
start(0)
}
```
Output from Second:
```
size":"10","query":{"$outer":{},"match":{"$outer":{},"_all":{"$outer":{},"query":"VALLE","operator":"
```
and Desired Output:
```
{"size":"10","query":{"match":{"_all":{"query":"VALLE","operator":"and"}}}
```<issue_comment>username_1: You might want to use `replace`:
```
val input = """{"$outer":{},"size":"10","query":{"$outer":{},"match":{"$outer":{},"_all":{"$outer":{},"query":"VALLE","operator":"and"}}}}"""
input.replace("\"$outer\":{},", "")
```
which returns:
```
{"size":"10","query":{"match":{"_all":{"query":"VALLE","operator":"and"}}}}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: When you need to cut off an exact prefix/suffix you can use [`.stripPrefix`](http://scala-lang.org/api/current/scala/collection/immutable/StringOps.html#stripPrefix(prefix:String):String)/[`.stripSuffix`](http://scala-lang.org/api/current/scala/collection/immutable/StringOps.html#stripSuffix(suffix:String):String) methods:
```
@ "abcdef".stripPrefix("abc")
res: String = "def"
@ "abcdef".stripSuffix("def")
res: String = "abc"
```
Note that if the string doesn't have such prefix (or suffix), it will stay unchanged:
```
@ "abcdef".stripPrefix("foo")
res: String = "abcdef"
@ "abcdef".stripSuffix("buh")
res: String = "abcdef"
```
Sometimes it's important to cut off from the beginning (or end) only, so if you use a regex-based method such as `.replaceAll`, you should be careful to anchor the pattern with `^...` (or `...$`), otherwise it may find a match somewhere in the middle and replace that instead.
---
Bonus: if you just want to check that a string has a given prefix/suffix, you can use [`.startsWith`](http://scala-lang.org/api/current/scala/collection/immutable/StringOps.html#stripSuffix(suffix:String):String)/[`.endsWith`](http://scala-lang.org/api/current/scala/collection/immutable/StringOps.html#stripSuffix(suffix:String):String) methods (plus `.startsWith` can also take an offset).
Upvotes: 0
|
2018/03/16
| 1,657 | 5,545 |
<issue_start>username_0: I am struggling with what I guess is a simple thing.
I am trying to make an element (a phone img) fixed when the user scrolls more than 600px, and unfix it again when they reach the end of this section. But it isn't fixed when they scroll back to the top. Why? What am I doing wrong?
Generally, I am trying to make some transitions and animations inside this phone connected to the scroll event, a kind of tutorial on how to use the app.
Here is a codepen with my problem:
<https://codepen.io/anon/pen/YaGqRB?editors=1010>
and my ugly JQuery:
```
$(window).scroll(function () {
if ($(window).scrollTop() > 584) {
$('.phone-container').addClass('phone-container-fixed');
}if ($(window).scrollTop() > 2201) {
$('.phone-container').removeClass('phone-container-fixed');
$('.phone-container').addClass('phone-container-fixed-bot');
}
});
```
css:
```
.phone-container{
position: relative;
top: 40px;
}
.phone-container-fixed{
position: fixed;
top: 50px;
}
.phone-container-fixed-bot{
position: absolute;
top: 2420px;
}
```<issue_comment>username_1: When you scroll back up, you probably want to remove the class you've added.
Is this what you're looking for?
```
$(window).scroll(function () {
if($(window).scrollTop() <= 600) {
$('.phone-container').removeClass('phone-container-fixed');
}
if ($(window).scrollTop() > 600) {
$('.phone-container').removeClass('phone-container-fixed-bot');
$('.phone-container').addClass('phone-container-fixed');
}
if ($(window).scrollTop() > 2201) {
$('.phone-container').removeClass('phone-container-fixed');
$('.phone-container').addClass('phone-container-fixed-bot');
}
});
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: This is because you are not removing the .phone-container-fixed-bot class when the user scrolls up.
```
$(window).scroll(function () {
if ($(window).scrollTop() > 600) {
$('.phone-container').addClass('phone-container-fixed');
}if ($(window).scrollTop() > 2201) {
$('.phone-container').removeClass('phone-container-fixed');
$('.phone-container').addClass('phone-container-fixed-bot');
} if ($(window).scrollTop() < 2201) {
$(".phone-container").removeClass('phone-container-fixed-bot');
}if ($(window).scrollTop() < 600) {
$(".phone-container").removeClass('phone-container-fixed');
}
});
```
Now this is dirty but you can also check if the div has a class before attempting to remove it.
```
if ($( "#mydiv" ).hasClass( "foo" )) {
}
```
Upvotes: 0 <issue_comment>username_3: Your css is overlapping. You have two ways:
Adding `!important` to your `phone-container-fixed`:
```js
$(window).scroll(function () {
if ($(window).scrollTop() > 600) {
$('.phone-container').addClass('phone-container-fixed');
}
if ($(window).scrollTop() > 2201) {
$('.phone-container').removeClass('phone-container-fixed'); $('.phone-container').addClass('phone-container-fixed-bot');
}
});
$(window).scroll(function() {
var windowTop = $(window).scrollTop();
$("#output").html(windowTop);
});
```
```css
.one{
postion: absolute;
height: 700px;
background-color: lightgreen;
}
.two{
postion: absolute;
height: 3000px;
width: auto;
}
.phone-container{
position: relative;
top: 40px;
}
.phone-container-fixed{
position: fixed !important;
top: 50px !important;
}
.phone-container-fixed-bot{
position: absolute;
top: 2420px;
}
```
```html
You have scrolled the page by:

```
Or, you can remove the class `phone-container-fixed-bot` when scroll is lower than 2201 (you can add it to the first if condition):
```js
$(window).scroll(function () {
if ($(window).scrollTop() > 600) {
$('.phone-container').removeClass('phone-container-fixed-bot');
$('.phone-container').addClass('phone-container-fixed');
}
if ($(window).scrollTop() > 2201) {
$('.phone-container').removeClass('phone-container-fixed');
$('.phone-container').addClass('phone-container-fixed-bot');
}
});
$(window).scroll(function() {
var windowTop = $(window).scrollTop();
$("#output").html(windowTop);
});
```
```css
.one{
postion: absolute;
height: 700px;
background-color: lightgreen;
}
.two{
postion: absolute;
height: 3000px;
width: auto;
}
.phone-container{
position: relative;
top: 40px;
}
.phone-container-fixed{
position: fixed;
top: 50px;
}
.phone-container-fixed-bot{
position: absolute;
top: 2420px;
}
```
```html
You have scrolled the page by:

```
*The second one isn't the best solution (if-else statements), but you can work with this logic*
Upvotes: 1 <issue_comment>username_4: You will need to introduce the `else` part of the condition
```
$(window).scroll(function() {
if ($(window).scrollTop() > 600) {
$('.phone-container').addClass('phone-container-fixed');
} else {
$('.phone-container').removeClass('phone-container-fixed');
}
if ($(window).scrollTop() > 2201) {
$('.phone-container').addClass('phone-container-fixed-bot');
} else {
$('.phone-container').removeClass('phone-container-fixed-bot');
}
});
```
**[Updated Codepen](https://codepen.io/bhuwanb9/pen/EEgyGz)**
Upvotes: 0
|
2018/03/16
| 482 | 1,839 |
<issue_start>username_0: I have recently switched to Ubuntu and want to continue using it. So I want to copy all my code from the Windows workspace to Ubuntu's Eclipse workspace so that I can use it whenever required. I copied everything and pasted it into Ubuntu's Eclipse workspace, but I'm facing a problem: when I open Eclipse in Ubuntu, it is still empty. It is not recognizing any of the copied projects. How can I use them in Ubuntu?
[Here is a simple tutorial.](http://help.eclipse.org/kepler/index.jsp?topic=%2Forg.eclipse.platform.doc.user%2Ftasks%2Ftasks-importproject.htm)
Also, maybe look into using version control such as git. Here are some resources on that. I really like TortoiseGit.
[Git for Beginners](https://product.hubspot.com/blog/git-and-github-tutorial-for-beginners)
[TortoiseGit](https://tortoisegit.org/)
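If you go the git route, the flow for moving a project between machines is roughly this (the repository URL is just a placeholder):

```
# On Windows: put the project under version control and push it somewhere
cd path/to/MyProject
git init
git add .
git commit -m "Import existing Eclipse project"
git remote add origin https://github.com/yourname/MyProject.git
git push -u origin master

# On Ubuntu: clone it, then use File > Import > Existing Projects into Workspace
git clone https://github.com/yourname/MyProject.git ~/workspace/MyProject
```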
Upvotes: 1 <issue_comment>username_2: 1. *Paste* your Java folder/directory that contains the source code into your home folder.
2. *Create* a new project in Eclipse, or simply *use an existing project.*
3. *Right-click* on the project tag. Or simply click on **File** and then **Import**.
4. *Select Import*, and follow up until you reach your saved Java folder/directory with the source code.
Upvotes: 0 <issue_comment>username_3: You can follow the steps below.
a. Right-click on your project and click **Export**.
b. Select **File System** and save to your desktop or preferred location.
c. In the Linux Eclipse menu, click **File** and select the **Import** option.
d. Select **Existing Projects into Workspace** and give the project path.
Upvotes: 1 [selected_answer]
|