date | nb_tokens | text_size | content
---|---|---|---|
2018/03/19 | 764 | 2,519 |
<issue_start>username_0: I have to make clusters from categorical data. I am using the following k-modes code to build the clusters and check the optimum number of clusters using the elbow method:
```
set.seed(100000)
cluster.results <-kmodes(data_cluster, 5 ,iter.max = 100, weighted = FALSE )
print(cluster.results)
k.max <- 20
wss <- sapply(1:k.max,
function(k){set.seed(100000)
sum(kmodes(data_cluster, k, iter.max = 100 ,weighted = FALSE)$withindiff)})
wss
plot(1:k.max, wss,
type="b", pch = 19, frame = FALSE,
xlab="Number of clusters K",
ylab="Total within-clusters sum of squares")
```
My questions are:
1. Is there any other method in k-modes for checking the optimum number of clusters?
2. Each seed gives clusters of different sizes, so I am trying different seeds and keeping the seed with the least total within-cluster sum of squares. Is this approach correct?
3. How do I check if my clusters are stable?
4. I want to apply/predict these clusters on new data (from another year). How do I do that?
5. Is there any other method of clustering categorical data?<issue_comment>username_1: Hope this helps:
```
install.packages( "NbClust", dependencies = TRUE )
library ( NbClust )
Data_Sim <- rbind ( matrix ( rbinom ( 250, 2, 0.25 ), ncol = 5 ),
matrix ( rbinom (250, 2, 0.75 ), ncol = 5 ))
colnames ( Data_Sim ) <- letters [ 1:5 ]
Clusters <- NbClust ( Data_Sim, diss = NULL, distance = "euclidean",
min.nc = 2, max.nc = 10, method = "kmeans", index = "all",
alphaBeale = 0.1 )
hist ( Clusters$Best.nc [ 1, ], breaks = max ( na.omit (
Clusters$Best.nc [ 1, ])))
```
Upvotes: -1 <issue_comment>username_2: My answer only concerns question 5.
You can use mixture models to cluster categorical data (see for instance the latent class model). The standard approaches consider a mixture of multinomial distributions.
Classical information criteria (like BIC or ICL) can be used to automatically select the number of clusters.
Mixtures permit computing the classification probabilities of a new observation, and thus quantifying the risk of misclassification.
If you are interested in this method, you can use the R package VarSelLCM. To cluster categorical data, your dataset must be a data.frame and each variable must be stored as a factor.
Here is an example of code (the number of clusters is allowed to be between 1 and 6):
```
require(VarSelLCM)
out <- VarSelCluster(data_cluster, 1:6, vbleSelec=FALSE)
summary(out)
VarSelShiny(out)
```
Upvotes: 0
2018/03/19 | 623 | 2,251 |
<issue_start>username_0: I have a global variable (object) named X. I also have two different threads, named ThreadA and ThreadB, which use X at the same time.
ThreadA adds +1 a hundred times and ThreadB multiplies by 2 two times.
When I start them at the same time, the operations interleave and I see something like
```
1
2
3
6
7
14.. etc
```
I already solved this problem by adding a
```
ReentrantLock lock = new ReentrantLock();
lock.lock();
..
lock.unlock();
```
mechanism.
Now my code works
```
1
2
3
4
...
100
200
400
```
But I want to add a timeout, so that if ThreadA holds the lock for more than 2 seconds, the lock is released and ThreadB can manipulate the variable.
I want to see something like
```
1
(wait 100ms)
2
(wait 100ms)
3
...
20
400
800
801
( wait 100ms)
802
...
```
I already tried tryLock(2000, TimeUnit.MILLISECONDS), but that only bounds the time spent entering (acquiring) the lock.<issue_comment>username_1: That kind of functionality doesn't exist, and for good reason. It would be inherently unsafe to release a lock based on time alone, with no regard to the internal state that the lock is supposed to protect.
Whereas a timeout for acquiring a lock makes sense, having a timeout for the duration of the lock would cause more trouble than it would be of use. Once you've acquired the lock, you can do whatever you want until you decide to release it. If you get a deadlock (a situation that could get some use from a timed lock) that's unfortunate, but it's solved by fixing your design instead of setting timeouts on locks.
As your code is just general threading test code I guess you were just assuming that such a mechanism exists, but if you can come up with a situation where you'd need something like that, it can probably be coded in a different way.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Java provides the Condition class to achieve similar functionality.
You can use the await() method of the Condition class with a time of 2 seconds, and otherwise call lock.unlock(). Remember, though, that the wait time might be larger than 2 seconds due to the inherent nature of thread scheduling. Also, this will be error-prone and you will need to be careful about the different cases. Refer to the docs: <https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Condition.html>
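Roughly, the timed `await` pattern looks like the sketch below (a standalone, hypothetical demo, not the asker's counter code). Note that the lock is released while waiting, and whoever holds it must still release it explicitly:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class AwaitTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        Condition condition = lock.newCondition();

        lock.lock();
        try {
            // Wait up to 200 ms for a signal, tolerating spurious wakeups.
            // While waiting, the lock is released, so another thread could
            // acquire it and mutate the shared state in the meantime.
            long nanosLeft = TimeUnit.MILLISECONDS.toNanos(200);
            while (nanosLeft > 0) {
                nanosLeft = condition.awaitNanos(nanosLeft); // <= 0 once time is up
            }
            System.out.println("timed out without a signal");
        } finally {
            lock.unlock(); // the holder still has to release explicitly
        }
    }
}
```

Since nothing ever calls `signal()` here, the loop always runs out the deadline; in real code the loop condition would also check the predicate the threads communicate about.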
Upvotes: 0
2018/03/19 | 727 | 2,511 |
<issue_start>username_0: When I add one product in my form, it shows this error.
My TypeScript code:
```
this.products = this.ps.getProduct();
this.addform = this.fb.group({
'invoice_number': new FormControl('', [Validators.required, Validators.nullValidator]),
'Subtotal': new FormControl('', Validators.required),
'products': this.fb.array([
]),
'total': new FormControl('', Validators.required),
});
```
Model class:
```
export class Sale {
invoice_number: number;
products: Products[];
}
```
My HTML code:
```
|
| |
```
My HTML doesn't display anything, and the error also shows in the console.
Can you suggest what the problem is and how to solve it?<issue_comment>username_1: Let's take it more slowly.
I think you have a service ps that has a method getProduct(), something like
```
getProduct(id:any)
{
return this.httpClient.get("url/"+id);
}
```
the url/id returns you JSON like, e.g.
```
'invoice_number':222152
'product':[
{item:'some',total:120,subtotal:130},
{item:'other',total:110,subtotal:140}
]
```
Well, you must subscribe to the service in ngOnInit and, when you get the data, create a formGroup
```
ngOnInit()
{
  this.ps.getProduct('123').subscribe((data: any) => {
    this.addform = this.createForm(data);
  });
}
//see that createForm is a function that returns a formGroup
//this formGroup has two fields, "invoice_number" and "products"
//products is an array. We use a function that returns an array
//of controls and "products" is this.fb.array(the array of controls)
createForm(data: any)
{
  return this.fb.group({
    invoice_number: [data ? data.invoice_number : null,
      [Validators.required, Validators.nullValidator]],
    products: this.fb.array(this.getProducts(data ? data.products : null))
  });
}
//getProducts returns an array of controls -really an array of group controls-
getProducts(products: any)
{
  const controls = [];
  if (products)
  {
    products.forEach(x => {
      controls.push(
        this.fb.group({
          total: [x.total, Validators.required],
          subtotal: [x.subtotal, Validators.required]
        })
      );
    });
  }
  return controls;
}
```
Then your HTML is like
```
|
| |
```
Note that you can use this.addform = this.createForm(null) and the form will have empty values.
NOTE: I use "subtotal" (not "Subtotal"); choose either lowercase or camelCase consistently to name the variables.
Upvotes: 0 <issue_comment>username_2: Try adding formGroupName like:
```
|
|
|
```
Upvotes: 1
2018/03/19 | 592 | 2,007 |
<issue_start>username_0: So I'm integrating an API from a 3rd party company and I'm facing this strange situation.
I fetch the endpoint with the following code
```
$client = $this->client = new Client([
'base_uri' => 'https://xxx.xxxxxxxxxx.com/api/',
'timeout' => 15
]);
$this->requestConfig = [
'auth' => [
'<EMAIL>',
'xxxxx'
],
'headers' => [
'cache-control' => 'no-cache',
'content-type' => 'application/x-www-form-urlencoded'
],
];
$response = $this->client->get($url, $this->requestConfig);
$content = $response->getBody()->getContents();
```
Now the fun comes: if I `var_dump` the content I get:
```
string(66) ""[{\"ExternalId\":\"38\",\"AgencyReference\":\"45436070356676\"}]""
```
Now I know this response is bad: the response type is not set to JSON, the JSON itself is string-encoded, and everything smells bad.
I've been trying to parse this string for a while.
urldecode doesn't work either.
The question is simple: given a response like that, how can I get a normal array?
Currently using PHP 7.1<issue_comment>username_1: So I finally found this:
[Remove backslash \ from string using preg replace of php](https://stackoverflow.com/questions/18526979/remove-backslash-from-string-using-preg-replace-of-php)
To solve my issue.
In this case the escaped quotes were malforming the JSON. My final code looks like this.
```
$response = $response->getBody()->getContents();
$clean = stripslashes($response);
$clean = substr($clean, 1, -1);
dd(json_decode($clean));
```
Please never write your APIs like this...
It just looks awful.
Upvotes: 2 <issue_comment>username_2: BTW, `Content-Type` makes sense when you send (in a request or response) some content. But you send a GET request, which has no content.
And to specify preferred content type for the response, you should use `Accept` HTTP header, `Accept: application/json` for example.
I'm not sure this will solve your problem, but it will at least make things clear and correct ;)
Upvotes: 0
2018/03/19 | 1,375 | 4,838 |
<issue_start>username_0: I'm new to microservices and Spring Boot. I have a few Spring Cloud microservices with a Zuul gateway running on port 8080.
```
browser
|
|
gateway (:8080)
/ \
/ \
/ \
resource UI (:8090)
```
There is a UI microservice on port 8090, which has a controller with a method inside, returning index.html.
I configured Zuul routing like this for UI (I'm using Eureka too):
```
zuul:
routes:
ui:
path: /ui/**
serviceId: myproject.service.ui
stripPrefix: false
sensitiveHeaders:
```
If I call `http://localhost:8080/ui/` everything works fine and I see rendering of my index.html.
Is it possible to configure Spring Cloud in some way to make the following flow work: calling `http://localhost:8080/` redirects us to controller of UI microservice, which returns index.html?
So the idea is to open UI from the root of my site.
Thanks in advance!<issue_comment>username_1: If you would like to have the UI (front-end) on Zuul, you can add the static content in the *resources/static* folder (html, css and js files). This way your proxy is able to render index.html (of course you must have an index.html in the static folder), so on `http://localhost:8080` the proxy will render index.html; you can also have other paths, but all those paths are managed by index.html.
About routing, Zuul only redirects the http request *<http://localhost:8080/ui/>*. On `8080` the proxy (Zuul) is running, BUT `/ui` is the context path of the resource server. So when you make a call on the path *<http://localhost:8080/ui/>*, the proxy will redirect to the resource server and will actually make a request to *<http://localhost:8090/ui/>*.
There is a difference between the browser path and the http request path. The browser path is managed by index.html and the http request is managed by Zuul. I don't know if I was explicit enough.
One more thing... You can have the same path (`/ui`) in the http request and in index.html: when your browser accesses *<http://localhost:8080/ui/>*, a .js file will make an http request to *<http://localhost:8080/ui/>*, which will be redirected to *<http://localhost:8090/ui/>*, and the response from the resource server will be rendered on the page at *<http://localhost:8080/ui/>*.
Upvotes: 1 <issue_comment>username_2: Finally, I've made my code work! Thanks to @pan for mentioning the [Zuul Routing on Root Path](https://stackoverflow.com/a/38986985/3399581) question and @RzvRazvan for explaining how Zuul routing works.
I've just added a **controller** to the Zuul-routed Gateway microservice, with one endpoint that forwards from the root `http://localhost:8080/` to `http://localhost:8080/ui/`:
```
@Controller
public class GateController {
@GetMapping(value = "/")
public String redirect() {
return "forward:/ui/";
}
}
```
**Zuul** properties for redirecting from **Gateway microservice** on port **8080** as `http://localhost:8080/ui/` to **UI microservice**, which implemented as a separate Spring Boot application on port **8090** as `http://localhost:8090/ui/`:
```
zuul:
routes:
ui:
path: /ui/**
serviceId: myproject.service.ui
stripPrefix: false
sensitiveHeaders:
```
UI microservice's properties:
```
server:
port: 8090
servlet:
contextPath: /ui
```
Eventually, calling `http://localhost:8080/` takes us to the controller of the UI microservice, which returns the view `index.html`:
```
@Controller
public class UiController {
@GetMapping(value = "/")
public String index() {
return "index.html";
}
}
```
---
Actually, I had another problem with rendering static content in such architecture, but it was connected with configuration of my front-end, which I develop using [Vue.js](https://vuejs.org/) framework. I will describe it here in a few sentences, in case it might be helpful for someone.
I have the following folders structure of UI microservice:
```
myproject.service.ui
└───src/main/resources
└───public
|───static
| ├───css
| └───js
└───index.html
```
All content of the `public` folder is generated by the `npm run build` task from **webpack** and **vue.js**. The first time I called `http://localhost:8080/` I got **200 OK** for `index.html` and **404** for all other static resources, because they were called like this:
```
http://localhost:8080/static/js/some.js
```
So the public path for static assets was configured incorrectly in webpack. I changed it in `config/index.js`:
```
module.exports = {
...
build: {
...
assetsPublicPath: '/ui/',
...
}
...
}
```
And the static assets were then requested properly, e.g.:
```
http://localhost:8080/ui/static/js/some.js
```
Upvotes: 4 [selected_answer]
2018/03/19 | 2,607 | 10,261 |
<issue_start>username_0: Why does `Clock.systemDefaultZone().instant()` return a different time than `LocalTime.now()`?
I understand that `LocalTime` has no timezone, but it shows just what my system clock (in the tray on my computer) shows, right? Both "use" the default time zone (`Europe/Moscow`), so the time should be the same?
My computer clock is `Europe/Moscow`, so both should show exactly my computer time?
```
System.out.println(Clock.systemDefaultZone().instant()); // 2018-03-19T10:10:27.156Z
System.out.println(Clock.systemDefaultZone().getZone()); // Europe/Moscow
System.out.println(LocalTime.now()); // 13:10:27.166
```<issue_comment>username_1: If I understand correctly, the Instant returned by `.instant()` does not carry any timezone information. With the correct timezone (the `ZoneId` returned by `Clock.systemDefaultZone().getZone()`) you can get a `ZonedDateTime` from the Instant, though (which does provide timezone information).
Example
-------
```
System.out.println(Clock.systemDefaultZone().instant());
System.out.println(Clock.systemDefaultZone().instant().atZone(Clock.systemDefaultZone().getZone()));
```
Output
------
```
2018-03-19T10:30:47.032Z
2018-03-19T13:30:47.048+03:00[Europe/Moscow]
```
Upvotes: 2 <issue_comment>username_2: To understand those results, we must first see how the `Clock` is intended to work. [Taking a look at the javadoc](https://docs.oracle.com/javase/8/docs/api/java/time/Clock.html), we can see the following description for the methods:
>
> `public abstract Instant instant()`
>
>
> Gets the current instant of the clock.
>
> This returns an instant representing the **current instant as defined by the clock**.
>
>
>
>
> `public abstract ZoneId getZone()`
>
>
> Gets the time-zone being used to create dates and times.
>
> **A clock will typically obtain the current instant and then convert that to a date or time using a time-zone**. This method returns the time-zone used.
>
>
>
So the `instant()` method will get the current instant as a `java.time.Instant`, which is a class that always works in UTC. And the key point here is: *"as defined by the clock*".
The `Clock` class allows you to create lots of different clock definitions - such as a [fixed clock that always returns the same thing](https://docs.oracle.com/javase/8/docs/api/java/time/Clock.html#fixed-java.time.Instant-java.time.ZoneId-) - and the most common is the one returned by `systemDefaultZone()`, which uses the system's current date/time.
As the `instant()` method returns a `java.time.Instant` and this class works only in UTC, the result will always be UTC.
The `getZone()` method will return the timezone used to create dates and times, and this is done by combining the `Instant` (returned by `instant()`) with the `ZoneId` returned by `getZone()`.
You can create a clock with any timezone you want, but `systemDefaultZone()` just uses the JVM default timezone, which is - in your case - Europe/Moscow.
When you call `LocalTime.now()`, [it internally uses the clock returned by `systemDefaultZone()`](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/time/LocalTime.java#243).
Then, it uses the results from `instant()` and `getZone()`, and [combine both to get the `LocalTime`](http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/8-b132/java/time/LocalTime.java#273).
Usage
-----
According to javadoc:
>
> Use of a Clock is optional. All key date-time classes also have a now() factory method that uses the system clock in the default time zone. The primary purpose of this abstraction is to allow alternate clocks to be plugged in as and when required. Applications use an object to obtain the current time rather than a static method. This can simplify testing.
>
>
>
So I wouldn't use the clock directly. Instead, I'd use the `now` methods from each class, depending on what I need.
* If I want the current moment in UTC: `Instant.now()`
* only the current date: `LocalDate.now()`
and so on...
The `now()` method without parameters can be very handy and convenient, but it has some drawbacks. It always uses the JVM default timezone behind the scenes. The problem, though, is that the default timezone can be changed at anytime, either by JVM/system's config or even by any application running in the same VM, and you have no control over it.
To not depend on the default configurations, you can use the alternatives `now(ZoneId)` (which uses an explicity timezone), or `now(Clock)`, which [makes your code more testable](https://stackoverflow.com/questions/27067049/unit-testing-a-class-with-a-java-8-clock) - see examples [here](http://blog.indrek.io/articles/unit-testing-classes-that-depend-on-time/).
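To make the `now(Clock)` idea concrete, here is a small sketch using a fixed clock (the instant and zone below are taken from the question's own output; a fixed clock is mainly useful in tests):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneId;

public class FixedClockDemo {
    public static void main(String[] args) {
        // A fixed clock always reports the same instant, which makes
        // time-dependent code fully deterministic in tests.
        Clock fixed = Clock.fixed(Instant.parse("2018-03-19T10:10:27Z"),
                                  ZoneId.of("Europe/Moscow"));

        System.out.println(Instant.now(fixed));   // 2018-03-19T10:10:27Z (always UTC)
        System.out.println(LocalTime.now(fixed)); // 13:10:27 (instant + zone; Moscow is UTC+3)
    }
}
```

The same three-hour gap from the question reappears here, but now it is reproducible on any machine regardless of the JVM's default zone.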
Upvotes: 0 <issue_comment>username_3: java.time
public abstract class Clock extends Object
A clock providing access to the current instant, date and time using a time-zone.
Instances of this class are used to find the current instant, which can be interpreted using the stored time-zone to find the current date and time. As such, a clock can be used instead of System.currentTimeMillis() and TimeZone.getDefault().
Use of a Clock is optional. All key date-time classes also have a now() factory method that uses the system clock in the default time zone. The primary purpose of this abstraction is to allow alternate clocks to be plugged in as and when required. Applications use an object to obtain the current time rather than a static method. This can simplify testing.
Best practice for applications is to pass a Clock into any method that requires the current instant. A dependency injection framework is one way to achieve this:
```
public class MyBean {
private Clock clock; // dependency inject
...
public void process(LocalDate eventDate) {
if (eventDate.isBefore(LocalDate.now(clock)) {
...
}
}
}
```
This approach allows an alternate clock, such as fixed or offset to be used during testing.
The system factory methods provide clocks based on the best available system clock This may use System.currentTimeMillis(), or a higher resolution clock if one is available.
Implementation Requirements:
This abstract class must be implemented with care to ensure other classes operate correctly. All implementations that can be instantiated must be final, immutable and thread-safe.
The principal methods are defined to allow the throwing of an exception. In normal use, no exceptions will be thrown, however one possible implementation would be to obtain the time from a central time server across the network. Obviously, in this case the lookup could fail, and so the method is permitted to throw an exception.
The returned instants from Clock work on a time-scale that ignores leap seconds, as described in Instant. If the implementation wraps a source that provides leap second information, then a mechanism should be used to "smooth" the leap second. The Java Time-Scale mandates the use of UTC-SLS, however clock implementations may choose how accurate they are with the time-scale so long as they document how they work. Implementations are therefore not required to actually perform the UTC-SLS slew or to otherwise be aware of leap seconds.
Implementations should implement Serializable wherever possible and must document whether or not they do support serialization.
Implementation Note:
The clock implementation provided here is based on System.currentTimeMillis(). That method provides little to no guarantee about the accuracy of the clock. Applications requiring a more accurate clock must implement this abstract class themselves using a different external clock, such as an NTP server.
Since:
1.8
java.time.Clock
public static Clock systemDefaultZone()
Obtains a clock that returns the current instant using the best available system clock, converting to date and time using the default time-zone.
This clock is based on the best available system clock. This may use System.currentTimeMillis(), or a higher resolution clock if one is available.
Using this method hard codes a dependency to the default time-zone into your application. It is recommended to avoid this and use a specific time-zone whenever possible. The UTC clock should be used when you need the current instant without the date or time.
The returned implementation is immutable, thread-safe and Serializable. It is equivalent to system(ZoneId.systemDefault()).
Returns:
a clock that uses the best available system clock in the default zone, not null
See Also:
ZoneId.systemDefault()
java.time.Clock
public abstract Instant instant()
Gets the current instant of the clock.
This returns an instant representing the current instant as defined by the clock.
Returns:
the current instant from this clock, not null
Throws:
DateTimeException - if the instant cannot be obtained, not thrown by most implementations
Upvotes: 1 <issue_comment>username_4: While some of the other Answers have correct information, here is a simple summary.
UTC
===
**`Instant` is in UTC**, always, by definition.
```
Instant instant = Instant.now() // Captures the current moment in UTC. Your default time zone settings are irrelevant.
```
Implicit default time zone
==========================
Calling **`LocalTime.now()` implicitly applies your JVM’s current default time zone.**
When you type this code:
```
LocalTime.now()
```
… the JVM at runtime does this:
```
LocalTime.now( ZoneId.systemDefault() )
```
Not obvious, which is why I recommend *always* passing the desired/expected time zone explicitly as the optional argument.
```
ZoneId z = ZoneId.systemDefault() ; // Make explicit the fact that you are intentionally relying on the user’s JVM’s current default time zone.
LocalTime lt = LocalTime.now( z ) ; // Capture the current moment in the Wall-clock time used by people of a particular region (a time zone).
```
Beware: The user’s JVM’s current default time zone can be changed at any moment during runtime. So if the zone is critical, confirm with the user as to their intended time zone, and pass as optional argument.
```
ZoneId z = ZoneId.of( "Africa/Tunis" ) ; // Or "Europe/Moscow", whatever.
LocalTime lt = LocalTime.now( z ) ;
```
Upvotes: 0
2018/03/19 | 1,670 | 5,195 |
<issue_start>username_0: I need to write a program that, when given a list of integers, finds all 2-pairs of integers that have the same product, i.e. a 2-pair is 2 distinct pairs of integers, let's say [(a,b),(c,d)], where a*b = c*d but a ≠ b ≠ c ≠ d.
The range of integers should be from 1 to 1024. What I would like to implement is that when the web page is opened, the user is prompted by a pop-up in which they enter the array of integers, e.g. `[1,2,3,7,8,9,6]`. For instance, from the input `[1,2,3,7,8,9,6]` the output should be `[(9,2),(3,6)]`, since both evaluate to `18`.
The coding I did so far is very basic and can be seen below. What I've done so far is the pop-up box and reading the input, but I can't seem to understand how to make the program check for the pairs and output them. Thanks in advance to this community, which is helping me out to better understand and learn JavaScript!
I've done my fair bit of research below; these are definitely different questions than mine, but I have gone through them.
* [Find a pair of elements from an array whose sum equals a given number](https://stackoverflow.com/questions/4720271/find-a-pair-of-elements-from-an-array-whose-sum-equals-a-given-number)
* <https://www.w3resource.com/javascript-exercises/javascript-array-exercise-26.php>
**Code:**
```js
function evaluate() {
const input = prompt("Please enter the array of integers in the form: 1,2,3,1")
.split(',')
.map(item => item.trim());
function pairs(items) {
}
if (input == "" || input == null) {
document.writeln("Sorry, there is nothing that can be calculated.");
} else {
document.writeln("Your calculation is: ");
document.writeln(pairs(input) + " with a starting input string of: " + input);
}
}
evaluate()
```<issue_comment>username_1: We could rearrange the equation:
```
a * b = c * d    | divide both sides by b
a = c * d / b
```
So actually we just need to get all different combinations of three numbers (b, c, d) and check if the result (a) is also in the given list:
```
// try every combination of three entries (b, c, d) and check
// whether a = c * d / b is also present in the list
for (const b of items)
  for (const c of items)
    for (const d of items) {
      const a = c * d / b;
      if (items.includes(a + ""))
        return true;
    }
return false;
```
This goes through all the different combinations. You can find an algorithm for enumerating subsets [here](https://stackoverflow.com/questions/5752002/find-all-possible-subset-combos-in-an-array)
Upvotes: 0 <issue_comment>username_2: You could iterate the array and a copy of the array beginning at the current index plus one, to get the products. Store the results in an object with the product as key.
Then get the keys (products) of the object and filter them, keeping only the results with two or more pairs.
```js
var array = [1, 2, 3, 7, 8, 9, 6],
result = {},
pairs;
array.forEach(function (a, i) {
array.slice(i + 1).forEach(function (b) {
(result[a * b] = (result[a * b] || [])).push([a, b]);
});
});
pairs = Object
.keys(result)
.filter(function (k) { return result[k].length >= 2; })
.map(function(k) { return result[k]; });
console.log(pairs);
```
Upvotes: 2 <issue_comment>username_3: Assuming that you are given an array such as `[1,2,3,7,8,9,6]` and a value `18`, and you need to find pairs that multiply to `18`, then use the following approach
**Convert them to a map** - *O(n)*
```
var inputArr = [1,2,3,7,8,9,6];
var map = inputArr.reduce( (acc, c) => {
acc[ c ] = true; //set any truthy value
return acc;
},{});
```
**Iterate `inputArr` and see if its complement is available in the `map`** - *O(n)*
```
var output = [];
var mulValue = 18;
inputArr.forEach( s => {
var remainder = mulValue/s;
if ( map[s] && map[remainder] )
{
output.push( [ s, remainder ] );
map[s] = false;
map[remainder] = false;
}
});
```
**Demo**
```js
var inputArr = [1, 2, 3, 7, 8, 9, 6];
var map = inputArr.reduce((acc, c) => {
acc[c] = true; //set any truthy value
return acc;
}, {});
var output = [];
var mulValue = 18;
inputArr.forEach(s => {
var remainder = mulValue / s;
if (map[s] && map[remainder]) {
output.push([s, remainder]);
map[s] = false;
map[remainder] = false;
}
});
console.log(output);
```
Upvotes: 0 <issue_comment>username_4: You can try something like this:
### Idea:
* Loop over the array to compute the product. Use this iterator (*say `i`*) to get the first operand (*say `op1`*).
* Now loop over the same array again, but with the range starting from `i+1`. This reduces the number of iterations.
* Now create a temp variable that will hold the product and the operands.
* On every iteration, push the value pair onto the product's entry in `hashMap`.
* Now loop over `hashMap` and remove any value whose length is less than 2.
```js
function sameProductValues(arr) {
var hashMap = {};
for (var i = 0; i < arr.length - 1; i++) {
for (var j = i + 1; j < arr.length; j++) {
var product = arr[i] * arr[j];
hashMap[product] = hashMap[product] || [];
hashMap[product].push([arr[i], arr[j]]);
}
}
for(var key in hashMap) {
if( hashMap[key].length < 2 ) {
delete hashMap[key];
}
}
console.log(hashMap)
}
sameProductValues([1, 2, 3, 7, 8, 9, 6])
```
Upvotes: 0
2018/03/19 | 850 | 3,100 |
<issue_start>username_0: I am trying to make sure a function parameter is an async function.
So I am playing around with the following code:
```
async def test(*args, **kwargs):
pass
def consumer(function_: Optional[Coroutine[Any, Any, Any]]=None):
func = function_
consumer(test)
```
But it doesn't work.
I am presented with the following error during type checking in PyCharm:
```none
Expected type 'Optional[Coroutine]', got '(args: Tuple[Any, ...], kwargs: Dict[str, Any]) -> Coroutine[Any, Any, None]' instead
```
Can anyone give me some hints on how to solve this?<issue_comment>username_1: I can't help you too much, especially because right now (PyCharm 2018.2) this error is not raised in PyCharm anymore.
At present, type hints are somewhere between reliable metadata for reflection/introspection and glorified comments which accept anything the user puts in. For normal data structures this is great (my colleague even made a validation framework based on typing), but things get more complicated when callbacks and async functions come into play.
Take a look at these issues:
<https://github.com/python/typing/issues/424> (open as of today) - async typing
<https://github.com/python/mypy/issues/3028> (open as of today) - var-args callable typing
I would go with:
```
from typing import Optional, Coroutine, Any, Callable
async def test(*args, **kwargs):
return args, kwargs
def consumer(function_: Optional[Callable[..., Coroutine[Any, Any, Any]]] = None):
func = function_
return func
consumer(test)
```
I don't guarantee they meant exactly that, but my hint is built like this:
`Optional` - sure, can be `None` or something, in this case:
`Callable` - something which can be invoked with `()`, `...` stands for any argument, and it produces:
`Coroutine[Any, Any, Any]` - this is copied from OP, and very general. You suggest that this `function_` may be `await`-ed, but also receive stuff `send()`-ed by `consumer`, and be `next()`-ed / iterated by it. It may well be the case, but...
If it's just `await`-ed, then the last part could be:
`Awaitable[Any]`, if you actually await for something or
`Awaitable[None]`, if the callback doesn't return anything and you only expect to `await` it.
Note: your `consumer` isn't `async`. It will not really `await` your `function_`, but either `yield from` it, or do some `loop.run_until_complete()` or `.create_task()`, or `.ensure_future()`.
Upvotes: 5 [selected_answer]<issue_comment>username_2: You are looking for:
```
FuncType = Callable[[Any, Any], Awaitable[Any]]
def consumer(function_: FuncType = None):
pass # TODO: do stuff
```
Why is the type structured like that? If you declare a function `async`, what you actually do is wrap it in a new function with the given parameters, which returns a `Coroutine`.
---
Since this might be relevant to some people who come here, this is an example of an `await`able function type with in and out typed:
```
OnAction = Callable[[Foo, Bar], Awaitable[FooBar]]
```
It is a function that takes `Foo`, `Bar` and returns a `FooBar`
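A minimal runnable sketch of the `Callable[..., Awaitable[Any]]` pattern above (the names and values here are illustrative, not from the question):

```python
import asyncio
from typing import Any, Awaitable, Callable, Optional

async def test(*args: Any, **kwargs: Any) -> Any:
    return args, kwargs

def consumer(function_: Optional[Callable[..., Awaitable[Any]]] = None) -> Any:
    # Calling the async function produces an awaitable (a coroutine object);
    # something still has to drive it, e.g. an event loop.
    if function_ is None:
        return None
    return asyncio.run(function_(1, x=2))

print(consumer(test))  # prints: ((1,), {'x': 2})
```

This type-checks because `test` is a callable that, when invoked, returns an awaitable, which is exactly what the hint describes.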
Upvotes: 6
2018/03/19 | 1,193 | 4,267 |
<issue_start>username_0: I am trying to create an `input` field that expands dynamically, at least in width, with the length of the string the user enters, and probably even becomes multiline.
Is that possible with an [`input`](https://material.angular.io/components/input/overview) element in Angular Material 2?
With the `textarea` field from Angular Material 2 I only managed to expand the textarea in height, not in width with the following code:
```html
```
[**also on StackBlitz**](https://stackblitz.com/edit/angular-3ikrwd?file=app%2Finput-autosize-textarea-example.html).
In the case of the `textarea`, the scrollbar should be invisible or replaced by a smaller one. And most importantly, pressing `Enter` should not create a new line but only trigger an action.<issue_comment>username_1: You can use ngStyle to bind width of the mat-form-field to a calculated value, and use the input event on the input to set that value. For example, here's an input who's width follows the text width over 64px:
```
{{elasticInput.value}}
export class InputTextWidthExample {
@ViewChild('hiddenText') textEl: ElementRef;
minWidth: number = 64;
width: number = this.minWidth;
resize() {
setTimeout(() => this.width = Math.max(this.minWidth, this.textEl.nativeElement.offsetWidth));
}
}
```
Obviously, this example uses a hidden span element for getting the text width, which is a little hacky. There is surely more than one way to calculate a string's width, including [this](https://blog.mastykarz.nl/measuring-the-length-of-a-string-in-pixels-using-javascript/).
[Here is the example on Stackblitz](https://stackblitz.com/edit/angular-tca2fb).
Upvotes: 4 [selected_answer]<issue_comment>username_2: I now created a more suitable solution for this problem. After I found a perfect solution in jQuery and @Obsidian added the [corresponding JS code](https://stackoverflow.com/posts/comments/85895149?noredirect=1). I tried to adapt it for Angular `input` and came up with the following.
I also added some scenarios that support cutting and pasting strings.
[**Here is a demo on StackBlitz**](https://stackblitz.com/edit/angular-input-dynamic-width) and the corresponding code:
**Template:**
```html
#invisibleTextID {
white-space: pre;
}
// prevents flickering:
::ng-deep .mat-form-field-flex {
width: 102% !important;
}
{{ inString }}
```
**Resize method:**
```js
@ViewChild('invisibleText') invTextER: ElementRef;
inString: string = '';
width: number = 64;
resizeInput() {
// without setTimeout the width gets updated to the previous length
setTimeout ( () =>{
const minWidth = 64;
if (this.invTextER.nativeElement.offsetWidth > minWidth) {
this.width = this.invTextER.nativeElement.offsetWidth + 2;
} else {
this.width = minWidth;
}
}, 0)
}
```
Upvotes: 1 <issue_comment>username_3: I needed dynamic width for `matInput` inside a `mat-table` (dynamic size) and I experimented with several appraoches based on [*G.Tranter*'s solution](https://stackoverflow.com/a/49365470/8389244). However, viewing a dynamic number of elements depending on the rendered table cells was impractical. Therefore, I cam up with a rather practical solution for this:
Whenever the data is updated, I create an element, fill it with the updated data, add it to the HTML `body` (or `div` in my case), extract the 's `offsetWidth` and remove it immediately from the HTML document. While testing, I couldn't observe any flickering or rendering issue with this (though it is possible, I think).
**Template**
```
...
ifPattern |
|
...
```
**Resize Method**
```
displayedColumns: string[] = ["col1", "col2", "col3"];
minWidth: number = 64; // pixel
columnWidth: number[] = [this.minWidth, this.minWidth, this.minWidth];
resize(column_index: number, value: string) {
setTimeout(() => {
let span = document.createElement('span');
span.innerText = value;
let div = document.getElementById('case-cleaning-overview')
div?.append(span);
this.columnWidth[column_index] = Math.max(this.minWidth, span.offsetWidth);
div?.removeChild(span);
});
}
```
Upvotes: 0 <issue_comment>username_4: A simple, template-only solution:
```
```
Upvotes: 3
|
2018/03/19
| 725 | 2,596 |
<issue_start>username_0: I'm getting started with Angular animations and I'm stuck on this error:
>
> ERROR DOMException: Failed to execute 'animate' on 'Element': Partial
> keyframes are not supported.
>
>
>
I tried to google it without any success.
Here is my code:
app.component.ts:
```
import { Component } from '@angular/core';
import { trigger, state, style, transition, animate } from '@angular/animations';
@Component({
selector: 'app-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css'],
animations: [
trigger('square', [
state('normal', style({
background: 'transparent',
border: '2px solid black',
borderRadius : '0px'
})),
state('wild', style({
background: 'red',
border: 'Opx',
borderRadius:'100px'
})),
transition('normal => wild', animate(300))
])
]
})
export class AppComponent {
public currentState: string = "wild";
}
```
My app.component.html:
```
normal
wild
```
Thank you for your help!<issue_comment>username_1: It seems the problem was coming from my `border: 0px` CSS property within my second state. I replaced it with `"none"` and it works.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Contrary to your personal answer, you don't HAVE a Zero Pixel property. If you read your original question, it actually said Capital-Oh Pixels `Opx`. But the same principle applies: if you have errors or misspellings in your code, it'll break, sometimes in weird ways. Zero Pixels `0px` would work just fine.
Upvotes: 2 <issue_comment>username_3: 'Partial Keyframes are not supported' error mainly happens when you misspell something while writing your animation function.
In the above case you have misspelled the 'border' property value in second state.
The `border` property expects a number like 0, 1, 2. But in the above code, a character 'O' was used instead of '0'. Once you replace the 'O' with an actual zero '0', it will work fine.

Upvotes: 2 <issue_comment>username_4: This error can also arise when a unit for your CSS value is omitted. E.g:
```
trigger('rotate180', [
transition(':enter', [
style({ transform: 'rotate(180)' }), // Doesn't Work!
animate('0.6s 1.4s cubic-bezier(0.65, 0, 0.35, 1)')
]),
state('*', style({ transform: '*' })),
]),
```
Will throw an error. The style should be:
```
{ transform: 'rotate(180deg)' } // Works!
```
Upvotes: 0
|
2018/03/19
| 1,238 | 3,556 |
<issue_start>username_0: I've scoured the internet for an answer to my problem. I am writing some code to input a formula into certain cells on a worksheet, and despite very similar code working perfectly earlier in the macro, this section of code will not work, giving me runtime error 1004: application-defined or object-defined error.
I have tried moving my code into a new workbook but the problem was not solved and I just can't see why it won't work.
The code below is where I define the sheets I am using
```
Sub InputFormulae()
Dim wksht As Worksheet
Dim wksht1 As Worksheet
Dim wksht2 As Worksheet
Dim wksht3 As Worksheet
Dim wksht4 As Worksheet
Dim wksht5 As Worksheet
Set wksht = ThisWorkbook.Worksheets("Coils same day remove & insert")
Set wksht1 = ThisWorkbook.Worksheets("Implants same day remove&insert")
Set wksht2 = ThisWorkbook.Worksheets("Implant inserted NO Removal")
Set wksht3 = ThisWorkbook.Worksheets("Implant inserted AND removed")
Set wksht4 = ThisWorkbook.Worksheets("Coil inserted NO removal")
Set wksht5 = ThisWorkbook.Worksheets("Coil inserted AND removed")
```
The code below is a part of the macro that is working
```
wksht.Activate
With wksht
i = Range("A" & Cells.Rows.Count).End(xlUp).Row
Do Until i = 1
If .Cells(i, 1) <> "" Then
Cells(i, 9).Formula = "=IF(A" & i & "=A" & i + 1 & ",IF(C" & i & "=C" & i + 1 & ",(H" & i & "-C" & i & "),(F" & i + 1 & "-C" & i & ")),IF(A" & i & "=A" & i - 1 & ",IF(C" & i & "=C" & i - 1 & ",(H" & i & "-C" & i & "),(H" & i & "-C" & i & ")),(H" & i & "-C" & i & ")))"
End If
i = i - 1
Loop
End With
```
And the code below here is the part that is not working
```
wksht3.Activate
With wksht3
i = Range("A" & Cells.Rows.Count).End(xlUp).Row
Do Until i = 1
If .Cells(i, 1) <> "" And .Cells(i, 3) <> "" And .Cells(i, 6) <> "" Then
Cells(i, 9).Formula = "=F" & i & "-C" & i & ")"
Else: Cells(i, 9).Value = "0"
End If
i = i - 1
Loop
End With
```
When I debug the code it highlights the Cells(i, 9).Formula = "=F" & i & "-C" & i & ")" line
Thanks for your time<issue_comment>username_1: ```
=F10-C10)
```
is not a valid formula so you get a 1004
Upvotes: 2 <issue_comment>username_2: The error you get is because VBA does not understand `"=F" & i & "-C" & i & ")"`. As far as it is a string, the easiest way to debug is to write either:
`debug.print "=F" & i & "-C" & i & ")"` on the line above and to see the immediate window for the value
or
`MsgBox "=F" & i & "-C" & i & ")"` on the line above and to see the string in a `MsgBox`.
Based on the result you would know how to continue.
Upvotes: 2 <issue_comment>username_3: 1. Start with putting a period in front of every Range and Cells within a With ... End With.
2. Brackets come in pairs.
3. Don't turn real numbers into text-that-looks-like-a-number.
```
wksht3.Activate '<~~ totally unnecessary to use a With ... End With
With wksht3
i = .Range("A" & .Cells.Rows.Count).End(xlUp).Row
Do Until i = 1
If .Cells(i, 1) <> "" And .Cells(i, 3) <> "" And .Cells(i, 6) <> "" Then
.Cells(i, 9).Formula = "=F" & i & "-C" & i
Else
.Cells(i, 9).Value = 0
End If
i = i - 1
Loop
End With
```
Upvotes: 2 <issue_comment>username_4: FWIW, you could also just have your formula do the tests too:
```
With wksht3
i = Range("A" & Cells.Rows.Count).End(xlUp).Row
.Range("I1:I" & i).FormulaR1C1 = "=IF(OR(RC1="""",RC3="""",RC6=""""),0,RC6-RC3)"
End With
```
Upvotes: 0
|
2018/03/19
| 584 | 1,914 |
<issue_start>username_0: I want to cast BaseClass to DerivedClass std vector using a template:
```
template <class T, class U>
vector<U> CastTo<U>(vector<T> &input)
{
    vector<U> output;
    for (auto obj : input)
    {
        output.push_back(dynamic_cast<U>(obj));
    }
    return output;
}
```
Is this possible?
Right now the template is not being recognized and I am not able to use it. Is there a mistake in the proposed way?
The usage is:
```
vector<BaseClass*> vec;
vector<DerivedClass*> dVec = CastTo<DerivedClass*>(vec);
```<issue_comment>username_1: ```
template <class T, class U>
vector<U> CastTo<U>(vector<T> &input)
                ^^^
```
Remove that. (And why the ALL\_UPPERCASE? That's for macros.) Also, be aware that your example call might be confusing `T` and `U`.
Upvotes: 2 <issue_comment>username_2: Two things,
1. don't use the `<U>` in the definition of `CastTo`,
2. swap the order of `T` and `U` in the `template` line.
The first is just how the syntax for function templates goes in C++: no template brackets after their name. (But you will need one when calling the function.)
The second is more tricky. If you had a `template<class T, class U>` function and called `CastTo<DerivedClass*>(vec)`, then `DerivedClass*` would be matched to the first parameter, `T`, leaving no way of determining `U`:
```
CastTo<DerivedClass*>(vec):  the explicit argument fills T = DerivedClass*
                             nothing is left to determine U
> template argument deduction/substitution failed:
> couldn't deduce template parameter ‘U’
```
If it's `template<class U, class T>`, then `U` will be `DerivedClass*` and `T` can be found from the function's argument `vec`:
```
1. CastTo<DerivedClass*>(vec):  the explicit argument fills U = DerivedClass*
2. vec is a vector<BaseClass*>  =>  T = BaseClass*
> OK
```
This works (minimal example using primitive types and a C-style cast):
```
#include <vector>
using namespace std;

template <class U, class T>
vector<U> CastTo(vector<T> &input)
{
    vector<U> output;
    for (auto obj : input)
    {
        output.push_back(U(obj));
    }
    return output;
}
int main() {
    vector<int> v{2,3,5};
    vector<long> w = CastTo<long>(v);
    return w[2]; // returns 5
}
```
Upvotes: 3 [selected_answer]
|
2018/03/19
| 866 | 2,900 |
<issue_start>username_0: I'd like to declare the API version my test application is built with and also display it in the application.
So I declared the version like that in the project's build.gradle:
```
buildscript {
ext {
...
api_version = '0.2.9'
}
...
}
```
Then in my app's build gradle, I use it:
```
android {
....
buildTypes {
release {
...
buildConfigField "String", "api_version", "$api_version"
}
debug {
...
buildConfigField "String", "api_version", "$api_version"
}
}
}
dependencies {
....
implementation "com.example.service:my_api:$api_version"
}
```
And finally, I use it in my app:
```
supportActionBar?.title = """${getString(R.string.app_name)} $VERSION_NAME API:${BuildConfig.api_version}"""
```
But on build I get the following error in the generated `BuildConfig.java` file:
```
public final class BuildConfig {
// Fields from build type: debug
public static final String api_version = 0.2.9;
}
```
The error is
```
......\BuildConfig.java
Error:(14, 54) error: ';' expected
```
I suppose BuildConfig.java should contain:
```
public static final String api_version = "0.2.9";
```
But I don't understand how to write it.<issue_comment>username_1: Try using double quotes when you declare the variable.
```
buildscript {
ext {
...
api_version = "0.2.9"
}
...
}
```
Upvotes: -1 <issue_comment>username_2: Use it like this:
```
buildscript {
}
ext {
androidCompileSdkVersion = 26
androidBuildToolsVersion = '26.0.2'
androidMinSdkVersion = 17
androidTargetSdkVersion = 26
}
```
you can access like `rootProject.ext.androidCompileSdkVersion`
Upvotes: 1 <issue_comment>username_3: In fact the right syntax is:
```
buildTypes {
release {
...
buildConfigField "String", "api_version", "\"$api_version\""
}
debug {
...
buildConfigField "String", "api_version", "\"$api_version\""
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_4: Use the following structure.
```
buildscript{
.........
}
android{
defaultConfig{
.....
}
buildTypes{
....
}
}
def support_package_version = "27.0.2"
dependencies{
.......
implementation "com.android.support:appcompat-v7:${support_package_version}"
..........
.......
}
```
Upvotes: 0 <issue_comment>username_5: I used a variable like this:
Declare a variable called **DAGER\_VERSION** and give it a `String` value.
>
> def DAGER\_VERSION = "2.21"
>
>
>
In the Android app's gradle file, inside the dependencies block,
you should write it this way.
>
You need to use double quotes to interpolate a variable in Groovy.
>
>
>
```
implementation "com.google.dagger:dagger:${DAGER_VERSION}"
annotationProcessor "com.google.dagger:dagger-compiler:${DAGER_VERSION}"
```
Upvotes: 1
|
2018/03/19
| 1,324 | 2,663 |
<issue_start>username_0: How do I remove the "padding" around a background image?
Here is a demo :
<https://jsbin.com/dobucizaqi/edit?html,css,output>
```css
.foo {
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23f39c12' d='M3,19V5A2,2 0 0,1 5,3H19A2,2 0 0,1 21,5V19A2,2 0 0,1 19,21H5A2,2 0 0,1 3,19M17,12L12,7V10H8V14H12V17L17,12Z'/%3E%3C/svg%3E");
background-repeat: no-repeat;
background-size: 8em 8em;
height: 8em;
width: 8em;
}
.bar {
background-color: red;
align-items: center;
display: flex;
}
```
```html
Lorem ipsum
```
I tried to remove margins and paddings, without success.
I want both left borders to be aligned, and the "padding" removed both horizontally and vertically.
Thanks.
V.<issue_comment>username_1: Simply adjust the viewBox of the SVG:
```css
.foo {
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='24' height='24' viewBox='3 3 18 18'%3E%3Cpath fill='%23f39c12' d='M3,19V5A2,2 0 0,1 5,3H19A2,2 0 0,1 21,5V19A2,2 0 0,1 19,21H5A2,2 0 0,1 3,19M17,12L12,7V10H8V14H12V17L17,12Z'/%3E%3C/svg%3E");
background-repeat: no-repeat;
background-size: 8em 8em;
height: 8em;
width: 8em;
}
.bar {
background-color: red;
align-items: center;
display: flex;
}
```
```html
Lorem ipsum
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Either you can adjust the viewbox of SVG, or you can do it using CSS:
```
.foo {
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23f39c12' d='M3,19V5A2,2 0 0,1 5,3H19A2,2 0 0,1 21,5V19A2,2 0 0,1 19,21H5A2,2 0 0,1 3,19M17,12L12,7V10H8V14H12V17L17,12Z'/%3E%3C/svg%3E");
background-repeat: no-repeat;
background-size: 130%;
background-color: rgba(0,0,0,0.25);
background-position: -10px -10px;
height: 4em;
width: 4em;
}
```
**Snippet**
```css
.foo {
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='24' height='24' viewBox='0 0 24 24'%3E%3Cpath fill='%23f39c12' d='M3,19V5A2,2 0 0,1 5,3H19A2,2 0 0,1 21,5V19A2,2 0 0,1 19,21H5A2,2 0 0,1 3,19M17,12L12,7V10H8V14H12V17L17,12Z'/%3E%3C/svg%3E");
background-repeat: no-repeat;
background-size: 130%;
background-position: -10px -10px;
height: 4em;
width: 4em;
}
.bar {
background-color: red;
align-items: center;
display: flex;
}
```
```html
Lorem ipsum
```
**Preview**
[](https://i.stack.imgur.com/ozYuh.png)
Upvotes: 2
|
2018/03/19
| 1,093 | 2,330 |
<issue_start>username_0: I know the default block size is 64 MB, so the split size is 64 MB.
For files smaller than 64 MB, when the number of nodes increases from 1 to 6, there will still be only one node working on the single split, so the speed will not improve. Is that right?
If it is a 128 MB file, there will be 2 nodes working on the 2 splits, which is faster than 1 node; if there are more than 3 nodes, the speed doesn't increase. Is that right?
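The arithmetic behind that intuition can be sketched quickly (a hypothetical helper of my own, assuming the 64 MB split size from the question and one map task per split):

```python
import math

SPLIT_MB = 64  # split size assumed in the question

def num_splits(file_mb: float) -> int:
    # One map task per input split: a file at or below one split size
    # yields a single split, so only one node can work on it no matter
    # how many nodes the cluster has.
    return max(1, math.ceil(file_mb / SPLIT_MB))

print(num_splits(50))   # 1 -> extra nodes give no speedup
print(num_splits(128))  # 2 -> at most 2 nodes help in parallel
```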
I don't know if my understanding is correct. Thanks for any comment!
|
2018/03/19
| 752 | 2,593 |
<issue_start>username_0: I have two sheets , Sheet1 and sheet2 .
Sheet 1 is my Source sheet and I am mentioning the item number in column A.
Sheet 2 is my destination sheet the contains the list of item number from the data base.
I am comparing the column A of source sheet with column E of my destination sheet, if they both have same item number then I am deleting the entire row.
I am using the below code for this. Out of 6 item numbers, 4 get deleted and 2 do not.
But when I copy the same item number from the destination sheet to the source sheet, it does get deleted. I am not sure why this is happening. Could anyone guide me on how to figure this out?
below is the code
```
Sub spldel()
Dim srcLastRow As Long, destLastRow As Long
Dim srcWS As Worksheet, destWS As Worksheet
Dim i As Long, j As Long
Application.ScreenUpdating = False
Set srcWS = ThisWorkbook.Sheets("sheet1")
Set destWS = ThisWorkbook.Sheets("sheet2")
srcLastRow = srcWS.Cells(srcWS.Rows.count, "A").End(xlUp).Row
destLastRow = destWS.Cells(destWS.Rows.count, "E").End(xlUp).Row
For i = 5 To destLastRow - 1
For j = 1 To srcLastRow
' compare column E of both the sheets
If destWS.Cells(i, "E").Value = srcWS.Cells(j, "A").Value Then
destWS.Cells(i, "E").EntireRow.delete
End If
Next j
Next i
End Sub
```<issue_comment>username_1: Remember to loop in reverse order when you are trying to delete rows; otherwise rows may be skipped from deletion even when they meet the deletion criteria.
So the two For loops should be like this....
```
For i = destLastRow - 1 To 5 Step -1
For j = srcLastRow To 1 Step -1
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Here is another approach:
Rather than looping through each item every time in your source and destination sheets, just use the `MATCH` function:
```
Function testThis()
Dim destWS As Worksheet: Set destWS = ThisWorkbook.Worksheets("Sheet8") ' Change to your source sheet
Dim srcWS As Worksheet: Set srcWS = ThisWorkbook.Worksheets("Sheet12") ' Change to your destination sheet
Dim iLR As Long: iLR = srcWS.Range("L" & srcWS.Rows.count).End(xlUp).Row ' Make sure you change the column to get the last row from
Dim iC As Long
Dim lRetVal As Long
On Error Resume Next
For iC = 1 To iLR
lRetVal = Application.WorksheetFunction.Match(srcWS.Range("L" & iC), destWS.Range("A:A"), 0)
If Err.Number = 0 Then
destWS.Range("A" & lRetVal).EntireRow.Delete
End If
Err.Clear
Next
On Error GoTo 0
End Function
```
Upvotes: 1
|
2018/03/19
| 2,464 | 5,907 |
<issue_start>username_0: I am trying to convert a `dictionary` to a `pandas` `dataframe`. I am having trouble with a few things. I tried the following
```
data = {'applicableMargin': '12.50', 'marketType': 'N', 'totalBuyQuantity': '1,14,514', 'buyPrice1': '1,546.30', 'dayLow': '1,541.20', 'symbol': 'ACC', 'cm_adj_low_dt': '23-MAR-17', 'open': '1,571.50', 'sellPrice2': '1,547.85', 'sellPrice4': '1,547.95', 'cm_ffm': '13,249.84', 'buyPrice3': '1,546.00', 'css_status_desc': 'Listed', 'ndStartDate': '-', 'buyQuantity1': '43', 'totalTradedValue': '1,468.42', 'surv_indicator': '-', 'recordDate': '26-JUL-17', 'secDate': '16MAR2018', 'faceValue': '10.00', 'totalTradedVolume': '94,384', 'pricebandlower': '1,411.20', 'sellQuantity4': '16', 'averagePrice': '1,555.79', 'buyPrice2': '1,546.05', 'totalSellQuantity': '84,873', 'closePrice': '0.00', 'buyPrice4': '1,545.90', 'extremeLossMargin': '5.00', 'isinCode': 'INE012A01025', 'buyQuantity4': '48', 'sellPrice3': '1,547.90', 'bcEndDate': '-', 'buyQuantity5': '27', 'indexVar': '-', 'purpose': 'INTERIM DIVIDEND - RS 11/- PER SHARE', 'sellQuantity5': '286', 'series': 'EQ', 'low52': '1,380.40', 'dayHigh': '1,573.70', 'pricebandupper': '1,724.70', 'basePrice': '1,567.95', 'lastPrice': '1,546.05', 'sellQuantity2': '32', 'deliveryToTradedQuantity': '50.45', 'high52': '1,869.95', 'cm_adj_high_dt': '13-SEP-17', 'sellQuantity1': '67', 'buyQuantity2': '155', 'isExDateFlag': False, 'quantityTraded': '2,53,481', 'previousClose': '1,567.95', 'securityVar': '5.74', 'bcStartDate': '-', 'sellQuantity3': '25', 'ndEndDate': '-', 'buyQuantity3': '31', 'companyName': 'ACC Limited', 'sellPrice1': '1,547.65', 'adhocMargin': '-', 'sellPrice5': '1,548.00', 'change': '-21.90', 'exDate': '25-JUL-17', 'varMargin': '7.50', 'pChange': '-1.40', 'buyPrice5': '1,545.85', 'priceBand': 'No Band'}
pd_cols = []
for i in data:
pd_cols.append(i)
#fut_data = pd.DataFrame()
#fut_data.columns = pd_cols
fut_data = pd.DataFrame(data.items(), columns=pd_cols)
```
This gives error:
>
> Traceback (most recent call last):
> File "", line 1, in
> File "C:\Python34\lib\site-packages\pandas\core\frame.py", line 345, in >**init**
> raise PandasError('DataFrame constructor not properly called!')
> pandas.core.common.PandasError: DataFrame constructor not properly called!
>
>
>
After this I will have a bunch more `dict`s, which will all have the same `columns`. I want to add them all to the same `DataFrame`.
Thanks in advance.<issue_comment>username_1: Does this give you the output you want?
```
import pandas as pd
data = {'applicableMargin': '12.50', 'marketType': 'N', 'totalBuyQuantity': '1,14,514', 'buyPrice1': '1,546.30', 'dayLow': '1,541.20', 'symbol': 'ACC', 'cm_adj_low_dt': '23-MAR-17', 'open': '1,571.50', 'sellPrice2': '1,547.85', 'sellPrice4': '1,547.95', 'cm_ffm': '13,249.84', 'buyPrice3': '1,546.00', 'css_status_desc': 'Listed', 'ndStartDate': '-', 'buyQuantity1': '43', 'totalTradedValue': '1,468.42', 'surv_indicator': '-', 'recordDate': '26-JUL-17', 'secDate': '16MAR2018', 'faceValue': '10.00', 'totalTradedVolume': '94,384', 'pricebandlower': '1,411.20', 'sellQuantity4': '16', 'averagePrice': '1,555.79', 'buyPrice2': '1,546.05', 'totalSellQuantity': '84,873', 'closePrice': '0.00', 'buyPrice4': '1,545.90', 'extremeLossMargin': '5.00', 'isinCode': 'INE012A01025', 'buyQuantity4': '48', 'sellPrice3': '1,547.90', 'bcEndDate': '-', 'buyQuantity5': '27', 'indexVar': '-', 'purpose': 'INTERIM DIVIDEND - RS 11/- PER SHARE', 'sellQuantity5': '286', 'series': 'EQ', 'low52': '1,380.40', 'dayHigh': '1,573.70', 'pricebandupper': '1,724.70', 'basePrice': '1,567.95', 'lastPrice': '1,546.05', 'sellQuantity2': '32', 'deliveryToTradedQuantity': '50.45', 'high52': '1,869.95', 'cm_adj_high_dt': '13-SEP-17', 'sellQuantity1': '67', 'buyQuantity2': '155', 'isExDateFlag': False, 'quantityTraded': '2,53,481', 'previousClose': '1,567.95', 'securityVar': '5.74', 'bcStartDate': '-', 'sellQuantity3': '25', 'ndEndDate': '-', 'buyQuantity3': '31', 'companyName': 'ACC Limited', 'sellPrice1': '1,547.65', 'adhocMargin': '-', 'sellPrice5': '1,548.00', 'change': '-21.90', 'exDate': '25-JUL-17', 'varMargin': '7.50', 'pChange': '-1.40', 'buyPrice5': '1,545.85', 'priceBand': 'No Band'}
df = pd.DataFrame.from_dict([data])
print(df.iloc[:,:5])
```
When I run the above I get a 1-row dataframe:
```
adhocMargin applicableMargin averagePrice basePrice bcEndDate
0 - 12.50 1,555.79 1,567.95 -
```
If you have multiple similar dicts, place them all in a list like so:
```
df = pd.DataFrame.from_dict([data1,data2])
```
That results in a dataframe with one row per dict.
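A tiny self-contained version of the same idea (toy dicts standing in for the full quote dictionaries; the plain `DataFrame` constructor accepts a list of dicts the same way):

```python
import pandas as pd

# Hypothetical miniature records standing in for the big quote dicts.
row1 = {"symbol": "ACC", "open": "1,571.50"}
row2 = {"symbol": "XYZ", "open": "2,000.00"}

df = pd.DataFrame([row1, row2])  # one row per dict
print(df.shape)  # (2, 2)
```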
Upvotes: 1 <issue_comment>username_2: This works for me. Since this errors for you, you may have a copy-paste error.
```
fut_data = pd.DataFrame.from_dict(data, orient='index').T
print(fut_data)
# applicableMargin marketType totalBuyQuantity buyPrice1 dayLow symbol \
# 0 12.50 N 1,14,514 1,546.30 1,541.20 ACC
# cm_adj_low_dt open sellPrice2 sellPrice4 ... companyName \
# 0 23-MAR-17 1,571.50 1,547.85 1,547.95 ... ACC Limited
# buyPrice5 priceBand
# 0 1,545.85 No Band
# [1 rows x 67 columns]
```
You can append as follows:
```
df = pd.DataFrame.from_dict(data, orient='index').T
df = df.append(pd.DataFrame.from_dict(data2, orient='index').T)
```
Here `data2` is *another* similar dictionary.
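Note that `DataFrame.append` was removed in pandas 2.0; on current versions the same row-stacking is done with `pd.concat`. A rough sketch with toy dicts (not the question's real data):

```python
import pandas as pd

d1 = {"symbol": "ACC", "change": "-21.90"}
d2 = {"symbol": "XYZ", "change": "+3.10"}

# Build one single-row frame per dict, then stack them.
df = pd.concat(
    [pd.DataFrame.from_dict(d, orient="index").T for d in (d1, d2)],
    ignore_index=True,
)
print(len(df))  # 2
```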
Upvotes: 3 [selected_answer]<issue_comment>username_3: Although the previous answers are better, you can try this absurd solution if it works for you:
```
fut_data = pd.DataFrame(data,index=[0])
```
for adding more rows, you may try:
```
fut_data1 = pd.DataFrame(data1,index=[1])
fut_data.append(fut_data1)
```
or
```
fut_data1 = pd.DataFrame(data1,index=[i]) #where i is a loop variable
fut_data.append(fut_data1)
```
Upvotes: 0
|
2018/03/19
| 506 | 1,733 |
<issue_start>username_0: I have to check for the presence of a substring in a string in Python. The problem comes from the fact that the substring contains a special character.
I am reading a feature from a csv file. The feature is a distance with numbers and its units:
```
12.4 miles
34 Kilómetros
800 metros
```
I have to read the feature, check the units and convert to metres.
```
for line in filename:
if 'miles' in line: #checking for miles is straight forward
#do whatever I have to do
if 'Kilómetros' in line: #the problem is here
#do whatever I have to do
```
Komodo will not let me save my .py file because of the special character in `Kilómetros`. Any help? Even if Komodo lets me save the file, would this work?<issue_comment>username_1: Do the exact opposite: check that all the characters are in a list of characters you want. e.g.
```
text1 = 'abcabcabcabcabcabcabcabcabcabc'
for char in text1:
if char not in ['a', 'b', 'c']:
print('oops',text1)
text2 = 'abcabcabcabcaΞΞΞbcabcabcabcabcabc'
for char in text2:
if char not in ['a', 'b', 'c']:
print('oops',text2)
```
Upvotes: -1 <issue_comment>username_2: Komodo attempts to detect and set which encoding your file is using when the file is first opened. It might have missed the mark. You can see which encoding Komodo chose in the status bar at the top of the text editing area. Click the drop down to change it.
[](https://i.stack.imgur.com/7EbzQ.png)
For future Komodo questions you should use the [Komodo Forums](https://community.komodoide.com/).
Upvotes: 3 [selected_answer]
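As a footnote on the "would this work?" part of the question: in Python 3, `str` is Unicode, so the membership test itself handles the accented substring fine once the file is saved with a matching encoding. A quick sketch (the conversion logic is made up for illustration):

```python
# -*- coding: utf-8 -*-
lines = ["12.4 miles", "34 Kilómetros", "800 metros"]

for line in lines:
    if 'Kilómetros' in line:  # non-ASCII substring test works in Python 3
        metres = float(line.split()[0]) * 1000
        print(metres)  # 34000.0
```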
|
2018/03/19
| 349 | 1,327 |
<issue_start>username_0: How do I create a TFS query returning all Product Backlog Items that have State=Accepted OR have child items with State=Accepted?
In the below example, "Commandline Util" should be included in the query result even when its State is "New", because it has a child with State "Accepted".
[Click for query example](https://i.stack.imgur.com/LIc8E.png)<issue_comment>username_1: You can do it in something like this ...
In your query you have to choose "Tree of workitems".
Adjust the fields you want to use to adapt my example where I'm using State to your Blocked field.
You can add/remove filters as needed.
Query result will bring all top level workitems matching your filters plus related children that match the children's filters.
Change the last filter options where it says match top-level workitems first to Match linked work items first.
[](https://i.stack.imgur.com/o3g22.png)
Upvotes: 0 <issue_comment>username_2: Just try the below "Tree of workitems" query (just replace Done with **Accepted** in your scenario):
Note to select **Match linked work items first** under Filter options.
[](https://i.stack.imgur.com/O06xn.png)
Upvotes: 3 [selected_answer]
|
2018/03/19
| 442 | 1,617 |
<issue_start>username_0: I understood the concept of using throttle in redux-saga.
But I have a quick question: when does the given timer start?
Example =>
`throttle(500, 'INPUT_CHANGED', handleInput)`
```
Does it start as soon as the handler method begins running,
without regard to when the method completes?
```
or
```
Or does it start only once the handler method has finished?
```<issue_comment>username_1: I haven't used `throttle` yet, but my understanding is as follows:
You should use `throttle` at the same position where you use `takeLatest` or `takeEvery`.
For example:
```js
// saga.js
function* handleInput(action) {
// This code will not be called more than once every 500ms
}
function* handleSomething(action) {
// This code will be called on every 'SOMETHING_CHANGED' action
}
export default function* rootSaga() {
yield all([
takeEvery('SOMETHING_CHANGED', handleSomething),
throttle(500, 'INPUT_CHANGED', handleInput),
])
}
// store.js
const sagaMiddleware = createSagaMiddleware()
sagaMiddleware.run(rootSaga)
```
I hope this was helpful for you.
Upvotes: 2 <issue_comment>username_2: [throttle](https://redux-saga.js.org/docs/api/#throttlems-pattern-saga-args "throttle") => Concerning the first argument :
*ms: Number - length of a time window in milliseconds during which actions will be ignored **after the action starts processing***
I believe the timer begins right after the action starts processing, and that the completion of the task doesn't matter.
(Should have better read the doc, my bad)
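The semantics can be mimicked outside JavaScript too; here is a rough Python sketch of that behaviour (my own illustration, not redux-saga code): the window opens when the action starts being processed, and the handler's completion time plays no role:

```python
import time

def make_throttled(window_s, handler):
    last = [float("-inf")]  # time the window last opened
    def dispatch(action):
        now = time.monotonic()
        if now - last[0] >= window_s:
            last[0] = now          # window starts as processing begins
            handler(action)        # how long this takes is irrelevant
        # actions arriving inside the window are simply ignored
    return dispatch

calls = []
throttled = make_throttled(0.5, calls.append)
throttled("a")
throttled("b")   # arrives inside the 500 ms window -> dropped
print(calls)     # ['a']
```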
Upvotes: 2 [selected_answer]
|
2018/03/19
| 1,012 | 4,278 |
<issue_start>username_0: I'm trying to set the background color of one child in my `listView` but for whatever reason the entire list gets the background color.
This is the selector
```
xml version="1.0" encoding="utf-8" ?
```
I've put it in the `listView` here
```
xml version="1.0" encoding="utf-8"?
```
I've also tried setting the background color in the `ListFragment` but for some reason I don't get the color that way at all.
Which I do like this
```
@Override
public void onStart() {
super.onStart();
ListView view = getListView();
TextView previousSelected = null;
int selectedPosition = 0;
if(view != null) {
int adapterSize = view.getAdapter().getCount();
if (selectedView != null) {
for (int i = 0; i < adapterSize; i++) {
Log.d(TAG, "view getchildat " + view.getAdapter().getView(i, null, view) + " selected " + selectedView);
if (((TextView) view.getAdapter().getView(i, null, view)).getText().toString().equalsIgnoreCase(((TextView) selectedView).getText().toString())){
previousSelected = (TextView) view.getAdapter().getView(i, null, view);
Log.d(TAG, "selected is " + previousSelected.getText().toString());
view.setSelection(i);
view.setSelected(true);
view.setFocusable(true);
selectedPosition = i;
}
}
} else if (tagScan.getLastFeedback()[0] != null) {
Log.d(TAG, "count fragment is " + count);
for(int i = 0; i < adapterSize; i++){
Log.d(TAG, "view getchildat " + ((TextView)view.getAdapter().getView(i, null, view)).getText().toString() + " selected " + tagScan.getLastFeedback()[count]);
if (((TextView) view.getAdapter().getView(i, null, view)).getText().toString().equalsIgnoreCase(tagScan.getLastFeedback()[count])){
previousSelected = (TextView) view.getAdapter().getView(i, null, view);
Log.d(TAG, "selected is " + previousSelected.getText().toString());
view.setSelection(i);
view.setSelected(true);
view.setFocusable(true);
selectedPosition = i;
}
}
}
if(previousSelected != null){
Log.d(TAG,"set color of the previous selection " + previousSelected.getText().toString());
previousSelected.setBackgroundColor(Color.BLUE);
previousSelected.setTextColor(Color.WHITE);
selectedView = previousSelected;
}
}
}
```<issue_comment>username_1: ```
new ArrayAdapter(this, R.layout.feedback_list, feedback.getOptions()){
@Override
public View getView(int position, View convertView, ViewGroup parent) {
View view = super.getView(position, convertView, parent);
view.setBackgroundColor(Color.parseColor("#4286f4")); // change color code here as you want
return view;
}
};
```
**OR:**
change your `R.layout.feedback_list` if you don't want to set item background colors at *runtime*:
```
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you are using a custom adapter, you can set the background color with `view.setBackgroundColor(Color)` for a specific view. Create a new class for better clarity.
```
public class MyAdapter extends ArrayAdapter {  // ArrayAdapter provides the super(ctx, layout, list) constructor used below

    public MyAdapter(List list, Context ctx) {
        super(ctx, R.layout.row_layout, list);
        this.list = list;
        this.context = ctx;
    }

    public View getView(int position, View convertView, ViewGroup parent) {
        // inflate layout and reuse view for better performance
        if (convertView == null) {
            LayoutInflater inflater = (LayoutInflater)
                context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
            convertView = inflater.inflate(R.layout.row_layout, parent, false);
        }
        ...
        /*
         * HERE YOU SET YOUR BACKGROUND COLOR FOR THE VIEW
         */
        convertView.setBackgroundColor(0xFF00FF00);
        ...
        return convertView;
    }

    // some other inherited methods from ArrayAdapter
}
```
Upvotes: -1
|
2018/03/19
| 2,355 | 7,343 |
<issue_start>username_0: I am working on a dataset with more than 230 variables, among which I have about 60 categorical variables with more than six levels each (with no way to impose a preference ordering; example: Color).
My question is about any function that can help me recode these variables without doing it by hand, which requires a lot of work and time, with a risk of making many mistakes!
I can work with **R** and **python**, so feel free to suggest the most efficient function that can do the job.
let's say, I have the dataset called `df` and the set of factorial columns is
```
clm=(clm1, clm2,clm3,....,clm60)
```
all of them are factors with a lot of levels:
```
(min=2, max=not important [may be 10, 30 or 100...etc])
```
Your help is much appreciated!<issue_comment>username_1: Here is a short example using `model.matrix` that should get you started:
```
df <- data.frame(
clm1 = gl(2, 6, 12, c("clm1.levelA", "clm1.levelB")),
clm2 = gl(3, 4, 12, c("clm2.levelA", "clm2.levelB", "clm2.levelC")));
# clm1 clm2
#1 clm1.levelA clm2.levelA
#2 clm1.levelA clm2.levelA
#3 clm1.levelA clm2.levelA
#4 clm1.levelA clm2.levelA
#5 clm1.levelA clm2.levelB
#6 clm1.levelA clm2.levelB
#7 clm1.levelB clm2.levelB
#8 clm1.levelB clm2.levelB
#9 clm1.levelB clm2.levelC
#10 clm1.levelB clm2.levelC
#11 clm1.levelB clm2.levelC
#12 clm1.levelB clm2.levelC
as.data.frame.matrix(model.matrix(rep(0, nrow(df)) ~ 0 + clm1 + clm2, df));
# clm1clm1.levelA clm1clm1.levelB clm2clm2.levelB clm2clm2.levelC
#1 1 0 0 0
#2 1 0 0 0
#3 1 0 0 0
#4 1 0 0 0
#5 1 0 1 0
#6 1 0 1 0
#7 0 1 1 0
#8 0 1 1 0
#9 0 1 0 1
#10 0 1 0 1
#11 0 1 0 1
#12 0 1 0 1
```
Upvotes: 2 <issue_comment>username_2: With `pandas` in `python3`, you can do something like:
```
import pandas as pd
df = pd.DataFrame({'clm1': ['clm1a', 'clm1b', 'clm1c'], 'clm2': ['clm2a', 'clm2b', 'clm2c']})
pd.get_dummies(df)
```
See the [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html) for more examples.
Upvotes: 0 <issue_comment>username_3: In R, the problem with the model.matrix approach proposed by @username_1 is that, except for the first factor, the function drops the first level of each factor. Sometimes this is what you want but sometimes it is not (depending on the problem, as underlined by @username_1).
There are several functions scattered in different packages to do that (eg package `caret` see [here](https://stackoverflow.com/questions/11952706/) for several examples).
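For instance, `caret`'s `dummyVars` does this in two lines (a sketch from memory — check the caret documentation for the exact arguments):

```r
library(caret)

# Build the encoding recipe from all factors in df, then apply it.
# fullRank = FALSE keeps one indicator column per level (no level dropped).
dv <- dummyVars(~ ., data = df, fullRank = FALSE)
df_encoded <- as.data.frame(predict(dv, newdata = df))
```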
I use the following function inspired by this [Stack Overflow answer](https://stackoverflow.com/a/48773476/2417437) by @Jaap
```
#'
#' Transform factors from a data.frame into dummy variables (one hot encoding)
#'
#' This function will transform all factors into dummy variables with one column
#' for each level of the factor (unlike the contrasts matrices that will drop the first
#' level). The factors with only two levels will have only one column (0/1 on the second
#' level). The ordered factors and logicals are transformed into numeric.
#' The numeric and text vectors will remain untouched.
#'
```
```r
make_dummies <- function(df){
# function to create dummy variables for one factor only
dummy <- function(fac, name = "") {
if(is.factor(fac) & !is.ordered(fac)) {
l <- levels(fac)
res <- outer(fac, l, function(fac, l) 1L * (fac == l))
colnames(res) <- paste0(name, l)
if(length(l) == 2) {res <- res[,-1, drop = F]}
if(length(l) == 1) {res <- res}
} else if(is.ordered(fac) | is.logical(fac)) {
res <- as.numeric(fac)
} else {
res <- fac
}
return(res)
}
# Apply this function to all columns
res <- (lapply(df, dummy))
# change the names of the cases with only one column
for(i in seq_along(res)){
if(any(is.matrix(res[[i]]) & ncol(res[[i]]) == 1)){
colnames(res[[i]]) <- paste0(names(res)[i], ".", colnames(res[[i]]))
}
}
res <- as.data.frame(res)
return(res)
}
```
Example :
```r
df <- data.frame(num = round(rnorm(12),1),
sex = factor(c("Male", "Female")),
color = factor(c("black", "red", "yellow")),
fac2 = factor(1:4),
fac3 = factor("A"),
size = factor(c("small", "middle", "big"),
levels = c("small", "middle", "big"), ordered = TRUE),
logi = c(TRUE, FALSE))
print(df)
#> num sex color fac2 fac3 size logi
#> 1 0.0 Male black 1 A small TRUE
#> 2 -1.0 Female red 2 A middle FALSE
#> 3 1.3 Male yellow 3 A big TRUE
#> 4 1.4 Female black 4 A small FALSE
#> 5 -0.9 Male red 1 A middle TRUE
#> 6 0.1 Female yellow 2 A big FALSE
#> 7 1.4 Male black 3 A small TRUE
#> 8 0.1 Female red 4 A middle FALSE
#> 9 1.6 Male yellow 1 A big TRUE
#> 10 1.1 Female black 2 A small FALSE
#> 11 0.2 Male red 3 A middle TRUE
#> 12 0.3 Female yellow 4 A big FALSE
make_dummies(df)
#> num sex.Male color.black color.red color.yellow fac2.1 fac2.2 fac2.3
#> 1 0.0 1 1 0 0 1 0 0
#> 2 -1.0 0 0 1 0 0 1 0
#> 3 1.3 1 0 0 1 0 0 1
#> 4 1.4 0 1 0 0 0 0 0
#> 5 -0.9 1 0 1 0 1 0 0
#> 6 0.1 0 0 0 1 0 1 0
#> 7 1.4 1 1 0 0 0 0 1
#> 8 0.1 0 0 1 0 0 0 0
#> 9 1.6 1 0 0 1 1 0 0
#> 10 1.1 0 1 0 0 0 1 0
#> 11 0.2 1 0 1 0 0 0 1
#> 12 0.3 0 0 0 1 0 0 0
#> fac2.4 fac3.A size logi
#> 1 0 1 1 1
#> 2 0 1 2 0
#> 3 0 1 3 1
#> 4 1 1 1 0
#> 5 0 1 2 1
#> 6 0 1 3 0
#> 7 0 1 1 1
#> 8 1 1 2 0
#> 9 0 1 3 1
#> 10 0 1 1 0
#> 11 0 1 2 1
#> 12 1 1 3 0
```
Created on 2018-03-19 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
Upvotes: 0
|
2018/03/19
| 2,432 | 7,470 |
<issue_start>username_0: I am parsing an excel file [session_data.csv]; the file looks as follows:
```
Case, StartTime, EndTime
Case_1=Covering 3 time-slots,T00:00:00,T05:00:00
Case_2=Covering multiple time-slots,T00:15:00
Case_3=Covering one time-slot,T00:18:00,T00:47:00
```
I am parsing the file as follows:
```
${LIST}= Process Data File ${CURDIR}/session_data.csv
: FOR ${LINE} IN @{LIST}
\ Log ${LINE}
\ @{COLUMNS}= Split String ${LINE} separator=,
\ ${TESTCASE}= Get From List ${COLUMNS} 0
\ ${STARTTIME}= Get From List ${COLUMNS} 1
\ ${ENDTIME}= Get From List ${COLUMNS} 2
```
Now I do not need to run the loop body for every case, but only for the rows where the Case column starts with 'Case_2' or 'Case_3'.
How can I add that condition?
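For reference, one common way to express such a condition is BuiltIn's `Continue For Loop If`, which skips the rest of the iteration when the condition holds (a sketch — keyword names and the old `: FOR` syntax should be checked against your Robot Framework version):

```robotframework
${LIST}=    Process Data File    ${CURDIR}/session_data.csv
: FOR    ${LINE}    IN    @{LIST}
\    @{COLUMNS}=    Split String    ${LINE}    separator=,
\    ${TESTCASE}=    Get From List    ${COLUMNS}    0
\    # Skip this row unless the case starts with Case_2 or Case_3
\    Continue For Loop If    not '${TESTCASE}'.startswith(('Case_2', 'Case_3'))
\    ${STARTTIME}=    Get From List    ${COLUMNS}    1
```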
|
2018/03/19
| 1,085 | 4,058 |
<issue_start>username_0: I've encountered a problem while developing an application using ASP.NET Core, Angular 5 and SignalR.
As a test, I made a simple chat app based on this sample : <https://codingblast.com/asp-net-core-signalr-chat-angular/>
But, when I modify the chat.component.ts file from
```
this.hubConnection.on('sendToAll', (nickname: string, message: string) => { this.messages.push(new Message(nickname, message)); });
// or this line
this.hubConnection.on('sendToAll', (nickname: string, message: string) => { this.addMessage(nickname, message); });
```
to
```
this.hubConnection.on('sendToAll', this.addMessage);
```
the DOM is not updated.
I've gone through it step by step and managed to figure out that when the callback arrives, it's actually another instance of the component... So all the fields are undefined.
Here is the full component :
*chat.component.html*
```
**{{msg.nickname}}**: {{msg.message}}
Nickname
Message
Send
```
*chat.component.ts*
```
import { Component, OnInit } from '@angular/core';
import { HubConnection } from '@aspnet/signalr';
import { Message } from '../../models/message';
@Component({
selector: 'app-chat',
templateUrl: './chat.component.html',
styleUrls: ['./chat.component.css']
})
export class ChatComponent implements OnInit {
private hubConnection: HubConnection;
public nickname: string;
public message: string;
public messages: Message[];
constructor() {
this.messages = [];
}
ngOnInit() {
this.hubConnection = new HubConnection('http://localhost:55233/chat');
this.hubConnection.start().then(() => console.log('connection started')).catch(err => console.error(err));
this.hubConnection.on('sendToAll', this.addMessage);
}
private addMessage(nickname: string, message: string): void {
// this line is useful to avoid undefined error
// due to the callback pointing to another instance of this component...
if (this.messages === undefined) {
this.messages = [];
}
this.messages.push(new Message(nickname, message));
}
public sendMessage(): void {
this.hubConnection
.invoke('sendToAll', this.nickname, this.message)
.catch(err => console.error(err));
}
}
```
Here is what I've tried :
* Putting public everywhere
* Getting rid of ngOnInit
How can I use this line `this.hubConnection.on('sendToAll', this.addMessage);` in this context?
Thanks for your help.
**EDIT** :
Thanks to Supamiu, here's the solution :
```
this.hubConnection.on('sendToAll', this.addMessage.bind(this));
```<issue_comment>username_1: ```
this.hubConnection.on('sendToAll', this.addMessage);
```
Is the equivalent of
```
this.hubConnection.on('sendToAll', function(x) { this.addMessage(x); });
```
In javascript, you have to use closures when you write functions, or use the fat arrow. This would give
```
const that = this;
this.hubConnection.on('sendToAll', function(x) { that.addMessage(x); });
// or
this.hubConnection.on('sendToAll', x => { this.addMessage(x); });
```
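A minimal, framework-free illustration of the same point (the emitter and names here are made up for the demo):

```javascript
// Stand-in for an event source that stores a callback and later
// invokes it as a plain function, just like hubConnection.on does.
function invokeLater(callback) {
  return callback('hello');
}

class Chat {
  constructor() { this.messages = []; }
  addMessage(msg) {
    // `this` is undefined here if addMessage was passed around bare
    this.messages.push(msg);
  }
}

const chat = new Chat();

let lost = false;
try {
  invokeLater(chat.addMessage); // bare reference: `this` is lost
} catch (e) {
  lost = true; // TypeError: cannot read properties of undefined
}

invokeLater(chat.addMessage.bind(chat));  // bound: works
invokeLater(msg => chat.addMessage(msg)); // closure: works too
console.log(lost, chat.messages); // true [ 'hello', 'hello' ]
```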
Upvotes: 2 <issue_comment>username_2: Javascript provides a tool to inject context into a method call: [bind](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_objects/Function/bind).
You should change `this.addMessage` to `this.addMessage.bind(this)`.
Keep in mind that there are other solutions, like the one trichetriche explained; another one would be to change `addMessage` to an arrow function, as arrow functions keep the `this` context:
```
private addMessage = (nickname: string, message: string) => {
// this line is useful to avoid undefined error
// due to the callback pointing to another instance of this component...
if (this.messages === undefined) {
this.messages = [];
}
this.messages.push(new Message(nickname, message));
}
```
By doing this, you can still use `this.addMessage` without having to bind the `this` context.
Upvotes: 2 [selected_answer]
|
2018/03/19
| 391 | 1,450 |
<issue_start>username_0: How do I keep a session in IVR? I am halfway done with an IVR application I am currently working on. I can collect customer ID & PIN to read out customer balance. After a customer confirms account balance, I want them to be able to continue with other options of PIN change, transfer credits to another customer without having to collect the Customer ID & PIN again till the call finally end. I am using Twilio webhook and my code is in php. Any help and idea will be appreciated.<issue_comment>username_1: Twilio Onboarding Support here.
You could send the PIN and customer ID to the `action` of that gather as custom parameters; the action URL would then be able to read them via `$_REQUEST['customID']`:
```
<?php
require_once './vendor/autoload.php';
use Twilio\Twiml;
$response = new Twiml();
$gather = $response->gather(['input' => 'speech dtmf', 'timeout' => 3,
    'numDigits' => 1, 'action' => '/action?customID=' . $custCode . '&pin=' . $pin]);
$gather->say('Please press enter your customer id and pin or say sales for sales.');
echo $response;
```
Upvotes: 1 <issue_comment>username_2: Twilio respects cookies, so you should have no problem with persistence.
You can see how Twilio cookies work here: <https://support.twilio.com/hc/en-us/articles/223136287-How-do-Twilio-cookies-work->
A more complex way would be to store the CallSid, and any data you want to persist, in a database.
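A minimal sketch of the cookie-based approach (plain PHP sessions work because Twilio replays cookies across requests for the same call; parameter names besides `Digits` are illustrative):

```php
<?php
// Twilio sends back any cookie set on an earlier response for this call,
// so a standard PHP session can carry the authenticated customer along.
session_start();

if (!isset($_SESSION['customerId']) && isset($_REQUEST['Digits'])) {
    // First gather result: remember the customer for the rest of the call
    $_SESSION['customerId'] = $_REQUEST['Digits'];
}

// Later webhooks in the same call can now read $_SESSION['customerId']
// without asking for the ID and PIN again.
```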
Upvotes: 1 [selected_answer]
|
2018/03/19
| 2,119 | 7,072 |
<issue_start>username_0: ```
function reverseInPlace(str) {
var words = [];
words = str.split("\s+");
var result = "";
for (var i = 0; i < words.length; i++) {
return result += words[i].split('').reverse().join('');
}
}
console.log(reverseInPlace("abd fhe kdj"))
```
What I expect is `dba ehf jdk`, while what I'm getting here is `jdk fhe dba`. What's the problem?<issue_comment>username_1: you need to split the string by space
```js
function reverseInPlace(str) {
var words = [];
words = str.match(/\S+/g);
var result = "";
for (var i = 0; i < words.length; i++) {
result += words[i].split('').reverse().join('') + " ";
}
return result
}
console.log(reverseInPlace("abd fhe kdj"))
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: `Split` the string into words first before reversing the individual words
```
var input = "abd fhe kdj";
var output = input.split( " " ).map( //split into words and iterate via map
s => s.split("").reverse().join( "" ) //split individual words into characters and then reverse the array of character and join it back
).join( " " ); //join the individual words
```
Upvotes: 3 <issue_comment>username_3: You can make use of `split` and `map` function to create the reverse words.
You need to first split the `sentence` using `space` and then you can just reverse each word and join the reversed words again.
```js
function reverseWord (sentence) {
return sentence.split(' ').map(function(word) {
return word.split('').reverse().join('');
}).join(' ');
}
console.log(reverseWord("abd fhe kdj"));
```
Upvotes: 0 <issue_comment>username_4: This function should work for you:
```
function myFunction(string) {
return string.split("").reverse().join("").split(" ").reverse().join(" ")
};
```
Upvotes: 3 <issue_comment>username_5: **If you are looking like this o/p : "Javascript is Best" => "Best is Javascript"**
Then can be done by simple logic
```
//sample string
let str1 = "Javascript is Best";
//breaking into array
let str1WordArr = str1.split(" ");
//temp array to hold the reverse string
let reverseWord=[];
//can iterate the loop backward
for(let i=(str1WordArr.length)-1;i>=0;i--)
{
//pushing the reverse of words into new array
reverseWord.push(str1WordArr[i]);
}
//join the words array
console.log(reverseWord.join(" ")); //Best is Javascript
```
Upvotes: 1 <issue_comment>username_6: This solution **preserves the whitespaces characters** space `␣`, the tab `\t`, the new line `\n` and the carriage return `\r`) and preserves their order too (not reversed) :
```js
const sentence = "abd\t fhe kdj";
function reverseWords(sentence) {
return sentence
.split(/(\s+)/)
.map(word => /^\s+$/.test(word) ? word : word.split('').reverse().join(''))
.join('');
}
console.log(reverseWords(sentence));
```
Upvotes: 0 <issue_comment>username_7: If you want to reverse a string by word (example: 'Welcome to JavaScript' to 'JavaScript to Welcome'), you can also do something like:
```
var str = 'Welcome to JavaScript'

function reverseByWord(s){
   return s.split(" ").reverse().join(" ");
}
// Note: there is a space in the split(" ") and join(" ") calls

reverseByWord(str)
// output will be - JavaScript to Welcome
```
Upvotes: 0 <issue_comment>username_8: You can easily achieve that by the following code
```js
function reverseWords(str) {
  // Go for it
  let reversed;
  let newArray = [];
  reversed = str.split(" ");
  for (var i = 0; i < reversed.length; i++) {
    newArray.push(reversed[i].split("").reverse().join(""));
  }
  return newArray.join(" ");
}
```
Upvotes: 0 <issue_comment>username_9: ```js
const reverseWordIntoString = str => str.split(" ").map(word => word.split("").reverse().join('')).join(" ")
const longString = "My name is username_9";
const sentence = "I love to code";
const output = {
[longString]: reverseWordIntoString(longString),
[sentence]: reverseWordIntoString(sentence)
}
console.log(output);
```
Upvotes: 1 <issue_comment>username_10: The only mistake is in how you do the split; you can do something like:
```
function reverseInPlace(str) {
var words = [];
words = str.split(" ");
console.log(words);
var result = "";
for (var i = 0; i < words.length; i++) {
result += words[i].split('').reverse().join('') +" ";
}
return result
}
console.log(reverseInPlace("abd fhe kdj"))
```
Upvotes: 0 <issue_comment>username_11: ```
function reverse(str) {
var rev = str.split(" ").map(word => word.split("").reverse().join("")).join(" ")
return rev
}
console.log(reverse("<NAME>")); // aymuos hsakarp
```
Upvotes: 0 <issue_comment>username_12: ```js
let s = "getting good at coding needs a lot of practice"
let b = s.split(" ").reverse().join(" ")
console.log(b)
```
Upvotes: 0 <issue_comment>username_13: -> Split the string into an array of words: `str.split(" ")`
-> map each word and split it into characters: `.map(word => word.split("")`
-> reverse each word individually and join it
```
.map(word => word.split("").reverse().join("")
```
```js
const str = "abd fhe kdj";
const reverseWord = str => {
let reversed = "";
reversed = str.split(" ").map(word => word.split("").reverse().join("")).join(" ")
return reversed
}
console.log(reverseWord(str));
```
Upvotes: 0 <issue_comment>username_14: For people looking into this problem now, think whether your application doesn't need to deal with punctuation marks (e.g. not moving punctuation marks around) or flipped hyphenated words (e.g. check-in should be ni-kcehc or kcehc-ni ?).
For example:
* "ignorance is bliss?!" should really be "ecnarongi si !?ssilb" ?
* "Merry-go-round" should be "dnuor-og-yrreM" or "yrreM-og-dnuor" ?
The following solution doesn't move punctuation marks and it doesn't flip hyphenated words:
```
/**
* 1. Split "non letters"
* 2. Reverse each piece
* 3. Join the pieces again, replacing them by " "
* 4. Split all characters
* 5. Replace " " by the original text char
*/
function reverseWords(text) {
return text.split(/[^A-Za-zÀ-ÿ]/g)
.map(w => w.split("").reverse().join(""))
.join(" ")
.split("")
.map((char, i) => char === " " ? text[i] : char)
.join("")
}
```
Upvotes: 0 <issue_comment>username_16: ```
reverseStringBywords = (str) =>{
let rev = ""
str.split(" ").forEach(s =>{
rev = rev + s.split("").reverse().join("") + " "
})
return rev
}
console.log("reverseStringBywords ", reverseStringBywords("JavaScript is awesome"));
```
Upvotes: 0 <issue_comment>username_16: ```
let reverseOfWords = (words) =>{
_words = words.split(" ").map(word => word.split('').reverse().join(""))
console.log("reverse of words ", _words.join(' '))
}
reverseOfWords("hello how are you ")
```
Upvotes: 0
|
2018/03/19
| 686 | 2,685 |
<issue_start>username_0: I'm building a rather extensive form and I'm trying to make sure that users won't lose their data on a simple browser reload. Fortunately, browsers nowadays refill data on a reload - and, indeed, inputs with `v-model` have it during `beforeMount`. The problem is, they lose it on `mounted`, because they get filled with data from the respective model which is, reasonably enough, empty.
Now, I suppose I could fill the model on `beforeMounted` with data extracted from DOM - but can I somehow get the reference of a `v-model` hooked to the object from DOM? Or maybe is there some other way you would do it?<issue_comment>username_1: Let's say you're creating a user, so you have the user model defined in your component as
```
data () {
return {
user: {
email: '',
password: ''
}
}
}
```
And your input element like `<input v-model.lazy="user.email">` (notice the [**lazy**](https://v2.vuejs.org/v2/guide/forms.html#lazy) modifier; for my example, I recommend you to use it, for performance purposes).
Now, you can set up a **watcher** for the `user` model that stores its values in `sessionStorage` (my choice, you can use localStorage instead).
Read the docs: <https://v2.vuejs.org/v2/guide/computed.html#Watchers>
in your component:
```
watch: {
  user: {
    deep: true, // user is an object mutated property-by-property, so a deep watcher is needed
    handler (newValues, oldValues) {
      window.sessionStorage.setItem('userData', JSON.stringify(newValues))
    }
  }
}
```
And now, just be sure you load the values before you mount the component
```
created () { // created hook
let userData = window.sessionStorage.getItem('userData')
if (userData) {
try {
userData = JSON.parse(userData)
// now initiate the model
this.user.email = userData.email
this.user.password = userData.password
} catch (err) {
// userData was not a valid json string
}
}
}
```
Upvotes: 2 <issue_comment>username_2: Okay, I really want to leverage browser built-in capabilities for this - especially that with this simple trick I can use it in a global mixin/plugin and it's data structure agnostic. I save all required information in DOM and re-update the DOM after `mounted` is called.
```
export default {
beforeMount() {
document.querySelectorAll('input').forEach((el) => {
if (el.value) {
el.setAttribute('data-prefill', el.value)
}
})
},
mounted() {
this.$el.querySelectorAll('[data-prefill]').forEach((el) => {
this.$nextTick(function() {
el.value = el.getAttribute('data-prefill')
el.dispatchEvent(new Event('change'))
el.dispatchEvent(new Event('input'))
})
})
}
```
Upvotes: 2
|
2018/03/19
| 625 | 2,419 |
<issue_start>username_0: Is there any option to generate a large number of pages for Liferay 7.0?
In the documentation (<https://dev.liferay.com/discover/portal/-/knowledge_base/7-0/creating-sites>) I found only instructions for creating pages through the GUI.
I would like to use a script to generate these pages. Is there some sort of CLI, or something more useful than mouse clicking?
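For reference, Liferay's built-in script console (Control Panel → Configuration → Server Administration → Script) can call the portal's layout services from Groovy, which avoids clicking pages together one by one. The exact `addLayout` overload below is an assumption from memory — verify it against the Liferay 7.0 `LayoutLocalServiceUtil` Javadoc before running it:

```groovy
import com.liferay.portal.kernel.service.LayoutLocalServiceUtil
import com.liferay.portal.kernel.service.ServiceContext

long userId = 20139          // an admin user id (adjust to your instance)
long groupId = 20143         // the site (group) to create the pages in
boolean privateLayout = false
long parentLayoutId = 0      // 0 = top-level page

(1..100).each { i ->
    LayoutLocalServiceUtil.addLayout(
        userId, groupId, privateLayout, parentLayoutId,
        "Generated page ${i}",      // name
        "Generated page ${i}",      // title
        "",                         // description
        "portlet",                  // layout type
        false,                      // hidden
        "/generated-page-${i}",     // friendly URL
        new ServiceContext())
}
```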
|
2018/03/19
| 1,016 | 3,828 |
<issue_start>username_0: I have a simple POJO:
```
public class Entry {
private Integer day;
private String site;
private int[] cpms;
... // getters/setters/constructor
}
```
The log file which I would like to read looks like:
```
{ "day":"1", "site":"google.com", "cpms":"[1,2,3,4,5,6,7]"}
```
Jackson converts the String into an Integer automatically, according to the docs, but it does not do this for the "cpms" field. What is the best way to read it?
I know that I can declare "cpms" as a String in the constructor and then parse it into an array like this:
```
ObjectMapper objectMapper = new ObjectMapper();
objectMapper.configure(DeserializationFeature.USE_JAVA_ARRAY_FOR_JSON_ARRAY, true);
this.cpm = objectMapper.readValue(cpms, int[].class);
```
but is there any smart conversion?<issue_comment>username_1: This conversion may also be done using a custom `JsonDeserializer`.
1) Implement a dedicated deserializer class, for example:
```
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.deser.std.StdDeserializer;
import java.io.IOException;
import java.util.Arrays;
public class IntArrayDeserializer extends StdDeserializer<int[]> {
public IntArrayDeserializer() {
super(int[].class);
}
@Override
public int[] deserialize(JsonParser jsonParser,
DeserializationContext deserializationContext) throws IOException {
String arrayAsString = jsonParser.getText();
if (arrayAsString.length() == 2) { // case of empty array "[]"
return new int[0];
}
String[] strIds = arrayAsString.substring(1, arrayAsString.length() - 1).split(",");
return Arrays.stream(strIds).mapToInt(Integer::parseInt).toArray();
}
}
```
2) Add `@JsonDeserialize(using = IntArrayDeserializer.class)` on `Entry.cpms` field.
3) Now the below test should pass:
```
@Test
public void deserializeExample() throws IOException {
String json = "{ \"day\":\"1\", \"site\":\"google.com\", \"cpms\":\"[1,2,3,4,5,6,7]\"}";
ObjectMapper mapper = new ObjectMapper();
Entry entry = mapper.readValue(json, Entry.class);
int[] expected = new int[] { 1, 2, 3, 4, 5, 6, 7 };
assertTrue(Arrays.equals(expected, entry.getCpms()));
}
```
The advantage of this approach is that this deserializer is reusable in case there are other fields to convert the same way, it is in line with Jackson API and there is no need to implement ad-hoc work-arounds.
Upvotes: 2 <issue_comment>username_2: As far as I know, Jackson doesn't have any built-in solution for that. Your array is wrapped with quotes, so it's a string. However there are some approaches that may suit you:
JSON creator
============
Define a constructor annotated with `@JsonCreator` and then parse the string:
```java
public class Entry {
private Integer day;
private String site;
private int[] cpms;
@JsonCreator
public Entry(@JsonProperty("cpms") String cpms) {
String[] values = cpms.replace("[", "").replace("]", "").split(",");
this.cpms = Arrays.stream(values).mapToInt(Integer::parseInt).toArray();
}
// Getters and setters
}
```
Custom deserializer
===================
Write your own deserializer then you can reuse it in other beans:
```java
public class QuotedArrayDeserializer extends JsonDeserializer<int[]> {
@Override
public int[] deserialize(JsonParser jp,
DeserializationContext ctxt) throws IOException {
String rawValue = jp.getValueAsString();
String[] values = rawValue.replace("[", "").replace("]", "").split(",");
return Arrays.stream(values).mapToInt(Integer::parseInt).toArray();
}
}
```
```java
public class Entry {
private Integer day;
private String site;
@JsonDeserialize(using = QuotedArrayDeserializer.class)
private int[] cpms;
// Getters and setters
}
```
Upvotes: 3 [selected_answer]
|
2018/03/19
| 1,155 | 3,890 |
<issue_start>username_0: I have sample data file with following format data
```
Data_Set = "001" , Status = "TRUE" ;
Data_Set = "002" , Status = "TRUE" ;
Data_Set = "003" , Status = "TRUE" ;
Data_Set = "004" , Status = "TRUE" ;
Data_Set = "005" , Status = "TRUE" ;
Data_Set = "006" , Status = "TRUE" ;
```
and I want replace all above data as below format
```
RMV_Set = "001" , Status = "TRUE" ; (mistake)
```
Corrected , required output
```
RMV_Set = "001" , Status = "001" ;
RMV_Set = "002" , Status = "002" ;
RMV_Set = "003" , Status = "003" ;
RMV_Set = "004" , Status = "004" ;
RMV_Set = "005" , Status = "005" ;
RMV_Set = "006" , Status = "006" ;
```
I have tried the following regex in Notepad++ and it doesn't work. Please help me solve this:
```
**Find** : Data_Set = "[0-9]*" , Status = "TRUE" ;
**Replace** : RMV_Set = "$1" , Status = "TRUE" ;
```
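The attempted pattern never defines a capture group, so `$1` has nothing to refer to, and the replacement keeps `Status = "TRUE"` instead of reusing the number. Wrapping the digits in parentheses and referencing the group in both fields fixes this; a sketch of the same substitution in Python's `re` module (in Notepad++, use the same Find pattern with Replace `RMV_Set = "$1" , Status = "$1" ;`):

```python
import re

line = 'Data_Set = "001" , Status = "TRUE" ;'

# Parentheses create group 1, so the backreference has something to refer to
pattern = r'Data_Set = "([0-9]+)" , Status = "TRUE" ;'
fixed = re.sub(pattern, r'RMV_Set = "\1" , Status = "\1" ;', line)
print(fixed)  # RMV_Set = "001" , Status = "001" ;
```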
|
2018/03/19
| 526 | 1,883 |
<issue_start>username_0: The current Beanstalk solution stack for Ruby + Puma uses the configuration file at `/opt/elasticbeanstalk/support/conf/pumaconf.rb` and ignores the `config/puma.rb` inside the Rails application directory.
I could override the file above with a custom one via `.ebextensions` but am hesitant because I'd like to avoid breakage in case the path to the PID or - more importantly - unix socket files changes in upcoming solution stack versions.
What is the best practice for customizing the Puma configuration on Beanstalk?<issue_comment>username_1: We use Ruby+Passenger, but it sounds similar enough to your situation. We need to customize the nginx configuration file stored at `/opt/elasticbeanstalk/support/conf/nginx_config.erb`, so we do it via `.ebextensions` and `sed`.
Here's an example to get you started:
**.ebextensions/01-edit-nginx.config**
```
container_commands:
01backup_config:
command: "cp -n /opt/elasticbeanstalk/support/conf/nginx_config.erb /opt/elasticbeanstalk/support/conf/nginx_config.erb.original"
02edit_config:
command: "sh -c \"sed '/string_to_insert_text_after/ i\
\ text_to_be_inserted;' /opt/elasticbeanstalk/support/conf/nginx_config.erb.original > /opt/elasticbeanstalk/support/conf/nginx_config.erb\""
```
This will make a backup copy of the configuration file (without overwriting it if it already exists, thanks to the `-n` flag) and then insert the line "text\_to\_be\_inserted" before the line "string\_to\_insert\_text\_after" (sed's `i` command inserts before the addressed line; use `a` to append after it). You can chain multiple `sed` commands together to insert multiple lines.
Upvotes: 3 [selected_answer]<issue_comment>username_2: On Amazon Linux 2, you can use `Procfile` to do this, e.g.
`web: puma -C /var/app/current/config/puma.rb`
See the docs at <https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ruby-platform-procfile.html> for more information.
Upvotes: 3
|
2018/03/19
| 360 | 1,248 |
<issue_start>username_0: ```
class User(val name: String)
```
I know that in constructor will be added this check
```
Intrinsics.checkParameterIsNotNull(name, "name")
```
To make sure that `name` does not store null.
Is there a way to instrument the compiler so that it does not generate such checks? Maybe an annotation?
We need this in Spring, when mapping a JSON string to a class.
The validation is done by `@NotNull` and `@Valid`from javax
```
class User(@field:NotNull @field:Email val name: String)
```
so the checks would be performed immediately after object creation, but since the exceptions are thrown from the constructor this is not possible.
```
-Xno-param-assertions
```
Or Proguard:
```
-assumenosideeffects class kotlin.jvm.internal.Intrinsics {
static void checkParameterIsNotNull(java.lang.Object, java.lang.String);
}
```
Upvotes: 3 <issue_comment>username_2: With Proguard, you can add this rule to get rid of all null checks
```
-assumenosideeffects class kotlin.jvm.internal.Intrinsics {
static void checkParameterIsNotNull(java.lang.Object, java.lang.String);
}
```
Upvotes: 1
|
2018/03/19
| 311 | 977 |
<issue_start>username_0: I need to pass the output from the query as a string appended to a URL. How can I achieve this?
my part of code is
```
cur.execute("SELECT logs.id from logs");
temp_list.append(str('https://qwetest.seem.nsww-rdnewww.net/logs/' + logs_id[logs_id_counter]))
```
Here in temp\_list.append I need to pass a URL which should consist of the logs id that I fetched from the query.
How can I achieve this? Please help me.
Thanks in advance.<issue_comment>username_1: ```
temp_list.append('https://qwetest.seem.nsww-rdnewww.net/logs/' + str(logs_id[logs_id_counter]))
```
Upvotes: 0 <issue_comment>username_2: Unless I'm misunderstanding the issue, why not loop through the IDs and use string interpolation to generate the URL? Note the `f` in front of the string to denote that it is formatting with the curly braces.
```
logs_id = [1, 2, 3, 4]
temp_list = []
for id in logs_id:
temp_list.append(f"https://qwetest.seem.nsww-rdnewww.net/logs/{id}")
```
Upvotes: 1
|
2018/03/19
| 902 | 2,825 |
<issue_start>username_0: I am quite new to python. Can someone explain this line
```
exec("print(' '.join(map(lambda x: s[x::{0}], range({0}))))".format(ceil(sqrt(len(s)))))
```
What does `s[x::{0}]` and `range({0}))` mean ?
in below piece of code in detail?
This code is a solution for below hackerrank question :
<https://www.hackerrank.com/challenges/encryption/problem>
```
#!/bin/python3
import sys
from math import ceil, floor, sqrt
def encryption(s):
exec("print(' '.join(map(lambda x: s[x::{0}], range({0}))))".format(ceil(sqrt(len(s)))))
if __name__ == "__main__":
s = input().strip()
result = encryption(s)
```<issue_comment>username_1: This is a simplified version of your code, which you should be able to follow:
```
from math import ceil, sqrt
s = 'hello'
y = ceil(sqrt(len(s)))
# 3
res = ' '.join(map(lambda x: s[x::y], range(y)))
# 'hl eo l'
```
**Essential points**
* `y` is the rounded-up square root of the length of `s`, in this case sqrt(5) is rounded up to 3.
* `lambda` is an anonymous function which maps each value in `range(y)`, i.e. `0, 1, 2` is mapped to `s[x::y]`, i.e. return every `y`th element of sequence beginning from index `x`. See also [Understanding Python's slice notation](https://stackoverflow.com/questions/509211/understanding-pythons-slice-notation). `x` is arbitrary notation for a member of `range(y)`.
* Join all resulting values with a space to form a single string.
* In your original code `{0}` and `str.format` are used to incorporate `y` in your string in one line. In this case, I consider it convoluted and bad practice.
Upvotes: 4 [selected_answer]<issue_comment>username_2: This line is using the [`format()`](https://docs.python.org/3.4/library/functions.html#format) function on a string.
Thus, the `{0}` will be replaced by the first element in the format function which is `ceil(sqrt(len(s))))`
Upvotes: 1 <issue_comment>username_3: From the [documetnation](https://docs.python.org/3.4/library/string.html#format-string-syntax) you can see what `format()` does.
In your example:
```
exec("print(' '.join(map(lambda x: s[x::{0}], range({0}))))".format(ceil(sqrt(len(s)))))
```
You are inserting the result of `ceil(sqrt(len(s)))` in two places, namely:
```
lambda x: s[x::ceil(sqrt(len(s)))]
```
and
```
range(ceil(sqrt(len(s))))
```
Now, `exec` could be avoided here with simple variable:
```
def encryption(s):
var = ceil(sqrt(len(s)))
print(' '.join(map(lambda x: s[x::var], range(var))))
```
Upvotes: 0 <issue_comment>username_4: When using the str.format() method ([see here](https://docs.python.org/3/tutorial/inputoutput.html)), the brackets and characters in them will be replaced with the objects passed in the method. The "0" indicates replacement with the first character passed in the method.
Upvotes: 1
|
2018/03/19
| 1,122 | 4,346 |
<issue_start>username_0: According to [Dart's website](https://www.dartlang.org/guides/language/sound-dart)
>
> Dart is a sound language.
>
>
>
What's the meaning of "sound" in the above sentence?
I couldn't find any similar concept in other major programming languages. Can anyone give some other examples of **sound languages**?<issue_comment>username_1: That is NOT related to audio.
According to Wikipedia:
>
> That is, if a type system is both ***sound*** (meaning that it rejects all incorrect programs) and decidable (meaning that it is possible to write an algorithm that determines whether a program is well-typed)
>
>
>
(cf. <https://en.wikipedia.org/wiki/Type_system#Static_type_checking>)
For consideration about etymology, see "soundness".
TL;DR: in this context, it means "robust", "healthy".
Upvotes: 4 <issue_comment>username_2: Taken from [Dart's language guide](https://www.dartlang.org/guides/language/sound-dart)
>
> **What is soundness?**
>
>
> Soundness is about ensuring your program can’t get into certain invalid states. A sound type system means you can never get into a state where an expression evaluates to a value that doesn’t match the expression’s static type. For example, if an expression’s static type is String, at runtime you are guaranteed to only get a string when you evaluate it.
>
>
> Strong mode, like the type systems in Java and C#, is sound. It enforces that soundness using a combination of static checking (compile errors) and runtime checks. For example, assigning a String to int is a compile error. Casting an Object to a string using as String will fail with a runtime error if the object isn’t a string.
>
>
> Dart was created as an optionally typed language and is not sound. For example, it is valid to create a list in Dart that contains integers, strings, and streams. Your program will not fail to compile or run just because the list contains mixed types, even if the list is specified as a list of float but contains every type except floating point values.
>
>
> In classic Dart, the problem occurs at runtime—fetching a Stream from a list but getting another type results in a runtime exception and the app crashes. For example, the following code assigns a list of type dynamic (which contains strings) to a list of type int. Iterating through the list and subtracting 10 from each item causes a runtime exception because the minus operator isn’t defined for strings.
>
>
> **The benefits of soundness**
> A sound type system has several benefits:
>
>
> Revealing type-related bugs at compile time.
> A sound type system forces code to be unambiguous about its types, so type-related bugs that might be tricky to find at runtime are revealed at compile time.
>
>
> More readable code.
> Code is easier to read because you can rely on a value actually having the specified type. In sound Dart, types can’t lie.
>
>
> More maintainable code.
> With a sound type system, when you change one piece of code, the type system can warn you about the other pieces of code that just broke.
>
>
> Better ahead of time (AOT) compilation.
> While AOT compilation is possible without strong types, the generated code is much less efficient.
>
>
> Cleaner JavaScript.
> For web apps, strong mode’s more restrictive typing allows dartdevc to generate cleaner, more compact JavaScript.
>
>
>
---
>
> Bringing soundness to Dart required adding only a few rules to the Dart language. With strong mode enabled, the Dart analyzer enforces three additional rules:
>
>
> Use proper return types when overriding methods.
>
>
> Use proper parameter types when overriding methods.
>
>
> Don’t use a dynamic list as a typed list.
>
>
>
Upvotes: 6 [selected_answer]<issue_comment>username_3: In that context, sound is an adjective, it means "in good condition".
<https://dictionary.cambridge.org/dictionary/english-japanese/sound>
Upvotes: -1 <issue_comment>username_4: While a sound type system provides developers with greater confidence, it also enables our compilers to safely use types to optimize generated code. With soundness, our tools guarantee that types are correct via a combination of static and (when needed) runtime checking. Without soundness, type checking can only go so far, and static types may be incorrect at runtime.
Upvotes: 0
|
2018/03/19
| 2,363 | 3,931 |
<issue_start>username_0: I have this array:
```
[[0, 1, 0, 1, 0, 1],
[0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 0],
[1, 0, 1, 0, 1, 0],
[0, 1, 1, 1, 0, 1],
[0, 1, 0, 0, 1, 1],
[1, 1, 1, 0, 0, 0],
[1, 1, 1, 1, 0, 1],
[0, 1, 1, 0, 1, 0],
[1, 1, 0, 0, 0, 1],
[1, 0, 0, 0, 1, 0]]
```
I wish to create a new array that contains only the rows that have a 0 in column 1.
How can I build such an array in Python without writing the function myself? I have tried overly complicated approaches and I just need a simple selection method that gives me the result.
@EDIT I forgot to mention that i am using numpy.array([])<issue_comment>username_1: You can use [list comprehensions](http://www.secnetix.de/olli/Python/list_comprehensions.hawk)
```
data = [[0, 1, 0, 1, 0, 1],[0, 0, 0, 1, 0, 0],[0, 0, 0, 0, 1, 0],[1, 0, 1, 0, 1, 0],[0, 1, 1, 1, 0, 1],[0, 1, 0, 0, 1, 1],[1, 1, 1, 0, 0, 0],[1, 1, 1, 1, 0, 1],[0, 1, 1, 0, 1, 0],[1, 1, 0, 0, 0, 1],[1, 0, 0, 0, 1, 0]]
data = [item for item in data if item[1] == 0]
```
Output:
```
[[0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [1, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 0]]
```
If you're using `numpy` array, the first step is converting your numpy array to list using [tolist](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.ndarray.tolist.html) method.
```
import numpy
array = numpy.array([[0, 1, 0, 1, 0, 1],[0, 0, 0, 1, 0, 0],[0, 0, 0, 0, 1, 0],[1, 0, 1, 0, 1, 0],[0, 1, 1, 1, 0, 1],[0, 1, 0, 0, 1, 1],[1, 1, 1, 0, 0, 0],[1, 1, 1, 1, 0, 1],[0, 1, 1, 0, 1, 0],[1, 1, 0, 0, 0, 1],[1, 0, 0, 0, 1, 0]])
filtered = [item for item in array.tolist() if item[1] == 0]
array = numpy.array(filtered)
```
Upvotes: 2 <issue_comment>username_2: You can do it this way:
```
a = [[0, 1, 0, 1, 0, 1],[0, 0, 0, 1, 0, 0],[0, 0, 0, 0, 1, 0],[1, 0, 1, 0, 1, 0],[0, 1, 1, 1, 0, 1],[0, 1, 0, 0, 1, 1],[1, 1, 1, 0, 0, 0],[1, 1, 1, 1, 0, 1],[0, 1, 1, 0, 1, 0],[1, 1, 0, 0, 0, 1],[1, 0, 0, 0, 1, 0]]
b = [l for l in a if len(l) > 1 and l[1] == 0]
print(b)
```
The output would be:
```
[[0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [1, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 0]]
```
Upvotes: 2 <issue_comment>username_3: You should use [list comp](https://stackoverflow.com/questions/20639180/explanation-of-how-list-comprehension-works)
```
l = [[0, 1, 0, 1, 0, 1],[0, 0, 0, 1, 0, 0],[0, 0, 0, 0, 1, 0],[1, 0, 1, 0, 1, 0],[0, 1, 1, 1, 0, 1],[0, 1, 0, 0, 1, 1],[1, 1, 1, 0, 0, 0],[1, 1, 1, 1, 0, 1],[0, 1, 1, 0, 1, 0],[1, 1, 0, 0, 0, 1],[1, 0, 0, 0, 1, 0]]
l1 = [item for item in l if item[1] == 0]
```
Upvotes: 1 <issue_comment>username_4: **Try this:**
```
new_list = []
for i in list:
has_zero = i[1]
if has_zero==0:
new_list.append(i)
print(new_list)
```
Upvotes: 1 <issue_comment>username_5: Since you are saying that you are using `numpy`, this is a good place for [`numpy.where`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.where.html):
```
import numpy as np
a = np.array([[0, 1, 0, 1, 0, 1], [0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [1, 0, 1, 0, 1, 0], [0, 1, 1, 1, 0, 1], [0, 1, 0, 0, 1, 1], [1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 0, 1], [0, 1, 1, 0, 1, 0], [1, 1, 0, 0, 0, 1], [1, 0, 0, 0, 1, 0]])
a_new = a[np.where(a[:,1] == 0)]
print(a_new)
# array([[0, 0, 0, 1, 0, 0],
# [0, 0, 0, 0, 1, 0],
# [1, 0, 1, 0, 1, 0],
# [1, 0, 0, 0, 1, 0]])
```
Upvotes: 2 <issue_comment>username_6: This can be done with list comprehensions as suggested in other answers as well as with `filter` that might be bit clearer semantically in your case:
```
>>> a = [[0, 1, 0, 1, 0, 1],
... [0, 0, 0, 1, 0, 0],
... [0, 0, 0, 0, 1, 0],
... [1, 0, 1, 0, 1, 0],
... [0, 1, 1, 1, 0, 1],
... [0, 1, 0, 0, 1, 1],
... [1, 1, 1, 0, 0, 0],
... [1, 1, 1, 1, 0, 1],
... [0, 1, 1, 0, 1, 0],
... [1, 1, 0, 0, 0, 1],
... [1, 0, 0, 0, 1, 0]]
>>> list(filter(lambda row: row[1] == 0, a))
[[0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 1, 0], [1, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 0]]
```
Upvotes: 1
|
2018/03/19
| 991 | 2,308 |
<issue_start>username_0: I am having a file as :
**file1.txt**
```
1 10 ABC KK-LK
1 33 23 KK-LK
2 34 32 CK-LK,LK
11 332 2 JK@
21 525 1 JK@
31 902 9 JK@
11 23 2 JK2
```
I am trying to find the range of lines used by each identifier in the 4th column:
```
KK-LK 1-2
CK-LK 3-3
JK@ 4-6
JK2 7-7
```
If I apply the code below, I can get the line numbers, but each one is printed on its own line rather than as a range, which would be a problem since each `identifier` has at least `1 million lines`:
```
grep -n "KK-LK" file1.txt | awk -F ":" '{print $1}'
1
2
```
How can I convert this to a range?<issue_comment>username_1: Could you please try following and let me know if this helps you.
```
awk '{a[$NF]=a[$NF]?a[$NF] OFS FNR:FNR} END{for(i in a){num=split(a[i],array," ");if(num>1){print i,array[1] "-" array[num]} else {print i,a[i]"-"a[i]}}}' Input_file
```
Adding a non-one liner form of solution too now.
```
awk '
{
a[$NF]=a[$NF]?a[$NF] OFS FNR:FNR}
END{
for(i in a){
num=split(a[i],array," ");
if(num>1){
print i,array[1] "-" array[num]}
else{
print i,a[i]"-"a[i]}
}}
' Input_file
```
Upvotes: 2 <issue_comment>username_2: **`awk`** solution:
```
awk '{ if ($4 in a) sub(/-[0-9]+/, "-"NR, a[$4]); else a[$4] = NR"-"NR }
END{ for (i in a) print i, a[i] }' file
```
The output:
```
JK2 7-7
CK-LK,LK 3-3
JK@ 4-6
KK-LK 1-2
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: another `awk`
```
$ awk '{if($NF in a) b[$NF]=NR;
else a[$NF]=b[$NF]=NR}
END{for(k in a) print k,a[k]"-"b[k]}' file | sort -k2 | column -t
KK-LK 1-2
CK-LK,LK 3-3
JK@ 4-6
JK2 7-7
```
or, since `NR>0`
```
$ awk '!a[$NF]{a[$NF]=NR} {b[$NF]=NR} END{for(k in a) print k,a[k]"-"b[k]}'
```
Upvotes: 2 <issue_comment>username_4: You can do it in a one-pass fashion like this:
*parse.awk*
```awk
# Initialize start-line and id variables
NR == 1 { s=1; id = $4 }
# When the id no longer matches print the range
$4 != id {
print id ": " s "-" NR-1
# Reset variables for the next id
s=NR; id=$4
}
# Print the last range when EOF occurs
END {
print id ": " s "-" NR
}
```
Run it like this:
```
awk -f parse.awk infile.txt
```
Output:
```none
KK-LK: 1-2
CK-LK,LK: 3-3
JK@: 4-6
JK2: 7-7
```
Upvotes: 1
|
2018/03/19
| 1,333 | 4,337 |
<issue_start>username_0: I am trying to create a script that gets the count of Business days, Calendar days and Weekdays within a set period (23rd of Previous month and 23rd of Current month).
I have the following script where I tried to use Worksheet Functions but it doesn't work, I get
>
> "Object variable or With block variable not set"
>
>
>
error, what gives?
```
Sub DayCounts()
Dim cYear As String
Dim pMon As String
Dim cpMon As String
Dim start_date As String
Dim end_date As String
Dim mySh As Worksheet
Dim wf As WorksheetFunction
Set mySh = Sheets("prod_log_manual")
cYear = Format(Date, yyyy) 'get year
pMon = Format(Date - 1, mm) 'get previous month
cMon = Format(Date, mm) 'get current month
start_date = cYear & pMon & 24 '23th of previous month
end_date = cYear & cMon & 23 '23rd of current month
mySh.Range("P7") = wf.NetworkDays(start_date, end_date) 'get number of workdays in period
mySh.Range("P8") = wf.Day(start_date, end_date) 'get number of calendar days in period
mySh.Range("P9") = mySh.Range("P8").Value / 7 'get number of weeks within period
End Sub
```<issue_comment>username_1: The error
>
> "Object variable or With block variable not set"
>
>
>
tells you that the variable `wf` is not set yet, which means you declared it
```
Dim wf As WorksheetFunction
```
but it's still empty because you didn't initialize it with
```
Set wf = Application.WorksheetFunction
```
---
This is what you should have done
```
Dim wf As WorksheetFunction
Set wf = Application.WorksheetFunction
mySh.Range("P7") = wf.NetworkDays(start_date, end_date) 'get number of workdays in period
mySh.Range("P8") = wf.Days(end_date, start_date) 'get number of calendar days in period (DAYS takes end_date first)
```
**Note that it is `wf.Days` not `wf.Day` as Jeeped pointed out in the comment.**
or use `WorksheetFunction.` instead of `wf.` or set `WorksheetFunction`
```
mySh.Range("P7") = WorksheetFunction.NetworkDays(start_date, end_date) 'get number of workdays in period
mySh.Range("P8") = WorksheetFunction.Days(end_date, start_date) 'get number of calendar days in period (DAYS takes end_date first)
```
And probably have a look at username_2's answer which will solve your next issue you will run into after you solved this one.
Upvotes: 2 <issue_comment>username_2: Write `Option Explicit` on the top of the module. Thus, code like:
```
cYear = Format(Date, yyyy) 'get year
pMon = Format(Date - 1, mm) 'get previous month
cMon = Format(Date, mm) 'get current month
```
will start telling you that something is wrong. E.g., `yyyy`, `mm` are not defined as variables, which would give you the idea that they should be either variables or passed as string like this:
`cYear = Format(Date, "yyyy")`
Upvotes: 2 <issue_comment>username_3: First, correct the date collection per [username_2](https://stackoverflow.com/users/5448626/vityata)'s response.
You might want to use actual dates, not string concatenations.
```
mySh.Range("P7") = wf.NetworkDays(dateserial(clng(cYear)+cbool(clng(pmon)=12), clng(pmon), 24), dateserial(clng(cYear), clng(cmon), 23)) 'get number of workdays in period
```
NetworkDays is inclusive. Should you be using 23 for both? Shouldn't the previous be 24 or the current be 22?
Upvotes: 2 [selected_answer]<issue_comment>username_4: You can use `start_date` and `end_date` as `Date`, instead of `String`, and it will simplify and shorten your code by a lot.
***Code***
```
Option Explicit
Sub DayCounts()
Dim start_date As Date ' use Date variables
Dim end_date As Date ' use Date variables
Dim mySh As Worksheet
Dim wf As WorksheetFunction
Set mySh = Sheets("prod_log_manual")
' set the worksheetfunction object
Set wf = WorksheetFunction
end_date = DateSerial(Year(Date), Month(Date), 23) ' 23rd of current month
start_date = DateAdd("m", -1, end_date) ' 23rd of previous month
mySh.Range("P7") = wf.NetworkDays(start_date, end_date) ' get number of workdays in period
mySh.Range("P8") = wf.Days(end_date, start_date) ' get number of calendar days in period (DAYS takes end_date first)
mySh.Range("P9") = mySh.Range("P8").Value / 7 'get number of weeks within period
' alternatively, use DateDiff with "w" (weeks) as the interval:
' mySh.Range("P9") = DateDiff("w", start_date, end_date)
End Sub
```
Upvotes: 2
|
2018/03/19
| 696 | 2,047 |
<issue_start>username_0: I have an array of Objects
```
[{"a":{"name":"abc","age":2}},
{"b":{"name":"xyz","age":3}},
{"c":{"name":"pqr","age":4}}]
```
I need to convert this to
```
[{"name":"abc","age":2},
{"name":"xyz","age":3},
{"name":"pqr","age":4}]
```<issue_comment>username_1: You can use one of [Array.prototype](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/prototype) iteration methods, like `map` or `reduce`. I prefer `reduce` for such kind of problems. `Array.prototype.reduce` takes the second parameter as a default value, so it's an empty array and for every iteration that array is concatenated with the value of an object e.g. `{ a: { name: "abc", age: 2 } }` will lead to adding `{ name: "abc", age: 2 }` to the resulted array.
```js
const input = [
{ a: { name: "abc", age: 2 } },
{ b: { name: "xyz", age: 3 } },
{ c: { name: "pqr", age: 4 } }
];
const output = input.reduce(
(acc, entry) => acc.concat(Object.values(entry)),
[]
)
console.log(output)
```
Upvotes: 0 <issue_comment>username_2: Simply use `map` and `Object.values`
```
var output = arr.map( s => Object.values(s)[0] );
```
**Demo**
```js
var arr = [{
"a": {
"name": "abc",
"age": 2
}
},
{
"b": {
"name": "xyz",
"age": 3
}
},
{
"c": {
"name": "pqr",
"age": 4
}
}
];
var output = arr.map( s => Object.values(s)[0] );
console.log(output);
```
Upvotes: 3 <issue_comment>username_3: ```
const array = [{"a":{"name":"abc","age":2}},
{"b":{"name":"xyz","age":3}},
{"c":{"name":"pqr","age":4}}]
array.map(item=>{
return Object.keys(item).reduce((acc,key)=>{
return item[key]
},{})
})
```
Upvotes: 1 <issue_comment>username_4: And straight ahead dumb way as well:
```
src=[{"a":{"name":"abc","age":2}},
{"b":{"name":"xyz","age":3}},
{"c":{"name":"pqr","age":4}}]
result=[]
for arr in src:
for d in arr:
result.append(arr[d])
```
Upvotes: 1
|
2018/03/19
| 2,497 | 7,970 |
<issue_start>username_0: ```
private int PingPong(int baseNumber, int limit)
{
if (limit < 2) return 0;
return limit - Mathf.Abs(limit - Modulus(baseNumber, limit + (limit - 2)) - 1) - 1;
}
private int Modulus(int baseNumber, int modulus)
{
return (modulus == 0) ? baseNumber : baseNumber - modulus * (int)Mathf.Floor(baseNumber / (float)modulus);
}
```
It's part of my script.
I thought it should make the object move back and forth between two waypoints, but it doesn't.
I can't figure out what this part of the code is supposed to do and what it actually does.
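For reference, the two helpers are pure index arithmetic: `Modulus` is a floored modulo, and `PingPong` folds an ever-growing counter into a triangle wave `0, 1, …, limit-1, limit-2, …, 0`. They only compute a waypoint index; nothing in them moves an object. A quick Python sketch of the same arithmetic (names mine) shows the pattern for `limit = 4`:

```python
import math

def modulus(base, mod):
    # Floored modulo, matching the C# helper (also correct for negative base)
    return base if mod == 0 else base - mod * math.floor(base / mod)

def ping_pong(base, limit):
    # Maps an increasing counter onto 0..limit-1 and back again
    if limit < 2:
        return 0
    return limit - abs(limit - modulus(base, limit + (limit - 2)) - 1) - 1

print([ping_pong(i, 4) for i in range(10)])
# [0, 1, 2, 3, 2, 1, 0, 1, 2, 3]
```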
My complete script with the changes I did:
```
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
[System.Serializable]
public class PatrolData
{
public Transform target = null;
public float minDistance = 5f;
public float lingerDuration = 5f;
public float desiredHeight = 10f;
public float flightSmoothTime = 10f;
public float maxFlightspeed = 10f;
public float flightAcceleration = 1f;
public float levelingSmoothTime = 0.5f;
public float maxLevelingSpeed = 10000f;
public float levelingAcceleration = 2f;
}
public class PatrolOverTerrain : MonoBehaviour
{
public FlyToOverTerrain flyOverTerrain;
public LookAtCamera lookAtCamera;
public UserInformation userinformation;
public enum PatrolMode { Clamp, Wrap, PingPong, Random };
public PatrolData[] patrolPoints;
public PatrolMode mode = PatrolMode.Wrap;
private int iterator = 0;
private int index = 0;
private float lingerDuration = 0f;
private int overallLength = 0;
public bool autoFreedomPatrol = false;
public List<GameObject> Targets = new List<GameObject>();
public string tagName;
public Vector3 distanceFromTarget;
public float k = 0f;
public Vector3 WayPointA;
public Vector3 WayPointB;
public void Start()
{
if (tagName != "")
{
GameObject[] tempObj = GameObject.FindGameObjectsWithTag(tagName);
for (int i = 0; i < tempObj.Length; i++)
{
//Add to list only if it does not exist
if (!Targets.Contains(tempObj[i]))
{
Targets.Add(tempObj[i]);
}
}
//Get the current Size
if (tempObj != null)
{
overallLength = Targets.Count;
}
GeneratePatrolPoints();
}
}
private void OnEnable()
{
if (patrolPoints.Length > 0)
{
lingerDuration = patrolPoints[index].lingerDuration;
}
}
private void Update()
{
int length = patrolPoints.Length;
if (!flyOverTerrain) return;
if (patrolPoints.Length < 1) return;
if (index < 0) return;
var patrol = patrolPoints[index];
if (lingerDuration <= 0)
{
iterator++;
switch (mode)
{
case PatrolMode.Clamp:
index = (iterator >= length) ? -1 : iterator;
break;
case PatrolMode.Wrap:
iterator = Modulus(iterator, length);
index = iterator;
break;
case PatrolMode.PingPong:
WayPointA = patrolPoints[index].target.position;
WayPointB = patrolPoints[index + 1].target.position;
break;
case PatrolMode.Random:
index = Random.Range(0, patrolPoints.Length);
break;
}
if (index < 0) return;
patrol = patrolPoints[index];
flyOverTerrain.target = patrol.target;
flyOverTerrain.desiredHeight = patrol.desiredHeight;
flyOverTerrain.flightSmoothTime = patrol.flightSmoothTime;
flyOverTerrain.maxFlightspeed = patrol.maxFlightspeed;
flyOverTerrain.flightAcceleration = patrol.flightAcceleration;
flyOverTerrain.levelingSmoothTime = patrol.levelingSmoothTime;
flyOverTerrain.maxLevelingSpeed = patrol.maxLevelingSpeed;
flyOverTerrain.levelingAcceleration = patrol.levelingAcceleration;
if (lookAtCamera != null)
{
lookAtCamera.target = patrol.target;
lookAtCamera.RotationSpeed = 3;
}
//userinformation.target = patrol.target;
lingerDuration = patrolPoints[index].lingerDuration;
}
Vector3 targetOffset = Vector3.zero;
if ((bool)patrol.target)
{
targetOffset = transform.position - patrol.target.position;
}
 float sqrDistance = patrol.minDistance * patrol.minDistance;
if (targetOffset.sqrMagnitude <= sqrDistance)
{
flyOverTerrain.target = null;
if (lookAtCamera != null)
lookAtCamera.target = null;
lingerDuration -= Time.deltaTime;
}
else
{
flyOverTerrain.target = patrol.target;
if (lookAtCamera != null)
lookAtCamera.target = patrol.target;
}
distanceFromTarget = transform.position - patrol.target.position;
}
private void PingPong()
{
k = Mathf.PingPong(Time.time, 1);
transform.position = Vector3.Lerp(WayPointA, WayPointB, k);
}
private int Modulus(int baseNumber, int modulus)
{
 return (modulus == 0) ? baseNumber : baseNumber - modulus * (int)Mathf.Floor(baseNumber / (float)modulus);
}
public void GeneratePatrolPoints()
{
patrolPoints = new PatrolData[Targets.Count];
for (int i = 0; i < patrolPoints.Length; i++)
{
patrolPoints[i] = new PatrolData();
patrolPoints[i].target = Targets[i].transform;
patrolPoints[i].minDistance = 30f;
patrolPoints[i].lingerDuration = 3f;
patrolPoints[i].desiredHeight = 20f;
patrolPoints[i].flightSmoothTime = 10f;
patrolPoints[i].maxFlightspeed = 10f;
patrolPoints[i].flightAcceleration = 3f;
patrolPoints[i].levelingSmoothTime = 0.5f;
patrolPoints[i].maxLevelingSpeed = 10000f;
patrolPoints[i].levelingAcceleration = 2f;
}
}
}
```
The changes I did and tried so far:
Added WaypointA and WaypointB as global variables:
```
public Vector3 WayPointA;
public Vector3 WayPointB;
```
Then in the case of PingPong I did:
```
case PatrolMode.PingPong:
WayPointA = patrolPoints[index].target.position;
WayPointB = patrolPoints[index + 1].target.position;
break;
```
Then created a PingPong method:
```
private void PingPong()
{
k = Mathf.PingPong(Time.time, 1);
transform.position = Vector3.Lerp(WayPointA, WayPointB, k);
}
```
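For reference, `Mathf.PingPong(t, length)` behaves like a triangle wave that bounces between 0 and `length`; the following plain-Python model of that behavior is illustration only (not Unity API code), to show why feeding it a growing time value yields a usable `Lerp` factor:

```python
def ping_pong(t, length):
    """Model of Unity's Mathf.PingPong: t runs 0 -> length -> 0 -> ... forever."""
    cycle = t % (2 * length)  # position within one full up-and-down cycle
    return 2 * length - cycle if cycle > length else cycle

# Sampling at t = 0, 0.5, 1, 1.5, 2, 2.5 sweeps 0..1..0 repeatedly.
samples = [ping_pong(x / 2, 1) for x in range(6)]  # [0, 0.5, 1, 0.5, 0, 0.5]
```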
But I'm not sure where to call the PingPong method in Update?
And maybe I should use a bool flag to check when it's PingPong mode in Update?
What I want to do is: if I start the game in the PingPong mode case, then the object should move between waypoint index 0 and index 1 nonstop.
But if I change the mode to PingPong in the middle while the game is running then the object should move between the current waypoint index and the next one. For example between index 4 and index 5.
<issue_comment>username_1: You can use one of [Array.prototype](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/prototype) iteration methods, like `map` or `reduce`. I prefer `reduce` for this kind of problem. `Array.prototype.reduce` takes the second parameter as a default value, so it's an empty array, and for every iteration that array is concatenated with the value of an object, e.g. `{ a: { name: "abc", age: 2 } }` will lead to adding `{ name: "abc", age: 2 }` to the resulting array.
```js
const input = [
{ a: { name: "abc", age: 2 } },
{ b: { name: "xyz", age: 3 } },
{ c: { name: "pqr", age: 4 } }
];
const output = input.reduce(
(acc, entry) => acc.concat(Object.values(entry)),
[]
)
console.log(output)
```
Upvotes: 0 <issue_comment>username_2: Simply use `map` and `Object.values`
```
var output = arr.map( s => Object.values(s)[0] );
```
**Demo**
```js
var arr = [{
"a": {
"name": "abc",
"age": 2
}
},
{
"b": {
"name": "xyz",
"age": 3
}
},
{
"c": {
"name": "pqr",
"age": 4
}
}
];
var output = arr.map( s => Object.values(s)[0] );
console.log(output);
```
Upvotes: 3 <issue_comment>username_3: ```
const array = [{"a":{"name":"abc","age":2}},
{"b":{"name":"xyz","age":3}},
{"c":{"name":"pqr","age":4}}]
array.map(item=>{
return Object.keys(item).reduce((acc,key)=>{
return item[key]
},{})
})
```
Upvotes: 1 <issue_comment>username_4: And a straightforward dumb way as well (note: this one is in Python):
```
src=[{"a":{"name":"abc","age":2}},
{"b":{"name":"xyz","age":3}},
{"c":{"name":"pqr","age":4}}]
result=[]
for arr in src:
for d in arr:
result.append(arr[d])
```
Upvotes: 1
|
2018/03/19
| 625 | 2,070 |
<issue_start>username_0: It's my first time creating an API, I'm using laravel 5.5.
I created a simple `GET` endpoint with `auth:api` as the only middleware.
When I access the endpoint using the API token in the query string (i.e. `http://localhost/my-endpoint?api_token=123`), it works, but I don't want it like that.
How can I make Laravel accept the token in the header as well? (i.e. `Authorization: Bearer 123`)
**EDIT:**
Here is my route code:
```
Route::get( '/my-endpoint', function(){
dd( 'hello world' );
})->middleware( 'auth:api' );
```<issue_comment>username_1: You must alter table to add 'api\_token' field.
```
Schema::create('users', function (Blueprint $table) {
$table->increments('id');
$table->string('name');
$table->string('email')->unique();
$table->string('password');
$table->string('api_token', 60)->unique(); // Add api_token field
$table->rememberToken();
$table->timestamps();
});
```
Now, you can use **Bearer token** with **Authorization** in header.
<https://medium.com/@sirajul.anik/laravel-api-authenticate-user-with-custom-driver-different-table-using-auth-middleware-fa2cabec2d61>
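As a quick framework-free sanity check of the header shape, here is a hedged Python sketch using only the standard library (the endpoint URL and token `123` are the placeholder values from the question; nothing here is Laravel code):

```python
import urllib.request

token = "123"  # placeholder token from the question
req = urllib.request.Request(
    "http://localhost/my-endpoint",
    headers={"Authorization": "Bearer " + token},
)

# The request now carries the token in the Authorization header instead of
# the query string; urllib.request.urlopen(req) would send it (not run here).
```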
Upvotes: 2 <issue_comment>username_2: For details on this api authentication, [follow this stack](https://stackoverflow.com/questions/58074976/how-to-resolve-laravel-401-unauthorized-error/68957842#68957842)
If indeed you are able to pass it into the parameters successfully, then include it in the header object like you would any other header...
```
axios({method: 'get', url: 'api/user', headers: {
'Authorization': 'Bearer ' + yourToken,
}})
.then((response)=>{
console.log(response.data)
})
```
I say this because there's usually additional steps for the api to work, but if it works in your get parameter, then thats all you need
OR better still, pass it by default for all request
```
//js/main.js
axios.defaults.headers.common = {
'X-CSRF-TOKEN': your_csrfToken,
'X-Requested-With': 'XMLHttpRequest',
'Authorization': 'Bearer ' + yourToken,
};
```
Upvotes: 0
|
2018/03/19
| 1,772 | 4,557 |
<issue_start>username_0: I am struggling with a Kata in Code Wars:
<https://www.codewars.com/kata/5672682212c8ecf83e000050/train/javascript>
The idea is to create a sequence of numbers, where each number is created recursively following these two formulas:
```
y=2x + 1
z=3x + 1
```
With x being the current number in the sequence.
Starting with 1, the sequence would grow like this:
```
sequence = [1]
x = 1
y = 2*1 +1 = 3
z = 3*1 + 1 = 4
leading to sequence = [1, 3, 4]
```
Applying it to the next numbers leads to:
```
x = 3
y = 2*3 + 1 = 7
z = 3*3 + 1 = 10
leading to sequence = [1, 3, 4, 7, 10]
x = 4
y = 2*4 + 1 = 9
z = 3*4 + 1 = 13
sequence [1, 3, 4, 7, 9, 10, 13]
```
And so forth. Notice that I sorted the sequence in the last step, as the results of x=4 (9 and 13) have to be mixed with the results of x=3 (7 and 10) to keep an ordered sequence.
[1, 3, 4, 7, 9, 10, 13, 15, 19, 21, 22, 27, ...]
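As a sanity check on the sequence definition above, the ordered merge can be reproduced with a min-heap; this is a plain-Python sketch only to pin down the expected values (the kata itself expects JavaScript):

```python
import heapq

def dbl_linear_prefix(n):
    """First n terms of u: 1 is in u; for every x in u, 2x+1 and 3x+1 are in u."""
    heap, seen, out = [1], {1}, []
    while len(out) < n:
        x = heapq.heappop(heap)   # smallest not-yet-emitted term
        out.append(x)
        for nxt in (2 * x + 1, 3 * x + 1):
            if nxt not in seen:   # avoid duplicates such as 2*15+1 == 3*10+1
                seen.add(nxt)
                heapq.heappush(heap, nxt)
    return out
```

The prefix it produces matches the hand-derived list above, and `dbl_linear_prefix(n + 1)[n]` gives the 0-indexed term `u(n)`.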
I was able to solve the problem correctly, but the implementation is supposed to be efficient and I am timing out. My code:
```
function dblLinear(n) {
cache = {};
cache[0] = 1;
res = [1];
lengthCounter = 0
if (n === 0) {
return 1;
}
for (var i = 1; i < n + 10; i++) {
//console.log("i ", i)
if (cache[i] === undefined && res.includes(i)) {
//console.log('i: ', i, ' el1: ', i * 2 + 1, ' el2: ', i * 3 + 1);
cache[i] = [i * 2 + 1, i * 3 + 1]
if (!res.includes(i * 2 + 1)) {
res.push(i * 2 + 1);
}
if (!res.includes(i * 3 + 1)) {
res.push(i * 3 + 1);
}
//console.log("res ", res)
}
if (res.length !== lengthCounter) {
var arrStart = res.slice(0, Math.floor(res.length / 2));
var arrEnd = res.slice(Math.floor(res.length / 2), res.length)
arrEnd.sort(function(a, b) {
return a - b;
});
res = arrStart.concat(arrEnd)
lengthCounter = res.length
}
}
//console.log('res ', res);
return res[n];
}
```
As you can see in my code, I tried some simple tricks to increase the efficiency, but I'm guessing I need more speed gains. What do you think is the problem, and how could I increase the efficiency?
Any help will be greatly appreciated!
Cheers
Manuel<issue_comment>username_1: This problem can be solved in O(n). The idea is to keep track which element was generated last and add the smaller one of the two possibilities, so the elements are added in order. This code passes all the tests easily.
```js
function dblLinear(n) {
let u = [1], x = 0, y = 0
for (let i = 0; i < n; i++) {
let nextX = 2 * u[x] + 1, nextY = 3 * u[y] + 1
if (nextX <= nextY) {
u.push(nextX)
x++
if (nextX == nextY)
y++
} else {
u.push(nextY)
y++
}
}
return u[n]
}
console.log(dblLinear(10) + " = " + 22)
console.log(dblLinear(20) + " = " + 57)
console.log(dblLinear(30) + " = " + 91)
console.log(dblLinear(50) + " = " + 175)
console.log(dblLinear(100) + " = " + 447)
```
Upvotes: 4 <issue_comment>username_2: The existing solution is great but my solution to this problem is
```js
function dblLinear(n) {
let eq1Index = 0;
let eq2Index = 0;
let eq1Array = [3];
let eq2Array = [4];
let result = 1;
for (let i = 1; i <= n; i++) {
if (eq1Array[eq1Index] < eq2Array[eq2Index]) {
result = eq1Array[eq1Index];
eq1Index++;
} else {
result = eq2Array[eq2Index];
eq2Index++;
}
eq2Array.indexOf(2 * result + 1) == -1 && eq1Array.push(2 * result + 1);
eq2Array.push(3 * result + 1);
}
return result;
}
console.log(dblLinear(10) + " = " + 22)
console.log(dblLinear(20) + " = " + 57)
console.log(dblLinear(30) + " = " + 91)
console.log(dblLinear(50) + " = " + 175)
console.log(dblLinear(100) + " = " + 447)
```
Upvotes: 2 <issue_comment>username_3: A good solution for this:
```js
function dblLinear(n) {
var ai = 0, bi = 0, eq = 0;
var sequence = [1];
while (ai + bi < n + eq) {
var y = 2 * sequence[ai] + 1;
var z = 3 * sequence[bi] + 1;
if (y < z) { sequence.push(y); ai++; }
else if (y > z) { sequence.push(z); bi++; }
else { sequence.push(y); ai++; bi++; eq++; }
}
return sequence.pop();
}
console.log(dblLinear(10) + " = " + 22)
console.log(dblLinear(20) + " = " + 57)
console.log(dblLinear(30) + " = " + 91)
console.log(dblLinear(50) + " = " + 175)
console.log(dblLinear(100) + " = " + 447)
```
Upvotes: 2
|
2018/03/19
| 1,594 | 5,840 |
<issue_start>username_0: So, I'd like my user session to persist upon login/signup, which it does not.
The official documentation says to add this to start with:
```
app.use(express.session({ secret: 'keyboard cat' }));
app.use(passport.initialize());
app.use(passport.session());
```
which I did. Then it goes on to specify :
*In a typical web application, the credentials used to authenticate a user will only be transmitted during the login request. If authentication succeeds, a session will be established and maintained via a cookie set in the user's browser.
Each subsequent request will not contain credentials, but rather the unique cookie that identifies the session. In order to support login sessions, Passport will serialize and deserialize user instances to and from the session.*
```
passport.serializeUser(function(user, done) {
console.log("serialize user: ", user);
done(null, user[0]._id);
});
passport.deserializeUser(function(id, done) {
User.findById(id, function(err, user) {
done(err, user);
});
});
```
If I understand correctly, upon login, the user should see a new cookie being set.
My serialize and deserialize functions seem to work. The console will log the user details after I log in a user. There is no error message in the console.
However, I don't see any cookie when I log in a user.
Am I supposed to add an additional command manually? Something like this:
```
res.cookie('userid', user.id, { maxAge: 2592000000 });
```
I am using Redux, so am I supposed to deal with the persistent session via the reducer instead, with my authenticated (true or false) variable ?
I think I am a bit confused right now between what is supposed to be done on the server side and what is supposed to be done on the client side.<issue_comment>username_1: ```
//npm modules
const express = require('express');
const uuid = require('uuid/v4')
const session = require('express-session')
const FileStore = require('session-file-store')(session);
const bodyParser = require('body-parser');
const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;
const users = [
{id: '2f24vvg', email: '<EMAIL>', password: '<PASSWORD>'}
]
// configure passport.js to use the local strategy
passport.use(new LocalStrategy(
{ usernameField: 'email' },
(email, password, done) => {
console.log('Inside local strategy callback')
// here is where you make a call to the database
// to find the user based on their username or email address
// for now, we'll just pretend we found that it was users[0]
const user = users[0]
if(email === user.email && password === user.password) {
console.log('Local strategy returned true')
return done(null, user)
}
}
));
// tell passport how to serialize the user
passport.serializeUser((user, done) => {
console.log('Inside serializeUser callback. User id is save to the session file store here')
done(null, user.id);
});
// create the server
const app = express();
// add & configure middleware
app.use(bodyParser.urlencoded({ extended: false }))
app.use(bodyParser.json())
app.use(session({
genid: (req) => {
console.log('Inside session middleware genid function')
console.log(`Request object sessionID from client: ${req.sessionID}`)
return uuid() // use UUIDs for session IDs
},
store: new FileStore(),
secret: 'keyboard cat',
resave: false,
saveUninitialized: true
}))
app.use(passport.initialize());
app.use(passport.session());
// create the homepage route at '/'
app.get('/', (req, res) => {
console.log('Inside the homepage callback')
console.log(req.sessionID)
res.send(`You got home page!\n`)
})
// create the login get and post routes
app.get('/login', (req, res) => {
console.log('Inside GET /login callback')
console.log(req.sessionID)
res.send(`You got the login page!\n`)
})
app.post('/login', (req, res, next) => {
console.log('Inside POST /login callback')
passport.authenticate('local', (err, user, info) => {
console.log('Inside passport.authenticate() callback');
console.log(`req.session.passport: ${JSON.stringify(req.session.passport)}`)
console.log(`req.user: ${JSON.stringify(req.user)}`)
req.login(user, (err) => {
console.log('Inside req.login() callback')
console.log(`req.session.passport: ${JSON.stringify(req.session.passport)}`)
console.log(`req.user: ${JSON.stringify(req.user)}`)
return res.send('You were authenticated & logged in!\n');
})
})(req, res, next);
})
// tell the server what port to listen on
app.listen(3000, () => {
console.log('Listening on localhost:3000')
})
```
try going through the link
<https://medium.com/@evangow/server-authentication-basics-express-sessions-passport-and-curl-359b7456003d>
Upvotes: 2 <issue_comment>username_2: 1. Check hostname. In my case, cookie was set on `127.0.0.1`, not `localhost`.
2. Make sure `express session` is called before `passport session`.
Upvotes: 1 <issue_comment>username_3: If anyone is using **mongoose** with **expressjs** and **passportjs**, you can try the method below. This also keeps the cookie saved after the server restarts. For other databases you can check the [Expressjs Compatible Session Stores List](https://github.com/expressjs/session/blob/master/README.md#compatible-session-stores)
```js
const session = require("express-session");
const MongoStore = require("connect-mongo")(session);
// before this setup mongoose connection then add this code.
app.use(
session({
store: new MongoStore({
mongooseConnection: mongoose.connection,
ttl: 365 * 24 * 60 * 60, // = 365 days.
}),
secret: process.env.AUTH_SECRET,
resave: true,
saveUninitialized: true,
cookie: {
secure: app.get("env") === "production",
},
})
);
```
Upvotes: 0
|
2018/03/19
| 3,066 | 10,517 |
<issue_start>username_0: I've recently started to use Google Colab, and wanted to train my first Convolutional NN. I imported the images from my Google Drive thanks to the answer I got [here](https://stackoverflow.com/questions/49351071/load-image-dataset-folder-or-zip-located-in-google-drive-to-google-colab).
Then I pasted my code to create the CNN into Colab and started the process.
Here is the complete code:
Part 1: Setting up Colab to import pictures from my Drive
--------------------------------------------------------
(part 1 is copied from [here](https://stackoverflow.com/questions/49351071/load-image-dataset-folder-or-zip-located-in-google-drive-to-google-colab) as it worked as expected for me)
Step 1:
```
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
```
Step 2:
```
from google.colab import auth
auth.authenticate_user()
```
Step 3:
```
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
```
Step 4:
```
!mkdir -p drive
!google-drive-ocamlfuse drive
```
Step 5:
```
print('Files in Drive:')
!ls drive/
```
Part 2: Copy pasting my CNN
---------------------------
I created this CNN with tutorials from a Udemy Course. It uses keras with tensorflow as backend.
For the sake of simplicity I uploaded a really simple version, which is plenty enough to show my problems
```
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.layers import Dropout
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
```
parameters
```
imageSize=32
batchSize=64
epochAmount=50
```
CNN
```
classifier=Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu')) #convolutional layer
classifier.add(MaxPooling2D(pool_size = (2, 2))) #pooling layer
classifier.add(Flatten())
```
ANN
```
classifier.add(Dense(units=64, activation='relu')) #hidden layer
classifier.add(Dense(units=1, activation='sigmoid')) #output layer
classifier.compile(optimizer = "adam", loss = 'binary_crossentropy', metrics = ['accuracy']) #training method
```
image preprocessing
```
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/training_set',
target_size = (imageSize, imageSize),
batch_size = batchSize,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/test_set',
target_size = (imageSize, imageSize),
batch_size = batchSize,
class_mode = 'binary')
classifier.fit_generator(training_set,
steps_per_epoch = (8000//batchSize),
epochs = epochAmount,
validation_data = test_set,
validation_steps = (2000//batchSize))
```
Now comes my Problem
--------------------
First off, the training set I used is a database with 10000 dog and cat pictures of various resolutions (8000 in training_set, 2000 in test_set).
I ran this CNN on Google Colab (with GPU support enabled) and on my PC (tensorflow-gpu on GTX 1060)
This is an intermediate result from my PC:
```
Epoch 2/50
63/125 [==============>...............] - ETA: 2s - loss: 0.6382 - acc: 0.6520
```
And this from Colab:
```
Epoch 1/50
13/125 [==>...........................] - ETA: 1:00:51 - loss: 0.7265 - acc: 0.4916
```
Why is Google Colab so slow in my case?
Personally I suspect a bottleneck consisting of pulling and then reading the images from my Drive, but I don't know how to solve this other than choosing a different method to import the database.<issue_comment>username_1: It's very slow to read file from google drives.
For example, I have one big file(39GB).
It cost more than 10min when I exec '!cp drive/big.file /content/'.
After I shared my file, and got the url from google drive. It cost 5 min when I exec '! wget -c -O big.file <http://share.url.from.drive>'. Download speed can up to 130MB/s.
Upvotes: 5 <issue_comment>username_2: As @[Feng](https://stackoverflow.com/users/5588431/feng) has already noted, reading files from Drive is very slow. [This](https://medium.com/@oribarel/getting-the-most-out-of-your-google-colab-2b0585f82403) tutorial suggests using some sort of memory-mapped file like hdf5 or lmdb in order to overcome this issue. This way the I/O operations are much faster (for a complete explanation of the speed gain of the hdf5 format, see [this](https://stackoverflow.com/questions/27710245/is-there-an-analysis-speed-or-memory-usage-advantage-to-using-hdf5-for-large-arr)).
Upvotes: 6 [selected_answer]<issue_comment>username_3: I have the same question as to why the GPU on colab seems to be taking at least just as long as my local pc so I can't really be of help there. But with that being said, if you are trying to use your data locally, I have found the following process to be significantly faster than just using the upload function provided in colab.
1.) mount google drive
```
# Run this cell to mount your Google Drive.
from google.colab import drive
drive.mount('/content/drive')
```
2.) create a folder outside of the google drive folder that you want your data to be stored in
3.) use the following command to symlink the contents from your desired folder in google drive to the folder you created
```
!ln -s "/content/drive/My Drive/path_to_folder_desired" "/path/to/the_folder/you created"
```
(this is referenced from [another stackoverflow](https://stackoverflow.com/questions/53338621/issue-with-gdrive-on-colab) response that I used to find a solution to a similar issue )
4.) Now you have your data available to you at the path, "/path/to/the_folder/you created"
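The same link can also be created from Python directly; a hedged stdlib-only sketch (all paths below are throwaway placeholders standing in for the mounted Drive path and the local folder):

```python
import os
import tempfile

base = tempfile.mkdtemp()
drive_folder = os.path.join(base, "drive_folder")  # stands in for the Drive path
local_link = os.path.join(base, "fast_link")       # stands in for the local folder
os.makedirs(drive_folder)
with open(os.path.join(drive_folder, "sample.txt"), "w") as f:
    f.write("data")

os.symlink(drive_folder, local_link)  # Python equivalent of `ln -s src dest`
contents = os.listdir(local_link)     # the link exposes the folder's files
```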
Upvotes: -1 <issue_comment>username_4: You can load your data as a numpy array (.npy format) and use the flow method instead of flow_from_directory. Colab provides 25GB RAM, so even for big datasets you can load your entire data into memory. The speed-up was found to be around 2.5x, with the same data generation steps!!!
(Even faster than data stored on Colab local disk, i.e. '/content', or Google Drive.)
Since Colab provides only a single-core CPU (2 threads per core), there seems to be a bottleneck with CPU-GPU data transfer (say K80 or T4 GPU), especially if you use a data generator for heavy preprocessing or data augmentation.
You can also try setting different values for parameters like 'workers', 'use_multiprocessing', 'max_queue_size' in the fit_generator method...
Upvotes: 2 <issue_comment>username_5: Reading files from Google Drive slows down your training process. The solution is to upload the zip file to Colab and unzip there. Hope it is clear for you.
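To make the zip route concrete, here is a hedged, stdlib-only Python sketch (in Colab you would point it at your real uploaded archive and a fast local directory such as `/content/data`; the demo archive below is a stand-in):

```python
import os
import tempfile
import zipfile

def extract_dataset(zip_path, dest_dir):
    """Unzip an uploaded archive onto fast local disk and list extracted files."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
    return sorted(
        os.path.join(root, name)
        for root, _dirs, names in os.walk(dest_dir)
        for name in names
    )

# Self-contained demo with a throwaway archive standing in for the dataset.
tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, "dataset.zip")
with zipfile.ZipFile(zip_path, "w") as zf:
    zf.writestr("training_set/cat_0001.jpg", b"fake image bytes")
    zf.writestr("training_set/dog_0001.jpg", b"fake image bytes")
extracted = extract_dataset(zip_path, os.path.join(tmp, "data"))
```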
Upvotes: 3 <issue_comment>username_6: If you want to work on datasets from kaggle check [this](https://medium.com/@saedhussain/google-colaboratory-and-kaggle-datasets-b57a83eb6ef8#:%7E:text=Step%203%3A%20Upload%20Kaggle%20API%20json%20file%20to%20Google%20Colab&text=PS%3A%20You%20could%20use%20this,the%20files%20in%20the%20directory.)
**Remember :** Inside Google colab Linux commands are ran by prefixing **'!'**
**eg :**
```
!mkdir -p ~/.kaggle
```
```
!ls
!unzip -q downloaded_file.zip
```
Upvotes: 1 <issue_comment>username_7: I had the same issue, and here's how I solved it.
First, make sure GPU is enabled(because it is not by default) by going to Runtime -> Change runtime type, and choosing GPU as your Hardware accelerator.
Then, as shown [here](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/load_data/images.ipynb#scrollTo=Ti8avTlLofoJ) you can use cache() and prefetch() functions to optimize the performance. Example:
```
# Load dataset
train_ds = keras.preprocessing.image_dataset_from_directory('Data/train',labels="inferred")
val_ds = keras.preprocessing.image_dataset_from_directory('Data/test',labels="inferred")
# Standardize data (optional)
from tensorflow.keras import layers
normalization_layer = keras.layers.experimental.preprocessing.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))
# Cache to RAM (optional)
from tensorflow import data
AUTOTUNE = data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
# Train
model.fit(train_ds, validation_data=val_ds, epochs=3)
```
Upvotes: 3 <issue_comment>username_8: Google Colab instances use faster local storage than Google Drive. As you are accessing files from Google Drive (which has a larger access time), you are getting low speed. First copy the files to the Colab instance, then train your network.
Upvotes: -1 <issue_comment>username_9: I was facing the same issue. Here's how I solved it:-
1. Uploaded the zip file of the dataset to google drive.
2. Mount the drive in colab and then unzip the dataset file 'in' a separate folder(other than ../drive) in colab itself.
3. Do your business.
It worked for me. I don't know the exact reason but since colab access its local directory faster than it accesses the mounted drive directory, that may happen to be the gist of the problem.
Upvotes: 3 <issue_comment>username_10: In my case, the GPU on Colab is super fast compared to my Nvidia GPU card on my PC, based on training speeds. However, when doing simulations, which I can only presume involve the CPU, my PC is nearly 50% faster (i7, 10th Gen).
Upvotes: 0
|
2018/03/19
| 997 | 3,140 |
<issue_start>username_0: Given a List of elements:
```
xs = [a, b, c, d, ... z]
```
where `a, b, c` etc are placeholders for arbitrary values.
I want to implement a function `adjacents :: [a] -> [(a, a)]` that produces
```
adjacentValues = [(a, b), (b, c), (c, d), ... (y, z)]
```
---
In Haskell, a recursive definition is reasonably concise:
```
adjacents :: [a] -> [(a, a)]
adjacents (x:y:ys) = (x, y) : adjacents (y:ys)
adjacents _ = []
```
Purescript is a little more verbose:
```
adjacents :: forall a. List a -> List (Tuple a a)
adjacents list = case uncons list of
Nothing -> []
Just {head: x, tail: xs} -> case head xs of
Just next -> Tuple x next : adjacents xs
Nothing -> []
```
Is there a way to express `adjacents` without explicit recursion (using a fold)?
---
Disclaimer: This question has both Purescript and Haskell tags because I want to open it to a broader audience. I reckon an answer not to depend on haskells lazy-evaluation semantics, and therefore to be valid in both languages.<issue_comment>username_1: In Haskell, without explicit recursion, you can zip a list with its tail.
```
let a = [1,2,3,4,5,6,7,8,9,0]
a `zip` tail a
=> [(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,8),(8,9),(9,0)]
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Purescript solution for completeness sake:
```
adjacent :: forall n. List n -> List (Tuple n n)
adjacent list = zip list $ fromMaybe empty $ tail list
```
can be expressed more elegantly as:
```
adjacent :: forall n. List n -> List (Tuple n n)
adjacent list = zip list $ drop 1 list
```
Upvotes: 2 <issue_comment>username_3: For the sake of illustration (the `zip`-based solutions are definitely nicer), here is your explicitly recursive Haskell solution written as an unfold. I have golfed it into an one-liner for no particular reason.
```
{-# LANGUAGE LambdaCase #-}
import Data.List (unfoldr)
adjacent :: [a] -> [(a, a)]
adjacent = unfoldr (\case { x:y:ys -> Just ((x, y), ys); _ -> Nothing })
```
(Note that the pattern matches here handle lists with an odd number of elements without crashing.)
Upvotes: 0 <issue_comment>username_4: Since we've seen `zip` and `unfoldr`, we should have one using `foldr`:
```
adjacent :: [a] -> [(a,a)]
adjacent xs = foldr go (const []) xs Nothing
where
go a r Nothing = r (Just a)
go a r (Just prev) = (prev, a) : r (Just a)
```
And now, because every toy problem deserves an over-engineered solution, here's what you could use to get double-sided list fusion:
```
import GHC.Exts (build)
adjacent :: [a] -> [(a,a)]
adjacent xs = build $ \c nil ->
let
go a r Nothing = r (Just a)
go a r (Just prev) = (prev, a) `c` r (Just a)
in foldr go (const nil) xs Nothing
{-# INLINE adjacent #-}
```
Upvotes: 0 <issue_comment>username_5: folding with state, where the state is the last paired item:
in Haskell:
```
import Data.List (mapAccumL)
adjacents :: [a] -> [(a, a)]
adjacents [] = []
adjacents (x:xs) = snd $ mapAccumL op x xs
where
op x y = (y, (x,y))
```
Upvotes: 0
|
2018/03/19
| 735 | 2,290 |
<issue_start>username_0: In my Ionic 3 app, when I click on the input field, the keyboard opens and my footer changes its position and comes above the keyboard.
I want to fix the layout of the Ionic 3 app; when the keyboard is open, it should not change.
|
2018/03/19
| 427 | 1,581 |
<issue_start>username_0: My android studio is showing empty logcat, even it is not empty in android device monitor. I have attached some screenshots of my project:
[](https://i.stack.imgur.com/0V7yd.png)
[](https://i.stack.imgur.com/08eWv.png)
[](https://i.stack.imgur.com/xOhK6.png)
After running the project, logcat shows unwanted values. This is happening in this project only; in other projects logcat is fine. Please help me with this issue.<issue_comment>username_1: Check whether Tools --> Android --> Enable ADB Integration is enabled; if not, please enable it.[](https://i.stack.imgur.com/8B1nD.png)
And as shown in the above image, select "Show only selected application" in your logcat.
Upvotes: 0 <issue_comment>username_2: As per your first screenshot, I can see that you have selected the Android debug device SAMSUNG SM-J700F, but you have not selected any debuggable process (application). If your project is running in Android Studio and the same app is running on the selected Android debug device, your project will be visible in the dropdown between the SAMSUNG SM-J700F dropdown and the Verbose dropdown. Make sure you click on the dropdown whose current value is shown as **No Debuggable Process** and select your appropriate project.
Upvotes: 1
|
2018/03/19
| 563 | 1,810 |
<issue_start>username_0: I'm trying to work with the external interrupt source, and I wrote a small program to test the interrupt. When I start the program, RB0 is set low and RB1 is set high. If I set RB7 high, an interrupt should be generated that reverses the logic state of RB0 and RB1.
I don't know why the interrupt doesn't work. I have configured all the registers; is something still missing? The compiler is XC16.
Thanks.
Here is the code:
```
#include <xc.h>
#include "setup.h"

void main(void) {
    AD1PCFG = 0xFFFF;

    TRISBbits.TRISB7 = 1;
    TRISBbits.TRISB0 = 0;
    TRISBbits.TRISB1 = 0;

    PORTBbits.RB0 = 0;
    PORTBbits.RB1 = 0;

    _INT0IE = 1;     // enable interrupt on RB7
    _INT0IF = 0;     // clear status flag for interrupt on RB7
    _INT0IP = 0b010; // priority level 2 for INT0

    while(1) {
        _RB0 = 0;
        _RB1 = 1;
    }
}

void _ISR _INT0Interrupt(void) {
    if(_INT0IF) {
        _INT0IF = 0;
        _RB0 = 1;
        _RB1 = 0;
    }
}
```
|
2018/03/19
| 831 | 2,841 |
<issue_start>username_0: I'm doing some tests with lists, `enumerate()` method and CSV files.
I'm using the `writerows()` method to save an enumerate object to a .csv file.
Everything works fine, but the list / enumerate object becomes empty after the writing is done.
Why is this happening?
How can I keep the values in my list (do I have to save them in another variable)?
I'm on Windows 10 using Python 3.6.4
Here is My Code:
```
import csv
b = [1,2,3,4,5,6,7,8,9,10,11,"lol","hello"]
c = enumerate(b)
with open("output.csv", "w", newline='') as myFile:
print("Writing CSV")
writer = csv.writer(myFile)
writer.writerows(c)
print(list(c))
```
output:
```
>>>Writing CSV
>>>[]
>>>[Finished in 0.1s]
```
If I perform `print(list(c))` before the writing method, `c` also becomes empty.
Thanks!<issue_comment>username_1: ```
c = enumerate(b)
```
Here `c` is not a `list` but a generator, which is consumed when you iterate over it.
You will have to create a new generator every time you use it.
If you want a permanent reference to the exhausted contents of the generator, you have to convert it to `list`.
```
c = list(enumerate(b))
```
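To see the difference concretely, here is a small sketch (plain Python, no CSV involved) showing that the `enumerate` generator is emptied by its first full iteration, while a `list` built from it can be reused:

```python
b = [1, 2, 3]

gen = enumerate(b)
first_pass = list(gen)    # this iteration consumes the generator
second_pass = list(gen)   # nothing is left to yield

print(first_pass)   # [(0, 1), (1, 2), (2, 3)]
print(second_pass)  # []

# Materializing the pairs up front keeps them around for repeated use.
pairs = list(enumerate(b))
print(pairs)  # [(0, 1), (1, 2), (2, 3)]
print(pairs)  # [(0, 1), (1, 2), (2, 3)] -- still there
```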
Upvotes: 3 [selected_answer]<issue_comment>username_2: That is perfectly normal. `c` is a generator that will iterate over all elements of `b`, so it will iterate over `b` only once. That is when you call `writer.writerows(c).`
After that, the generator is depleted so making a `list` out of it will return an empty list.
Upvotes: 1 <issue_comment>username_3: [[Python]: **enumerate**(*iterable, start=0*)](https://docs.python.org/3/library/functions.html#enumerate) returns a **generator**.
From [[Python]: Generators](https://wiki.python.org/moin/Generators):
>
> The performance improvement from the use of generators is the result of the lazy (on demand) generation of values, which translates to lower memory usage. Furthermore, we do not need to wait until all the elements have been generated before we start to use them. This is similar to the benefits provided by iterators, but the generator makes building iterators easy.
> ...
> Note: a generator will provide performance benefits only if we do not intend to use that set of generated values more than once.
>
>
>
The values of a generator are consumed once it's iterated on. That's why you have to "save" it by e.g. converting it to a list (which will also consume it since it will iterate over it).
For more details, you could also check [[SO]: How do I list all files of a directory?
(@username_3's answer - Part One)](https://stackoverflow.com/questions/3207219/how-do-i-list-all-files-of-a-directory/48393588#48393588) (**Preliminary notes** section - where I illustrate the behavior of [[Python]: **map**(*function, iterable, ...*)](https://docs.python.org/3/library/functions.html#map)).
Upvotes: 1
|
2018/03/19
| 807 | 2,848 |
<issue_start>username_0: I am new to PHP. I have a function which returns HTML code to my other function. That HTML code has some PHP code in it as well. The problem is I don't know how I can add a for loop in that HTML code and then return the whole HTML.
for example here is my code :
```
php
class Dummy
{
public static function testing($data)
{
return '<div class = "dummy"name:' . $data['name'] . '';
}
}
?>
```
I want to insert foreach loop in it for example
```
php
class Dummy
{
public static function testing($data)
{
return '<div class = "dummy"name:' . $data['name'] . '
foreach($data as $d){
| hello |
}
';
}
}
?>
```<issue_comment>username_1: You need to first build your string with the desired output, then return that string.
```
php
class Dummy
{
public static function testing($data)
{
$str = '<div class = "dummy"name:' . $data['name'] . '
';
foreach($data as $d){
$str .= '| hello |
';
}
$str .= '
';
return $str;
}
}
?>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can't really have a `for` loop within a return statement like that. You should make a variable to hold the string, and build it up within the loop as in this example:
```
public static function testing($data) {
$html = 'name:' . $data['name'] . '';
$html .= '
';
foreach($data as $d){
$html .= '| hello |
';
}
$html .= '
';
return $html;
}
```
I would like to point out however that building HTML like this is generally a bad idea and you should use a templating language as is provided with most frameworks.
In PHP, the `.=` syntax is shorthand for concatenating a string. You could also use `$html = $html . 'some other content
';`.
Upvotes: 1 <issue_comment>username_3: create before return
```
php
class Dummy
{
public static function testing($data)
{
$return='<div class = "dummy"name:' . $data['name'] . '
';
foreach($data as $d){
$return .= '| hello |
';
}
$return .= '
';
return $return;
}
}
?>
```
Upvotes: 0 <issue_comment>username_4: As an example, you can do it like this.
```
php
class Dummy
{
public static function testing($data, $x)
{
$firstLine = '<div class = "dummy"X Value:' . $x . '';
$firstLine .= '';
$innerHtml = '';
foreach($data as $d) {
$innerHtml .= '| hello '.$d['name'].' |
';
}
$innerHtml = '
'.$innerHtml.'
';
$lastLine = "";
return $firstLine.$innerHtml.$lastLine;
}
}
$data[0]['name'] = "username_4";
$data[1]['name'] = "Amit";
$data[2]['name'] = "Rachit";
echo Dummy::testing($data, "This is X Value");
?>
```
Upvotes: -1
|
2018/03/19
| 1,442 | 5,044 |
<issue_start>username_0: Both of them can change the file size according to my test.
Why can they both make a file larger or smaller?
What's the difference between fallocate and ftruncate?<issue_comment>username_1: `ftruncate` is a simple, single-purpose function. [Per the POSIX documentation](http://pubs.opengroup.org/onlinepubs/9699919799/functions/ftruncate.html), it simply sets the file to the requested length:
>
> If `fildes` refers to a regular file, the `ftruncate()` function shall cause the size of the file to be truncated to `length`. ...
>
>
>
`ftruncate()` is also a standard POSIX function and is portable. Note that POSIX does not specify *how* an OS sets the file length, such as whether or not a file set to any length is [a sparse file](https://en.wikipedia.org/wiki/Sparse_file).
[`fallocate()` is a Linux-specific function](http://man7.org/linux/man-pages/man2/fallocate.2.html) that does a lot more, and in very specific ways:
>
> **Allocating disk space**
>
>
> The default operation (i.e., mode is zero) of fallocate() allocates
> the disk space within the range specified by `offset` and `len`. The
> file size (as reported by `stat(2)`) will be changed if `offset+len` is
> greater than the file size. Any subregion within the range specified
> by offset and len that did not contain data before the call will be
> initialized to zero. This default behavior closely resembles the
> behavior of the `posix_fallocate(3)` library function, and is intended
> as a method of optimally implementing that function.
>
>
> ...
>
>
> **Deallocating file space**
>
>
> Specifying the `FALLOC_FL_PUNCH_HOLE` flag (available since Linux
> 2.6.38) in mode deallocates space (i.e., creates a hole) in the byte
> range starting at `offset` and continuing for `len` bytes. Within the
> specified range, partial filesystem blocks are zeroed, and whole
> filesystem blocks are removed from the file. After a successful
> call, subsequent reads from this range will return zeroes.
>
>
> ...
>
>
> **Collapsing file space**
>
>
> Specifying the `FALLOC_FL_COLLAPSE_RANGE` flag (available since Linux
> 3.15) in mode removes a byte range from a file, without leaving a
> hole. The byte range to be collapsed starts at `offset` and continues
> for `len` bytes. At the completion of the operation, the contents of
> the file starting at the location `offset+len` will be appended at the
> location offset, and the file will be `len` bytes smaller.
>
>
> ...
>
>
> **Zeroing file space**
>
>
> Specifying the `FALLOC_FL_ZERO_RANGE` flag (available since Linux 3.15)
> in mode zeroes space in the byte range starting at `offset` and
> continuing for `len` bytes. Within the specified range, blocks are
> preallocated for the regions that span the holes in the file. After
> a successful call, subsequent reads from this range will return
> zeroes.
>
>
> ...
>
>
> **Increasing file space**
>
>
> Specifying the `FALLOC_FL_INSERT_RANGE` flag (available since Linux
> 4.1) in mode increases the file space by inserting a hole within the
> file size without overwriting any existing data. The hole will start
> at `offset` and continue for `len` bytes. When inserting the hole inside
> file, the contents of the file starting at `offset` will be shifted
> upward (i.e., to a higher file offset) by `len` bytes. Inserting a
> hole inside a file increases the file size by `len` bytes.
>
>
> ...
>
>
>
Upvotes: 3 <issue_comment>username_2: **fallocate** is used to preallocate blocks to a file
The following command will allocate a file with a size of 1GB.
```
fallocate -l 1G test_file1.img
```
**ftruncate** - set a file to a specified length
```
ftruncate(fileno(fout),size);
```
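One way to observe the difference without writing C: Python's `os` module wraps both calls (`os.ftruncate`, and `os.posix_fallocate` on POSIX systems). This is only a sketch of the length-versus-allocated-blocks distinction, not of the Linux-specific `fallocate(2)` flags:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # ftruncate: sets the logical length to 1 MiB. On most filesystems
    # this produces a sparse file -- the size grows, but few blocks
    # are actually allocated.
    os.ftruncate(fd, 1024 * 1024)
    st = os.stat(path)
    print(st.st_size)          # 1048576
    print(st.st_blocks * 512)  # usually much less than 1048576 (sparse)

    # posix_fallocate: actually reserves blocks for the same range,
    # without changing the logical size.
    if hasattr(os, "posix_fallocate"):
        os.posix_fallocate(fd, 0, 1024 * 1024)
        print(os.stat(path).st_blocks * 512)  # typically >= 1048576 now

    # ftruncate can also shrink the file; fallocate's default mode cannot.
    os.ftruncate(fd, 0)
    print(os.stat(path).st_size)  # 0
finally:
    os.close(fd)
    os.remove(path)
```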
Upvotes: 0 <issue_comment>username_3: As I now know:
1. fallocate can't make a file shorter (in its default mode); it adds actual space to the file.
2. ftruncate only sets the length in the file's metadata, just like a declaration.
Upvotes: 0 <issue_comment>username_4: With ftruncate you tell Linux the size you want this file to be.
It will truncate extra space if the file is getting shorter (including freeing up disk space) or add zeros, allocating disk space if you make the file longer.
fallocate is a general-purpose function to effect change to a range of bytes belonging to a file.
Depending on how you use fallocate, you can accomplish with it everything you could with ftruncate. It's just a little more complicated, as you have to know which range you need to allocate/deallocate (initial offset + length).
With fallocate you can preallocate disk space without logically growing a file (for example, a zero-byte file at the `ls` level that uses 1 GB on disk).
I just wrote a C program to perform high performance gzipping of a file with direct I/O using fallocate and ftruncate. I use fallocate to pre-allocate 64MB at a time to the file. In the end I use ftruncate to trim the excess space allocated.
Works perfectly with XFS; I confirmed ftruncate actually frees disk space with `xfs_bmap -vp` on a few files.
Upvotes: 0
|
2018/03/19
| 464 | 1,386 |
<issue_start>username_0: What's wrong in this query:
```
select *, STR_TO_DATE(start, '%d/%m/%Y') as date_format from dates where date_format >= 2018-03-19
```
error:
```
Column not found: 1054 Unknown column 'date_format' in 'where clause'
```<issue_comment>username_1: You cannot use `date_format`, as it is just the alias you gave the column; use the expression `STR_TO_DATE(start, '%d/%m/%Y')` instead.
Upvotes: 0 <issue_comment>username_2: You cannot use a column alias in a `where` clause. MySQL has an extension where you can do so in a `having` clause (without doing any aggregation). So you can do:
```
select d.*, STR_TO_DATE(start, '%d/%m/%Y') as date_format
from dates d
having date_format >= '2018-03-19';
```
The normal advice is to repeat the expression:
```
select d.*, STR_TO_DATE(start, '%d/%m/%Y') as date_format
from dates d
having STR_TO_DATE(start, '%d/%m/%Y') >= '2018-03-19';
```
However, I would strongly recommend that you change the structure of the table. The date should not be stored as a string. You can easily fix this:
```
update dates
set start = STR_TO_DATE(start, '%d/%m/%Y');
alter table dates modify column start date;
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: try this way
`date_format` is not a field, so use `start` instead of `date_format`:
```
select *, STR_TO_DATE(start, '%d/%m/%Y') as date_format from dates where start >= '2018-03-19'
```
Upvotes: 0
|
2018/03/19
| 662 | 2,754 |
<issue_start>username_0: I wonder what the following lines mean:
```
buildTypes {
lintOptions {
abortOnError false
}
}
```
Could you help?
Is it recommended or not recommended to use these lines?
Thanks.<issue_comment>username_1: [Lint](https://developer.android.com/studio/write/lint.html) is a tool which helps to find potential bugs in the code, as well as checking code style, etc.
It can be either enabled or disabled for the project. If it is enabled, it will abort the app build when certain bigger issues are discovered. The "abortOnError" flag allows to ignore this error and continue with building the app.
Ideally, you would fix the error rather than suppress it. Suppressing using this flag could be useful for debug builds if you know that the error is there, but don't want to deal with it straight away, or maybe another team member is dealing with it, etc. However, it is marked as an error for a reason, so in general it's not really recommended to ignore them, especially for production builds.
Upvotes: 3 <issue_comment>username_2: ```
lintOptions {
abortOnError false
}
```
This means it will run lint checks but won't abort the build if any lint errors are found. By default it is true, and the build stops if errors are found.
Suppose a scenario where QA is blocked waiting for a build and the developer isn't able to fix the lint errors in time; then we can set `abortOnError false` and give the build to QA. We should then fix the issue before moving to production.
A little more documentation [here](https://developer.android.com/studio/write/lint.html)
Upvotes: 0 <issue_comment>username_3: Android's lint tool helps improve the reliability and efficiency of your Android apps. For example, if your XML resource files contain unused namespaces, this takes up space and incurs unnecessary processing. Other structural issues, such as use of deprecated elements or API calls that are not supported by the target API versions, might lead to code failing to run correctly.
Now using this in gradle as
```
lintOptions {
abortOnError false
}
```
will run lint checks but won't abort the build. Follow this [link](https://developer.android.com/studio/write/lint.html#overview) if you want to know about lint.
Upvotes: 0 <issue_comment>username_4: Check the [official doc](https://developer.android.com/studio/write/lint.html)
```
android {
...
lintOptions {
// if set to true (default), stops the build if errors are found.
abortOnError false
}
}
...
```
>
> Is it recommended or not recommended to use these lines?
>
>
>
There isn't a general rule but in my opinion you should avoid using this configuration in a release build.
Upvotes: 0
|
2018/03/19
| 471 | 1,187 |
<issue_start>username_0: I know that I can use Selenoid-UI to connect to a running webdriver container using my browser. But is there any way to connect to the container using one of the VNC clients?<issue_comment>username_1: Two possible ways:
1) Launch a browser VNC container as follows and connect with any VNC client using `vnc://localhost:5900` and password `<PASSWORD>`:
```
$ docker run -d --name browser -p 4444:4444 -p 5900:5900 selenoid/vnc:firefox_58.0
```
2) Use Selenoid `/vnc/` API. Having some running session ID, e.g. `bd0415ac-3cbc-427d-b1e6-d142889a6afa` you can access a web-socket proxying VNC traffic like this:
```
ws://selenoid-host.example.com:4444/vnc/bd0415ac-3cbc-427d-b1e6-d142889a6afa
```
Getting VNC traffic from web-socket is a built-in feature of some web-based VNC clients, e.g. [noVNC](https://github.com/novnc/noVNC) used in Selenoid UI.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Simplest way to open VNC to Selenoid:
`localhost:4444` - selenoid server
`2a398b1d73ca57e2559ad4ca785abae3` - your session id
<https://novnc.com/noVNC/vnc.html?host=localhost&port=4444&path=vnc/2a398b1d73ca57e2559ad4ca785abae3&password=<PASSWORD>>
Upvotes: 2
|
2018/03/19
| 379 | 1,491 |
<issue_start>username_0: This is referencing this question: [How to login to facebook in Xamarin.Forms](https://stackoverflow.com/questions/24105390/how-to-login-to-facebook-in-xamarin-forms).
I use a similar solution which uses Xamarin.Facebook.Android and Xamarin.Facebook.iOS for auth (<https://github.com/mikeapple/XamarinFormsNativeFBLogin>), and it works perfectly, but I can't even begin to work out how to do native auth on UWP; there doesn't even seem to be a NuGet package to support this.
Does anyone have a workable solution to this problem that they would be prepared to share, or know of one that I could study, that would fit without having to repurpose my existing project too much?
Thanks<issue_comment>username_1: An equivalent of the Xamarin.Facebook.Android and Xamarin.Facebook.iOS interface is not available for UWP.
I have an alternative that opens the login in a browser.
[UWP](https://github.com/HoussemDellai/Facebook-Login-Xamarin-Forms)
Upvotes: 1 <issue_comment>username_2: There is a NuGet package called winsdkfb; you can use this package to achieve what you want. I haven't tested it together with the Facebook Windows 10 Store application, but when that is not installed, it displays a WebView within your application. There is a code sample here: <http://microsoft.github.io/winsdkfb/sample/>
You can simply create a button renderer and inject it at your UWP native level, because this code can't run in the shared application, only in native UWP. If you need more help, please let me know.
Upvotes: 0
|
2018/03/19
| 287 | 1,160 |
<issue_start>username_0: I want to defer parsing of scripts that are generated as a result of transpiling during build. I have added async to all the other tags on my index.html page. However, since the main js file is injected into the page during build, it becomes harder.
Is there a way to tell babel/react-scripts to add the async attribute to the injected js tag?
|
2018/03/19
| 303 | 1,184 |
<issue_start>username_0: I need the same output like in searchbox. please consider below url as an example:
<https://www.reifendiscount.de/de/reifen/pkw-reifen.html>
Here, a single character is typed step by step automatically, and after the complete sentence is finished, each character is removed one by one from the search box.
I need the same result for multiple sentences.
Thank you.
|
2018/03/19
| 971 | 3,201 |
<issue_start>username_0: I want to read the data of a .csv file which is located on an FTP or SFTP server using Oracle SQL or PL/SQL.
I tried the code below and it shows output like `SSH-2.0-OpenSSH_5.3`, so I hope that means it connected.
```
declare
c utl_tcp.connection; -- TCP/IP connection to the Web server
ret_val pls_integer;
BEGIN
c := utl_tcp.open_connection(remote_host => 'ftp.******.******.com'
,remote_port => 21
,charset => 'US7ASCII'
-- ,wallet_path => '****************'
-- ,wallet_password => '**********'
); -- open connection
-- ret_val := utl_tcp.write_line(c, 'GET / HTTP/1.0'); -- send HTTP request
ret_val := utl_tcp.write_line(c);
BEGIN
LOOP
dbms_output.put_line(utl_tcp.get_line(c, TRUE)); -- read result
END LOOP;
EXCEPTION
WHEN utl_tcp.end_of_input THEN
NULL; -- end of input
END;
utl_tcp.close_connection(c);
END;
/
```
Could someone help me with the next steps on how to open and read the .csv file present on the SFTP/FTP server and load it into an Oracle DB table?<issue_comment>username_1: You need some tool/application to open the file present on FTP or SFTP and load its data into the database. There are some tools that can be used to load them to the database, like Dollar Universe, Tidal scheduler, etc., made up of SQL and PL/SQL code. These tools are linked with the Unix/Windows OS and need to be triggered manually.
Upvotes: 0 <issue_comment>username_2: Tim Hall over at [Oracle-base.com](http://oracle-base.com) did exactly this and has the FTP PL/SQL API in his blog post.
Here's an excerpt which is what you are asking about.
```
l_conn := ftp.login('ftp.company.com', '21', 'ftpuser', 'ftppassword');
ftp.ascii(p_conn => l_conn);
ftp.get(p_conn => l_conn,
p_from_file => '/u01/app/oracle/test.txt',
p_to_dir => 'MY_DOCS',
p_to_file => 'test_get.txt');
ftp.logout(l_conn);
END;
/
```
Here's the full blog post :
<https://oracle-base.com/articles/misc/ftp-from-plsql>
Upvotes: 2 <issue_comment>username_3: If you need an SFTP client in PL/SQL you can take a look at the commercial [OraSFTP](https://www.didisoft.com/ora-sftp/) package from DidiSoft.
Here is a sample usage:
```
DECLARE
connection_id NUMBER;
private_key_handle BFILE;
private_key BLOB;
PRIVATE_KEY_PASSWORD VARCHAR2(500);
downloaded_file BLOB;
BEGIN
DBMS_LOB.createtemporary(PRIVATE_KEY, TRUE);
private_key_handle := BFILENAME('PGP_KEYS_DIR', 'test_putty_private.ppk'); -- directory name must be Upper case
DBMS_LOB.OPEN(private_key_handle, DBMS_LOB.LOB_READONLY);
DBMS_LOB.LoadFromFile( private_key, private_key_handle, DBMS_LOB.GETLENGTH(private_key_handle) );
DBMS_LOB.CLOSE(private_key_handle);
PRIVATE_KEY_PASSWORD := '<PASSWORD>';
connection_id := ORA_SFTP.CONNECT_HOST('localhost', 22, 'nasko', private_key, private_key_password);
downloaded_file := ORA_SFTP.DOWNLOAD(connection_id, 'remote_file.dat');
ORA_SFTP.DISCONNECT_HOST(connection_id);
END;
/
```
Disclaimer: I work for DidiSoft
Upvotes: 0
|
2018/03/19
| 700 | 2,267 |
<issue_start>username_0: I have a question about the `$state` service. When I click the save button, I close the modal and call `$state.go("name")`; the "name" state runs (I can see it in debug) but its controller does not start. Do you have any idea?
|
2018/03/19
| 555 | 1,879 |
<issue_start>username_0: ```
tweet = textblob(tweet)
TypeError: 'module' object is not callable
```
I have this problem while trying to run a sentiment analysis script. I have installed textblob with the following commands:
```
$ pip install -U textblob
$ python -m textblob.download_corpora
```
the code is the following:
```
import json
import csv
from textblob import TextBlob
#set the input and outputing file
input_file= "tweets.json"
output_file= "results.csv"
#store all json data
tweets_novartis = []
with open (input_file) as input_novartis:
for line in input_novartis:
tweets_novartis.append(json.loads(line))
#open output file to store the results
with open(output_file, "w") as output_novartis:
writer = csv.writer(output_novartis)
#iterate through all the tweets
for tweets_novartis in tweets_novartis:
tweet = tweets_novartis["full_text"]
#TextBlob to calculate sentiment
tweet = Textblob(tweet)
tweet = tweet.replace("\n" , " ")
tweet = tweet.replace("\r" , " ")
sentiment = [[tweet.sentiment.polarity]]
writer.writerows(sentiment)
```<issue_comment>username_1: You need to import `TextBlob` from the `textblob` module:
```
from textblob import TextBlob
tweet = TextBlob(tweet)
```
**From the Docs:**
```
>>> from textblob import TextBlob
>>> wiki = TextBlob("Python is a high-level, general-purpose programming language.")
```
**[More info](http://textblob.readthedocs.io/en/dev/quickstart.html)**
Upvotes: 0 <issue_comment>username_2: Python is case sensitive. Use this:
```
from textblob import TextBlob
tweet = TextBlob(tweet)
```
**python != vba**
Upvotes: 0 <issue_comment>username_3: `textblob` as a package is not callable; hence we import the `TextBlob` class from the `textblob` package.
```
from textblob import TextBlob
b = TextBlob('i hate to go to school')
print(b.sentiment)
```
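The same `TypeError` can be reproduced with any module — the standard-library `json` module is used below purely as an illustration — which shows that the problem is calling the module itself instead of a name defined inside it:

```python
import json

# Calling the module object itself fails, just like calling `textblob`.
try:
    json('{"a": 1}')
except TypeError as err:
    print(err)  # 'module' object is not callable

# Calling a callable defined *inside* the module works fine.
print(json.loads('{"a": 1}'))  # {'a': 1}
```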
Upvotes: 1
|
2018/03/19
| 600 | 2,161 |
<issue_start>username_0: Before asking this, I did have a look at other similar questions, but none of them have been of help as of yet.
I have a react front-end using axios to make api calls to a node backend using express and express session. Once I enter login info on my front end, I send it to my backend where I set a session and return a response to the front end that I set in the localStorage to use for client side validation.
When I try to access a protected api endpoint - from the front end - after logging in, the session does not have any of the information I set in it and thus, gives me an error. However, if I try to login and access the protected endpoint from postman, it works perfectly fine.
Here is my express-session config:
```
router.use(session({
secret: 'notGoingToWork',
resave: false,
saveUninitialized: true
}))
```
Here is the api call (after logging in) I am making through axios:
```
axios.get(`http://localhost:5000/users/personNotes/${JSON.parse(userData).email}`)
.then(response => console.log(response));
```
I do not know what the issue is and would appreciate any help. Thank you in advance!<issue_comment>username_1: try using `withCredentials`
```
axios(`http://localhost:5000/users/personNotes/${JSON.parse(userData).email}`, {
method: "get",
withCredentials: true
})
```
or
```
axios.defaults.withCredentials = true
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: With **axios**, try setting `withCredentials` to `true`.
For **fetch**, setting `credentials` to `include` will also work.
```
fetch(URL,
{
credentials: 'include',
method: 'POST',
body: JSON.stringify(payload),
headers: new Headers({
'Content-Type': 'application/json'
})
})
```
Upvotes: 1 <issue_comment>username_3: To use fetch with
```
credentials: 'include'
```
I also had to add the following in Express App.js.
Note that 'Access-Control-Allow-Origin' cannot be set to '\*' with credentials; it must use a specific domain name.
```
res.setHeader(
'Access-Control-Allow-Origin',
'http://localhost:3000'
);
res.setHeader('Access-Control-Allow-Credentials', 'true');
```
Upvotes: 0
|
2018/03/19
| 473 | 1,170 |
<issue_start>username_0: My array:
```
[
{
"date":"2018-04-01",
"time":[{"10:00":"12"},{"12:00":"25"}]
},
{
"date":"2018-04-02",
"time":[{"10:00":"12"},{"12:00":"25"}]
},
{
"date":"2018-04-03",
"time":[{"10:00":"12"},{"12:00":"25"}]
}
]
```
I need to get every date and time. To get this I am using a for loop, but I am not able to get the date and time.
My script:
```
var slots = req.body.availableSlots;
var count = slots.length;
for(var i=0;i
```
When getting `date`, it always says `undefined`.<issue_comment>username_1: It seems like `req.body.availableSlots` is coming in as a multidimensional object array.
***So full code need to be:-***
```
var slots = req.body.availableSlots;
for(var i=0;i
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Instead of using the jQuery library (**jQuery.parseJSON()**), use JavaScript's built-in **JSON.parse**:
```
var slots = '[{"date":"2018-04-01","time":[{"10:00":"12"},{"12:00":"25"}]},{"date":"2018-04-02","time":[{"10:00":"12"},{"12:00":"25"}]},{"date":"2018-04-03","time":[{"10:00":"12"},{"12:00":"25"}]}]';
slots = JSON.parse(slots);
var count = slots.length;
for(var i=0;i<count;i++){
    console.log(slots[i].date);
    console.log(slots[i].time);
}
```
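The same traversal written in Python makes the shape of the data explicit — each slot is an object with a `date` string and a `time` list of single-key objects (a sketch, not part of the original answers):

```python
slots = [
    {"date": "2018-04-01", "time": [{"10:00": "12"}, {"12:00": "25"}]},
    {"date": "2018-04-02", "time": [{"10:00": "12"}, {"12:00": "25"}]},
]

def extract(slots):
    # Collect (date, hour, value) triples from the nested structure.
    out = []
    for slot in slots:
        for entry in slot["time"]:
            for hour, value in entry.items():
                out.append((slot["date"], hour, value))
    return out
```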
Upvotes: 0
|
2018/03/19
| 1,202 | 4,525 |
<issue_start>username_0: In RESTful websites, each resource should be identified by an URI. But how should I handle what are called "weak entities" in relational databases, aka, ressources that only make sense when related to another resource? Should these have specific URIs pointing to them too?
To give an example: Say I have a website for showing events going on in a city. Each event can have comments posted by users.
An `event` is a resource, with corresponding URI (`/events/:id`).
But should each `comment` have an URI, too? Like `/events/:eventid/comments/:commentid` ? Comments only make sense when they're associated with the corresponding event, so having a page just to represent one message seems weird/unnecessary. I only want the comments to appear on the page of the event.
I guess the `/events/:eventid/comments/:commentid` URI could be used for a `DELETE` request, but what should it return to a `GET` request?<issue_comment>username_1: It really depends on how the client wants to work with the domain. Would an app want to get a list of all the comments for an event? Are users allowed to update their comments? Or are they anonymous comments?
If you plan to only ever use the same page to display the backend information of events and comments, then you probably don't need a separate comments API. If you want to open the dataset to app vendors (perhaps you might want to develop an app in the future), a comments API would be handy, i.e. get all comments for an event.
Upvotes: 0 <issue_comment>username_2: >
> An `event` is a resource, with corresponding URI (`/events/:id`).
> But should each `comment` have an URI, too? Like `/events/:eventid/comments/:commentid` ?
>
>
>
If `comment` is a *resource*, it must have an identifier, but it doesn't mean that you have to support all operations for such resource. The server can return a response with the [`405`](https://www.rfc-editor.org/rfc/rfc7231#section-6.5.5) status code if a method is not supported by the target resource:
>
> [**6.5.5. 405 Method Not Allowed**](https://www.rfc-editor.org/rfc/rfc7231#section-6.5.5)
>
>
> The `405` (Method Not Allowed) status code indicates that the method received in the request-line is known by the origin server but not supported by the target resource. [...]
>
>
>
---
>
> I guess the `/events/:eventid/comments/:commentid` URI could be used for a `DELETE` request, but what should it return to a `GET` request?
>
>
>
Return a representation of the comment with the given identifier.
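A sketch of "identified, but only reachable in context": a lookup keyed by the composite (event, comment) pair can back a `GET` on `/events/:eventid/comments/:commentid` (hypothetical data and names, not from the answer):

```python
# Comments live under their owning event; the composite key mirrors the URI.
events = {"e1": {"comments": {"c1": {"text": "Great show"}}}}

def get_comment(event_id, comment_id):
    # 404 analogue: return None when either part of the path is unknown.
    event = events.get(event_id)
    if event is None:
        return None
    return event["comments"].get(comment_id)
```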
Upvotes: 2 <issue_comment>username_3: >
> In RESTful websites, each resource should be identified by an URI. But how should I handle what are called "weak entities" in relational databases, aka, ressources that only make sense when related to another resource? Should these have specific URIs pointing to them too?
>
>
>
It depends. An important thing to recognize is that resources and entities are *not* one to one. You are allowed to have many resources (web pages) that include information from the same entity.
<NAME> described the distinction this way
>
> The web is not your domain, it's a document management system. All the HTTP verbs apply to the document management domain. URIs do NOT map onto domain objects - that violates encapsulation.
>
>
>
[Domain Driven Design for Restful Systems](https://www.youtube.com/watch?v=aQVSzMV8DWc)
>
> I guess the /events/:eventid/comments/:commentid URI could be used for a DELETE request, but what should it return to a GET request?
>
>
>
As noted by [Cassio](https://stackoverflow.com/a/49362776/54734), [405 Method Not Allowed](https://www.rfc-editor.org/rfc/rfc7231#section-6.5.5) is the correct status code to use if the client erroneously sends a request with an unsupported method. [OPTIONS](https://www.rfc-editor.org/rfc/rfc7231#section-4.3.7) is the appropriate mechanism for informing a client of which methods are currently supported by a resource.
You could also have it respond by redirecting the client to the `/events/:eventId` resource; it might even make sense to take advantage of [URI Fragment support](https://www.rfc-editor.org/rfc/rfc3986#section-3.5), and redirect the client to `/events/:eventid#:commentid` - allowing the client to use its own fragment discovery to identify the specific comment within the representation of the event.
Or, if there are useful integrations that you want to support, you could simply have this resource return a representation that exposes the integrations.
Upvotes: 2
|
2018/03/19
| 1,203 | 4,285 |
<issue_start>username_0: Can anyone help me make this query work for SQL Server 2014?
This is working on Postgresql and probably on SQL Server 2017. On Oracle it is `listagg` instead of `string_agg`.
Here is the SQL:
```
select
string_agg(t.id,',') AS id
from
Table t
```
I checked on the site that some XML option should be used, but I could not understand it.<issue_comment>username_1: In SQL Server pre-2017, you can do:
```
select stuff( (select ',' + cast(t.id as varchar(max))
from Table t
for xml path ('')
), 1, 1, ''
);
```
The only purpose of `stuff()` is to remove the initial comma. The work is being done by `for xml path`.
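For readers more comfortable outside SQL, the whole `STUFF`/`FOR XML PATH` dance is just a comma join; a short Python sketch shows the intended result (illustrative only):

```python
def string_agg(values, sep=","):
    # Equivalent of STRING_AGG / the STUFF + FOR XML PATH idiom:
    # concatenate the values with a separator between them.
    return sep.join(str(v) for v in values)

result = string_agg([1, 2, 3])  # the ids from the question's table
```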
Upvotes: 7 [selected_answer]<issue_comment>username_2: Note that for some characters, the values will be escaped when using `FOR XML PATH`, for example:
```sql
SELECT STUFF((SELECT ',' + V.String
FROM (VALUES('7 > 5'),('Salt & pepper'),('2
lines'))V(String)
FOR XML PATH('')),1,1,'');
```
This returns the string below:
```none
7 &gt; 5,Salt &amp; pepper,2&#x0D;
lines
```
This is unlikely desired. You can get around this using `TYPE` and then getting the value of the XML:
```sql
SELECT STUFF((SELECT ',' + V.String
FROM (VALUES('7 > 5'),('Salt & pepper'),('2
lines'))V(String)
FOR XML PATH(''),TYPE).value('(./text())[1]','varchar(MAX)'),1,1,'');
```
This returns the string below:
```none
7 > 5,Salt & pepper,2
lines
```
This would replicate the behaviour of the following:
```sql
SELECT STRING_AGG(V.String,',')
FROM (VALUES('7 > 5'),('Salt & pepper'),('2
lines'))V(String);
```
---
Of course, there might be times where you want to group the data, which the above doesn't demonstrate. To achieve this you would need to use a correlated subquery. Take the following sample data:
```sql
CREATE TABLE dbo.MyTable (ID int IDENTITY(1,1),
GroupID int,
SomeCharacter char(1));
INSERT INTO dbo.MyTable (GroupID, SomeCharacter)
VALUES (1,'A'), (1,'B'), (1,'D'),
(2,'C'), (2,NULL), (2,'Z');
```
From this wanted the below results:
| GroupID | Characters |
| --- | --- |
| 1 | A,B,D |
| 2 | C,Z |
To achieve this you would need to do something like this:
```sql
SELECT MT.GroupID,
STUFF((SELECT ',' + sq.SomeCharacter
FROM dbo.MyTable sq
WHERE sq.GroupID = MT.GroupID --This is your correlated join and should be on the same columns as your GROUP BY
--You "JOIN" on the columns that would have been in the PARTITION BY
FOR XML PATH(''),TYPE).value('(./text())[1]','varchar(MAX)'),1,1,'')
FROM dbo.MyTable MT
GROUP BY MT.GroupID; --I use GROUP BY rather than DISTINCT as we are technically aggregating here
```
So, if you were grouping on 2 columns, then you would have 2 clauses in your sub query's `WHERE`: `WHERE MT.SomeColumn = sq.SomeColumn AND MT.AnotherColumn = sq.AnotherColumn`, and your outer `GROUP BY` would be `GROUP BY MT.SomeColumn, MT.AnotherColumn`.
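To make the expected grouped output concrete, the same aggregation can be mirrored in Python (an illustrative sketch, not part of the SQL):

```python
rows = [(1, "A"), (1, "B"), (1, "D"), (2, "C"), (2, None), (2, "Z")]

def group_concat(rows, sep=","):
    # Mirror of the correlated-subquery version: group by the first column,
    # skip NULLs (they contribute nothing, as in the SQL), join the rest.
    groups = {}
    for group_id, ch in rows:
        if ch is None:
            continue
        groups.setdefault(group_id, []).append(ch)
    return {g: sep.join(vals) for g, vals in groups.items()}
```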
---
Finally, let's add an `ORDER BY` into this, which you also define in the subquery. Let's, for example, assume you wanted to sort the data by the value of the `ID` descending in the string aggregation:
```sql
SELECT MT.GroupID,
STUFF((SELECT ',' + sq.SomeCharacter
FROM dbo.MyTable sq
WHERE sq.GroupID = MT.GroupID
ORDER BY sq.ID DESC --This is identical to the ORDER BY you would have in your OVER clause
FOR XML PATH(''),TYPE).value('(./text())[1]','varchar(MAX)'),1,1,'')
FROM dbo.MyTable MT
GROUP BY MT.GroupID;
```
This would produce the following results:
| GroupID | Characters |
| --- | --- |
| 1 | D,B,A |
| 2 | Z,C |
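The ordered variant can likewise be mirrored in Python to check the expected `D,B,A` / `Z,C` output (illustrative sketch only):

```python
rows = [(1, 1, "A"), (2, 1, "B"), (3, 1, "D"),
        (4, 2, "C"), (5, 2, None), (6, 2, "Z")]

def group_concat_desc(rows, sep=","):
    # Rows are (id, group_id, char): order each group by id descending
    # before joining, mirroring ORDER BY sq.ID DESC in the subquery.
    groups = {}
    for row_id, group_id, ch in sorted(rows, key=lambda r: r[0], reverse=True):
        if ch is None:  # NULLs are skipped, as in the SQL
            continue
        groups.setdefault(group_id, []).append(ch)
    return {g: sep.join(vals) for g, vals in groups.items()}
```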
Unsurprisingly, this will never be as efficient as a `STRING_AGG`, due to having to reference the table multiple times (if you need to perform multiple aggregations, then you need multiple sub queries), but a well indexed table will greatly help the RDBMS. If performance really is a problem, because you're doing multiple string aggregations in a single query, then I would either suggest you need to reconsider if you need the aggregation, or it's about time you considered upgrading.
Upvotes: 5
|
2018/03/19
| 1,392 | 5,026 |
<issue_start>username_0: I'm new to Django, and I have a form that has two fields:
client name & bill number.
I've created a validator that tests whether the bill number already exists in the database table (called bills).
But now I need to transform this validator into another that tests, in addition to the previous test, whether the client name exists in the same table row (more simply: if the client name and the bill number have the same pk).
The validator :
```
def validate_url(value):
    try:
        entry = facture_ventes.objects.get(numfac=value)
    except ObjectDoesNotExist:
        entry = None
    if entry is not None:
        # count() always returns an int, so test for a non-zero count
        factexist = facture_ventes.objects.filter(id=entry.id).count()
        if factexist > 0:
            raise ValidationError("Numéro de Facture déja saisi ! ")
```
the form :
```
class SubmitUrlForm(forms.Form):
    numfacture = forms.CharField(label='Submit Form', validators=[validate_url])
```
Here is the database table:
<https://i.stack.imgur.com/3xmpd.png>
Any help please; I know that validators can't return a value, so I'm stuck here. Thank you!
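The combined check the question asks for — does a row exist with *both* this bill number and this client name — can be sketched with plain data standing in for the queryset (the field names `numfac` and `client` mirror the post but are assumptions; in Django this would be something like `facture_ventes.objects.filter(...).exists()`):

```python
# Rows standing in for the facture_ventes table (hypothetical data).
bills = [
    {"numfac": "F001", "client": "Alice"},
    {"numfac": "F002", "client": "Bob"},
]

def bill_taken_by_client(numfac, client):
    # True when one row holds both values -- the condition on which
    # the validator should raise ValidationError.
    return any(b["numfac"] == numfac and b["client"] == client for b in bills)
```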
|
2018/03/19
| 2,672 | 10,110 |
<issue_start>username_0: I am still an ASP.NET amateur, and I've been working on an application that needs to calculate the hours an employee has worked if no special events have come up, e.g. the employee has been sick. I have 2 tables in my database: 1 with the employees, and a second table which holds the events. The events table is filled through a calendar and holds info like dates and who made the event.
My situation:
When the user clicks on an employee's detail page, I want the corresponding record of the employee and the events he made. So I am assuming that I am looking for a join with LINQ.
An employee can make more than 1 event, let's say an employee needs to work overtime 3 days this month. then on the detail page, it should select the employee from the employee table and the 3 events from the events table.
[](https://i.stack.imgur.com/Y2iSs.png)
Update
------
Thanks to Vladimir's help, a whole lot of errors are gone and the query works, though it does not completely work as expected yet: it currently returns 1 employee and 1 event, while the employee that I am testing with should have 4 events returned.
This is my Context
```
namespace hrmTool.Models
{
    public class MedewerkerMeldingContext : DbContext
    {
        public MedewerkerMeldingContext() : base("name=temphrmEntities") { }
        public DbSet<medewerker> medewerker { get; set; }
        public DbSet<medewerker_melding> medewerker_melding { get; set; }
    }
}
```
My current viewModel
```
namespace hrmTool.Models
{
    public class MedewerkerMeldingViewModel
    {
        //Medewerker tabel
        public int ID { get; set; }
        public string roepnaam { get; set; }
        public string voorvoegsel { get; set; }
        public string achternaam { get; set; }
        public string tussenvoegsel { get; set; }
        public string meisjesnaam { get; set; }
        public Nullable datum_in_dienst { get; set; }
        public Nullable datum_uit_dienst { get; set; }
        public int aantal_km_woon_werk { get; set; }
        public bool maandag { get; set; }
        public Nullable ma_van { get; set; }
        public Nullable ma_tot { get; set; }
        public bool dinsdag { get; set; }
        public Nullable di_van { get; set; }
        public Nullable di_tot { get; set; }
        public bool woensdag { get; set; }
        public Nullable wo_van { get; set; }
        public Nullable wo_tot { get; set; }
        public bool donderdag { get; set; }
        public Nullable do_van { get; set; }
        public Nullable do_tot { get; set; }
        public bool vrijdag { get; set; }
        public Nullable vr_van { get; set; }
        public Nullable vr_tot { get; set; }
        public bool zaterdag { get; set; }
        public Nullable za_van { get; set; }
        public Nullable za_tot { get; set; }
        public bool zondag { get; set; }
        public Nullable zo_van { get; set; }
        public Nullable zo_tot { get; set; }
        //Medewerker_Melding combi tabel
        public int medewerkerID { get; set; }
        public int meldingID { get; set; }
        public System.DateTime datum_van { get; set; }
        public Nullable datum_tot { get; set; }
        public int MM_ID { get; set; }
        public virtual ICollection medewerker_melding { get; set; }
        public virtual medewerker medewerker { get; set; }
    }
}
```
My current query
```
using (var context = new MedewerkerMeldingContext())
{
var medewerkers = context.medewerker;
var medewerker_meldings = context.medewerker_melding;
var testQuery = from m in medewerkers
join mm in medewerker_meldings on m.ID equals mm.medewerkerID
where m.ID == id
select new MedewerkerMeldingViewModel
{
ID = m.ID,
roepnaam = m.roepnaam,
voorvoegsel = m.voorvoegsel,
achternaam = m.achternaam,
tussenvoegsel = m.tussenvoegsel,
meisjesnaam = m.meisjesnaam,
datum_in_dienst = m.datum_in_dienst,
datum_uit_dienst = m.datum_uit_dienst,
aantal_km_woon_werk = m.aantal_km_woon_werk,
maandag = m.maandag,
ma_van = m.ma_van,
ma_tot = m.ma_tot,
dinsdag = m.dinsdag,
di_van = m.di_van,
di_tot = m.di_tot,
woensdag = m.woensdag,
wo_van = m.wo_van,
wo_tot = m.wo_tot,
donderdag = m.donderdag,
do_van = m.do_van,
do_tot = m.do_tot,
vrijdag = m.vrijdag,
vr_van = m.vr_van,
vr_tot = m.vr_tot,
zaterdag = m.zaterdag,
za_van = m.za_van,
za_tot = m.za_tot,
zondag = m.zondag,
zo_van = m.zo_van,
zo_tot = m.zo_tot,
medewerkerID = mm.medewerkerID,
meldingID = mm.meldingID,
datum_van = mm.datum_van,
datum_tot = mm.datum_tot,
MM_ID = mm.ID
};
var getQueryResult = testQuery.FirstOrDefault();
Debug.WriteLine("Debug testQuery" + testQuery);
Debug.WriteLine("Debug getQueryResult: "+ getQueryResult);
if (id == null)
{
return new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
if (testQuery == null)
{
return HttpNotFound();
}
return View(getQueryResult);
}
```
Returns: 1 instance of employee and only 1 event
Expected return: 1 instance of employee, 4 events
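The shape the question is after — one employee with a *list* of events, rather than one flattened row per event — is a group-join; sketched in Python with stand-in data (names are assumptions for illustration):

```python
employees = [{"id": 7, "name": "Jan"}]
events = [
    {"employee_id": 7, "kind": "overtime"},
    {"employee_id": 7, "kind": "sick"},
    {"employee_id": 7, "kind": "overtime"},
    {"employee_id": 7, "kind": "leave"},
]

def employee_with_events(emp_id):
    # Return the employee once, with ALL matching events attached,
    # instead of flattening to one row per event and taking the first.
    emp = next(e for e in employees if e["id"] == emp_id)
    return {**emp, "events": [ev for ev in events if ev["employee_id"] == emp_id]}
```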
|
2018/03/19
| 1,457 | 5,310 |
<issue_start>username_0: I'm following [this](https://youtu.be/hz1h_ColGy0?t=13m1s) tutorial and I'm getting the error that you can see in the title, here is my code:
```
func loadData()
{
let delegate = UIApplication.shared.delegate as? AppDelegate
if let context = delegate?.persistentContainer.viewContext
{
if let friends = fetchFriends()
{
messages = [Message]()
for friend in friends
{
print(friend.name) // This line btw for some reason is giving me: Expression implicitly coerced from 'String?' to Any
let fetchRequest: NSFetchRequest<Message> = Message.fetchRequest()
fetchRequest.sortDescriptors = [NSSortDescriptor(key: "date", ascending: false)]
fetchRequest.predicate = NSPredicate(format: "friend.name = %@", friend.name!)
fetchRequest.fetchLimit = 1
do
{
let fetchedMessages = try(context.fetch(fetchRequest))
messages?.append(fetchedMessages)
// This above line is the line which is giving me this error:
//Cannot convert value of type '[Message]' to expected argument type 'Message'
}
catch let err
{
print(err)
}
}
}
}
}
```
As you can see from about 13 min in, my code does not look exactly the same, since the video is a bit old and uses an older version of Swift.
I'm hoping someone can help me to solve the error since I do not know what to do, thank you in advance.
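What the loop in the question is trying to do — the newest message per friend, via a date-descending sort and a fetch limit of 1 — can be mirrored in plain Python (a sketch with hypothetical data, not Core Data API):

```python
messages = [
    {"friend": "Ann", "date": 3, "text": "c"},
    {"friend": "Ann", "date": 1, "text": "a"},
    {"friend": "Bob", "date": 2, "text": "b"},
]

def latest_per_friend(messages):
    # Sort by date descending and keep the first message seen per friend,
    # mirroring sortDescriptors + fetchLimit = 1 per loop iteration.
    latest = {}
    for msg in sorted(messages, key=lambda m: m["date"], reverse=True):
        latest.setdefault(msg["friend"], msg)
    return latest
```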
|
2018/03/19
| 1,441 | 5,214 |
<issue_start>username_0: I am extracting some data out of an array of nested objects, using two `reduce`es, and `map`, which is working at the moment, but it is a bit ugly. How can this be optimized?
```js
function extractSchools(schools) {
let schoolData = [];
if (schools) {
schoolData = schools.reduce(function(parentdata, chlrn) {
let childrenlist = chlrn.children;
let childrendata = [];
if (childrenlist) {
childrendata = childrenlist.reduce(function(addrsslist, school) {
return addrsslist.concat(school.address.map(i => i.school));
}, []);
}
return parentdata.concat(chlrn.parent, childrendata);
}, []);
}
return {
schoolData
};
}
const schools = [{
"parent": "<NAME>",
"children": [{
"address": [{
"school": "School A"
}]
},
{
"address": [{
"school": "School B"
}]
}
]
},
{
"parent": "<NAME>",
"children": [{
"address": [{
"school": "School C"
}]
}]
}
];
console.log(extractSchools(schools));
```
How can I optimize this function to get the same results? Using one `reduce` instead of two... or some other optimal way of doing it.
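For comparison, the same extraction written as a single pass in Python shows how little machinery the traversal actually needs (a sketch of the logic, not a JavaScript answer):

```python
schools = [
    {"parent": "P1", "children": [{"address": [{"school": "School A"}]},
                                  {"address": [{"school": "School B"}]}]},
    {"parent": "P2", "children": [{"address": [{"school": "School C"}]}]},
]

def extract_schools(schools):
    # Single pass: for each record keep the parent, then every school
    # name nested under children -> address.
    out = []
    for rec in schools or []:
        out.append(rec["parent"])
        for child in rec.get("children") or []:
            out.extend(a["school"] for a in child["address"])
    return out
```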
|
2018/03/19
| 709 | 2,415 |
<issue_start>username_0: Is there a way to filter for events where a certain attribute is NOT the given string in Windows (Server 2016) Event Viewer's limited dialect of XPath?
I'm trying to get a view on logon events, but only actual user logons (console and RDP).
This is accepted as a filter, but gives too many results, as if the final AND term is ignored:
```
*[System[(EventID=4624)]]
and *[EventData[Data[@Name='LogonType'] and (Data=1 or Data=8 or Data=9)]]
and *[EventData[Data[@Name='TargetUserName'] and (Data!='SYSTEM')]]
```
When I change the third test to this, it is flagged as "invalid query".
```
and not *[EventData[Data[@Name='TargetUserName'] and (Data='SYSTEM')]]
```
Yet I found an answer to another XPath question that suggests to prefer this form, because != gives the wrong result when one side of the comparison is a set instead of a value.
And the same for this, invalid query
```
and *[EventData[Data[@Name='TargetUserName'] and not (Data='SYSTEM')]]
```
or this
```
and *[EventData[Data[@Name='TargetUserName'] and !(Data='SYSTEM')]]
```<issue_comment>username_1: Your query should look like this:
```
*[System[(EventID=4624)]]
and *[EventData[Data[@Name='LogonType'] and (Data=1 or Data=8 or Data=9)]]
*[EventData[Data[@Name='TargetUserName'] and (Data='SYSTEM')]]
```
Suppress is the secret
Upvotes: 4 <issue_comment>username_2: You've been caught out by common but wrong examples. Your first DataEvent search asks for records that contain a "LogonType" element, and also has the value 1 or 8 or 10 in any element. It isn't confined to checking the "LogonType" element. This happens to work because only "LogonType" elements contain those values.
To match in any element you write
```
Data=1
```
To match in a specific element you need to write:
```
Data[@Name='SpecificType']=1
```
for each value, so that query should read:
```
*[EventData[ (Data[@Name='LogonType']=1 or Data[@Name='LogonType']=8 or Data[@Name='LogonType']=9)]]
```
The second EventData section is asking for any record that has a data value that doesn't match 'SYSTEM', which is why it returns all of them. It should be:
```
*[EventData[Data[@Name='TargetUserName']!='SYSTEM']]
```
You can combine the two
```
*[EventData[ (Data[@Name='LogonType']=1 or Data[@Name='LogonType']=8 or Data[@Name='LogonType']=9) and Data[@Name='TargetUserName']!='SYSTEM']]
```
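For reference, when pasting such a filter into Event Viewer's XML tab it has to be wrapped in a `QueryList`. A sketch, assuming the logon events live in the Security log:

```xml
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">
      *[System[(EventID=4624)]]
      and *[EventData[(Data[@Name='LogonType']=1 or Data[@Name='LogonType']=8 or Data[@Name='LogonType']=9)
        and Data[@Name='TargetUserName']!='SYSTEM']]
    </Select>
  </Query>
</QueryList>
```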
Upvotes: 2
|
2018/03/19
| 608 | 1,994 |
<issue_start>username_0: Context:
I currently want to flush my L1 DATA cache (target: NXP P2020 Qoriq e500).
I have an issue while using "dcbf" instruction:
```
dcbf r3, r4 // with r3 and r4 defining the address of the DATA cache
```
Issue:
My problem is that I don't know what parameter to give to this instruction to reach the data cache and flush the line.
I tried with a "just created" variable :
```
int i = 0;
// let assume r3 = &i
dcbf 0, r3
isync
msync
```
I thought that the dcbf instruction would reach the data cache via the &i parameter, but when I further look into the memory via a probe, I see the cache as not flushed and not invalidated.
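For what it's worth, the usual pattern for flushing a whole buffer on PowerPC is to walk it one cache line at a time, since `dcbf` only flushes the single line containing the effective address `(rA|0) + rB`. A GCC inline-asm sketch; the 32-byte line size is an assumption for the e500 core, so check the core reference manual:

```c
#include <stdint.h>

#define CACHE_LINE 32 /* assumption: e500 L1 data cache line size */

/* Write back and invalidate every line covering [addr, addr + len) */
static void flush_dcache_range(void *addr, unsigned long len)
{
    uintptr_t p   = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end = (uintptr_t)addr + len;

    for (; p < end; p += CACHE_LINE)
        __asm__ volatile ("dcbf 0, %0" : : "r"(p) : "memory");

    __asm__ volatile ("msync" : : : "memory"); /* wait for the flushes to complete */
}
```

A single `dcbf 0, r3` with `r3 = &i` only affects the one line holding `i`; any other lines observed through the probe were simply never touched.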
|
2018/03/19
| 1,008 | 4,193 |
<issue_start>username_0: I am currently working on a simple React app with a very common workflow where users trigger Redux actions that, in turn, request data from an API. But since I would like to make the results of these actions persistent in the URL, I have opted for React Router v4 to help me with the job.
I have gone through the Redux integration notes in the [React Router documentation](https://reacttraining.com/react-router/core/guides/redux-integration/deep-integration) but the idea of passing the `history` object to Redux actions just doesn't feel like the most elegant pattern to me. Since both Redux and Router state changes cause React components to be re-rendered, I'm a little worried the component updates could go a bit out of control in this scenario.
So in order to make the re-rendering a bit more predictable and sequential, I have come up with the following pattern that attempts to follow the single direction data flow principle:
1. Where I used to trigger Redux actions as a result of users' interactions with the UI, I am now calling React Router's `props.history.push` to update the URL instead. The actual change is about updating a URL parameter rather than the whole route but that's probably not that relevant here.
Before:
```
// UserSelector.jsx
handleUserChange = ({ target: selectElement }) => {
// Some preliminary checks here...
const userId = selectElement.value
// Fire a Redux action
this.props.setUser(userId)
}
```
After:
```
// UserSelector.jsx
handleUserChange = ({ target: selectElement }) => {
// Some preliminary checks here...
const userId = selectElement.value
// Use React Router to update the URL
this.props.history.push(`/user-selector/${userId}`)
}
```
2. The `userId` change in the URL causes React Router to trigger a re-render of the current route.
Route definition in App.jsx:
```
```
3. During that re-render, a `componentDidUpdate` lifecycle hook gets invoked. In there I am comparing the previous and current values of the URL parameter via the React Router's `props.match.params` object. If a change is detected, a Redux action gets fired to fetch new data.
Modified UserSelector.jsx:
```
componentDidUpdate (prevProps) {
const { match: { params: { userId: prevUserId } } } = prevProps
const { match: { params: { userId } } } = this.props
if (prevUserId === userId) {
return
}
// Fire a Redux action (previously this sat in the onChange handler)
this.props.setUser(userId)
}
```
4. When the results are ready, all React components subscribed to Redux get re-rendered.
And this is my attempt to visualise how the code's been structured:
[](https://i.stack.imgur.com/Dr11R.png)
If anyone could verify if this pattern is acceptable, I would be really grateful.<issue_comment>username_1: For step 3, I suggest a different approach which should be more in line with react-router:
* react-router renders a component based on a route
* this component should act as the handler based on the particular route it matches (think of this as a container or page component)
* when this component is mounted, you can use componentWillMount to fetch (or isomorphic-fetch) to load up the data for itself/children
* this way, you do not need to use componentDidUpdate to check the URL/params
* Don't forget to use componentWillUnmount to cancel the fetch request so that it doesn't cause an action to trigger in your redux state
* Don't use the App level itself to do the data fetching, it needs to be done at the page/container level
From the updated code provided in the question:
* I suggest moving the logic out, as you would most likely need the same logic for componentDidMount (such as the case when you first hit that route, componentDidUpdate will only trigger on subsequent changes, not the first render)
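A plain-JavaScript simulation of that suggestion (the `syncUser`/`fetched` names are illustrative, not part of React or the question's code): the same sync logic runs on mount and on a param change without being duplicated.

```javascript
// Minimal stand-in for a React class component's lifecycle wiring.
class UserSelector {
  constructor(props) {
    this.props = props;
    this.fetched = []; // records which userIds a real component would fetch
  }
  componentDidMount() {
    this.syncUser(); // first render: always fetch
  }
  componentDidUpdate(prevProps) {
    // subsequent renders: fetch only when the URL param actually changed
    if (prevProps.match.params.userId !== this.props.match.params.userId) {
      this.syncUser();
    }
  }
  syncUser() {
    this.fetched.push(this.props.match.params.userId);
  }
}

// Simulate a mount followed by a URL param change.
const c = new UserSelector({ match: { params: { userId: '1' } } });
c.componentDidMount();
const prev = c.props;
c.props = { match: { params: { userId: '2' } } };
c.componentDidUpdate(prev);
console.log(c.fetched); // logs ['1', '2']
```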
Upvotes: 1 <issue_comment>username_2: I think it's worth considering whether you need to store information about which user is selected in your Redux store and as part of URL - do you gain anything by structuring the application like this? If you do, is it worth the added complexity?
Upvotes: 0
|
2018/03/19
| 864 | 3,078 |
<issue_start>username_0: I have been trying to toggle state using dynamic key-value pairs, but it doesn't seem to work.
Here is the state:
```
constructor(props) {
super(props);
this.state = {
firecrackerAnimation: false,
mainImageBounceAnimation: false,
flowersFallingAnimation: false,
};
}
```
This is the code I am using to toggle state
```
changeAnimation = e => {
this.setState(
{
[e.target.value]: !(this.state[event.target.value]),
},
() => {
console.log(this.state);
}
);
```
Below is where I am using it inside my render()
```
Animations
{" "}
Fire Cracker Animation
{" "}
Main Image Bounce
{" "}
Flowers Falling Animation
```<issue_comment>username_1: Change your `changeAnimation` function as follows:
```
changeAnimation(e){
var value = e.target.value;
this.setState({[value]: !(this.state[value])});
}
```
[Here is the fiddle.](https://jsfiddle.net/69z2wepo/141293/)
Upvotes: 1 <issue_comment>username_2: you have problem with your change animation event
```js
changeAnimation = e => {
this.setState(
{
// [e.target.value]: !(this.state[event.target.value]),
//here you have to change like
this.state[e.target.value]: !(this.state[event.target.value]),
},
() => {
console.log(this.state);
}
);
```
Upvotes: 0 <issue_comment>username_3: There are several mistakes I've pointed out:
* You should use `e.target.name` in order to get the name of the `checkbox` being clicked.
* You have to provide `name` for checkboxes, not the `value`
**WORKING DEMO**
```js
class App extends React.Component {
constructor(props) {
super(props);
this.state = {
firecrackerAnimation: false,
mainImageBounceAnimation: false,
flowersFallingAnimation: false,
};
}
changeAnimation = (e) => {
this.setState(
{
[e.target.name]: !(this.state[e.target.name]),
},
() => {
console.log(this.state);
})
}
render() {
return (
Animations
{" "}
Fire Cracker Animation
{" "}
Main Image Bounce
{" "}
Flowers Falling Animation
)
}
}
ReactDOM.render(, document.getElementById('root'));
```
```css
.as-console-wrapper { max-height: 50% !important; }
```
```html
```
Upvotes: 1 <issue_comment>username_4: The other responses are correct in that you can setState with dynamic keys and variables, however React recommends against using state inside setState.
From the docs:
"React may batch multiple setState() calls into a single update for performance. Because this.props and this.state may be updated asynchronously, you should not rely on their values for calculating the next state."
- <https://reactjs.org/docs/state-and-lifecycle.html>
You can take the same idea as the other comments mention and make setState accept a function instead. I used this in my code (button was defined in an outer function).
```
this.setState((prevState) => ({
[button]: !prevState[button]
}));
```
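A plain-JavaScript sketch of why the updater form matters (illustrative names; this mimics React batching two queued updates, it is not React itself):

```javascript
// Each queued functional updater sees the state produced by the previous one,
// which is what makes back-to-back toggles behave correctly.
function applyBatched(state, updaters) {
  return updaters.reduce((s, fn) => ({ ...s, ...fn(s) }), state);
}

const toggle = key => prevState => ({ [key]: !prevState[key] });

const result = applyBatched({ on: false }, [toggle('on'), toggle('on')]);
console.log(result); // { on: false } - the two toggles cancel out
```

Had both updates read a stale `this.state` instead, each would have computed `true`, silently losing one toggle.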
Upvotes: 0
|
2018/03/19
| 493 | 1,685 |
<issue_start>username_0: I'm using Spring Boot 1.5.6 with Jackson 2.8.8. When deserializing the answer of a REST call, Jackson fails with the following exception:
>
> JSON parse error: Can not construct instance of org.joda.time.DateTime: no String-argument constructor/factory method to deserialize from String value ('2018-03-19T12:05:21.885+01:00')
>
>
>
It's true there is no String constructor, only an Object constructor in the `DateTime` object.
I included the `jackson-datatype-joda` dependency in my build.gradle file. These are the respective lines from the build.gradle:
```
compile group: 'com.fasterxml.jackson.core', name: 'jackson-core', version: jacksonVersion
compile group: 'com.fasterxml.jackson.core', name: 'jackson-databind', version: jacksonVersion
compile group: 'com.fasterxml.jackson.dataformat', name: 'jackson-dataformat-yaml', version: jacksonVersion
compile group: 'com.fasterxml.jackson.datatype', name: 'jackson-datatype-hibernate5', version: jacksonVersion
compile group: 'com.fasterxml.jackson.datatype', name: 'jackson-datatype-joda', version: jacksonVersion
```
Is there any additional configuration I need to do?
PS: If I put the date String into a `new DateTime("2018-03-19T12:05:21.885+01:00")` it works fine.
Any ideas? Cheers!<issue_comment>username_1: Did you register the `JodaModule` module in your `ObjectMapper`?
```
ObjectMapper mapper = new ObjectMapper();
mapper.registerModule(new JodaModule());
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: It has worked for me when I add the following dependency in build.gradle
```
compile group: 'com.fasterxml.jackson.datatype', name: 'jackson-datatype-joda'
```
Upvotes: 0
|
2018/03/19
| 908 | 3,323 |
<issue_start>username_0: I'm trying to handle the multi-select with `react-native-super-grid` , here is my code :
```
(
this.pressEvent() }>
{item.image}
{item.name}
)}
/>
```
I tried using this function :
```
pressEvent(arr){
if(this.state.pressStatus == false){
this.setState({ pressStatus: true})
this.state.arr.push(arr)
this.setState({ color : 'white'})
} else {
this.setState({ pressStatus: false})
this.setState({ color: 'red'})
}
}
```
but it somehow doesn't work , can someone help me ?
Thank you .<issue_comment>username_1: This short example should give you an idea what are you doing wrong. The items itself are not aware of the state. So what I would do, I would create a separate child component for grid item and handle press state locally. Then handle parent, which is holding all the item trough callback about the pressed item.
```
class MyGridView extends Component {
render() {
return (
(
{
// set grid view callback
if (selected) {
//if true add to array
this.addToPressedArray(item);
} else {
//false remove from array
this.removeFromPressedArray(item);
}
}}
/>
)}
/>
);
}
 // You don't change the state directly, you update it through setState
addToPressedArray = item => this.setState(prevState => ({ arr: [...prevState.arr, item] }));
removeFromPressedArray = item => {
const arr = this.state.arr.remove(item);
this.setState({ arr });
};
}
```
And the GridItem
```
class GridItem extends Component {
// starting local state
state = {
pressStatus: false,
color: 'red'
};
// handle on item press
pressEvent = () => {
this.setState(prevState => ({
pressStatus: !prevState.pressStatus, //negate previous on state value
color: !prevState.pressStatus ? 'white' : 'red' //choose corect collor based on pressedStatus
}));
// call parent callback to notify grid view of item select/deselect
this.props.onItemPress(this.state.pressStatus);
};
render() {
return (
this.pressEvent()}>
{item.image}
{item.name}
);
}
}
```
I also recommend reading about the `React.Component` [lifecycle](https://reactjs.org/docs/react-component.html#setstate). It's a good read and gives you a better understanding of how to achieve updates.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Since GridView has been merged into FlatGrid, I've implemented the multi-select option in a fairly easy way. First of all, I applied a TouchableOpacity on top of the view in the renderItem prop of FlatGrid, like this.
```
this.selectedServices(item.name)}>
...props
```
SelectedServices:
```
selectedServices = item => {
let services = this.state.selectedServices;
if (services.includes(item) == false) {
services.push(item);
this.setState({ selectedServices: services });
} else {
let itemIndex = services.indexOf(item);
services.splice(itemIndex, 1);
this.setState({ selectedServices: services });
}
};
```
Using splice, indexOf, and push, you can easily implement multi-selection.
To change the backgroundColor of the currently selected item, you can apply a check on the backgroundColor prop of the view.
```
renderItem={({ item, index }) => (
this.selectedServices(item.name)}
>
{item.name}
)}
```
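The select/deselect bookkeeping in `selectedServices` boils down to a small plain-JavaScript helper (illustrative names, independent of React Native):

```javascript
// Toggle membership of `item` in the `selected` array in place.
function toggleSelection(selected, item) {
  const index = selected.indexOf(item);
  if (index === -1) {
    selected.push(item);       // not selected yet -> select
  } else {
    selected.splice(index, 1); // already selected -> deselect
  }
  return selected;
}

const selected = [];
toggleSelection(selected, 'spa');
toggleSelection(selected, 'gym');
toggleSelection(selected, 'spa'); // tapping the same item again removes it
console.log(selected); // logs ['gym']
```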
Upvotes: 0
|
2018/03/19
| 673 | 2,168 |
<issue_start>username_0: I am trying to find out what the postcss-loader is good for.
On the github page
<https://github.com/postcss/postcss-loader>
it says:
**Loader for webpack to process CSS with PostCSS**
I don't get that: so, PostCSS is a WP-loader to process CSS with PostCSS?
IMHO, that's a circular definition.
So what is PostCSS, is it a CSSLoader? Or, since it's called **Post** CSS is it a loader to run after some other CSS-loader?<issue_comment>username_1: >
> So, PostCSS is a a WP-Loader to process CSS with PostCSS?
>
>
>
No.
PostCSS-**loader** is a WP-Loader so you can process CSS with PostCSS inside Webpack.
i.e. It loads [PostCSS](https://github.com/postcss/postcss) into Webpack.
>
> IMHO, that's a circular definition
>
>
>
It isn't because PostCSS and PostCSS-loader are different things.
>
> So what is PostCSS, is it a CSSLoader?
>
>
>
No. PostCSS is:
>
> a tool for transforming styles with JS plugins. These plugins can lint your CSS, support variables and mixins, transpile future CSS syntax, inline images, and more.
>
>
>
Upvotes: 1 [selected_answer]<issue_comment>username_2: Actually, it isn't a direct plugin for `PostCSS`; it works inside `Webpack`. If you use `Webpack` in your project for module bundling, then to use `PostCSS` as a CSS preprocessor you must use `postcss-loader` and add its configuration.
For example, you can see [THIS REPO](https://gitlab.com/amerllica/react-server-render-example), in `webpack` folder, there is two configuration file for development and production environment, open one of them, no different, and search the `postcss-loader` word in it, see a complete example of this usage.
Upvotes: 1 <issue_comment>username_3: **PostCSS**: use JavaScript plugin to transform CSS.
**PostCSS Loader**: process CSS with PostCSS inside Webpack
Example 1
```
/* Before */
.example {
display: flex;
}
/* After */
.example {
display: -webkit-box;
display: -webkit-flex;
display: -ms-flexbox;
display: flex;
}
```
Example 2:
```
/* Before */
:root {
--green: #00ff00;
}
.example {
color: var(--green);
}
/* After */
.example {
color: #00ff00;
}
```
Upvotes: 1
|
2018/03/19
| 426 | 1,570 |
<issue_start>username_0: After referencing [GetStringFileInfo post](https://stackoverflow.com/q/1433106/361100), I try to read Assembly information and apply `AppName` and `FileVersion` to Inno Setup script as below.
But the compiled installation wizard shows just `"GetStringFileInfo("{#RootDirectory}Sample.exe","Title")"` as the software title. The `GetStringFileInfo` was not *processed* at all. How can I make it run correctly?
```
#define RootDirectory "bin\Release\"
[Setup]
AppName=GetStringFileInfo("{#RootDirectory}Sample.exe","Title")
AppVersion=GetStringFileInfo("{#RootDirectory}Sample.exe","FileVersion")
```
Note that I'm using Script Studio and the file resides in `setup.iss`, which was created by the Inno Setup wizard.
```
#define AppVersion GetFileVersion(AddBackslash(SourcePath) + "MyProg.exe")
[Setup]
AppVersion={#AppVersion}
VersionInfoVersion={#AppVersion}
```
I have cut out the rest of the example. Notice that the call to `GetFileVersion` was done outside of the `[Setup]` section. The other answer has a more direct way to do it within the `[Setup]` section.
Upvotes: 2 <issue_comment>username_2: `GetStringFileInfo` is a preprocessor function. A syntax for an [inline call of a preprocessor function](https://jrsoftware.org/ispphelp/) is:
```
AppName={#GetStringFileInfo(RootDirectory + "Sample.exe", "Title")}
```
Upvotes: 2 [selected_answer]
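Putting both answers together, the original script would then read as follows (a sketch; it keeps the question's `"Title"` key, which assumes the EXE's version resources actually define such a string):

```
#define RootDirectory "bin\Release\"
[Setup]
AppName={#GetStringFileInfo(RootDirectory + "Sample.exe", "Title")}
AppVersion={#GetStringFileInfo(RootDirectory + "Sample.exe", "FileVersion")}
```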
|
2018/03/19
| 897 | 3,137 |
<issue_start>username_0: I want both tabs (Hello World, Tab) to have equal width, together filling the controller's width. I have put the complete code of my tabs below; if anyone has a suggestion for it, please share.
[](https://i.stack.imgur.com/Gp2aK.png)
```
import UIKit
import CarbonKit
class ViewController: UIViewController, CarbonTabSwipeNavigationDelegate {
func generateImage(for view: UIView) -> UIImage? {
defer {
UIGraphicsEndImageContext()
}
UIGraphicsBeginImageContextWithOptions(view.bounds.size, false, UIScreen.main.scale)
if let context = UIGraphicsGetCurrentContext() {
view.layer.render(in: context)
return UIGraphicsGetImageFromCurrentImageContext()
}
return nil
}
var iconWithTextImage: UIImage {
let button = UIButton()
let icon = UIImage(named: "home")
button.setImage(icon, for: .normal)
button.setTitle("Home", for: .normal)
button.setTitleColor(UIColor.blue, for: .normal)
button.titleLabel?.font = UIFont.systemFont(ofSize: 15)
button.contentEdgeInsets = UIEdgeInsets(top: 0, left: 20, bottom: 0, right: 10)
button.imageEdgeInsets = UIEdgeInsets(top: 0, left: -10, bottom: 0, right: 10)
button.sizeToFit()
return generateImage(for: button) ?? UIImage()
}
override func viewDidLoad() {
super.viewDidLoad()
let tabSwipe = CarbonTabSwipeNavigation(items: ["HELLO WORLD", "Tab"], delegate: self)
tabSwipe.setTabExtraWidth(40)
tabSwipe.insert(intoRootViewController: self)
}
func carbonTabSwipeNavigation(_ carbonTabSwipeNavigation: CarbonTabSwipeNavigation, viewControllerAt index: UInt) -> UIViewController {
guard let storyboard = storyboard else { return UIViewController() }
if index == 0 {
return storyboard.instantiateViewController(withIdentifier: "FirstViewController")
}
return storyboard.instantiateViewController(withIdentifier: "SecondTableViewController")
}
}
```<issue_comment>username_1: This should help you.
First of all, set the `carbonSegmentedControl` `frame` to the full screen `width`, like this.
```
var frameRect: CGRect = (carbonTabSwipeNavigation.carbonSegmentedControl?.frame)!
frameRect.size.width = UIScreen.main.bounds.size.width
carbonTabSwipeNavigation.carbonSegmentedControl?.frame = frameRect
```
After that write this line for equal width.
```
carbonTabSwipeNavigation.carbonSegmentedControl?.apportionsSegmentWidthsByContent = false
```
**1. Before Equal Width**
[](https://i.stack.imgur.com/vxsvL.png)
**2. After Equal Width**
[](https://i.stack.imgur.com/poWC8.png)
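Putting the two snippets into the question's `viewDidLoad` would look roughly like this (a sketch; `carbonSegmentedControl` is optional in CarbonKit, hence the unwrapping):

```swift
override func viewDidLoad() {
    super.viewDidLoad()
    let tabSwipe = CarbonTabSwipeNavigation(items: ["HELLO WORLD", "Tab"], delegate: self)

    // Stretch the segmented control to the full screen width...
    var frameRect: CGRect = (tabSwipe.carbonSegmentedControl?.frame)!
    frameRect.size.width = UIScreen.main.bounds.size.width
    tabSwipe.carbonSegmentedControl?.frame = frameRect

    // ...and distribute that width evenly across the segments.
    tabSwipe.carbonSegmentedControl?.apportionsSegmentWidthsByContent = false

    tabSwipe.insert(intoRootViewController: self)
}
```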
Upvotes: 3 [selected_answer]<issue_comment>username_2: Before inserting into the root view controller:
```
carbonTabSwipeNavigation.carbonSegmentedControl.setWidth(view.frame.width / 2, forSegmentAt : 0)
carbonTabSwipeNavigation.carbonSegmentedControl.setWidth(view.frame.width / 2, forSegmentAt : 1)
```
This will set half of view width to each of the segments.
Upvotes: 0
|
2018/03/19
| 396 | 1,391 |
<issue_start>username_0: I have a question regarding a SQL query for a DWH. I have month and year columns in my dimension table and a sales value in my fact table. I want to find the sales for the third quarter of a particular year. What would be the SQL query for this?
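Without knowing the exact schema, a typical star-schema query for this looks like the following sketch (the table and column names here are assumptions):

```sql
SELECT d.[Year], SUM(f.Sales) AS Q3Sales
FROM FactSales f
JOIN DimDate d ON d.DateKey = f.DateKey
WHERE d.[Year] = 2017        -- the particular year
  AND d.[Month] IN (7, 8, 9) -- third calendar quarter
GROUP BY d.[Year];
```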
|
2018/03/19
| 988 | 2,921 |
<issue_start>username_0: I'm trying to do a responsive layout with css grid by getting two elements to overlap each other halfway. On wide screens they are in one row and overlapping horizontally, but on narrow screens they should be in one column and overlap vertically.
Here is an illustration of what I'm trying to achieve:
[](https://i.stack.imgur.com/YHaZN.jpg)
Is this behavior possible with css grid? Here is how far I got but now I'm stuck trying to get the vertical overlap.
```css
.wrapper {
background: white;
display: grid;
grid-template-columns: repeat(auto-fit, minmax(444px, 1fr));
}
.wrapper__red,
.wrapper__green {
align-self: center;
}
.wrapper__red {
z-index: 1;
background: red;
}
.wrapper__green {
justify-self: end;
height: 100%;
background: green;
}
```
```html
Title text goes here
====================

```<issue_comment>username_1: In CSS Grid you can create overlapping grid areas.
Grid makes this simple and easy with **line-based placement**.
*From the spec:*
>
> [**8.3. Line-based
> Placement**](http://www.w3.org/TR/css3-grid-layout/#line-placement)
>
>
> The `grid-row-start`, `grid-column-start`, `grid-row-end`, and
> `grid-column-end` properties determine a grid item’s size and location
> within the grid by contributing a line, a span, or nothing (automatic)
> to its grid placement, thereby specifying the inline-start,
> block-start, inline-end, and block-end edges of its grid area.
>
>
>
*note: re-size demo below to transition from desktop to mobile view*
```css
article {
display: grid;
grid-template-columns: repeat(6, 1fr);
grid-auto-rows: 50px;
grid-gap: 5px;
}
section:nth-child(1) { grid-column: 1 / 4; grid-row: 1; z-index: 1; }
section:nth-child(2) { grid-column: 3 / 5; grid-row: 1; }
section:nth-child(3) { grid-column: 5 / 7; grid-row: 1; }
@media ( max-width: 500px ) {
article { grid-template-columns: 100px; justify-content: center; }
section:nth-child(1) { grid-row: 1 / 4; grid-column: 1; }
section:nth-child(2) { grid-row: 3 / 5; grid-column: 1; }
section:nth-child(3) { grid-row: 5 / 7; grid-column: 1; }
}
/* non-essential demo styles */
section:nth-child(1) { background-color: lightgreen; }
section:nth-child(2) { background-color: orange; }
section:nth-child(3) { background-color: aqua; }
section {
display: flex;
justify-content: center;
align-items: center;
font-size: 1.2em;
}
```
```html
<article>
  <section>1</section>
  <section>2</section>
  <section>3</section>
</article>
```
[jsFiddle demo](https://jsfiddle.net/pog9bwpq/)
-----------------------------------------------
Upvotes: 6 [selected_answer]<issue_comment>username_2: In my case I added:
```
section {
overflow: hidden;
}
```
to my grid blocks which gave me blocks the same height regardless of any wrapping or not.
```
<section>1</section>
<section>2</section>
<section>3</section>
```
Upvotes: 0
|
2018/03/19
| 214 | 885 |
<issue_start>username_0: I'm planning to upgrade Rails to 5.2 in one of my websites and introduce ActiveStorage, as of right now I use Paperclip with paperclip\_optimizer. One of the negative sides is that I will lose the optimizer, when replacing paperclip with ActiveStorage. How can I implement automatically image optimization on user uploads in ActiveStorage?<issue_comment>username_1: Active Storage doesn’t have built-in support for optimizing images on upload.
Upvotes: 0 <issue_comment>username_2: If you are on AWS you can create Lambda function that can listen to an S3 bucket for uploads, and runs image optimization on the newly uploaded files.
Upvotes: 1 <issue_comment>username_3: It's possible by creating a custom Variation. There is a good example here:
<https://prograils.com/posts/rails-5-2-active-storage-new-approach-to-file-uploads>
Upvotes: 3 [selected_answer]
|
2018/03/19
| 1,059 | 4,006 |
<issue_start>username_0: Error are as under:-
```
Multiple markers at this line
- Syntax error, insert ")" to complete MethodDeclaration
- Syntax error on token ".", @ expected after this token
- Syntax error, insert "Identifier (" to complete MethodHeaderName
- Syntax error on token ",", < expected
- Syntax error, insert "SimpleName" to complete QualifiedName
```
Which jar file is `System.setProperty` a part of, or where is it present, so that I can access it and use it in my program?
```
public class Loginstepdef {
System.setProperty("webdriver.chrome.driver","E:\\Selenium\\chromedriver\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
@Given("^I am on the login page of the application$")
public void output()throws InterruptedException
{
driver.get("https://motzie-staging.mobile-recruit.com/login");
//Navigation navigator=driver navigator();
//navigator.to(http://10.10.5.56/login);
}
@When("^I login with username (.*) and password(.*)$")
public void output2(String username, String password) throws InterruptedException
{
//WebElement loginfield = driver.findElement(By.className("ng-scope"));
WebElement loginfield = driver.findElement(By.id("username"));
loginfield.sendKeys(username);
loginfield.sendKeys(password);
WebElement loginbutton = driver.findElement(By.className("ng-scope"));
loginbutton.click();
}
@Then("^Login successfully in that account$")
public void output3() throws InterruptedException
{
System.out.print("login successfully");
}
}
```<issue_comment>username_1: `\` is used for escape sequence, so it's giving you an error. Use `/` or `\\` in the path
```
System.setProperty("webdriver.chrome.driver", "E:\\Selenium\\chromedriver\\chromedriver.exe");
```
Upvotes: 0 <issue_comment>username_2: The *Key* and the *Value* with in **`System.setProperty`** are from Java [System Class Method](https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#setProperty(java.lang.String,%20java.lang.String)) and both accepts *String* values. Hence pressing `ctrl+space` won't fetch you optimum results.
The error you are seeing is coming out of the *Value* field :
```
"E:\Selenium\chromedriver\chromedriver.exe"
```
You have to pass the absolute path of the *WebDriver* variant through either of the following options:
* Escaping the back slash (`\\`) e.g. `"E:\\Selenium\\chromedriver\\chromedriver.exe"`
* Single forward slash (`/`) e.g. `"E:/Selenium/chromedriver/chromedriver.exe"`
>
> **Note** : You can find a detailed discussion in [Exception in thread “main” java.lang.IllegalStateException:The path to the driver executable must be set by the : system property](https://stackoverflow.com/questions/49350091/exception-in-thread-main-java-lang-illegalstateexceptionthe-path-to-the-drive/49356246#49356246)
>
>
>
---
Update
------
When you are using `cucumber` you have to put the initialization part of *WebDriver* within a method scope as follows :
```
WebDriver driver;
@Given("^Open Firefox and Start Application$")
public void Open_Firefox_and_Start_Application() throws Throwable
{
System.setProperty("webdriver.chrome.driver", "E:\\Selenium\\chromedriver\\chromedriver.exe");
driver = new ChromeDriver();
}
```
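To answer the side question: `System.setProperty` lives in `java.lang.System`, which ships with the JDK itself, so no extra jar is needed. A tiny self-contained demonstration (the chromedriver path is just an example string):

```java
public class PropDemo {
    public static void main(String[] args) {
        // "\\" in source code is a single backslash at runtime
        System.setProperty("webdriver.chrome.driver",
                "E:\\Selenium\\chromedriver\\chromedriver.exe");
        System.out.println(System.getProperty("webdriver.chrome.driver"));
        // prints: E:\Selenium\chromedriver\chromedriver.exe
    }
}
```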
Upvotes: 1 [selected_answer]<issue_comment>username_3: Issue raised because you write it wrong. Use **" \\ "** or **" /"**
("webdriver.chrome.driver", "E:\\Selenium\\chromedriver\\chromedriver.exe");
Upvotes: 0 <issue_comment>username_4: Write it in main method:
```
public static void main(String[] args) {
System.setProperty("webdriver.chrome.driver","C:\\Users\\admin\\Downloads\\chromedriver_win32\\chromedriver.exe");
WebDriver driver = new ChromeDriver();
String url ="https://www.gmail.com";
driver.get(url);
```
Upvotes: -1
|
2018/03/19
| 825 | 2,800 |
<issue_start>username_0: So I have created a subtheme whose parent is the Drupal Bootstrap theme. I would like to add an option to show a second logo on the page (I have googled but I have found nothing).
[](https://i.stack.imgur.com/jom3C.png)
As you see in the image, I would like to add the option just where the red line is, between "logotip" and "Nom del lloc", and create a variable to access the second logo, like $second\_logo.
Is there a way to do this?<issue_comment>username_1: `\` is used for escape sequence, so it's giving you an error. Use `/` or `\\` in the path
```
System.setProperty("webdriver.chrome.driver", "E:\\Selenium\\chromedriver\\chromedriver.exe");
```
Upvotes: 0 <issue_comment>username_2: The *Key* and the *Value* with in **`System.setProperty`** are from Java [System Class Method](https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#setProperty(java.lang.String,%20java.lang.String)) and both accepts *String* values. Hence pressing `ctrl+space` won't fetch you optimum results.
The error you are seeing is coming out of the *Value* field :
```
"E:\Selenium\chromedriver\chromedriver.exe"
```
You have to pass the absolute path of the *WebDriver* variant through either of the following options:
* Escaping the back slash (`\\`) e.g. `"E:\\Selenium\\chromedriver\\chromedriver.exe"`
* Single forward slash (`/`) e.g. `"E:/Selenium/chromedriver/chromedriver.exe"`
>
> **Note** : You can find a detailed discussion in [Exception in thread “main” java.lang.IllegalStateException:The path to the driver executable must be set by the : system property](https://stackoverflow.com/questions/49350091/exception-in-thread-main-java-lang-illegalstateexceptionthe-path-to-the-drive/49356246#49356246)
>
>
>
---
Update
------
When you are using `cucumber` you have to put the initialization part of *WebDriver* within a method scope as follows :
```
WebDriver driver;
@Given("^Open Firefox and Start Application$")
public void Open_Firefox_and_Start_Application() throws Throwable
{
System.setProperty("webdriver.chrome.driver", "E:\\Selenium\\chromedriver\\chromedriver.exe");
driver = new ChromeDriver();
}
```
Upvotes: 1 [selected_answer]<issue_comment>username_3: The issue is raised because the path is written incorrectly. Use **" \\ "** or **" / "**:
("webdriver.chrome.driver", "E:\\Selenium\\chromedriver\\chromedriver.exe");
Upvotes: 0
|
2018/03/19
| 388 | 1,531 |
<issue_start>username_0: I want to have a rest client for my application. As Singletons are used for centralized management of internal or external resources and provide a global point of access to themselves, I think the rest client should be implemented as a Singleton class.
Can I implement the class in a generic way, so that I can somehow control the type of the object I want the methods of the class to return?
I am looking for something like:
```
public class JersyRestClient<T> implements RestClient<T>
{
private static JersyRestClient instance = null;
private JersyRestClient()
{
}
public static JersyRestClient getInstance()
{
if (instance == null)
{
synchronized (JersyRestClient.class)
{
if (instance == null)
{
instance = new JersyRestClient();
}
}
}
return instance;
}
public T getContent(final String resourceUrl)
{
//get content
//return T
}
}
```<issue_comment>username_1: `JersyRestClient<T>` does not make much sense.
You will return something specific in `getInstance()`. What will it be?
* Either the raw type (`JersyRestClient`). In this case you can spare the `<T>`.
* Some specific type `JersyRestClient<Something>`. Possible but does not make much sense.
* You could do `JersyRestClient<T> getInstance()`, which will result in a cast to `JersyRestClient<T>`.
The type parameter `T` allows parameterizing instances of `JersyRestClient`. If you only have one instance, there's nothing to parameterize.
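One hedged sketch of that trade-off: keep the singleton class non-generic and put the type parameter on the *method* instead, so each call site chooses its own return type. The `Class<T>` token and the method body below are illustrative stand-ins, not a real HTTP client:

```java
// Hypothetical sketch: a non-generic singleton with a generic method.
class RestClient {
    private static final RestClient INSTANCE = new RestClient();

    private RestClient() { }

    public static RestClient getInstance() {
        return INSTANCE;
    }

    // T is bound per call; Class<T> carries the type for a safe cast after
    // the (omitted) HTTP fetch and deserialization step.
    public <T> T getContent(String resourceUrl, Class<T> type) {
        Object raw = resourceUrl.length(); // stand-in for real response parsing
        return type.cast(raw);
    }
}
```

This keeps one shared instance while each `getContent` call stays type-safe, instead of fixing a single `T` for the whole singleton.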
Upvotes: 2 <issue_comment>username_2: I'd say only if it's contravariant or abstract.
Upvotes: 0
|
2018/03/19
| 249 | 1,003 |
<issue_start>username_0: AWS has a list of free tier (non-expiring) offers [here](https://aws.amazon.com/free/). I wonder if it is enough to deploy a small (less than 1 GB in total) Spring Boot (+ mongo/postgres) hobby project using ONLY these features.<issue_comment>username_1: It is possible to do so, but the free tier for the EC2 instance that you need to run the Spring Boot application (and the mongo/postgres that you install manually on that instance) does expire after 12 months.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Yes, you can deploy your project on AWS infrastructure in the free tier with the above requirements.
* For hosting the application you can use **EC2**
* For relational db you can use **RDS** service with **postgres**.
* AWS URLs are very long and complex, so to access your application on AWS infrastructure you need to use the Route 53 service to map it to a nice domain name.
Note: If you want to do it for development purposes only, you can skip the Route 53 part.
Upvotes: 2
|
2018/03/19
| 2,273 | 6,297 |
<issue_start>username_0: I have a lot of CSV files that contain 100 000+ rows and their structure looks similar to this:
```
Time,Longitude,Latitude,R,E,M
2016-01-01M12:01:01,39.92234,52.61532,"-11.5,-20.4",-4.5,No
2016-01-01M12:01:01,39.92238,52.61562,"-10.1,-12.7,-9.2,-7.7",,No
2016-01-01M12:01:02,39.92239,52.61552,"-12.1,-12.4",-3.9,No
2016-01-01M12:01:03,39.92248,52.61562,"-3.1,-1.9,-8.2",,No
```
and so on...
What I would like to do is to get the max number of values between the quotes, change the column names accordingly.
For example, the second row has the max number of values between the quotes, so R should be changed to `R1,R2,R3,R4`, and finally the quote marks should be removed using a batch file.
So the result should look like this:
```
Time,Longitude,Latitude,R1,R2,R3,R4,E,M
2016-01-01M12:01:01,39.92234,52.61532,-11.5,-20.4,,,-4.5,No
2016-01-01M12:01:01,39.92238,52.61562,-10.1,-12.7,-9.2,-7.7,,No
2016-01-01M12:01:02,39.92239,52.61552,-12.1,-12.4,,,-3.9,No
2016-01-01M12:01:03,39.92248,52.61562,-3.1,-1.9,-8.2,,,No
```
and so on...
I've been trying to find any example how to do this for almost a couple of weeks but without success. Maybe someone can help me out?<issue_comment>username_1: Although you did not show any own efforts to solve the task, I decided to provide a solution, because it seems to be a challenging project. So here is what I came up with:
```cmd
@echo off
setlocal EnableExtensions DisableDelayedExpansion
rem // Define constants here:
set "_FILE=%~1" & rem // (file to process; use first command line parameter)
rem // Initialise variables:
set /A "MAX=0" & rem // (maximum number of items in between quoted group)
set /A "POS=0" & rem // (position of quoted group)
rem // Pass 1: count maximum number of items within quotes:
set /A "COUNT=0, INDEX=0"
for /F usebackq^ skip^=1^ delims^=^ eol^= %%L in ("%_FILE%") do (
for %%I in (%%L) do (
set "QUOTED=%%I"
set "UNQUOTED=%%~I"
set /A "INDEX+=1"
setlocal EnableDelayedExpansion
if not "!QUOTED!"=="!UNQUOTED!" (
if !POS! leq 0 (
endlocal & set /A "POS=INDEX"
) else endlocal
set "COUNT="
setlocal EnableDelayedExpansion
set "ITEM=%%~I"
for %%J in ("!ITEM:,="^,"!") do (
if not defined COUNT endlocal
set /A "COUNT+=1"
)
setlocal EnableDelayedExpansion
if !MAX! lss !COUNT! (
endlocal & set /A "MAX=COUNT"
) else endlocal
) else endlocal
)
)
rem // Build separators butter:
set "SEPB=" & setlocal EnableDelayedExpansion
for /L %%E in (1,1,%MAX%) do (
set "SEPB=!SEPB!,"
)
endlocal & set "SEPB=%SEPB%"
rem // Process header:
set /A "INDEX=0"
for /F usebackq^ delims^=^ eol^= %%L in ("%_FILE%") do (
set "COLL=,"
for %%I in (%%L) do (
set /A "INDEX+=1" & set "ITEM=%%~I"
setlocal EnableDelayedExpansion
if !INDEX! equ !POS! (
for /L %%K in (1,1,%MAX%) do (
set "COLL=!COLL!!ITEM!%%K,"
)
) else (
set "COLL=!COLL!!ITEM!,"
)
for /F "delims=" %%E in (""!COLL!"") do (
endlocal & set "COLL=%%~E"
)
)
setlocal EnableDelayedExpansion
echo/!COLL:~1^,-1!
endlocal
goto :NEXT
)
:NEXT
rem // Pass 2: expand items in between quotes:
for /F usebackq^ skip^=1^ delims^=^ eol^= %%L in ("%_FILE%") do (
set "LINE=%%L" & set "COLL=,"
setlocal EnableDelayedExpansion
for %%I in ("!LINE:,="^,"!") do (
endlocal
set "SEPS=%SEPB%" & set "QUOTED=%%~I" & set "UNQUOTED="
for %%J in (%%~I) do (
set "UNQUOTED=%%~J"
setlocal EnableDelayedExpansion
if "!QUOTED!"=="!UNQUOTED!" (
set "COLL=!COLL!!QUOTED!," & set "SEPS="
) else (
set "COLL=!COLL!!UNQUOTED!," & set "SEPS=!SEPS:~,-1!"
)
for /F "delims=" %%E in (""!COLL!"") do (
for /F "delims=" %%F in (""!SEPS!"") do (
endlocal & set "COLL=%%~E" & set "SEPS=%%~F"
)
)
)
if not defined QUOTED set "SEPS=,"
setlocal EnableDelayedExpansion
for /F "delims=" %%K in (""!COLL!!SEPS!"") do (
endlocal & set "COLL=%%~K"
setlocal EnableDelayedExpansion
)
)
echo/!COLL:~1^,-1!
endlocal
)
endlocal
exit /B
```
Assuming the batch script is saved as `resolve-csv.bat` in the current directory and the CSV file to process is called `D:\Test\data.csv`, type the following into Windows Command Prompt:
```cmd
resolve-csv.bat "D:\Test\data.csv"
```
To store the output into another CSV file, for instance, `D:\Test\result.csv`, type this:
```cmd
resolve-csv.bat "D:\Test\data.csv" > "D:\Test\result.csv"
```
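Not a batch answer, but for cross-checking the intended transformation, here is an illustrative sketch of the same logic in Python (reading the sample data from a string instead of a file):

```python
import csv
import io

# Find the widest quoted R group, rename the header column, pad each row.
raw = """Time,Longitude,Latitude,R,E,M
2016-01-01M12:01:01,39.92234,52.61532,"-11.5,-20.4",-4.5,No
2016-01-01M12:01:01,39.92238,52.61562,"-10.1,-12.7,-9.2,-7.7",,No
2016-01-01M12:01:02,39.92239,52.61552,"-12.1,-12.4",-3.9,No
2016-01-01M12:01:03,39.92248,52.61562,"-3.1,-1.9,-8.2",,No
"""

rows = list(csv.reader(io.StringIO(raw)))
header, data = rows[0], rows[1:]
r_col = header.index("R")
width = max(len(row[r_col].split(",")) for row in data)  # widest group

out_header = header[:r_col] + [f"R{i}" for i in range(1, width + 1)] + header[r_col + 1:]
out_rows = [
    row[:r_col] + (row[r_col].split(",") + [""] * width)[:width] + row[r_col + 1:]
    for row in data
]
print(",".join(out_header))
for row in out_rows:
    print(",".join(row))
```

Running this against the sample data reproduces the expected output shown in the question, which makes it a handy reference when validating the batch scripts.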
Upvotes: 2 [selected_answer]<issue_comment>username_2: This Batch file does what you described; the comments are explanatory enough...
```
@echo off
setlocal EnableDelayedExpansion
rem Process all *.csv files and
rem get the maximum number of values in the _first_ value between quotes
set "firstFile="
set "max=0"
for %%f in (*.csv) do (
if not defined firstFile set "firstFile=%%f"
for /F usebackq^ skip^=1^ tokens^=2^ delims^=^" %%a in ("%%f") do (
set "n=0"
for %%i in (%%a) do set /A n+=1
if !n! gtr !max! set "max=!n!"
)
)
rem Read header from first input file and generate header of output file
rem changing the name of the _fourth_ column
set /P "header=" < "%firstFile%"
for /F "tokens=1-4* delims=," %%a in ("%header%") do (
set "header=%%a,%%b,%%c"
for /L %%i in (1,1,%max%) do set "header=!header!,%%d%%i"
set "header=!header!,%%e"
)
rem Process all files and generate output file
rem removing quotes from the _first_ value between quotes
(
echo %header%
for %%f in (*.csv) do (
for /F usebackq^ skip^=1^ tokens^=1-2*^ delims^=^" %%a in ("%%f") do (
set "n=%max%"
for %%i in (%%~b) do set /A n-=1
set "second=%%~b"
for /L %%i in (!n!,-1,1) do set "second=!second!,"
echo %%a!second!%%c
)
) > output.tmp
```
Of course, if your real data does not have the same structure as the example data, this program will fail...
Upvotes: 0
|
2018/03/19
| 903 | 2,577 |
<issue_start>username_0: I want to edit crontab from PHP script.
```
$output = shell_exec('crontab -l');
echo "<pre>";
print_r($output);
echo "</pre>";
```
This is returned.
```
MAILTO="<EMAIL>"
*/2 * * * * /usr/bin/php5 /var/www/vhosts/example.com/streaming.example.com/index.php admin cron > /dev/null
MAILTO=""
*/1 * * * * /opt/psa/admin/sbin/fetch_url 'https://www.example.com/referral/send_referral_email'
MAILTO=""
*/5 * * * * /opt/psa/admin/sbin/fetch_url ' https://www.example.com/members/send_notif'
MAILTO="<EMAIL>"
*/2 * * * * /usr/bin/php5 /var/www/vhosts/example.com/streaming.example.com/index.php admin cron3 > /dev/null
```
One of the scripts, `https://www.example.com/members/send_notif`, should be running every five minutes but isn't. I see there is a space before the https and I think that might be the cause. How do I edit it? I haven't got access to cPanel, so I have to do it from PHP.<issue_comment>username_1: Try something like:
```
# new crontab EDIT THIS (note: PHP has no ''' strings; use a nowdoc instead)
$crontab = <<<'CRON'
MAILTO="<EMAIL>"
*/2 * * * * /usr/bin/php5 /var/www/vhosts/example.com/streaming.example.com/index.php admin cron > /dev/null
MAILTO=""
*/1 * * * * /opt/psa/admin/sbin/fetch_url 'https://www.example.com/referral/send_referral_email'
MAILTO=""
*/5 * * * * /opt/psa/admin/sbin/fetch_url 'https://www.example.com/members/send_notif'
MAILTO="<EMAIL>"
*/2 * * * * /usr/bin/php5 /var/www/vhosts/example.com/streaming.example.com/index.php admin cron3 > /dev/null
CRON;
# write the new cron into a file (echo through the shell would mangle the quotes)
file_put_contents('mycron', $crontab . PHP_EOL);
# install new cron file
shell_exec('crontab mycron');
# delete cron file
shell_exec('rm mycron');
```
Upvotes: 0 <issue_comment>username_2: Once you have the output of crontab, make the required change in the variable and then write it to a file:
```
$output = shell_exec('crontab -l');
$find="' https";
$replace="'https";
$output =str_replace($find, $replace,$output);
$file="/path_to_a_file_which_is_writable/crontab.txt";
file_put_contents($file, $output);
```
Then write the new content to crontab:
```
shell_exec("crontab ".$file);
```
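The same repair can be rehearsed from a shell before wiring it into PHP. This is a hypothetical dry run against a sample file rather than the live crontab, so it is safe to try anywhere:

```shell
# Simulate the broken crontab line in a scratch file.
printf "*/5 * * * * /opt/psa/admin/sbin/fetch_url ' https://www.example.com/members/send_notif'\n" > /tmp/crontab.txt

# Drop the stray space between the opening quote and https.
sed -i "s|' https|'https|" /tmp/crontab.txt
cat /tmp/crontab.txt

# Against the real table you would bracket the sed call with:
#   crontab -l > /tmp/crontab.txt   ...   crontab /tmp/crontab.txt
```

Once the dry run looks right, the same dump/repair/install sequence maps one-to-one onto the `shell_exec` calls in the accepted answer.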
Upvotes: 3 [selected_answer]<issue_comment>username_3: Found out how to do it.
```
$url = " https://www.example.com";
$new_url = "https://www.example.com";
$output = shell_exec('crontab -l');
$output = str_replace($url,$new_url,$output);
file_put_contents('/tmp/crontab.txt', $output.PHP_EOL);
echo exec('crontab /tmp/crontab.txt');
```
Upvotes: 0
|
2018/03/19
| 1,034 | 2,535 |
<issue_start>username_0: I'm working with C# and want to parse phone numbers from a string. I live in Switzerland; phone numbers can either have 10 digits, like the following pattern:
`000 000 00 00`, or they can start with `+41`: `+41 00 000 00 00`. I've written the following regular expression:
`var phone = new Regex(@"\b(\+41\s\d{2}|\d{3})\s?\d{3}\s?\d{2}\s?\d{2}\b");`
This works perfectly fine with the first example, but the one with the "+41" doesn't match. I'm pretty sure there's a problem with the word boundary `\b` and the following `+`. When I remove the `\b` at the start it finds a match with the `+41`-example. My code:
```
var phone = new Regex(@"\b(\+41\s\d{2}|\d{3})\s?\d{3}\s?\d{2}\s?\d{2}\b");
var text = @"My first phonenumber is: +41 00 000 00 00. My second one is:
000 000 00 00. End.";
var phoneMatches = phone.Matches(text);
foreach(var match in phoneMatches)
{
Console.WriteLine(match);
}
Console.ReadKey();
```
Output: `000 000 00 00`.
Output without `\b`:
`+41 00 000 00 00
000 000 00 00`
Any solutions?<issue_comment>username_1: You may use a `(?<!\w)` [**negative lookbehind**](https://www.regular-expressions.info/lookaround.html) instead of the first `\b`. Since the next expected character can be a non-word char, the word boundary may fail the match, while `(?<!\w)` will only fail the match once there is a word char before the next expected char.
Use
```
var phone = new Regex(@"(?<!\w)(\+41\s\d{2}|\d{3})\s?\d{3}\s?\d{2}\s?\d{2}\b");
```
**Details**
* `(?<!\w)` - fail the match if there is a word char immediately to the left of the current location
* `(\+41\s\d{2}|\d{3})` - `+41`, a whitespace and 2 digits, or 3 digits
* `\s?` - 1 or 0 whitespaces
* `\d{3}` - 3 digits
* `\s?` - 1 or 0 whitespaces
* `\d{2}` - 2 digits
* `\s?` - 1 or 0 whitespaces
* `\d{2}` - 2 digits
* `\b` - a word boundary (this one will work since the previous expected char is a digit).
NOTE: To only match ASCII digits, you might want to replace `\d` with `[0-9]` (see [this thread](https://stackoverflow.com/a/16621778/3832970)).
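As a quick cross-check (illustrative only — this thread is C#, but Python's `re` engine shares the same lookbehind syntax), the suggested pattern behaves as expected:

```python
import re

# Same pattern as suggested above: (?<!\w) in place of the leading \b.
pattern = re.compile(r"(?<!\w)(\+41\s\d{2}|\d{3})\s?\d{3}\s?\d{2}\s?\d{2}\b")

text = ("My first phonenumber is: +41 00 000 00 00. "
        "My second one is: 000 000 00 00. End.")
matches = [m.group(0) for m in pattern.finditer(text)]
print(matches)  # ['+41 00 000 00 00', '000 000 00 00']
```

Both number formats match, confirming that the lookbehind handles the `+` where the original `\b` could not.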
Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this [one](http://regexstorm.net/tester?p=%28%5C%2B%5Cb41%5Cs%5Cd%7B2%7D%7C%5Cb%5Cd%7B3%7D%29%5Cs%3F%5Cd%7B3%7D%5Cs%3F%5Cd%7B2%7D%5Cs%3F%5Cd%7B2%7D%5Cb&i=My%20first%20phonenumber%20is%3A%20%2B41%2000%20000%2000%2000.%20My%20second%20one%20is%3A%20000%20000%2000%2000.%20End.&o=i):
```
(\+\b41\s\d{2}|\b\d{3})\s?\d{3}\s?\d{2}\s?\d{2}\b
```
move the boundary separator inside the () block and place + to precede the word boundary separator.
Upvotes: -1
|
2018/03/19
| 1,125 | 2,874 |
<issue_start>username_0: I'm trying to find the right solution for stacking divs of different heights. I have tried grid, flex and, lastly, inline-block.
From what I understand, the 3rd div (button) is attached to the bottom of the 2nd div (image). I'm trying to make it so it is attached to the bottom of the 1st div (text).
The button is drawn as the 3rd div because the button has to go under the image when the window size gets too small. Can you even achieve this with inline-block?
My current code:
<https://jsfiddle.net/mep2x67L/16/>
```css
#container {
padding-top: 50px;
padding-bottom: 50px;
display: inline-block;
position: relative;
width: 100%;
height: 100%;
}
#desccription {
width: calc(50% - 20px);
margin-left: 20px;
display: inline-block;
vertical-align: top;
}
#desccription_Container {
margin: 0 auto;
justify-content: left;
text-align: left;
max-width: 560px;
margin-right: 10px;
}
#half_img {
width: 100%;
}
#img_container {
width: 49%;
display: inline-block;
}
.btnWrap {
display: inline-block;
width: 355px;
vertical-align: top;
text-align: center;
}
#normal_text {
font-size: 36px;
line-height: 42px;
color: #F1ECE3;
}
#btnWrap {
display: inline-block;
width: 355px;
vertical-align: top;
text-align: center;
}
```
```html
hello hello

Button!
```
|
2018/03/19
| 774 | 2,811 |
<issue_start>username_0: I've created a function which takes a string and replaces its ending substring, so if the string ends with `AddFiche`, `EditFiche` or `Fiche`, that ending should be replaced with `Liste`, plus some other conditions. This is what I tried:
```
function _getParentComponent(component){
if(component.endsWith('AddFiche')){
return component.replace('AddFiche', 'Liste');
}else if(component.endsWith('EditFiche')){
return component.replace('EditFiche', 'Liste');
}else if(component.endsWith('Fiche')){
return component.replace('Fiche', 'Liste');
}else if(component === "selection"){
if($rootRouter._outlet.previousInstruction.componentType === "import"){
return "import";
}
}else if(component === "result"){
if($rootRouter._outlet.previousInstruction.componentType === "selection"){
return "import";
}
}else if(component.startsWith("request")){
if($rootRouter._outlet.previousInstruction.componentType === "dynamicRouting"){
return "dynamicRouting";
}
}else{
return component;
}
}
```
As you can see, there are a lot of if/elses. Isn't there another way to do this? I might add other conditions later, and the code looks ugly with all those if/elses.
var replaces = [{
match: 'AddFiche',
replace: 'Liste'
},
{
match: 'EditFiche',
replace: 'Liste'
},
{
match: 'Fiche',
replace: 'Liste'
}
]
function _getParentComponent(component) {
for (var r of replaces) {
if (component.endsWith(r.match)) {
return component.replace(r.match, r.replace);
}
}
if (component === "selection") {
if ($rootRouter._outlet.previousInstruction.componentType === "import") {
return "import";
}
} else if (component === "result") {
if ($rootRouter._outlet.previousInstruction.componentType === "selection") {
return "import";
}
} else if (component.startsWith("request")) {
if ($rootRouter._outlet.previousInstruction.componentType === "dynamicRouting") {
return "dynamicRouting";
}
} else {
return component;
}
}
console.log("Input: LoremIpsumFiche");
console.log("Output:",_getParentComponent("LoremIpsumFiche"));
```
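A more table-driven variant of the same idea, with the suffix-to-replacement mapping as a plain object (the sample names below are illustrative):

```javascript
// Map each recognized suffix to its replacement; checked in insertion order,
// so the longer AddFiche/EditFiche suffixes win over the bare Fiche.
const suffixMap = { AddFiche: 'Liste', EditFiche: 'Liste', Fiche: 'Liste' };

function replaceSuffix(component) {
  const match = Object.keys(suffixMap).find(s => component.endsWith(s));
  return match
    ? component.slice(0, -match.length) + suffixMap[match]
    : component; // fall back to the original string when nothing matches
}

console.log(replaceSuffix('userAddFiche')); // userListe
console.log(replaceSuffix('plain'));        // plain
```

Adding a new rule later is then one new entry in `suffixMap` rather than a new branch.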
Upvotes: 1 <issue_comment>username_2: Can be this
```js
var ends = ['One', 'Two', 'Wood'];
var words = ['LOne', 'ROnes', 'Two2', 'TwoTwo', 'No Wood', 'Woodless'];
var replaced = "REPLACED";
for(var i = 0; i < words.length; i++) {
for(var j = 0; j < ends.length; j++) {
if(words[i].endsWith(ends[j])) {
words[i] = words[i].replace(new RegExp(ends[j] + '$'), replaced);
break;
}
}
}
console.log(words);
```
Upvotes: 0
|
2018/03/19
| 1,375 | 4,266 |
<issue_start>username_0: I am working on a program, that has to initialize many different objects according to a list that defines which type each object is.
The code that does this task looks like this:
```
// name is one entry of the type list
// itemList is a std::vector where the new items are appended
if(name == "foo")
{
initItem<FooObject>(itemList);
}
else if(name == "bar")
{
initItem<BarObject>(itemList);
}
else if(name == "baz")
{
initItem<Object3>(itemList);
}
....
```
`initItem<T>(itemList)` allocates an object of type T and appends it to itemList.
At other place in the code there are similar conditional statements for the different object types.
At the moment for each new object type added I have to add a new else if to all the conditional statements which is kind of annoying.
Is there a way to just define some kind of map somewhere that holds the assignment like
```
"foo", FooObject,
"bar", BarObject,
"baz", Object3,
```
and then template/auto-generate (maybe by preprocessor) the if-else statements so i don't have to setup them by hand every time?
Edit: Here is the whole method that contains the code snipset (there are many more else if() statements that all work according to the same principal.
```
bool Model::xml2Tree(const pugi::xml_node &xml_node, std::vector &parents)
{
bool all\_ok = true;
bool sucess;
pugi::xml\_node child\_node = xml\_node.first\_child();
for (; child\_node; child\_node = child\_node.next\_sibling())
{
sucess = true;
bool ok = false;
std::string name = child\_node.name();
if(name == "foo")
{
ok = initTreeItem(child\_node, parents);
}
else if(name == "bar")
{
ok = initTreeItem(child\_node, parents);
}
...
...
...
else
{
ok = false;
std::cout << "Unknown Element" << std::endl;
continue;
}
if(!sucess)
{
continue;
}
all\_ok = all\_ok && ok;
// recursiv
ok = xml2Tree(child\_node, parents);
all\_ok = all\_ok && ok;
}
parents.pop\_back();
return all\_ok;
}
template <typename T>
bool Model::initTreeItem(const pugi::xml_node &xml_node,
                         std::vector &parents)
{
    bool ok = false;
    T *pos = new T(parents.back());
    parents.back()->appendChild(pos);
    ok = pos->initFromXml(xml_node);
    parents.push_back(pos);
    return ok;
}
```<issue_comment>username_1: Firstly, you can encode your mapping in the type system as follows:
```
template <typename T>
struct type_wrapper { using type = T; };

template <typename T>
inline constexpr type_wrapper<T> t{};

template <typename K, typename V>
struct pair
{
    K _k;
    V _v;
    constexpr pair(K k, V v) : _k{k}, _v{v} { }
};

template <typename... Ts>
struct map : Ts...
{
    constexpr map(Ts... xs) : Ts{xs}... { }
};

constexpr auto my_map = map{
    pair{[]{ return "foo"; }, t<FooObject>},
    pair{[]{ return "bar"; }, t<BarObject>},
    pair{[]{ return "baz"; }, t<Object3>}
};
```
We're using lambdas as they're implicitly `constexpr` in C++17, in order to simulate "`constexpr` arguments". If you do not require this, you can create a `constexpr` wrapper over a string literal and use that instead.
---
You can then go through the mapping with something like this:
```
template <typename... Ts>
void for_kv_pairs(const std::string& name, map<Ts...> m)
{
    ([&]<typename K, typename V>(const pair<K, V>& p)
    {
        if(name == p._k())
        {
            initItem<typename V::type>();
        }
    }(static_cast<Ts>(m)), ...);
}
```
This is using a *fold expression* over the comma operator plus C++20 template syntax in lambdas. The latter can be replaced by providing an extra implementation function to retrieve `K` and `V` from `pair` pre-C++20.
---
Usage:
```
template <typename X>
void initItem()
{
    std::cout << typeid(X).name() << '\n';
}

struct FooObject { };
struct BarObject { };
struct Object3 { };

constexpr auto my_map = map{
    pair{[]{ return "foo"; }, t<FooObject>},
    pair{[]{ return "bar"; }, t<BarObject>},
    pair{[]{ return "baz"; }, t<Object3>}
};

int main()
{
    for_kv_pairs("bar", my_map);
}
```
Output:
>
> 9BarObject
>
>
>
[**live example on wandbox.org**](https://wandbox.org/permlink/UNensuovjrsNhoyC)
Upvotes: 2 <issue_comment>username_2: You can use higher-order macros (or [x-macros](https://en.wikipedia.org/wiki/X_Macro)) to generate code like that, for example:
```
#define each_item(item, sep) \
item("foo", FooObject) sep \
item("bar", BarObject) sep \
item("baz", Object3)
#define item_init(item_name, item_type) \
if (name == item_name) { \
initItem(itemList); \
}
each\_item(item\_init, else)
```
Upvotes: 1
|
2018/03/19
| 927 | 3,695 |
<issue_start>username_0: I am trying to find a simple way to update the form fields with Validators. For now I do the below:
```
ngOnInit() {
this.form.get('licenseType').valueChanges.subscribe(value => {
this.licenseChange(value);
})
}
licenseChange(licenseValue: any) {
if (licenseValue === 2) {
this.form.get('price').setValidators([Validators.required]);
this.form.get('price').updateValueAndValidity();
this.form.get('noOfLicenses').setValidators([Validators.required]);
this.form.get('noOfLicenses').updateValueAndValidity();
this.form.get('licenseKey').setValidators([Validators.required]);
this.form.get('licenseKey').updateValueAndValidity();
this.form.get('supportNo').setValidators([Validators.required]);
this.form.get('supportNo').updateValueAndValidity();
this.form.get('purchasedFrom').setValidators([Validators.required]);
this.form.get('purchasedFrom').updateValueAndValidity();
//......others follows here
}
else {
this.form.get('price').clearValidators(); this.form.get('price').updateValueAndValidity();
this.form.get('noOfLicenses').clearValidators(); this.form.get('noOfLicenses').updateValueAndValidity();
this.form.get('licenseKey').clearValidators(); this.form.get('licenseKey').updateValueAndValidity();
this.form.get('supportNo').clearValidators(); this.form.get('supportNo').updateValueAndValidity();
this.form.get('purchasedFrom').clearValidators(); this.form.get('purchasedFrom').updateValueAndValidity();
//......others follows here
}
}
```
Is this the only way to add and update validators or is there any other way to achieve this. For now I am calling the updateValueAndValidity() after setting/clearing each field.
**Update**
Something like
```
licenseChange(licenseValue: any) {
if (licenseValue === 2) {
this.form.get('price').setValidators([Validators.required]);
//......others follows here
}
else{
//......
}
}
this.form.updateValueAndValidity(); // only one call at the bottom, covering all the fields
```<issue_comment>username_1: **I did something similar to this**
```
licenseChange(licenseValue: any) {
if (licenseValue === 2) {
this.updateValidation(true,this.form.get('price'));
//......others follows here
}
else {
this.updateValidation(false,this.form.get('price'));
//......others follows here
}
}
//TODO:To update formgroup validation
updateValidation(value, control: AbstractControl) {
if (value) {
control.setValidators([Validators.required]);
}else{
control.clearValidators();
}
control.updateValueAndValidity();
}
```
**If you want to do this for all the controls inside your form**
```
licenseChange(licenseValue: any) {
for (const field in this.form.controls) { // 'field' is a string
const control = this.form.get(field); // 'control' is a FormControl
      (licenseValue === 2) ? this.updateValidation(true, control)
                           : this.updateValidation(false, control);
}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I did it like below:
```
this.form.get('licenseType').valueChanges.subscribe(value => {
this.licenseChange(value, this.form.get('price'));
//....Others
}
licenseChange(licenseValue: any, control: AbstractControl) {
licenseValue === 2 ? control.setValidators([Validators.required]) : control.clearValidators();
control.updateValueAndValidity();
}
```
Upvotes: 0
|
2018/03/19
| 1,258 | 4,510 |
<issue_start>username_0: I'm new to rxswift and here's my problem:
Suppose I have observable of actions: Observable.of("do1", "do2", "do3")
Now this observable mapped to function that returns observable:
```
let actions = Observable.of("do1", "do2", "do3")
func do(action: String) -> Observable {
// do something
// returns observable
}
let something = actions.map { action in return do(action) } ???
```
How can I wait for do1 to complete first, then execute do2, then do3?
Edit: Basically i want to achieve sequential execution of actions. do3 waits for do2 result, do2 waits for do1 result.
Edit2: I've tried using flatmap and subscribe, but all actions runs in parallel.<issue_comment>username_1: Use `flatMap` or `flatMapLatest`. You can find more about them at [reactivex.io](http://reactivex.io/documentation/operators.html).
Upvotes: -1 <issue_comment>username_2: You can `flatMap` and `subscribe` to the `Result` observable sequence by calling `subscribe(on:)` on the final output.
```
actions
.flatMap { (action) -> Observable in
return self.doAction(for: action)
}
.subscribe(onNext: { (result) in
print(result)
})
func doAction(for action: String) -> Observable {
//...
}
```
Read:
* <https://medium.com/ios-os-x-development/learn-and-master-%EF%B8%8F-the-basics-of-rxswift-in-10-minutes-818ea6e0a05b>
Upvotes: -1 <issue_comment>username_3: >
> How can I wait for do1 to complete first, then execute do2, then do3?
>
>
>
I think `concatMap` solves the problem.
Let's say we have some service and some actions we need to perform on it, for instance a backend against which we'd like to authenticate and store some data. The actions are **login** and **store**. We can't **store** any data if we aren't logged in, so we need to wait for **login** to complete before processing any **store** action.
While `flatMap`, `flatMapLatest` and `flatMapFirst` execute observables in parallel, `concatMap` waits for your observables to complete before moving on.
```
import Foundation
import RxSwift
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
let service: [String: Observable<String>] = [
"login": Observable.create({
observer in
observer.onNext("login begins")
DispatchQueue.main.asyncAfter(deadline: .now() + 1.0, execute: {
observer.onNext("login completed")
observer.onCompleted()
})
return Disposables.create()
}),
"store": Observable.create({
observer in
observer.onNext("store begins")
DispatchQueue.main.asyncAfter(deadline: .now() + 0.2, execute: {
observer.onNext("store completed")
observer.onCompleted()
})
return Disposables.create()
}),
]
// flatMap example
let observeWithFlatMap = Observable.of("login", "store")
.flatMap {
action in
service[action] ?? .empty()
}
// flatMapFirst example
let observeWithFlatMapFirst = Observable.of("login", "store")
.flatMapFirst {
action in
service[action] ?? .empty()
}
// flatMapLatest example
let observeWithFlatMapLatest = Observable.of("login", "store")
.flatMapLatest {
action in
service[action] ?? .empty()
}
// concatMap example
let observeWithConcatMap = Observable.of("login", "store")
.concatMap {
action in
service[action] ?? .empty()
}
// change assignment to try different solutions
//
// flatMap: login begins / store begins / store completed / login completed
// flatMapFirst: login begins / login completed
// flatMapLatest: login begins / store begins / store completed
// concatMap: login begins / login completed / store begins / store completed
let observable = observeWithConcatMap
observable.subscribe(onNext: {
print($0)
})
```
Upvotes: 3 <issue_comment>username_4: I just faced the same problem, and finally found the solution.
I expected my devices to disconnect one after another, so I did as follows:
I just created the func like
```
func disconnect(position: WearingPosition) -> Completable{
print("test run")
return Completable.create { observer in
print("test run 2")
// Async process{
// observer(.complete)
// }
return Disposables.create()
}
}
```
And use like:
```
self.disconnect(position: .left_wrist).andThen(Completable.deferred({
return self.disconnect(position: .right_wrist)
})).subscribe(onCompleted: {
// do some things
}) { (error) in
print(error)
}.disposed(by: self.disposeBag)
```
The key is the usage of `Completable.deferred`.
I have tested it; the "test run" messages are printed.
Upvotes: 0
|
2018/03/19
| 673 | 2,599 |
<issue_start>username_0: Is there a way to structure `const reducer = (state = initialState, action)` in such a manner that the method isn't bloated by a bunch of switch cases?
My idea was to put related actions in arrays and check them with `Array.prototype.includes()` when handling an action.
I would then extract the switch cases that correlate to specific actions in new methods (for example the `List` component would have `LIST_ADD`, `LIST_REMOVE` etc.) and call those methods instead of just running through 100 cases in the `const reducer = (state = initialState, action)`method.
That would tax performance but it would be at least structured.
Any better ideas?<issue_comment>username_1: I use a library called reduxsauce which removes the need for large switch statements.
<https://github.com/infinitered/reduxsauce>
Instead it binds actions to methods with this syntax:
```
export const INITIAL_STATE = {
values: {},
}
export const reducerFunction = (state, action) => {
const values = action.value;
return {
...state,
values,
};
};
// map the action types to the reducer functions
export const HANDLERS = {
[Type.ACTION_NAME]: reducerFunction,
...
}
// call createReducer to magically tie it all together
export default createReducer(INITIAL_STATE, HANDLERS);
```
Upvotes: 0 <issue_comment>username_2: The offical Redux [docs](https://redux.js.org/recipes/reducing-boilerplate#reducers) provide this very handy reducer creator:
```
function createReducer(initialState, handlers) {
return function reducer(state = initialState, action) {
if (handlers.hasOwnProperty(action.type)) {
return handlers[action.type](state, action)
} else {
return state
}
}
}
```
which lets you create your reducer as follows:
```
const reducer = createReducer(initialState, {
[actionType.ACTION1]: specificActionReducer1,
[actionType.ACTION2]: specificActionReducer2,
})
```
No switch statements!
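For completeness, here is a tiny runnable sketch of this handler-map pattern (the action types and the `count` state below are made up purely for illustration):

```javascript
// Minimal sketch of the lookup-table reducer pattern (names are illustrative).
function createReducer(initialState, handlers) {
  return function reducer(state = initialState, action) {
    if (Object.prototype.hasOwnProperty.call(handlers, action.type)) {
      return handlers[action.type](state, action);
    }
    return state;
  };
}

// Two tiny action-specific reducers.
const increment = (state) => ({ ...state, count: state.count + 1 });
const reset = () => ({ count: 0 });

const reducer = createReducer({ count: 0 }, {
  INCREMENT: increment,
  RESET: reset,
});

const s1 = reducer(undefined, { type: "INCREMENT" }); // { count: 1 }
const s2 = reducer(s1, { type: "UNKNOWN" });          // still { count: 1 }
const s3 = reducer(s2, { type: "RESET" });            // { count: 0 }
```

Unknown action types simply fall through to the current state, which is exactly what the `else` branch in `createReducer` provides.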
Upvotes: 2 [selected_answer]<issue_comment>username_3: You could try [redux-named-reducers](https://github.com/mileschristian/redux-named-reducers) for this as well. Allows you to compose reducers like so:
```
moduleA.reduce(SOME_ACTION, action => ({ state1: action.payload }))
moduleA.reduce(SOME_OTHER_ACTION, { state2: "constant" })
```
It has the added benefit of being able to access the reducer state anywhere, like in mapDispatchToProps for example:
```
const mapDispatchToProps = dispatch => {
return {
onClick: () => {
dispatch(someAction(getState(moduleA.state1)));
}
};
};
```
Upvotes: 0
|
2018/03/19
| 725 | 2,677 |
<issue_start>username_0: I need the output of variables from `Invoke-Command`, but when printed they show as empty; below is the sample code:
```
$ServiceName = "Service"
Invoke-Command -ScriptBlock {
try {
iisreset
$BodyContent += "Server: **$server** IIS reset completed
"
}
catch {
$BodyContent += "Server: **$server** is Failed to restart IIS
"
$ErrorStat = 1
}
try {
Stop-Service -DisplayName $using:ServiceName
$BodyContent += "Server: **$server** is successfully Stopped $using:ServiceName
"
}
catch {
$BodyContent += "Server: **$server** is failed to Stop $using:ServiceName
"
$ErrorStat = 1
}
try {
Start-Service -DisplayName $using:ServiceName
$BodyContent += "Server: **$server** is successfully Started $using:ServiceName
"
}
catch {
$BodyContent += "Server: **$server** is failed to Start $using:ServiceName
"
$ErrorStat = 1
}
} -ComputerName $server -Credential $user -ErrorAction Stop
```
Here, I want to capture `$BodyContent` and `$ErrorStat`<issue_comment>username_1: Add some output to the very end of given scriptblock, e.g.
```
@{ BodyContent = $BodyContent
ErrorStat = $ErrorStat
}
```
If you use e.g.
```
$result = Invoke-Command -ScriptBlock {
### original script block body here
@{ BodyContent = $BodyContent
ErrorStat = $ErrorStat
}
} -ComputerName $server -Credential $user -ErrorAction Stop
```
then you can check
```
$result.BodyContent
$result.ErrorStat
```
Upvotes: 0 <issue_comment>username_2: `Invoke-Command` returns to you what is printed to the end of the pipeline. If you want to return a variable you should `Return` like:
```
$ret = Invoke-Command -ScriptBlock { $var="test string"; return $var; }
```
where `$ret` contains now the value `test string`.
When you have multiple variables to return, you can join them into a single object, e.g. like this:
```
$str1 = "test"
$str2 = "123"
$combinedObjs = New-Object PSObject -Property @{1 = $str1; 2 = $str2}
```
Now you can combine it all
```
$ret = Invoke-Command -ScriptBlock {
$str1 = "test";
$str2 = "123";
$combinedObjs = @{val1 = $str1; val2 = $str2};
Return $combinedObjs;
}
```
now `$ret` contains
```
Name Value
---- -----
val1 test
val2 123
```
and you can access them by calling `$ret.val1` or `$ret.val2`
Upvotes: 1
|
2018/03/19
| 1,061 | 4,025 |
<issue_start>username_0: Using the Microsoft Bot Framework, I created a chat bot and I am able to test the bot using the local Emulator. The bot service also works with LUIS and everything again works just fine. My bot service has `MicrosoftAppId` and `MicrosoftAppPassword` in `web.config`.
I deployed the service to Azure, then embedded a web chat in a browser ([Github](https://github.com/Microsoft/BotFramework-WebChat)), but it doesn't work. The error says:
```
Buffer="{
"message": "An error has occurred.",
"exceptionMessage": "Unauthorized",
"exceptionType": "Microsoft.Bot.Connector.MicrosoftAppCredentials+OAuthException",
"stackTrace": " at Microsoft.Bot.Sample.LuisBot.MessagesController.d\_\_0.MoveNext()
......
"innerException": {
"message": "An error has occurred.",
"exceptionMessage": "Response status code does not indicate success: 401 (Unauthorized).",
"exceptionType": "System.Net.Http.HttpRequestException",
"stackTrace": " at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode()\r\n at Microsoft.Bot.Connector.MicrosoftAppCredentials.d\_\_26.MoveNext()"
```
I believe when I pass my Web Chat Secret Key (I have also tried to pass generated Token but no luck), then it should be authorized (passing `App Id` and `Password`) automatically. Is it not?
1. Browser embedded web chat gives Unauthorized error.
2. Test in Web Chat windows gives same error.
I have also tried to deploy Bot service by removing `MicrosoftAppId` and `MicrosoftAppPassword` but nothing changes.
Its all very new to me. Could someone please help me to get some clarity on this issue? Also is it fine to give away my Microsoft App Id publicly?
**Edit 1**
Implementation of Web Chat control in Browser.
1. Created ASP.Net empty website and have a HTML page (index.html).
2. Following code is written in index.html page.
```html
---
```
3. Microsoft App Id = `f0f7297e-7a95-4a8f-afc1-eb3d19906cd0`
4. Screen rendered via my code looks like this. And when I send message, it says Couldn't Send. In Azure Diagnostics for App Service, I see error as Unauthorized.
[](https://i.stack.imgur.com/3XGiU.png)<issue_comment>username_1: >
> 401 (Unauthorized)
>
>
>
You can try the following steps to troubleshoot the issue:
1. Please check if you specify the correct `MicrosoftAppID` and `MicrosoftAppPassword` for your bot on **Application Settings** blade.
[](https://i.stack.imgur.com/0b8Gw.png)
2. Please check if you specify the correct message endpoint for the bot on the **[Settings](https://learn.microsoft.com/en-us/bot-framework/bot-service-manage-settings)** blade.
[](https://i.stack.imgur.com/SATlB.png)
***Note***: *You can also try to **Manage** your MicrosoftApp and **Generate New Password** for that app, and specify `MicrosoftAppPassword` with new generated password on Application Settings blade and check if it works with new password.*
3. Please try to create a new bot on Azure portal, and publish your bot application to corresponding web application that you specified as messaging endpoint, and check if that new bot works fine.
Besides, if it still does not work as expected even if you set correct configuration, you can try to [create support request](https://learn.microsoft.com/en-us/azure/azure-supportability/how-to-create-azure-support-request) to report it.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I also faced this error "**Error: Unauthorized. Invalid AppId passed on token**" while trying to access a bot deployed in Azure from the Bot Emulator. The solution was to go to the bot registration > Access control and add myself as an owner of the bot, and it started working for me in the emulator. I used http://localhost:port, didn't provide the MicrosoftAppId and MicrosoftAppPassword in the emulator, and was able to debug the bot locally.
Upvotes: 1
|
2018/03/19
| 744 | 2,448 |
<issue_start>username_0: I have two dates i.e. **2/02/2016** & **19/03/2018**
i am trying to fetch months & year between this dates as
*output should be*
>
> Feb 2016, Mar 2016 ......and so on.... Jan 2018, Feb 2018, Mar 2018.
>
>
>
Tried month Gap code -
```
let date1 = DateComponents(calendar: .current, year: 2016, month: 2, day: 2).date!
let date2 = DateComponents(calendar: .current, year: 2018, month: 3, day: 19).date!
let monthGap = Calendar.current.dateComponents([.month], from: date1, to: date2)
print("monthGap is \(String(describing: monthGap.month))")//prints monthGap is 25
```<issue_comment>username_1: Here is a simple method that returns the months and years in the format you described above:
```
func getMonthAndYearBetween(from start: String, to end: String) -> [String] {
let format = DateFormatter()
format.dateFormat = "dd/MM/yyyy"
guard let startDate = format.date(from: start),
let endDate = format.date(from: end) else {
return []
}
let calendar = Calendar(identifier: .gregorian)
let components = calendar.dateComponents(Set([.month]), from: startDate, to: endDate)
var allDates: [String] = []
let dateRangeFormatter = DateFormatter()
dateRangeFormatter.dateFormat = "MMM yyyy"
for i in 0 ... components.month! {
guard let date = calendar.date(byAdding: .month, value: i, to: startDate) else {
continue
}
let formattedDate = dateRangeFormatter.string(from: date)
allDates += [formattedDate]
}
return allDates
}
```
And you call it like,
```
getMonthAndYearBetween(from: "2/02/2016", to: "19/03/2018")
```
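For comparison, the same month-stepping idea can be sketched in plain JavaScript (purely illustrative, not part of the Swift answer; note that JavaScript months are 0-based, so `new Date(2016, 1, 2)` is 2 Feb 2016):

```javascript
// Illustrative JavaScript version of the month-stepping logic above.
function monthsBetween(start, end) {
  const out = [];
  const names = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
                 "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
  let y = start.getFullYear();
  let m = start.getMonth();
  // Walk month by month until we pass the end date's month.
  while (y < end.getFullYear() || (y === end.getFullYear() && m <= end.getMonth())) {
    out.push(`${names[m]} ${y}`);
    m += 1;
    if (m === 12) { m = 0; y += 1; }
  }
  return out;
}

const labels = monthsBetween(new Date(2016, 1, 2), new Date(2018, 2, 19));
// labels[0] === "Feb 2016", last element === "Mar 2018", 26 entries in total
```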
Upvotes: 4 [selected_answer]<issue_comment>username_2: **Swift 5**
Based on username_1's answer, this static `Date` extension function takes `Date` objects as input and returns an array of `Date` objects, one per month between the two dates:
```
extension Date {
static func getMonthAndYearBetween(from start: Date, to end: Date) -> [Date] {
var allDates: [Date] = []
guard start < end else { return allDates }
let calendar = Calendar.current
let month = calendar.dateComponents([.month], from: start, to: end).month ?? 0
for i in 0...month {
if let date = calendar.date(byAdding: .month, value: i, to: start) {
allDates.append(date)
}
}
return allDates
}
}
```
Upvotes: 2
|
2018/03/19
| 680 | 2,395 |
<issue_start>username_0: I've searched a lot and can't find a solution to my problem (I've seen similar runtime issues but not build).
I have a .NET 4.7.1 project (class lib) that references a .NET Core project/library. When I try to build I get the following build error:
>
> 'System.Runtime, Version=4.2.0.0, Culture=neutral,
> PublicKeyToken=<KEY>' which has a higher version than
> referenced assembly 'System.Runtime' with identity 'System.Runtime,
> Version=4.1.2.0, Culture=neutral, PublicKeyToken=<KEY>'
>
>
>
The code is simple and is just calling an async method in the .NET Core lib. A bit like:
```
return _dotNetCoreClass.MethodAsync();
```
I've tried upgrading the project so it uses ProjectReferences and not packages.config. I've installed the System.Runtime package (version 4.3.0). I've updated my project file to include the below:
```
<AutoGenerateBindingRedirects>true</AutoGenerateBindingRedirects>
<GenerateBindingRedirectsOutputType>true</GenerateBindingRedirectsOutputType>
```
I've also installed the latest NETStandard.Library package, but I can't get rid of the build error.
>
> It turns out there is an issue with .NET Core SDK, the issue is being
> tracked on GitHub, <https://github.com/dotnet/sdk/issues/1738>. By
> accident this issue does not repro with Visual Studio 15.5 release. If
> anybody would like to see if there is any workaround before the SDK
> issue is fixed, feel free to file an issue on
> <https://github.com/aspnet/Mvc/issues>.
>
>
>
For the moment, there is no way to fix your solution as 4.7.1 cannot support .Net Core.
Upvotes: 1 <issue_comment>username_2: I thought it was important to post the solution I found and to highlight where my understanding went wrong (thanks to @jeroen-mostert and @hans-passant for steering me in the right direction).
Not all of the .NET Core 2.0 framework is supported by .NET 4.7.1, so some of the code we were using was not supported and hence raised a build error in the application referencing the .NET Core library.
Changing the .NET Core library to use .NET Standard 2.0 (which is fine because our other .NET Core applications that will also use the library are supported by .NET Standard) and then removing and re-adding the references in the application using it solved our problems and the application now builds.
Upvotes: 3
|
2018/03/19
| 365 | 1,465 |
<issue_start>username_0: I want to delete only the 1st child (`L7jrJ6DtQWrmZsC4zvT`) from Firebase database with an option in an Android app. I searched several places and could not find it. You only have one option to delete a whole database. Can anyone help?
[](https://i.stack.imgur.com/YTPbQ.jpg)<issue_comment>username_1: If I understood you correctly, you have the key of the child you want to remove and this child has no parent, so you can just use a single-value event listener like this -
```
val ref = FirebaseDatabase.getInstance().reference
val applesQuery = ref.child("L7jrJ6DtQWrmZsC4zvT")
applesQuery.addListenerForSingleValueEvent(
object : ValueEventListener {
override fun onDataChange(dataSnapshot: DataSnapshot) {
dataSnapshot.ref.removeValue()
}
override fun onCancelled(databaseError: DatabaseError) {
Log.e("frgrgr", "onCancelled", databaseError.toException())
}
})
```
this should work, let me know otherwise.
Upvotes: 0 <issue_comment>username_2: To delete a single object you can use `removeValue()` method directly on the reference like this:
```
DatabaseReference rootRef = FirebaseDatabase.getInstance().getReference();
rootRef.child("calendario").child("-L7jrJ6DtQWrmZsC4zvT").removeValue();
```
Upvotes: 1
|
2018/03/19
| 309 | 1,102 |
<issue_start>username_0: I am trying to redirect the current html page on screen to another html file, both files share the same location.
I know there are a lot online about this info but it is not working for me, my app just crashed
The app loads the html file from the .java file in android by using this code:
```
WebView view = new WebView(this);
view.getSettings().setJavaScriptEnabled(true);
view.loadUrl("file:///android_asset/htmlfile.html");
setContentView(view);
```
Here is what i have tried
```
[GO!](android_asset/home.html)
```
I have also tried to use a form onclick
```
```
Help is appreciated<issue_comment>username_1: As you are saying both files have the same location,
So use the code below (you just need to remove 'android_asset/' from the href attribute):
```
[GO!](home.html)
```
Upvotes: -1 <issue_comment>username_2: Try this (change the myDomain value to the URL of your first page):
```js
function goToInternalLink(path) {
    var myDomain = 'http://example.com'; // replace with the base URL of your first page
    window.location.href = myDomain + "/" + path;
}
```
```html
```
Upvotes: 0
|
2018/03/19
| 679 | 2,146 |
<issue_start>username_0: How do I join different tables with a pivot table?
I have 4 tables like
```
users
id | name |
-------------
1 | abc |
2 | ccc |
user_profile
id | user_id | email |
-------------------------------
1 | 1 | <EMAIL>
2 | 2 | <EMAIL>
skills
id | skill_name |
--------------------------
1 | java |
2 | php |
user_skills
user_id | skill_id |
---------------------------
1 | 1 |
1 | 2 |
2 | 1 |
```
The result should be
```
name | email | skills |
----------------------------------
abc |<EMAIL> | java, php |
ccc |<EMAIL> | java |
```
I am able to join multiple tables, but I have a problem joining the pivot table.
I have tried the query below:
```
SELECT users.name,user_profiles.email, group_concat(programs.name)
from users
JOIN user_profiles on user_profiles.user_id = users.id
LEFT JOIN user_skills on user_skills.user_id = users.id
LEFT JOIN skills on user_skills.skill_id = skills.id
GROUP BY users.id
```
Can anyone help me with this please? Thanks<issue_comment>username_1: Your query should work. Perhaps the problem is the reference to `programs` rather than `skills`:
```
select u.name, up.email, group_concat(s.name)
from users u join
user_profiles up
on up.user_id = u.id left join
user_skills us
on us.user_id = u.id left join
skills s
on us.skill_id = s.id
group by u.name, up.email;
```
Upvotes: 0 <issue_comment>username_2: You need `GROUP_CONCAT` to generate the CSV list of skills:
```
SELECT
u.name,
up.email,
GROUP_CONCAT(s.skill_name) AS skills
FROM users u
INNER JOIN user_profile up
ON u.id = up.user_id
LEFT JOIN user_skills us
ON u.id = us.user_id
INNER JOIN skills s
ON us.skill_id = s.id
GROUP BY
u.id, u.name, up.email;
```
[Demo](http://sqlfiddle.com/#!9/0d0655/2)
Note that I group by both the user's id and name, because two users could happen to have the same name. Follow the Demo link above for a running SQLFiddle.
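To make the aggregation concrete, here is a small JavaScript sketch that reproduces what the joins plus `GROUP_CONCAT` do to the sample rows from the question (illustration only; `GROUP_CONCAT`'s default separator is a comma):

```javascript
// Sample rows copied from the question (emails omitted, as they are redacted there).
const users = [{ id: 1, name: "abc" }, { id: 2, name: "ccc" }];
const skills = [{ id: 1, skill_name: "java" }, { id: 2, skill_name: "php" }];
const userSkills = [
  { user_id: 1, skill_id: 1 },
  { user_id: 1, skill_id: 2 },
  { user_id: 2, skill_id: 1 },
];

// Join users -> user_skills -> skills, then concatenate skill names per user,
// mimicking GROUP BY u.id + GROUP_CONCAT(s.skill_name).
const result = users.map(u => {
  const names = userSkills
    .filter(us => us.user_id === u.id)
    .map(us => skills.find(s => s.id === us.skill_id).skill_name);
  return { name: u.name, skills: names.join(",") };
});
// result: [{ name: "abc", skills: "java,php" }, { name: "ccc", skills: "java" }]
```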
Upvotes: 2 [selected_answer]
|