date | nb_tokens | text_size | content
---|---|---|---
2018/03/14 | 1,036 | 3,996 |
<issue_start>username_0: I am trying to create a `.bat` file to install SQL Server 2014 Express silently, and with my own options, but each time it runs, it quits in the middle.
Here is my command:
```
C:> SQLEXPR_x86_EN /ACTION=Install /QS /BROWSERSVCSTARTUPTYPE=Automatic
/ADDCURRENTUSERASSQLADMIN=True /IACCEPTSQLSERVERLICENSETERMS
/FEATURES=SQLEngine /SQLSVCSTARTUPTYPE=Automatic
/INSTANCENAME="MX_SERVER_SQL" /NPENABLED=1 /TCPENABLED=1
/AGTSVCACCOUNT="NT AUTHORITY\Network Service"
/SQLSVCACCOUNT="NT AUTHORITY\Network Service"
/SQLSYSADMINACCOUNTS=BUILTIN\Administrators /SECURITYMODE=SQL
/SAPWD=My_123456
```
Why does it not run to the end?
ERROR > The installation starts, but at some point after it begins loading the files it quits, stops the installation, and nothing gets installed<issue_comment>username_1: I think the root of your problem is the asynchronous event handling. After you've sent the event, your test is simply over. This will - of course - cause the Spring context and the broker to shut down. The JMS listeners are running in another thread; you must find a way to wait for them. Otherwise, your thread (which is your test case) is just done.
We faced a similar problem in our last project and wrote a small utility which helped us a lot. JMS offers the ability to "browse" a queue and check whether it's empty:
```
public final class JmsUtil {
    private static final int MAX_TRIES = 5000;
    private final JmsTemplate jmsTemplate;

    public JmsUtil(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    private int getMessageCount(String queueName) {
        return jmsTemplate.browseSelected(queueName, "true = true",
                (s, qb) -> Collections.list(qb.getEnumeration()).size());
    }

    // Polls until the queue is empty, giving up after MAX_TRIES checks.
    public void waitForAll(String queueName) {
        int i = 0;
        while (i <= MAX_TRIES) {
            if (getMessageCount(queueName) == 0) {
                return;
            }
            i++;
        }
    }
}
```
With this utility you could do something like this:
```
def "My event is successfully handled"() {
given:
def event = new MyEvent()
when:
jmsTemplate.convertAndSend("events", event)
jmsUtility.waitForAll("events"); // wait until the event has been consumed
jmsUtility.waitForAll("commands"); // wait until the command has been consumed
then:
1 == 1
}
```
Note: This utility assumes that you send JMS messages to a queue. By browsing the queue we can check if it's empty. In case of a topic you might need to do another check. So be aware of that!
Upvotes: 3 [selected_answer]<issue_comment>username_2: Well, this is a standard problem when testing systems based on async message exchange. Usually, it's solved in the part of the test that you skipped - the `then` part.
The thing is that in your tests you usually expect the system to do something useful, e.g. make changes in the DB, send a rest call to another system, send a message in another queue etc. **We could wait some time until it happened by constantly checking the result - if the result is achieved within the time window that we have set - then we can assume the test has passed.**
The pseudo code of this approach is the following:
```
for (int i = 0; i < MAX_RETRIES; i++) {
    checkThatTheChangesInDBHasBeenMade();
    checkThatTheRestCallHasBeenMade();
    checkThatTheMessageIsPostedInAnotherQueue();
    Thread.sleep(50); // 50 ms between retries
}
```
This way, in the best case your test will pass in 50 ms. In the worst case it will fail, and executing the test will take MAX_RETRIES * 50 ms.
Also, I should mention that there is a nice tool called [awaitility](https://github.com/awaitility/awaitility/wiki/Getting_started) that provides a nice API (by the way, it supports a Groovy DSL) to handle this kind of problem in the async world:
```
await().atMost(5, SECONDS).until(customerStatusIsUpdated());
```
Upvotes: 2
2018/03/14 | 1,172 | 3,825 |
<issue_start>username_0: I am trying to make an executable which can read itself from memory using the **ReadProcessMemory** API of Windows.
Then I will use this to calculate the checksum of the executable.
This is my code :
```
#define PSAPI_VERSION 1
/* Includes reconstructed from the API calls used below: */
#include <windows.h>
#include <stdio.h>
#include <string.h>
#include <tchar.h>
#include <psapi.h>
#include <wincrypt.h>
#define BUFSIZE 1024
#define MD5LEN 16
// To ensure correct resolution of symbols, add Psapi.lib to TARGETLIBS
#pragma comment(lib, "psapi.lib")
int main(void)
{
    HWND hMyProcess = (HWND)(GetCurrentProcess());
    HMODULE hModule = (HMODULE)GetModuleHandle(NULL);
    TCHAR szProcessName[MAX_PATH] = TEXT("");
    MODULEINFO moduleInfo;
    if (hModule != NULL && hMyProcess != NULL) {
        // if (GetModuleInformation())
        GetModuleBaseName(hMyProcess, hModule, szProcessName, MAX_PATH);
        printf("%s\n", szProcessName);
        if (GetModuleInformation(hMyProcess, hModule, &moduleInfo, sizeof(moduleInfo))) {
            printf("lpBaseOfDLL : %x\n", moduleInfo.lpBaseOfDll);
            printf("Entry Point : %x\n", moduleInfo.EntryPoint);
            printf("SizeOfImage : %x\n", moduleInfo.SizeOfImage);
        }
    }
    // Till here working fine, problem lies below
    // read process memory
    TCHAR *hexEXE;
    SIZE_T *lpNumberOfBytesRead;
    if (ReadProcessMemory(hMyProcess, moduleInfo.lpBaseOfDll,
                          hexEXE, moduleInfo.SizeOfImage, 0)) {
        //printf("%s\n", hexEXE);
        printf("Read memory\n");
        printf("%d \n", strlen(hexEXE));
    }
    // will be implemented later, taken from --> https://msdn.microsoft.com/en-us/library/aa382380(VS.85).aspx
    DWORD dwStatus = 0;
    BOOL bResult = FALSE;
    HCRYPTPROV hProv = 0;
    HCRYPTHASH hHash = 0;
    /*if (!CryptAcquireContext(&hProv, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT)) {
        dwStatus = GetLastError();
        printf("CryptAcquireContext failed: %d\n", dwStatus);
        //CloseHandle(hFile);
        return dwStatus;
    }
    if (!CryptCreateHash(hProv, CALG_MD5, 0, 0, &hHash)) {
        dwStatus = GetLastError();
        printf("CryptAcquireContext failed: %d\n", dwStatus);
        //CloseHandle(hFile);
        CryptReleaseContext(hProv, 0);
        return dwStatus;
    }*/
    return 0;
}
```
Problem :
---------
I am not able to read my own process's memory; it's the first time I'm using **WinAPI**, so perhaps I am using the function in some wrong way.
The program just hangs and it shows "Windows has encountered some problem..."
Possible Reasons of Error :
---------------------------
I think the handle to the process (**hMyProcess**) I'm getting earlier doesn't have the required privileges (**PROCESS_VM_READ**). How do I verify that, and if it doesn't, how do I get the correct privileges?<issue_comment>username_1: Sorry for the extended discussion, but I got it running by iterating over the memory directly through a *for loop* instead of using **ReadProcessMemory**, like this:
```
long *baseAddress = (long *)moduleInfo.lpBaseOfDll;
printf("%d %x\n", baseAddress, baseAddress);
// Note: pointer arithmetic on a long* advances sizeof(long) bytes per step,
// so this loop actually walks well past SizeOfImage bytes.
for (int i = 0; i < moduleInfo.SizeOfImage; ++i) {
    long *addressToRead = baseAddress + i;
    printf("%x : %x\n", addressToRead, *addressToRead);
}
```
Here's the output :
[](https://i.stack.imgur.com/SACTW.png)
Further thoughts
----------------
However, I don't understand why I am not able to get it using **ReadProcessMemory**.
Upvotes: -1 <issue_comment>username_2: ```
TCHAR *hexEXE;
SIZE_T *lpNumberOfBytesRead;
hexEXE = malloc(moduleInfo.SizeOfImage);  // allocate a buffer for the image
if (ReadProcessMemory(hMyProcess, moduleInfo.lpBaseOfDll,
                      hexEXE, moduleInfo.SizeOfImage, 0)) {
    //printf("%s\n", hexEXE);
    printf("Read memory\n");
    //hexEXE is not a string. Don't use it in strlen.
    //printf("%d \n", strlen(hexEXE));
    printf("%d \n", moduleInfo.SizeOfImage);
}
```
ReadProcessMemory needs memory to store the image, so `hexEXE` must be assigned an allocated memory buffer first.
Upvotes: 0
2018/03/14 | 897 | 2,600 |
<issue_start>username_0: In Ruby you can do something like this:
```rb
a = ["a", "b", "c"]
a.cycle {|x| puts x } # print, a, b, c, a, b, c,.. forever.
a.cycle(2) {|x| puts x } # print, a, b, c, a, b, c.
```
and this is just beautiful.
The closest analog in Java 8 would be like this:
```java
Stream<Integer> iterator = Stream.iterate(new int[] {0, 0}, p -> new int[]{p[0] + 1, (p[0] + 1) % 2}).map(el -> el[1]);
Iterator<Integer> iter = iterator.iterator();
System.out.println(iter.next());//0
System.out.println(iter.next());//1
System.out.println(iter.next());//0
System.out.println(iter.next());//1
```
Is there a better way and more idiomatic to do it in Java?
**Update**
Just want to outline here that the closest solution to my problem was
```
IntStream.generate(() -> max).flatMap(i -> IntStream.range(0, i))
```
Thanks to @Hogler<issue_comment>username_1: Well if you define the array variable outside the stream, you can use indexes instead. And you will have something like:
```java
String[] array = { "a", "b", "c" };
Stream.iterate(0, i -> (i + 1) % array.length)
.map(i -> array[i])
.forEach(System.out::println); // prints a, b, c forever
Stream.iterate(0, i -> (i + 1) % array.length)
.map(i -> array[i])
.limit(2 * array.length)
.forEach(System.out::println); // prints a, b, c 2 times
```
Also, if you use [nCopies](https://docs.oracle.com/javase/7/docs/api/java/util/Collections.html#nCopies(int,%20T)) you don't need to use `array.length`:
```java
Collections.nCopies(2, array).stream()
.flatMap(Arrays::stream)
.forEach(System.out::println); // prints a, b, c 2 times
```
It is obviously longer than the Ruby version, but that's how Java usually is (more verbose).
Upvotes: 2 <issue_comment>username_2: You may use
```
String[] array = { "a", "b", "c" };
Stream.generate(() -> array).flatMap(Arrays::stream).forEach(System.out::println);
```
to print `a` `b` `c` forever and
```
String[] array = { "a", "b", "c" };
Stream.generate(() -> array).limit(2).flatMap(Arrays::stream).forEach(System.out::println);
```
to print `a` `b` `c` two times.
This doesn’t even require an existing array:
```
Stream.generate(() -> null)
.flatMap(x -> Stream.of("a", "b", "c"))
.forEach(System.out::println);
```
resp.
```
Stream.generate(() -> null).limit(2)
.flatMap(x -> Stream.of("a", "b", "c"))
.forEach(System.out::println);
```
you could also use
```
IntStream.range(0, 2).boxed()
.flatMap(x -> Stream.of("a", "b", "c"))
.forEach(System.out::println);
```
Upvotes: 4 [selected_answer]
2018/03/14 | 189 | 576 |
<issue_start>username_0: Hi guys, I need a layout divided into two buttons, HeightA: 50% - HeightB: 50%.
Like This:
[layout 2 button 50%](https://i.stack.imgur.com/1NoUm.png)
Is it possible, even with devices of different sizes?
Thanks
**EDIT:**
I think I found the solution: I use ConstraintLayout.
I create two buttons and add a guideline:
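The XML itself was stripped from the post; below is a minimal sketch of such a layout, assuming the 2018-era support-library ConstraintLayout (view IDs and attribute values are illustrative, not from the original post):
```xml
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- Horizontal guideline fixed at 50% of the screen height -->
    <android.support.constraint.Guideline
        android:id="@+id/guideline"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:orientation="horizontal"
        app:layout_constraintGuide_percent="0.5" />

    <Button
        android:id="@+id/buttonA"
        android:layout_width="0dp"
        android:layout_height="0dp"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toTopOf="@id/guideline"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

    <Button
        android:id="@+id/buttonB"
        android:layout_width="0dp"
        android:layout_height="0dp"
        app:layout_constraintTop_toBottomOf="@id/guideline"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

</android.support.constraint.ConstraintLayout>
```
Because the percentage guideline scales with the screen, the 50/50 split holds on devices of different sizes.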
With this solution I can have a TextView overlap the buttons.<issue_comment>username_1: ```
<?xml version="1.0" encoding="utf-8"?>
```
Upvotes: 0 <issue_comment>username_2: ```
<?xml version="1.0" encoding="utf-8"?>
```
Upvotes: 1
2018/03/14 | 555 | 2,073 |
<issue_start>username_0: Is there a way to do something like this:
```
ArrayList.removeAll(ArrayList)
```
With the `ArrayList` being the indices that I want deleted. I know that I could iterate through the indices list and use `remove(index)`, but I was wondering if there is a one-command way of doing so.
I know how to put this iteration into one line; my question is whether there is a way implemented by Oracle.<issue_comment>username_1: You can use Java 8 Streams.
For example:
```
IntStream.of(7,6,5,2,1).forEach(i->list.remove(i));
```
If the indices are given as a `List`, you can do:
```
indexList.stream().mapToInt(Integer::intValue).forEach(i->list.remove(i));
```
Note that I preferred to use an `IntStream` and not a `Stream<Integer>`, since if you use a `Stream<Integer>` for the indices and the list from which you wish to remove elements is itself a `List<Integer>`, calling `remove(Integer)` will remove the element whose value is that `Integer`, not the element whose index is that `Integer`.
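To see the two overloads side by side, here is a small illustrative snippet (not from the original answer):
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RemoveOverloads {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(Arrays.asList(10, 20, 30));
        list.remove(1);                   // remove(int index): removes 20 -> [10, 30]
        list.remove(Integer.valueOf(30)); // remove(Object): removes the value 30 -> [10]
        System.out.println(list);         // prints [10]
    }
}
```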
Upvotes: 2 <issue_comment>username_2: You can use a `Stream` to iterate through the indices to remove. However, take care to remove the highest index first, to avoid shifting other elements to remove out of position.
```
public void removeIndices(List<String> strings, List<Integer> indices)
{
indices.stream()
.sorted(Comparator.reverseOrder())
.forEach(strings::remove);
}
```
For removing from a list of `String`s this will work, calling the proper `remove(int)` method. If you were to try this on a `List<Integer>`, then you will have to avoid calling `remove(E)` by calling `.mapToInt(Integer::intValue)` before calling `forEach`.
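To see why removing the highest index first matters, consider this small self-contained demo (the list contents are illustrative):
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ShiftDemo {
    public static void main(String[] args) {
        // Goal: delete the elements at original indices 1 and 2 ("b" and "c").
        List<String> ascending = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));
        ascending.remove(1);            // removes "b"; "c" and "d" shift left
        ascending.remove(2);            // removes "d" instead of the intended "c"!
        System.out.println(ascending);  // [a, c]

        List<String> descending = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));
        descending.remove(2);           // removes "c"; earlier indices are unaffected
        descending.remove(1);           // removes "b"
        System.out.println(descending); // [a, d]
    }
}
```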
Upvotes: 4 [selected_answer]<issue_comment>username_3: I still get warnings with the above by @username_2. To avoid the warnings and to make sure you are using the right remove method you can do this. I know some people hate lambdas but I think this is clearer than the extra mapToInt
```
public void removeIndices(List<String> other, List<Integer> indices)
{
indices.stream()
.sorted(Comparator.reverseOrder())
.forEach(i->other.remove(i.intValue()));
}
```
Upvotes: 0
2018/03/14 | 1,196 | 4,227 |
<issue_start>username_0: Sign-in with Microsoft identity provider fails, works with others
I have an `Azure AD B2C` tenant with a `SignIn and SignUp` policy that I hope to use
for user management with an Angular2 SPA.
The policy is configured for three `identity providers`:
* Google
* Microsoft
* Email Signup
When I use the `Run Now` button in the Azure portal to run this policy, I get the default Sign In dialog, and I can sign in with either `Google` or `Email signin`. (By that I mean I get re-directed to my app's redirect page as I expect.) However, when I try to sign in using the `Microsoft`
provider, I end up at an error page with the following address:
`https://login.live.com/err.srf?lc=1033#error=invalid_request&error_description=The+provided+value+for+the+input+parameter+'redirect_uri'+is+not+valid.+The+expected+value+is+'https://login.live.com/oauth20_desktop.srf'+or+a+URL+which+matches+the+redirect+URI+registered+for+this+client+application.&state=StateProperties%3deyJTSUQiOiJ4LW1zLWNwaW0tcmM6NDcyMmQyNjItOTk1Yi00YTJlLWFmNWUtODkwNDgyODlhMzM0IiwiVElEIjoiM2Y2ZDVmNjAtMDdiNC00ZDA3LWEyZDItN2U3YWQwOWRhOGQ5In0`
I see that the problem is related to an invalid `redirect_uri`. But I thought the `redirect_uri` was an application-level setting shared by ALL identity providers that I have configured. Why does my `redirect_uri` setting work for `Google` and `Email signup`, but not for `Microsoft`?<issue_comment>username_1: You have to configure your Microsoft application with the right redirect URL.
As stated in the [documentation](https://learn.microsoft.com/en-us/azure/active-directory-b2c/active-directory-b2c-setup-msa-app):
>
> Enter <https://login.microsoftonline.com/te/>{tenant}/oauth2/authresp in the Redirect URIs field. Replace {tenant} with your tenant's name (for example, contosob2c.onmicrosoft.com).
>
>
>
Why you have to do this: (courtesy of <NAME>)
>
> The redirect URI that is configured in the Azure AD B2C Portal represents the reply address for your client application. This is so Azure AD B2C can return an ID token to your client application. The redirect URI that is configured in the Application Registration Portal represents the reply address for your Azure AD B2C tenant. This is so the Microsoft Account identity provider can return a security token to your Azure AD B2C tenant.
>
>
>
So, your app is federating authentication to Azure AD B2C.
B2C then further federates to the Microsoft Account identity provider.
So when a user a logs in with a Microsoft account, they are sent back to B2C with a token, which B2C validates.
If all is okay, they are signed in to B2C, and sent back to your app.
So you see that from the point of view of the MSA identity provider, B2C is the client.
So the redirect URL there must point to B2C.
Upvotes: 3 [selected_answer]<issue_comment>username_2: As the documentation states, you should enter `https://login.microsoftonline.com/te/{tenant}/oauth2/authresp` in the Redirect URIs field.
>
> But I thought the redirect\_uri was an application-level setting shared
> by ALL identity provders that I have configured. Why does my
> redirect\_uri setting work for Google and Email signup, but not for
> Microsoft?
>
>
>
You're right, the redirect_uri is an application-level setting, and it should be the same in all IDPs' redirect URIs. But this redirect URI is set by Azure, NOT by your application. **It means that you use other IDPs to log in to your app through AAD B2C, NOT to log in to your application directly**. So the `redirect_uri` must be `https://login.microsoftonline.com/te/{tenant}/oauth2/authresp`, not the `redirect_uri` of your application itself.
**URI doesn't equal URL**. The redirect URI is just a unique identifier to which Azure AD will redirect the user-agent in an OAuth 2.0 request. It's not a redirect URL; the Azure AD authentication endpoint `https://login.microsoftonline.com/` uses redirect URIs to determine where the response should be sent. Also, it can be the same as the URL of the endpoint - here they should be the same, I guess.
In summary, you need to use the unique redirect URI `https://login.microsoftonline.com/te/{tenant}/oauth2/authresp` for all IDPs, not just the Microsoft account.
Hope this helps!
Upvotes: 1
|
2018/03/14
| 1,466 | 5,283 |
<issue_start>username_0: In Scala, I want to generate some aliases for basic types, and then implement conversions through a type class. This is both useful for me, and an opportunity to understand type classes. The code is the following:
```
type Index = Int
val Index = Int
type Integer = Int
val Integer = Int
type Real = Double
val Real = Double // to have companion object of Double also be the companion object of Real
trait Convertible[A] {
def toIndex(a: A): Index
def toInteger(a: A): Integer
def toReal(a: A): Real
}
implicit val ConvertibleIndex: Convertible[Index] = new Convertible[Index] {
def toIndex(i: Index) = i
def toInteger(i: Index) = i
def toReal(i: Index) = i.toDouble
}
implicit val ConvertibleInteger: Convertible[Integer] = new Convertible[Integer] {
def toIndex(i: Integer) = i
def toInteger(i: Integer) = i
def toReal(i: Integer) = i.toDouble
}
implicit val ConvertibleReal: Convertible[Real] = new Convertible[Real] {
def toIndex(r: Real) = r.toInt
def toInteger(r: Real) = r.toInt
def toReal(r: Real) = r
}
implicit val ConvertibleString: Convertible[String] = new Convertible[String] {
def toIndex(s: String) = s.toInt
def toInteger(s: String) = s.toInt
def toReal(s: String) = s.toDouble
}
implicit class ConvertibleSyntax[A](a: A)(implicit val c: Convertible[A]) {
def toIndex = c.toIndex(a)
def toInteger = c.toInteger(a)
def toReal = c.toReal(a)
}
```
Consider now:
```
val a = 3.toReal
val b = 3.0.toReal
val c = "3".toReal
```
The statement for `a` does not compile, with the compilation error: `method toReal is not a member of Int`. But, for the `b` and `c` statements, the implicit conversion to `ConvertibleSyntax` is properly done.
Why is the implicit conversion not working on `Int`, but working on `Double` and `String`?<issue_comment>username_1: Because you define ambiguous implicits for `Index` and `Integer` (both `Int`).
Which one should the compiler choose?
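To make the ambiguity concrete, here is a minimal sketch reduced from the question's code (the type aliases are transparent, so both values have type `Convertible[Int]`):
```scala
object AmbiguityDemo extends App {
  type Index = Int
  type Integer = Int

  trait Convertible[A]

  implicit val convertibleIndex: Convertible[Index] = new Convertible[Index] {}
  implicit val convertibleInteger: Convertible[Integer] = new Convertible[Integer] {}

  // Uncommenting the next line fails to compile:
  // implicitly[Convertible[Int]]  // error: ambiguous implicit values
}
```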
Upvotes: 2 <issue_comment>username_2: I think you might be a little confused about how *Scala* does *implicit conversions*. (A common mistake, as `implicit` is a little overused.)
I think that what you want, first of all, is an *implicit conversion function* - or even an *implicit class*. Here's how you could do this using the latter:
Note: `Int`, `Index` and `Integer` are treated identically, so are `Real` and `Double`, confusing matters somewhat, so I've pared this down to something that will work. Also, `Convertible` does not need to be *generic* as its conversion functions need no arguments. Finally, you shouldn't have both `type` and `val` declarations for your types.
```scala
type Index = Int
type Integer = Int
type Real = Double
trait Convertible {
def toIndex: Index
def toInteger: Integer
def toReal: Real
}
// Implicit classes cannot be defined in top-level scope, so they belong to an object.
object Implicits {
implicit class ConvertibleInt(i: Int)
extends Convertible {
override def toIndex = i
override def toInteger = i
override def toReal = i.toDouble
}
implicit class ConvertibleDouble(d: Double)
extends Convertible {
override def toIndex = d.toInt
override def toInteger = d.toInt
override def toReal = d
}
implicit class ConvertibleString(s: String)
extends Convertible {
override def toIndex = s.toInt
override def toInteger = s.toInt
override def toReal = s.toDouble
}
}
```
Now try this:
```scala
import Implicits._
val a = 3.toReal
val b = 3.0.toReal
val c = "3".toReal
```
What's happening here? Well, the `implicit class` declarations define classes that *decorate* the sole constructor argument with additional functions. If the compiler sees that you're trying to call a method on a type that doesn't have that method, it will look to see if there's an *implicit conversion*, in scope, to a type that does. If so, it is used and the function is called; if not, you get a compiler error. (The `import` statement is used to bring the classes into your current scope.)
So, for example, when the compiler sees `"3".toReal` it firstly determines that `"3"` is a `String`. Since this type doesn't have a `.toReal` member, it tries to find a conversion from a `String` to a type that does have such a member. It finds the `ConvertibleString` *implicit class* that takes a `String` argument and provides a `.toReal` method. Yay! So the compiler creates an instance of this class by passing `"3"` to `ConvertibleString`'s constructor, then calls `.toReal` on the result.
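In other words, the compiler effectively rewrites the call along these lines (a sketch of the desugaring, reusing the `Implicits` object above):
```scala
import Implicits._

// What the compiler effectively generates for "3".toReal:
val r: Real = new ConvertibleString("3").toReal
```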
On the other hand, when `implicit` is used with a value, it tells the compiler that the value is a default for any matching *implicit arguments* of the same type that are not provided. **NEVER USE `implicit` WITH A PRIMITIVE OR COMMON LIBRARY TYPE!**
For example:
```scala
final case class Example(i: Int)
// Default.
implicit val nameCanBeAnythingAtAll = Example(5)
// Function with implicit argument.
def someFunc(implicit x: Example): Unit = println(s"Value is $x")
```
Now, if you write something like this:
```scala
someFunc
```
the output will be `Value is Example(5)`.
`implicit` values and arguments are an advanced topic, and I wouldn't worry about how they're used right now.
Upvotes: 2 [selected_answer]
2018/03/14 | 1,033 | 3,844 |
<issue_start>username_0: I use Django 2.0.3 and I want to try static files. I created a static folder in the app folder, and my code is like this:
```
{% load static %}
```
And for the pic it is:
```

```
In settings, the static URL is:
```
STATIC_URL = '/static/'
```
but the pic comes out like this:
[enter image description here](https://i.stack.imgur.com/sSTEc.png)
2018/03/14 | 1,873 | 7,146 |
<issue_start>username_0: There is a screenshot attached below.
So basically I want to take a file as input (i.e. suffix_list.txt) instead of specifying the file location inside the function, and when I click the Action button I need to show the output inside the QTextEdit.
I tried something but I can't get it to work. Please help me out; I'm new to PyQt5.
Thanks
This is my GUI code:
```
# -*- coding: utf-8 -*-
# Created by: PyQt5 UI code generator 5.10
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(514, 381)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.text_output = QtWidgets.QTextEdit(self.centralwidget)
self.text_output.setGeometry(QtCore.QRect(80, 10, 351, 271))
self.text_output.setObjectName("text_output")
self.btn_Action = QtWidgets.QPushButton(self.centralwidget)
self.btn_Action.setGeometry(QtCore.QRect(220, 300, 75, 23))
self.btn_Action.setObjectName("btn_Action")
MainWindow.setCentralWidget(self.centralwidget)
self.menubar = QtWidgets.QMenuBar(MainWindow)
self.menubar.setGeometry(QtCore.QRect(0, 0, 514, 21))
self.menubar.setObjectName("menubar")
self.menuFile = QtWidgets.QMenu(self.menubar)
self.menuFile.setObjectName("menuFile")
MainWindow.setMenuBar(self.menubar)
self.statusbar = QtWidgets.QStatusBar(MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.actionOpen_File = QtWidgets.QAction(MainWindow)
self.actionOpen_File.setObjectName("actionOpen_File")
self.menuFile.addAction(self.actionOpen_File)
self.menubar.addAction(self.menuFile.menuAction())
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.btn_Action.setText(_translate("MainWindow", "Action"))
self.menuFile.setTitle(_translate("MainWindow", "File"))
self.actionOpen_File.setText(_translate("MainWindow", "Open File"))
def file_open(self):
name, _ = QtWidgets.QFileDialog.getOpenFileName(self, 'Open File', options=QtWidgets.QFileDialog.DontUseNativeDialog)
file = open(name, 'r')
with file:
text = file.read()
self.textEdit.setText(text)
def suffix_remove(self):
suffix_list = []
dictionary = {}
lists = ['athletic','kitchenette','helpful','terrify']
with open('suffix_list.txt') as f:
for lines in f:
lines = lines.rstrip()
suffix_list.append(lines)
for words in lists:
for suffix in suffix_list:
if words.endswith(suffix):
final_list = str.replace(words,suffix,'')
dictionary[words] = final_list
return dictionary
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
ui.btn_Action.clicked.connect(lambda:
ui.text_output.append(str(ui.suffix_remove())))
MainWindow.show()
sys.exit(app.exec_())
```
[](https://i.stack.imgur.com/9iYOI.png)
2018/03/14 | 1,300 | 2,983 |
<issue_start>username_0: I have a number of objects I'm receiving weekly; each of these objects has date, hours, and other fields. I want to aggregate this object array into the total hours of each day.
example of the objects
```
var anArray = [{
'End':"22:00",
'Id':"Q45575",
'Name':"W-299849",
'Start':"20:00",
'date':"2018-02-04",
'hours':2
},{
'End':"21:00",
'Id':"Q45551",
'Name':"W-299809",
'Start':"15:00",
'date':"2018-02-07",
'hours':5
},{
'End':"20:00",
'Id':"Q45515",
'Name':"W-299849",
'Start':"10:00",
'date':"2018-02-04",
'hours':2
}];
```
Output should be like this, assuming Sunday is 2/4:
Sun Mon Tue Wed Thu Fri Sat
4 0 0 5 0 0 0
This is what I have:
```
var resourceData = data.reduce((a, c) => {
var targetDay = new Date(c.date).getDay() === 6 ? 0 : (new Date(c.date).getDay() + 1);
if (a) {
a['week'][targetDay] += c.hours;
} else {
a = { 'week': new Array(7).fill(0) };
a['week'][targetDay] = c.hours;
}
return a;
}, {});
```
It is not working; I'm getting an error with targetDay.<issue_comment>username_1: Your code almost reached the finish line.
You can make the initialValue of the reduce `{ 'week': new Array(7).fill(0) }`; then you don't need the `if (a)` check in the reduce handler.
See the comments in the code below:
```js
var anArray = [{ 'End':"22:00", 'Id':"Q45575", 'Name':"W-299849", 'Start':"20:00", 'date':"2018-02-04", 'hours':2},{ 'End':"21:00", 'Id':"Q45551", 'Name':"W-299809", 'Start':"15:00", 'date':"2018-02-07", 'hours':5},{ 'End':"20:00", 'Id':"Q45515", 'Name':"W-299849", 'Start':"10:00", 'date':"2018-02-04", 'hours':2}];
var resourceData = anArray.reduce((a, c) => {
var targetDay = new Date(c.date).getDay() === 6 ? 0 : (new Date(c.date).getDay() + 1);
a['week'][targetDay] += c.hours;
/*
else {
a = { 'week': new Array(7).fill(0) };
a['week'][targetDay] = c.hours;
}*/ //remove else block because already created [var a] by the initialValue
return a;
}, { 'week': new Array(7).fill(0) }); //initialize with expected object instead of {}
console.log(resourceData)
```
Upvotes: 1 [selected_answer]<issue_comment>username_2: Rather than reduce, for this I think forEach seems a better fit.
Example below.
```js
var anArray = [{
'End':"22:00",
'Id':"Q45575",
'Name':"W-299849",
'Start':"20:00",
'date':"2018-02-04",
'hours':2
},{
'End':"21:00",
'Id':"Q45551",
'Name':"W-299809",
'Start':"15:00",
'date':"2018-02-07",
'hours':5
},{
'End':"20:00",
'Id':"Q45515",
'Name':"W-299849",
'Start':"10:00",
'date':"2018-02-04",
'hours':2
}];
var days = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']; // getDay() returns 0-6
var result = {};
//lets just zero every day
(new Array(7).fill(0)).map((x,ix) => {
result[days[ix]] = 0;
});
//now lets add them up
anArray.forEach((d) => {
var dt = new Date(d.date);
result[days[dt.getDay()]] += d.hours;
});
console.log(result);
```
Upvotes: 1
2018/03/14 | 1,435 | 3,972 |
<issue_start>username_0: I observed that when a normal property that came from `data()` and a computed property that is derived from it are passed through an event, the former keeps its reactivity while the latter loses it.
I set up the following test case for it ([also as a JSFiddle](https://jsfiddle.net/christophfriedrich/b27ap87v/25/) if you prefer that):
```js
const EventBus = new Vue();
Vue.component('comp', {
data() {
return {
arrData: ['a', 'b']
};
},
computed: {
arrComputed() {
return this.arrData.map((e) => e.toUpperCase());
}
},
template: `
Original array: {{ arrData }}
Computed array: {{ arrComputed }}
Remove a
Remove b
Pass through event
Start over soft
Start over hard
`,
methods: {
remove(name) {
name = name.toLowerCase();
if(this.arrData.indexOf(name) != -1) {
this.$delete(this.arrData, this.arrData.indexOf(name));
}
},
pass() {
EventBus.$emit('pass', this.arrData, this.arrComputed);
},
reset() {
this.$set(this.arrData, 0, 'a');
this.$set(this.arrData, 1, 'b');
},
resetHard() {
this.arrData = ['a','b'];
}
}
});
Vue.component('othercomp', {
data() {
return {
items1: [],
items2: []
}
},
mounted() {
EventBus.$on('pass', this.receive);
},
template: `
Original array: {{items1}}
Computed array: {{items2}}
`,
methods: {
receive(items1, items2) {
this.items1 = items1;
this.items2 = items2;
}
}
});
var app = new Vue({
el: '#app',
components:['comp', 'othercomp']
})
```
How is a computed property different from a normal property, such that this difference in behaviour occurs?
I learned from a [previous question](https://stackoverflow.com/q/49259249/3746543) that passing reactive objects around like this is bad practice and that I should use a getter function instead, however I'd still like to know why this difference occurs.
2018/03/14 | 1,044 | 3,469 |
<issue_start>username_0: I have made a function which returns some chars; all I want to do is to append all those returned chars into one string.
```
#include <stdio.h>   /* printf */
#include <string.h>  /* strlen */
char func(int n);
int main()
{
int i;
char str[] = "";
size_t p = strlen(str);
for (i =0 ; i < 5; i++){
str[p++] = func(i);
str[p] = '\0';
p++;
}
printf("%s",str);
return 0;
}
char func(int n){
if (n == 0)
return '1';
if (n == 1)
return '2';
if (n > 1)
return '3';
}
```
//EDIT Output for this is `19`<issue_comment>username_1: ```c
char func(n){
if (n == 0)
return '1';
if (n == 1)
return '2';
if (n > 1)
return '3';
}
```
You should always specify the type for variables.
Please use something like `int n` instead of just `n`.
It's also bad that all of your returns are conditional, it's better to have a return statement that's guaranteed to be executed no matter what \*:
```c
char func(int n) {
if (n == 0) return '1';
if (n == 1) return '2';
return '3';
}
```
*\* Because not returning a value from a function that **should** return a value is undefined behaviour.*
Now that we have that out of the way, let's have a look at your `main()`:
```c
int main() {
int i;
char str[] = "";
size_t p = strlen(str);
for (i =0 ; i < 5; i++){
str[p++] = func(i);
str[p] = '\0';
p++;
}
printf("%s",str);
return 0;
}
```
* `str[]` is not big enough to store all the characters you write to it, resulting in undefined behaviour.
* Your loop body is written in a weird way, why are you incrementing `p` twice?
Here a very simple program that writes 5 characters into `str`:
```c
#include
char func(int n) {
if (n == 0) return '1';
if (n == 1) return '2';
return '3';
}
int main() {
int i;
// Allocate 6 bytes (5 characters) on the stack
char str[6] = "";
for (i = 0 ; i < 5; i++) {
str[i] = func(i);
}
// Strings *must* be NULL terminated in C
str[5] = 0;
printf("%s",str);
return 0;
}
```
Upvotes: 2 <issue_comment>username_2: The size of your str here is 0 (0 using strlen, and 1 using the sizeof operator, because it counts the '\0' character), so you cannot add more elements to the str; if you try, the program will crash. So you have two possibilities here: the first is to declare a fixed array size, with the number n limited by that size; the second is a dynamic one using malloc. To initialize it to zeros you can just use the memset API.
Upvotes: 0 <issue_comment>username_3: Well, the short answer is that everything you did would be right if the array had enough memory to hold those `5` characters plus the `\0`, if you want to treat it as a string (a NUL-terminated `char` array).
`""` is a string literal containing only the `\0`. The length of this string is `0`. What about the array? Applying `sizeof` over it reveals that it is capable of holding one character (well, it contains the `\0`).
Now, with your code you surely did access positions that are beyond the size of the array. This is undefined behavior, as mentioned by the C standard.
The solution is either to have an array whose size can hold the maximum number of characters you would like to store someday, or to have a `char*` to which you assign the address of a chunk allocated with functions like `malloc`, `realloc`, etc. The benefit of the latter: you can increase the memory as much as you need at runtime, depending on the number of characters you want to store.
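For illustration, here is a minimal sketch of the dynamic approach (the initial capacity, growth factor, and the stand-in for `func` are illustrative choices):
```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t cap = 8, len = 0;
    char *str = malloc(cap);          /* growable buffer instead of a fixed array */
    if (str == NULL) return 1;
    for (int i = 0; i < 5; i++) {
        if (len + 2 > cap) {          /* room for the new char plus the '\0' */
            char *tmp = realloc(str, cap *= 2);
            if (tmp == NULL) { free(str); return 1; }
            str = tmp;
        }
        str[len++] = '0' + i;         /* stand-in for func(i) */
        str[len] = '\0';
    }
    printf("%s\n", str);              /* prints 01234 */
    free(str);
    return 0;
}
```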
Upvotes: 0
2018/03/14 | 991 | 3,755 |
<issue_start>username_0: Dear StackOverFlow community,
Based on the built-in User model, I've created my own model class called "ModelOfParticularPerson". Its structure looks like this:
```
class ModelOfParticularPerson(models.Model):
user = models.OneToOneField(User)
nickname = models.CharField(max_length=100, null=True, unique=False)
uploaded_at = models.DateTimeField(auto_now_add=True, null=True)
email_address = models.EmailField(max_length=200, blank=False, null=False, help_text='Required')
description = models.CharField(max_length=4000, blank=True, null=True)
created = models.DateTimeField(auto_now_add=True, blank=True)
```
Unfortunately, after logging in with a particular account, whenever I try to re-edit the profile I get the following error:
"Model of particular person with this User already exists."
Any advice is priceless.
Thanks.
ps.
views.py:
[..]
```
@method_decorator(login_required, name='dispatch')
class ProfileUpdateView(LoginRequiredMixin, UpdateView):
model = ModelOfParticularPerson
form_class = ModelOfParticularPersonForm
success_url = "/accounts/profile/" # You should be using reverse here
def get_object(self):
# get_object_or_404
return ModelOfParticularPerson.objects.get(user=self.request.user)
def form_valid(self, form):
form.instance.user = self.request.user
return super().form_valid(form)
def post(self, request):
form = ModelOfParticularPersonForm(self.request.POST, self.request.FILES)
if form.is_valid():
print("FORM NOT VALID!")
profile = form.save(commit=False)
profile.user = self.request.user
profile.save()
return JsonResponse(profile)
else:
return render_to_response('my_account.html', {'form': form})
```
urls.py:
```
urlpatterns = [
[..]
url(r'^login/$', auth_views.LoginView.as_view(template_name='login.html'), name='login'),
url(r'^accounts/profile/$', ProfileUpdateView.as_view(template_name='my_account.html'), name='my_account'),
]
```
forms.py
```
class ModelOfParticularPersonForm(ModelForm):
class Meta:
model = ModelOfParticularPerson
fields = '__all__'
widgets = {
'user':forms.HiddenInput(),
'uploaded_at':forms.HiddenInput(),
'created':forms.HiddenInput(),
}
```<issue_comment>username_1: You need to pass the instance to the form, otherwise Django will try to create a new object when you save it.
```
def post(self, request):
form = ModelOfParticularPersonForm(self.request.POST, self.request.FILES, instance=self.get_object())
...
```
You should try to avoid overriding `get` or `post` when you're using generic class based views. You can end up losing functionality or having to duplicate code. In this case, it looks like you can remove your `post` method. In the `form_valid` method you can return a `JsonResponse`. You shouldn't have to set `form.instance.user` if you are updating an existing object.
```
def form_valid(self, form):
profile = form.save()
return JsonResponse(profile)
```
Finally, you should leave fields like `user` and `uploaded_at` out of the model form instead of making them hidden fields.
Upvotes: 2 [selected_answer]<issue_comment>username_2: You're creating new forum in your `post` method of view, but you're not passing existing model object to it. That leads to creation of new model, which fails, because object already exists.
Instead of overwritting `post` method, put saving of object inside `is_valid` method and use already provided form object (passed to you by method parameter).
Upvotes: 0
2018/03/14 | 390 | 1,292 |
<issue_start>username_0: I have a problem with Symfony's CSRF protection.
When I log on to the domain toto.com, it works, but when I try to connect to the domain test.toto.com (same server, same code, it's just a redirection to toto.com), I have an error 'CSRF Invalid token'.
Do you know how I can solve this problem? How do you test your applications before you push them to prod? Do you have a sub-domain?
Thank you.
**[edit - 18-04-03]**
Added :
```
framework:
session:
cookie_domain: xxxx.com
```
in my config_prod.yml file. It's not working.
**[edit v2 - 18-04-03 - working]**
```
framework:
trusted_hosts: ['toto.com', 'test.toto.com']
session:
cookie_domain: .toto.com
save_path: "%kernel.root_dir%/../var/sessions/"
cookie_lifetime: 31536000
name: totosessionid
```
Looks like it's working.
Thank you.<issue_comment>username_1: You can try to set `cookie_domain`, `save_path` and `name` in the session config (for framework).
Upvotes: 3 [selected_answer]<issue_comment>username_2: To include a domain and its subdomains, you should use this syntax:
```
framework:
session:
cookie_domain: .xxxx.com
```
Other syntaxes may cause issues with specific browsers (mostly old IE, as far I remember).
Upvotes: 0
2018/03/14 | 258 | 1,094 |
<issue_start>username_0: I am using Node.js to write a file to a shared drive. It works fine on my local machine; however, after deploying the code below to OpenShift, the file is not created, because OpenShift is not able to access the folder. Below is my code:
```
writeFile() {
const sharedFolderPath = "\\server\folder";
fs.writeFile(sharedFolderPath, templatePath, (err) => {
if (err) {
console.error(err);
} else {
console.info("file created successfully");
}
})
}
```
How do I configure a shared folder with credentials in OpenShift so that my code can write the file?
2018/03/14 | 935 | 3,070 |
<issue_start>username_0: I am looking for a way to update/access a Python dictionary by addressing all keys that do NOT match the key given.
That is, instead of the usual `dict[key]`, I want to do something like `dict[!key]`. I found a workaround, but figured there must be a better way which I cannot figure out at the moment.
```
# I have a dictionary of counts
dicti = {"male": 1, "female": 200, "other": 0}
# Problem: I encounter a record (cannot reproduce here) that
# requires me to add 1 to every key in dicti that is NOT "male",
# i.e. dicti["female"], and dicti["other"],
# and other keys I might add later
# Here is what I am doing and I don't like it
dicti.update({k: v + 1 for k,v in dicti.items() if k != "male"})
```<issue_comment>username_1: ```
dicti.update({k: v + 1 for k,v in dicti.items() if k != "male"})
```
that creates a sub-dictionary (hashing, memory overhead) then passes it to the old dictionary: more hashing/ref copy.
Why not a good old loop on the keys (since the values aren't mutable):
```
for k in dicti:
if k != "male":
dicti[k] += 1
```
Maybe faster if there are a lot of keys and only one key to avoid: add to all the keys, and cancel the operation on the one key you want to avoid (saves a lot of string comparing):
```
for k in dicti:
dicti[k] += 1
dicti["male"] -= 1
```
if the values were mutable (ex: lists) we would avoid one hashing and mutate the value instead:
```
for k,v in dicti.items():
if k != "male":
v.append("something")
```
One-liners are cool, but sometimes it's better to avoid them (performance & readability in that case)
Upvotes: 3 <issue_comment>username_2: If you have to perform this "add to others" operation more often, and if all the values are numeric, you could also *subtract* from the given key and add the same value to some global variable counting towards *all* the values (including that same key). For example, as a wrapper class:
```
import collections
class Wrapper:
def __init__(self, **values):
self.d = collections.Counter(values)
self.n = 0
def add(self, key, value):
self.d[key] += value
def add_others(self, key, value):
self.d[key] -= value
self.n += value
def get(self, key):
return self.d[key] + self.n
def to_dict(self):
if self.n != 0: # recompute dict and reset global offset
self.d = {k: v + self.n for k, v in self.d.items()}
self.n = 0
return self.d
```
Example:
```
>>> dicti = Wrapper(**{"male": 1, "female": 200, "other": 0})
>>> dicti.add("male", 2)
>>> dicti.add_others("male", 5)
>>> dicti.get("male")
3
>>> dicti.to_dict()
{'other': 5, 'female': 205, 'male': 3}
```
The advantage is that both the `add` and the `add_others` operation are O(1) and only when you actually need them, you update the values with the global offset. Of course, the `to_dict` operation still is O(n), but the updated dict can be saved and only recomputed when `add_other` has been called again in between.
Upvotes: 3 [selected_answer]
2018/03/14 | 833 | 2,806 |
<issue_start>username_0: I have a remote cron job that scrapes data using selenium every 30 minutes. Roughly 1 in 10 times the selenium script fails. When the script fails, I get an error output instead (various selenium error messages). Does this cause the cron job to stop? Shouldn't crontab try to run the script again in 30 minutes?
After a failed attempt, when I type `crontab -l`, it still shows my cron job.
How do I ensure that the crontab tries again in 30 minutes?
2018/03/14 | 1,511 | 4,138 |
<issue_start>username_0: I would like to scrape publications from a Google Scholar profile with SimpleHtmlDom.
I have a script for scraping the publications, but the problem is that I am able to scrape only the publications that are shown.
When I am using a URL like this
>
> $html->load\_file("<http://scholar.google.se/citations?user=Sx4G9YgAAAAJ>");
>
>
>
only 20 publications are shown. I can increase the number when I change the URL
>
> $html->load\_file("<https://scholar.google.se/citations?user=Sx4G9YgAAAAJ&hl=&view_op=list_works&pagesize=100>");
>
>
>
by setting the "pagesize" attribute. But the problem is that 100 is the maximum number of publications the webpage is able to show.
Is there some way to scrape all the publications from a profile?<issue_comment>username_1: You cannot get all of the projects at once, but you can get 100 projects at a time, then another 100, and so on. Here is the URL:
```
https://scholar.google.com/citations?user=Sx4G9YgAAAAJ&hl=&view_op=list_works&cstart=100&pagesize=100
```
In the above URL, focus on the **cstart** attribute: let's say you already grabbed 100 projects, so now you will enter `cstart=100` and grab another 100, then `cstart=200`, and so on until you get all of the publications. A loop over `cstart` is sketched below.
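A minimal PHP sketch of that loop (the `.gsc_a_at` selector for the title links and the empty-page stopping condition are assumptions, not from the original answer):
```php
include 'simple_html_dom.php';  // the SimpleHtmlDom library used in the question

$user = 'Sx4G9YgAAAAJ';
$allTitles = array();
for ($cstart = 0; ; $cstart += 100) {
    $url = "https://scholar.google.se/citations?user=$user&hl=en"
         . "&view_op=list_works&cstart=$cstart&pagesize=100";
    $html = new simple_html_dom();
    $html->load_file($url);
    $titles = $html->find('.gsc_a_at');  // assumed CSS class of the title links
    if (count($titles) == 0) {           // an empty page means no more publications
        break;
    }
    foreach ($titles as $t) {
        $allTitles[] = $t->plaintext;
    }
    $html->clear();                      // free the parser's memory between pages
}
print_r($allTitles);
```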
Hope this helps
Upvotes: 3 [selected_answer]<issue_comment>username_2: You have to pass additional pagination parameter to the request url.
`cstart` - Parameter defines the result offset. It skips the given number of results. It's used for pagination. (e.g., 0 (default) is the first page of results, 20 is the 2nd page of results, 40 is the 3rd page of results, etc.).
`pagesize` - Parameter defines the number of results to return. (e.g., 20 (default) returns 20 results, 40 returns 40 results, etc.). Maximum number of results to return is 100.
So, your URL should look like this:
>
> <https://scholar.google.com/citations?user=WLBAYWAAAAAJ&hl=en&cstart=100&pagesize=100>
>
>
>
You could also use a third party solution like SerpApi to do this for you. It's a paid API with a free trial.
Example PHP code (available in other libraries also) to retrieve the second page of results:
```php
require 'path/to/google_search_results';
$query = [
"api_key" => "secret_api_key",
"engine" => "google_scholar_author",
"hl" => "en",
"author_id" => "WLBAYWAAAAAJ",
"num" => "100",
"start" => "100"
];
$search = new GoogleSearch();
$results = $search->json($query);
```
Example JSON output:
```json
"articles": [
{
"title": "Geographic localization of knowledge spillovers as evidenced by patent citations",
"link": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=WLBAYWAAAAAJ&cstart=100&pagesize=100&citation_for_view=WLBAYWAAAAAJ:HGTzPopzzJcC",
"citation_id": "WLBAYWAAAAAJ:HGTzPopzzJcC",
"authors": "<NAME>, <NAME>, <NAME>",
"publication": "Patents, citations, and innovations: a window on the knowledge economy, 155-178, 2002",
"cited_by": {
"value": 18,
"link": "https://scholar.google.com/scholar?oi=bibs&hl=en&cites=8561816228378857607",
"serpapi_link": "https://serpapi.com/search.json?cites=8561816228378857607&engine=google_scholar&hl=en",
"cites_id": "8561816228378857607"
},
"year": "2002"
},
{
"title": "IPR, innovation, economic growth and development",
"link": "https://scholar.google.com/citations?view_op=view_citation&hl=en&user=WLBAYWAAAAAJ&cstart=100&pagesize=100&citation_for_view=WLBAYWAAAAAJ:70eg2SAEIzsC",
"citation_id": "WLBAYWAAAAAJ:70eg2SAEIzsC",
"authors": "<NAME>, <NAME>",
"publication": "Department of Economics, National University of Singapore, 2007",
"cited_by": {
"value": 17,
"link": "https://scholar.google.com/scholar?oi=bibs&hl=en&cites=7886734392494692167",
"serpapi_link": "https://serpapi.com/search.json?cites=7886734392494692167&engine=google_scholar&hl=en",
"cites_id": "7886734392494692167"
},
"year": "2007"
},
...
]
```
Check out the [documentation](https://serpapi.com/google-scholar-author-api) for more details.
*Disclaimer: I work at SerpApi.*
Upvotes: 0
|
2018/03/14
| 1,001 | 3,430 |
<issue_start>username_0: Declared structures:
```
typedef struct{
char fname[25];
char mname[3];
char lname[25];
}Name;
typedef struct{
char month[25];
int day,year;
}Date;
typedef struct{
Name gname;
char addr[50];
char cnum[11];
}Guardian;
typedef struct{
Name sname;
Date sbday;
Guardian sguard;
char gender[6];
char addr[50];
char cnum[11];
char course[10];
int year;
}Student;
```
Declared functions:
```
void input(Name name,Date date, Guardian guard, Student stud);
void display(Name name,Date date, Guardian guard, Student stud);
```
When the input function is called, it lets me enter the strings being asked for. Then the display function is called, but it does not display the entered information.
Calling functions:
```
input(name, date, guard, stud);
display(name, date, guard, stud);
```
Display funtion:
```
void display(Name name, Date date, Guardian guard, Student stud)
{
printf(" -=Student Information=- \n");
printf("Name: %s %s. %s\n",stud.sname.fname,stud.sname.mname,stud.sname.lname);
printf("Birtday: %s %d, %d\n",stud.sbday.month,stud.sbday.day,stud.sbday.year);
printf("Gender: %s\n",stud.gender);
printf("Contact Number: \n%s",stud.cnum);
printf("Course & Year: %s-%d \n",stud.course,stud.year);
printf(" -=Student Guardian Information=- \n");
printf("Name: %s %s. %s\n",guard.gname.fname,guard.gname.mname,guard.gname.lname);
printf("Address: %s\n",guard.addr);
printf("Contact Number: %s\n",guard.cnum);
}
```
Input function:
```
void input(Name name,Date date, Guardian guard, Student stud)
{
printf("Student Information \n");
printf("First Name: ");
gets(stud.sname.fname);
//...
}
```
This will just display one set of information. It won't add new records per se - just a simple exercise.<issue_comment>username_1: Just like modifying the content of a variable through a function, you need to give the function the address of the structure, because "deep down" a variable of type "int" and a variable of type "struct something" are both just variables.
And if you want to modify the value of your variable, you need to send its address (a pointer).
Also, note that by passing the struct by value, its contents are copied. For a small structure it doesn't really matter, but for "larger" ones the copying can cost your program dearly in speed.
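A minimal sketch of the idea (assuming the typedefs from the question are in scope; only the first field is shown):

```
#include <stdio.h>

void input(Student *stud)             /* takes the address, not a copy */
{
    printf("First Name: ");
    scanf("%24s", stud->sname.fname); /* writes into the caller's object */
}

int main(void)
{
    Student stud;
    input(&stud);                     /* pass the address of the object */
    printf("Name: %s\n", stud.sname.fname);
    return 0;
}
```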
Upvotes: 0 <issue_comment>username_2: C is pass by value. So when you pass a variable as you have shown, a copy of it is given to the called function to work with. As a result, the object from which the copy was made remains unchanged. The solution is to pass the address of the object and then dereference the copied pointer variable which contains the address of the object - thus changing the intended object.
So in your case,
```
void input(Name *name,Date *date, Guardian *guard, Student *stud){
printf("%s",name->fname); // short hand for (*name).fname
...
}
```
Call it like
```
input(&name, .. );
```
Even if you only want to read those objects (you don't want to change anything), I would still support this way of doing things - why burden things by copying those large structure instances? Just pass their addresses and work with them.
Also, don't use the deprecated `gets` - use `fgets` instead, and when you use `fgets`, don't forget to check its return value.
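For example, a sketch using the pointer-based `input` above (`strcspn` from `<string.h>` is used to strip the newline that `fgets` keeps):

```
if (fgets(stud->sname.fname, sizeof stud->sname.fname, stdin) != NULL) {
    stud->sname.fname[strcspn(stud->sname.fname, "\n")] = '\0'; /* drop the '\n' */
} else {
    /* handle EOF or a read error */
}
```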
Upvotes: 3 [selected_answer]
|
2018/03/14
| 492 | 1,326 |
<issue_start>username_0: There is a function which returns, for each index `i`, the average of the first `i + 1` elements.
```
std::vector<double> f(const std::vector<double> &v)
{
    std::vector<double> ret(v.size());
    for (size_t i = 0; i < v.size(); ++i) {
        ret[i] = std::accumulate(v.begin(), v.begin() + i + 1, 0.0) / (i + 1);
    }
    return ret;
}
```
How can I optimize this function?<issue_comment>username_1: You don't need to recalculate the sum from the beginning at each iteration. Just keep a running sum.
```
std::vector<double> f(const std::vector<double> &v)
{
    std::vector<double> ret(v.size());
    double cur = 0.;
    for (size_t i = 0; i < v.size(); ++i)
    {
        cur += v[i];
        ret[i] = cur / (i + 1);
    }
    return ret;
}
```
This is even amenable to working inplace, so if the caller doesn't mind losing the source vector it can avoid the extra allocation.
```
void f_inplace(std::vector &v)
{
double cur = 0.;
for (size\_t i = 0; i < v.size(); ++i)
{
cur += v[i];
v[i] = cur / (i + 1);
}
}
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: Use [`std::partial_sum`](http://en.cppreference.com/w/cpp/algorithm/partial_sum):
```
std::vector<double> f(const std::vector<double> &v)
{
    std::vector<double> ret(v.size());
    std::partial_sum(v.begin(), v.end(), ret.begin());
    for (size_t i = 0; i < ret.size(); ++i) {
        ret[i] /= i + 1;
    }
    return ret;
}
```
Upvotes: 2
|
2018/03/14
| 5,705 | 15,946 |
<issue_start>username_0: I am trying to use assembly code to set the arguments of a generic function that is resident in a DLL. The arguments of this function are not known at compile time. At runtime I determine the pointer to the function using `GetProcAddress`, and I determine the arguments - both values and types - from a data file (not a header file or anything that can be included or compiled).

I have found a good example of how to solve this problem for 32 bit ([C Pass arguments as void-pointer-list to imported function from LoadLibrary()](https://stackoverflow.com/questions/28996696)), but for 64 bit this example does not work, because you cannot just fill the stack - you have to fill the registers. So I tried to use assembly code to fill the registers, but so far without success. I use C code to call the assembly code, with VS2015 and MASM (64 bit). The C code below works fine, but the assembly code does not. So what is wrong with the assembly code? Thanks in advance.
C code:
```
...
void fill_register_xmm0(double); // proto of assembly function
...
// code determining the pointer to a func returned by the GetProcAddress()
...
double dVal = 12.0;
int v;
fill_register_xmm0(dVal);
v = func->func_i(); // integer function that will use the dVal
...
```
assembly code in a different .asm file (MASM syntax):
```
TITLE fill_register_xmm0
.code
option prologue:none ; turn off default prologue creation
option epilogue:none ; turn off default epilogue creation
fill_register_xmm0 PROC variable: REAL8 ; REAL8=equivalent to double or float64
movsd xmm0, variable ; fill value of variable into xmm0
ret
fill_register_xmm0 ENDP
option prologue:PrologueDef ; turn on default prologue creation
option epilogue:EpilogueDef ; turn on default epilogue creation
END
```<issue_comment>username_1: So you need to call a function (in a DLL), but only at run-time can you figure out the number and types of its parameters. Then you need to prepare the parameters, either on the stack or in registers, depending on the Application Binary Interface (calling convention).
I would use the following approach: some component of your program figures out the number and type of parameters. Let's assume it creates a list of `{type, value}, {type, value}, ...`
You then pass this list to a function that prepares the ABI call. This will be an assembler function. For a stack-based ABI (32 bit), it just pushes the parameters onto the stack. For a register-based ABI, it can prepare the register values and save them as local variables (`add sp,nnn`), and once all parameters have been prepared (possibly using registers needed for the call, hence saving them first), it loads the registers (a series of `mov` instructions) and performs the `call` instruction.
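One possible shape for that `{type, value}` list (a sketch; the names are hypothetical, and the register/stack work still has to be done in the assembler routine):

```
#include <stddef.h>

/* hypothetical descriptor for one runtime-determined argument */
typedef enum { ARG_INT64, ARG_DOUBLE, ARG_PTR } ArgType;

typedef struct {
    ArgType type;
    union { long long i; double d; void *p; } value;
} Arg;

/* implemented in assembly: loads registers / builds the stack frame
   according to the ABI, then performs the call */
extern long long call_with_args(void *func, const Arg *args, size_t nargs);
```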
Upvotes: 1 <issue_comment>username_2: The x86-64 Windows calling convention is fairly simple, and makes it possible to write a wrapper function that doesn't know the types of anything. Just load the first 32 bytes of args into registers, and copy the rest to the stack.
---
**You definitely need to make the function call from asm**; It can't possibly work *reliably* to make a bunch of function calls like `fill_register_xmm0` and hope that the compiler doesn't clobber any of those registers. **The C compiler emits instructions that use the registers**, as part of its normal job, including passing args to functions like `fill_register_xmm0`.
The only alternative would be to write a C statement with a function call with all the args having the correct type, to get the compiler to emit code to make a function call normally. If there are only a few possible different combinations of args, putting those in `if()` blocks might be good.
And BTW, `movsd xmm0, variable` probably assembles to `movsd xmm0, xmm0`, because the first function arg is passed in XMM0 if it's FP.
---
**In C, prepare a buffer with the args** (like in the 32-bit case).
Each one needs to be padded to 8 bytes if it's narrower. See [MS's docs for x86-64 `__fastcall`](https://msdn.microsoft.com/en-us/library/ms235286.aspx). (Note that x86-64 `__vectorcall` passes `__m128` args by value in registers, but for `__fastcall` it's strictly true that the args form an array of 8-byte values, after the register args. And storing those into the shadow space creates a full array of all the args.)
>
> Any argument that doesn’t fit in 8 bytes, or is not 1, 2, 4, or 8 bytes, must be passed by reference. There is no attempt to spread a single argument across multiple registers.
>
>
>
But the key thing that makes variadic functions easy in the Windows calling convention also works here: **The register used for the 2nd arg doesn't depend on the type of the first**. i.e. if an FP arg is the first arg, then that uses up an integer register arg-passing slot. So you can only have up to 4 register args, not 4 integer and 4 FP.
**If the 4th arg is integer, it goes in `R9`, even if it's the first integer arg**. Unlike in the x86-64 System V calling convention, where the first *integer* arg goes in `rdi`, regardless of how many earlier FP args are in registers and/or on the stack.
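To make the slot rule concrete:

```
; Windows x64: the argument's slot (position), not its type, picks the register.
; For f(double a, int b, double c, int d):
;   slot 0: a -> XMM0 (FP)
;   slot 1: b -> RDX  (integer)
;   slot 2: c -> XMM2 (FP)
;   slot 3: d -> R9   (integer)
; slot 4 and beyond would go on the stack, above the 32-byte shadow space.
```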
**So the asm wrapper that calls the function can load the first 8 bytes into both integer and FP registers**! (Variadic functions already require this, so a callee doesn't have to know whether to store the integer or FP register to form that arg array. MS optimized the calling convention for simplicity of variadic callee functions at the expense of efficiency for functions with a mix of integer and FP args.)
The C side that puts all the args into a buffer can look like this:
```
#include <stdint.h>
#include <string.h>
#include <stdalign.h>

int asmwrapper(const char *argbuf, size_t arg_bytes, void (*funcpointer)());

void somefunc() {
    alignas(16) uint64_t argbuf[256/8];  // or char argbuf[256]. But even without alignas, uint64_t still gives 8-byte alignment
    char *argp = (char*)argbuf;
    for ( ; argp < (char*)argbuf + 256 ; argp += 8) {
        if (figure_out_an_arg()) {
            int foo = get_int_arg();
            memcpy(argp, &foo, sizeof(foo));
        } else if (bar) {
            double foo = get_double_arg();
            memcpy(argp, &foo, sizeof(foo));
        } else {
            // ... memcpy whatever size
            // or allocate space to pass by ref and memcpy a pointer
        }
    }
    if (argp == (char*)argbuf + 256) {
        // error, ran out of space for args
    }
    asmwrapper((const char*)argbuf, argp - (char*)argbuf, funcpointer);
}
```
---
Unfortunately I don't think we can directly use `argbuf` on the stack *as* the args + shadow space for a function call. We have no way of stopping the compiler from putting something valuable below `argbuf` which would let us just set `rsp` to the bottom of it (and save the return address somewhere, maybe at the top of `argbuf` by reserving some space for use by the asm).
Anyway, just copying the whole buffer will work. Or actually, load the first 32 bytes into registers (both integer and FP), and only copy the rest. The shadow space doesn't need to be initialized.
`argbuf` could be a VLA if you knew ahead of time how big it needed to be, but 256 bytes is pretty small. It's not like reading past the end of it can be a problem, it can't be at the end of a page with unmapped memory later, because our parent function's stack frame definitely takes some space.
```
;; NASM syntax. For MASM just rename the local labels and add whatever PROC / ENDPROC is needed.
;; UNTESTED
;; rcx: argbuf
;; rdx: length in bytes of the args. 0..256, zero-extended to 64 bits
;; r8 : function pointer
;; reserve rdx bytes of space for arg passing
;; load first 32 bytes of argbuf into integer and FP arg-passing registers
;; copy the rest as stack-args above the shadow space
global asmwrapper
asmwrapper:
push rbp
mov rbp, rsp ; so we can efficiently restore the stack later
mov r10, r8 ; move function pointer to a volatile but non-arg-passing register
; load *both* xmm0-3 and rcx,rdx,r8,r9 from the first 32 bytes of argbuf
; regardless of types or whether there were that many arg bytes
; All bytes are loaded into registers early, some reg->reg transfers are done later
; when we're done with more registers.
; movsd xmm0, [rcx]
; movsd xmm1, [rcx+8]
movaps xmm0, [rcx] ; 16-byte alignment required for argbuf. Use movups to allow misalignment if you want
movhlps xmm1, xmm0 ; use some ALU instructions instead of just loads
; rcx,rdx can't be set yet, still in use for wrapper args
movaps xmm2, [rcx+16] ; it's ok to leave garbage in the high 64-bits of an XMM passing a float or double.
;movhlps xmm3, xmm2 ; the copyloop uses xmm3: do this later
movq r8, xmm2
mov r9, [rcx+24]
mov eax, 32
cmp edx, eax
jbe .small_args ; no copying needed, just shadow space
sub rsp, rdx
and rsp, -16 ; reserve extra space, realigning the stack by 16
; rax=32 on entry, start copying just above shadow space (which doesn't need to be copied)
.copyloop: ; do {
movaps xmm3, [rcx+rax]
movaps [rsp+rax], xmm3 ; indexed addressing modes aren't always optimal, but this loop only runs a couple times.
add eax, 16
cmp eax, edx
jb .copyloop ; } while(bytes_copied < arg_bytes);
.done_arg_copying:
; xmm0,xmm1 have the first 2 qwords of args
movq rcx, xmm0 ; RCX NO LONGER POINTS AT argbuf
movq rdx, xmm1
; xmm2 still has the 2nd 16 bytes of args
;movhlps xmm3, xmm2 ; don't use: false dependency on old value and we just used it.
pshufd xmm3, xmm2, 0xee ; xmm3 = high 64 bits of xmm2. (0xee = _MM_SHUFFLE(3,2,3,2))
; movq xmm3, r9 ; nah, can be multiple uops on AMD
; r8,r9 set earlier
call r10
leave ; restore RSP to its value on entry
ret
; could handle this branchlessly, but copy loop still needs to run zero times
; unless we bump up the min arg_bytes to 48 and sometimes copy an unnecessary 16 bytes
; As much work as possible is before the first branch, so it can happen while a mispredict recovers
.small_args:
sub rsp, rax ; reserve shadow space
;rsp still aligned by 16 after push rbp
jmp .done_arg_copying
;byte count. This wrapper is 82 bytes; would be nice to fit it in 80 so we don't waste 14 bytes before the next function.
;e.g. maybe mov rcx, [rcx] instead of movq rcx, xmm0
;mov eax, $-asmwrapper
align 16
```
This does assemble (on Godbolt with NASM), but I haven't tested it.
It should perform pretty well, but if you get mispredicts around the cutoff from <= 32 bytes to > 32 bytes, change the branching so it always copies an extra 16 bytes. (Uncomment the `cmp`/`cmovb` in the version on Godbolt, but the copy loop still needs to start at 32 bytes into each buffer.)
If you often pass very few args, **the 16-byte loads might hit a store-forwarding stall from two narrow stores to one wide reload**, causing about an extra 8 cycles of latency. This isn't normally a throughput problem, but it can increase the latency before the called function can access its args. If out-of-order execution can't hide that, then it's worth using more load uops to load each 8-byte arg separately. (Especially into integer registers, and then from there to XMM, if the args are mostly integer. That will have lower latency than mem -> xmm -> integer.)
If you have more than a couple args, though, hopefully the first few have committed to L1d and no longer need store forwarding by the time the asm wrapper runs. Or there's enough copying of later args that the first 2 args finish their load + ALU chain early enough not to delay the critical path inside the called function.
Of course, if performance was a huge issue, you'd write the code that figures out the args in asm so you didn't need this copy stuff, or use a library interface with a fixed function signature that a C compiler can call directly. I did try to make this suck as little as possible on modern Intel / AMD mainstream CPUs (<http://agner.org/optimize/>), but I didn't benchmark it or tune it, so probably it could be improved with some time spent profiling it, especially for some real use-case.
If you know that FP args aren't a possibility for the first 4, you can simplify by just loading integer regs.
Upvotes: 2
|
2018/03/14
| 428 | 1,681 |
<issue_start>username_0: I'm in the process of moving a website from HostGator to Amazon EC2. Front end and back end are both moved. I added a Hosted Zone in Amazon Route 53 and updated my nameservers in HostGator. Unfortunately, the site won't load.
I ran a check with Zonemaster and received the following warnings:
>
> All nameservers in the delegation have IPv4 addresses in the same AS
> (16509). All nameservers in the delegation have IPv6 addresses in the
> same AS (16509). All nameservers in the delegation are in the same AS
> (16509).
>
>
>
I've searched but can't figure out what "AS" means in this context. Would love some help to point me in the right direction for troubleshooting.
The domain in question is tektonbody.com.
Thanks!<issue_comment>username_1: AS refers to [Autonomous System](https://en.wikipedia.org/wiki/Autonomous_system_(Internet)) which in rough terms means "block of IPs that share common routing" or in more general terms "are from the same allocation block".
You're getting this warning because the nameservers are all in the same block and if that single route goes offline for some reason, all your nameservers go down. It's generally best to spread these out geographically to minimize your exposure to localized events.
They're just looking out for you here. Typically you should have 3-4 different nameservers on different backbone providers in different regions so that no single failure, even at the provider level, can take them all down.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Adding an A record pointing to the IP of our ec2 instance fixed this issue for me
We are migrating from OVH to AWS
Upvotes: 0
|
2018/03/14
| 592 | 1,867 |
<issue_start>username_0: I am supposed to write a program that checks whether a date is valid. However, my issue here is with the output: I am unable to display whether the user has input a valid or invalid date or month.
```
String date;
int dd,mm,yyyy;
boolean leapyear,validdate;
if(mm >=1 || mm <=12) // to check if user input month which is less than 1 or more than 12
{
System.out.println(mm+" is an invalid month");
}
if((mm == 4 || mm == 6 || mm == 9 || mm == 11) && (dd <= 30)) // months with 30 days
{
validdate = true;
System.out.println(mm + "/" + dd + "/" + yyyy + " is a valid date.");
}
//For months with 31 days
else if((mm == 1 || mm == 3 || mm== 5 || mm == 7 || mm == 8 || mm == 10 || mm == 12) && (dd <= 31))
{
validdate = true;
System.out.println(mm + "/"+ dd + "/" + yyyy + " is a valid date.");
}
else
{
System.out.println("Invalid date!");
}
}
}
```
<issue_comment>username_1: ```
if(mm >=1 || mm <=12)
```
should be
```
if (mm < 1 || mm > 12)
```
Otherwise **every** value of `mm` will make the condition `true`, reporting every month as "invalid".
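Putting it together, a sketch of the corrected branching (leap-year/February handling left out, as in the question):

```
if (mm < 1 || mm > 12) {
    System.out.println(mm + " is an invalid month");
} else if ((mm == 4 || mm == 6 || mm == 9 || mm == 11) && dd >= 1 && dd <= 30) {
    System.out.println(mm + "/" + dd + "/" + yyyy + " is a valid date.");
} else if ((mm == 1 || mm == 3 || mm == 5 || mm == 7 || mm == 8 || mm == 10 || mm == 12)
        && dd >= 1 && dd <= 31) {
    System.out.println(mm + "/" + dd + "/" + yyyy + " is a valid date.");
} else {
    System.out.println("Invalid date!");
}
```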
Upvotes: 2 <issue_comment>username_2: Instead of checking each individual field (year, month and day) manually, you can always use `LocalDate.of(int year, int month, int dayOfMonth)`. For example:
```
LocalDate date;
try {
date = LocalDate.of(yyyy, mm, dd);
} catch(java.time.DateTimeException e) {
System.out.println(e.getMessage());
}
```
It will throw an exception if the date is invalid:
```
LocalDate localDate = LocalDate.of(2018,2,31);
| java.time.DateTimeException thrown: Invalid date 'FEBRUARY 31'
| at LocalDate.create (LocalDate.java:459)
| at LocalDate.of (LocalDate.java:271)
| at (#3:1)
```
Upvotes: 0
|
2018/03/14
| 5,428 | 16,491 |
<issue_start>username_0: I'm using Spring Boot with a MySQL database.
I am trying to filter-search through a table of teams. Initially this was a Thymeleaf table (see my related SO questions), but after exhausting all the resources available on the internet with no success, I decided to implement the search differently.
I found a really helpful website that has a practical, relevant example of what I want to implement.
<http://javasampleapproach.com/frontend/bootstrap/bootstrap-filter-table-jquery-springboot-restapi>
I have a working standalone example of it, which is fine and good, but when I implement it in my project I get a few thought-provoking errors. Since I am a total noob, I am seeking the generous help of Stack Overflow users.
I have also tried searching Stack Overflow for a solution, with no success.
[Error resolving template "index", template might not exist or might not be accessible by any of the configured Template Resolvers](https://stackoverflow.com/questions/31944355/error-resolving-template-index-template-might-not-exist-or-might-not-be-acces)
[Thymeleaf cannot detect templates inside spring-boot project](https://stackoverflow.com/questions/42947454/thymeleaf-cannot-detect-templates-inside-spring-boot-project/42947560)
When I go to the relevant webpage I immediately receive an error dialog (see below, along with the console errors); you can also see that the table is not being populated at all and is in fact empty:
[](https://i.stack.imgur.com/UfNYQ.png)
[](https://i.stack.imgur.com/kTD5I.png)
My server side code error:
```
org.thymeleaf.exceptions.TemplateInputException: Error resolving template "all", template might not exist or might not be accessible by any of the configured Template Resolvers
at org.thymeleaf.TemplateRepository.getTemplate(TemplateRepository.java:246) ~[thymeleaf-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.thymeleaf.TemplateEngine.process(TemplateEngine.java:1104) ~[thymeleaf-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.thymeleaf.TemplateEngine.process(TemplateEngine.java:1060) ~[thymeleaf-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.thymeleaf.TemplateEngine.process(TemplateEngine.java:1011) ~[thymeleaf-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.thymeleaf.spring4.view.ThymeleafView.renderFragment(ThymeleafView.java:335) ~[thymeleaf-spring4-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.thymeleaf.spring4.view.ThymeleafView.render(ThymeleafView.java:190) ~[thymeleaf-spring4-2.1.5.RELEASE.jar:2.1.5.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.render(DispatcherServlet.java:1286) ~[spring-webmvc-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.processDispatchResult(DispatcherServlet.java:1041) ~[spring-webmvc-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:984) ~[spring-webmvc-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:901) ~[spring-webmvc-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970) ~[spring-webmvc-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861) ~[spring-webmvc-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:635) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846) ~[spring-webmvc-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) ~[tomcat-embed-websocket-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:317) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:127) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:91) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:137) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:170) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:63) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:200) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:116) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:64) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:105) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:56) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:331) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:214) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:177) ~[spring-security-web-4.2.3.RELEASE.jar:4.2.3.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:347) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:263) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:108) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:81) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:197) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-4.3.12.RELEASE.jar:4.3.12.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199) ~[tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:478) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:803) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:868) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459) [tomcat-embed-core-8.5.23.jar:8.5.23]
at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-8.5.23.jar:8.5.23]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_60]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_60]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.23.jar:8.5.23]
at java.lang.Thread.run(Unknown Source) [na:1.8.0_60]
```
Here is my html code:
```
Spring Boot - DELETE-UPDATE AJAX Example
Filter Table
------------
Type some text to search the table for **Id**, **Name**, **Address**, **Level**:
| Id | Name | Address | Level |
| --- | --- | --- | --- |
$(document).ready(function() {
// DO GET
$.ajax({
type : "GET",
url : "/all",
success: function(result){
$.each(result, function(i, team){
var customerRow = '<tr>' +
'<td>' + team.id + '</td>' +
'<td>' + team.teamName.toUpperCase() + '</td>' +
'<td>' + team.teamAddress + '</td>' +
'<td>' + team.level + '</td>' +
'</tr>';
$('#customerTable tbody').append(customerRow);
});
$( "#customerTable tbody tr:odd" ).addClass("info");
$( "#customerTable tbody tr:even" ).addClass("success");
},
error : function(e) {
alert("ERROR: ", e);
console.log("ERROR: ", e);
}
});
// do Filter on View
$("#inputFilter").on("keyup", function() {
var inputValue = $(this).val().toLowerCase();
$("#customerTable tr").filter(function() {
$(this).toggle($(this).text().toLowerCase().indexOf(inputValue) > -1)
});
});
})
```
My relevant controller:
```
@RequestMapping(value="/showteams2", method=RequestMethod.GET)
public String aYeh()
{
return "teams2";
}
@GetMapping(value = "/all")
public List getResource() {
List cueList = teamRepository.findAll();
return cueList;
}
```
My project's setup:
[](https://i.stack.imgur.com/U3t93.png)<issue_comment>username_1: Need few more details on how you are setting your projects.
1. Where is your HTML located? Is it under /resources/templates?
2. you are missing on your html.
3. try commenting out your jQuery Ajax call and see if your initial html is loaded properly or not.
4. Is your controller a REST controller? @RestController is a convenience annotation that does nothing more than adding @Controller and @ResponseBody.
Check these and it should fix your problem.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I changed my "/all" controller ever so slightly:
```
@RequestMapping(value="/all", method=RequestMethod.GET)
@ResponseBody
public List getResource() {
List cueList = teamRepository.findAll();
return cueList;
}
```
By adding the ResponseBody annotation it solved all issues and the search works perfectly!
Upvotes: 3
|
2018/03/14
| 620 | 2,188 |
<issue_start>username_0: I have two datasets A and B:
Dataset A (called Sales) has the following data:
```
ID Person Sales
1 1 100
2 2 300
3 3 400
4 4 200
5 5 50
```
Dataset B (called Account_Scenarios) has the following data (note - there are a lot more rows in dataset B; I have just included the first 6):
```
ID Scenario Person Upkeep
1 1 1 -10
2 1 2 -200
3 2 1 -150
4 3 4 -50
5 3 3 -100
6 4 5 -500
```
I want to add a column called 'Profit' in dataset B such that I am able to see the profit per person per scenario (Profit = Sales + Upkeep). For example as below:
```
ID Scenario Person Upkeep Profit
1 1 1 -10 90
2 1 2 -200 100
3 2 1 -150 -50
4 3 4 -50 150
5 3 3 -100 300
6 4 5 -500 -450
```
What is the best way to do this? I am new to R and am trying to use an aggregate function, but it requires the arguments to be the same length.
```
Account_Scenarios$Profit <- aggregate(Sales[,c('Sales')], Account_Scenarios[,c('Upkeep')], by=list(Sales$Person), 'sum')
```
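For reference, a merge-based sketch of the computation being asked for (base R, column names taken from the question):

```
# join the Sales column onto Account_Scenarios by Person
merged <- merge(Account_Scenarios, Sales[, c("Person", "Sales")], by = "Person")
merged$Profit <- merged$Sales + merged$Upkeep
# merge() sorts by Person, so restore the original ID order
merged <- merged[order(merged$ID), ]
```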
|
2018/03/14
| 527 | 1,646 |
<issue_start>username_0: I'm a total beginner at JavaScript, just looking for some help on getting this basic problem to work. I want to take a Fahrenheit temperature number from a field, convert it to Celsius, then output the result. Nothing happens on button click.
HTML:
```js
function myFunction() {
var t0 = document.getElementById('temp1').value;
t0 = (t0-32)*.5556;
var t1 = document.getElementById('temp2').value;
t1 = (t1-32)*.5556;
var t2 = document.getElementById('temp3').value;
t2 = (t2-32)*.5556;
var t3 = document.getElementById('temp4').value;
t3 = (t3-32)*.5556;
var t4 = document.getElementById('temp5').value;
t4 = (t4-32)*.5556;
var text = "Temp 1: " + t0 + "
";
text += "Temp 2: " + t1 + "
";
text += "Temp 3: " + t2 + "
";
text += "Temp 4: " + t3 + "
";
text += "Temp 5: " + t4 + "
";
document.getElementByID("output").innerHTML = text;
}
```
```html
Temp 1: <input type="text" id="temp1"><br>
Temp 2: <input type="text" id="temp2"><br>
Temp 3: <input type="text" id="temp3"><br>
Temp 4: <input type="text" id="temp4"><br>
Temp 5: <input type="text" id="temp5"><br>
<p>Click the button to convert from Fahrenheit to Celsius</p>
<button onclick="myFunction()">Convert</button>
<p id="output"></p>
```<issue_comment>username_1: ```
document.getElementByID("output").innerHTML = text;
You have a typo so correct above line to like this
document.getElementById("output").innerHTML = text;
```
Upvotes: 0 <issue_comment>username_2: Besides fixing the typo (changing `ID` to `Id`),
I really believe it's just the way you wired up your code. Make sure you place your script inside the `head` tag. Here is a jsfiddle example with the script loaded in `head`: <https://jsfiddle.net/56eet52k/9/>
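For reference, a compact corrected sketch (element IDs assumed to match the question's `temp1`..`temp5` and `output`):

```js
function myFunction() {
  var text = "";
  for (var i = 1; i <= 5; i++) {
    var f = parseFloat(document.getElementById("temp" + i).value);
    var c = (f - 32) * 5 / 9;           // Fahrenheit to Celsius
    text += "Temp " + i + ": " + c + "<br>";
  }
  document.getElementById("output").innerHTML = text; // note: getElementById
}
```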
Upvotes: 3 [selected_answer]
|
2018/03/14
| 737 | 2,516 |
<issue_start>username_0: I'd like to start a release leveraging the VSTS Rest API defined here:
<https://www.visualstudio.com/en-us/docs/integrate/api/rm/releases#create-a-release>
but I need to set some of the variable values when I create the release.
Looking at the ReleaseMetadata (<https://www.visualstudio.com/en-us/docs/integrate/api/rm/contracts#ReleaseStartMetadata>) I see there is a properties collection. Is this where I'd set the variable values? Is there any special naming convention I need to use in the property name to translate it to the variable name (like ##VSTS:[])?<issue_comment>username_1: Unfortunately, for now it is **not possible to set variable values** when you create a release. This is unlike build definitions in VSTS, where variables are allowed to change their values at queue time.
There has been a related uservoice, you could vote for it:
>
> **Change release variables' value when deploy a release**
>
>
> <https://visualstudio.uservoice.com/forums/330519-visual-studio-team-services/suggestions/16616269-change-release-variables-value-when-deploy-a-rele>
>
>
>
As a workaround, you could create a draft release, then update the variables' values; after that, you can start the release by [Modifying the status of a release](https://www.visualstudio.com/en-us/docs/integrate/api/rm/releases#modify-the-status-of-a-release) (active):
```
{
"status": "Active"
}
```
More detail info please take a look this similar question: [Release Variables through REST API - Visual Studio Team Services](https://stackoverflow.com/questions/42677286/release-variables-through-rest-api-visual-studio-team-services)
Upvotes: 0 <issue_comment>username_2: Now you can define VSTS Release variables to be "Settable at release time" as shown below.
[](https://i.stack.imgur.com/iwj8N.png)
Then you can use below json syntax in [VSTS REST API call](https://learn.microsoft.com/en-us/rest/api/vsts/release/releases/create?view=vsts-rest-4.1) request body to pass the variable values to set at the time of creating the release.
```
"variables": {
"Variable1Name": {
"value": "Variable1Value"
},
"Variable2Name": {
"value": "Variable2Value"
}
}
```
[](https://i.stack.imgur.com/EPvcM.png)
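For context, a sketch of a complete create-release request body (the `definitionId`, description, and values are placeholders):

```
{
  "definitionId": 1,
  "description": "Release created via the REST API",
  "isDraft": false,
  "variables": {
    "Variable1Name": { "value": "Variable1Value" },
    "Variable2Name": { "value": "Variable2Value" }
  }
}
```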
For more information refer the [post here](http://chamindac.blogspot.com/2018/08/setting-vsts-release-variable-values-at.html).
Upvotes: 3
|
2018/03/14
| 606 | 1,999 |
<issue_start>username_0: Newbie alert!
I'm trying to understand the variable scope for a promise inside a for loop.
Consider the following code:
```
function sleep(ms) {
return new Promise(resolve =>
setTimeout(resolve, ms)
);
}
for (var i =0; i<4; i++){
sleep(1000).then(() => console.log(i));
}
```
The promise is called 4 times at once and then responds with:
```
4
4
4
4
```
How can I keep a reference to the `i` variable in order to get
```
0
1
2
3
```
Or whatever execution order actually took place.<issue_comment>username_1: You can pass the `i` into the sleep function and resolve it.
```
function sleep(i, ms) {
return new Promise(resolve =>
setTimeout(() => {
return resolve(i);
}, ms)
);
}
for (var i = 0; i < 4; i++){
sleep(i, 1000).then((data) => console.log(data));
}
```
Upvotes: 1 <issue_comment>username_2: Your problem is just how you're calling your sleep function. You must use the `await` keyword inside a function marked `async`.
This should be how your code looks:
```js
function sleep(ms) {
return new Promise(resolve =>
setTimeout(resolve, ms)
);
}
async function printSleep(){
for (var i =0; i<4; i++){
await sleep(1000).then(() => console.log(i));
}
}
printSleep();
```
The key part you're missing is that you must use `await` to actually make your code wait for the sleep function to finish. If you don't, the loop queues the callbacks and continues executing synchronously, causing your `then` callbacks to execute after the for loop has finished.
Upvotes: 1 <issue_comment>username_3: As pointed out by [ASDFGerte](https://stackoverflow.com/users/6692606/asdfgerte), by changing **var** to **let**, each loop iteration gets its own binding of `i`, so the callbacks output the desired values.
```
function sleep(ms) {
return new Promise(resolve =>
setTimeout(resolve, ms)
);
}
for (let i=0; i<4; i++){
sleep(1000).then(() => console.log(i));
}
```
Upvotes: 0
|
2018/03/14
| 1,098 | 3,800 |
<issue_start>username_0: I have a TargetMarket class that has been seeded with all of the countries in the world like so
```
TargetMarket.create([
{name: 'Andorra'},
{name: 'United Arab Emirates'},
{name: 'Afghanistan'},
{name: 'Antigua and Barbuda'},
....
....
{name: 'South Africa'},
{name: 'Zambia'},
{name: 'Zimbabwe'}
])
```
A user can then select up to 5 countries they wish to have as a target market for their Company.
On the public search page, I have a dropdown selection of all the TargetMarkets.
The current code reads as
```
<%= f.select :target_markets_id_in, TargetMarket.all.map{ |u| [u.name, u.id] }, { include_blank: "All" }, {class: 'selectize-this', multiple: true} %>
```
However, this obviously shows up ALL of the countries. I only want the countries that have been used as a target market by a company to populate the dropdown.
For example; A company has target markets of "Ireland", "Belgium", "Australia" and "Japan".
In the target_markets search option, I only want Ireland, Belgium, Australia and Japan to appear as possible search options, as they are the only countries used in the database.
Is this possible?
Something like
```
<%= f.select target_market_ids_in, TargetMarkets.where('name' count >= 1) %>
```
Edit
======
Relationship
------------
```
class Company < ApplicationRecord
has_and_belongs_to_many :target_markets
accepts_nested_attributes_for :target_markets, allow_destroy: true
end
```<issue_comment>username_1: When a user selects up to 5 countries they wish to have as a target market for their Company, do you store that selection as an association between the Company and the target market? If you do, then you can return only the target markets associated with the current company
```
class Company < ActiveRecord::Base
has_many :target_markets
end
```
Adding the target markets
```
@company.target_markets << TargetMarket.find_by_name('Finland')
```
Displaying the target markets
```
# assuming `@company` is set based on the route (`/company/:id`), `current_user` settings, or however you determine what the current company is
<%= f.select :target_markets_id_in, @company.target_markets.map{ |u| [u.name, u.id] }, { include_blank: "All" }, {class: 'selectize-this', multiple: true} %>
```
Further reading
===============
* [adding records with a has\_many association (StackOverflow answer)](https://stackoverflow.com/a/18114492/3012550)
+ Note that because you create the `TargetMarket`s in the seed, you shouldn't be `create`ing them again when you associate them, you should find them from the database and associate the found records.
* [the has-many association](http://guides.rubyonrails.org/association_basics.html#the-has-many-association)
* [general info on associations](http://api.rubyonrails.org/classes/ActiveRecord/Associations/ClassMethods.html)
Upvotes: 0 <issue_comment>username_2: JOINing and DISTINCTing will give you a unique list of objects with associations. This is because the default Rails' ActiveRecord [join](http://guides.rubyonrails.org/active_record_querying.html#joins) is an `INNER JOIN`, which filters for the presence of the association. ActiveRecord will still only `SELECT` columns from the original table, so `DISTINCT` will return a unique list of objects.
In your case, you need to add a habtm association to TargetMarket if it does not already exist:
```
class TargetMarket < ApplicationRecord
has_and_belongs_to_many :companies
end
```
Then replace this line:
```
TargetMarket.all.map{ |u| u.name, u.id] }
```
with:
```
TargetMarket.joins(:companies).distinct.map { |u| [u.name, u.id] }
```
While you're at it, you may want to take a peek at the result of
```
TargetMarket.joins(:companies).distinct.to_sql
```
Upvotes: 2 [selected_answer]
|
2018/03/14
| 649 | 2,494 |
<issue_start>username_0: I've created a branch **BranchA** from **develop**. On this branch, I've built a feature AND fixed an issue. Then I've created a PR against the original repo, because I'm just a contributor.
The PR is not yet accepted because we are still working on it.
Now, I'm working on a new feature, so I've created a branch from **develop** too : **BranchB**.
But I cannot continue my development, because **BranchB** doesn't have the fix for the issue that I fixed on **BranchA**.
So I would like to fix the same issue in the new branch (**BranchB**). But I think it's not the best idea to do the same thing twice.
What's the best practice, and how can I get the fix from **BranchA** into **BranchB**?
Just as a reminder, **BranchA** is still under active development.
Thank you !<issue_comment>username_1: Just check out branchA, copy the code into a separate folder outside the repo, then check out branchB and plug in the code you need. Or just `git checkout -b branchB` from branchA, but then you would be creating a possibly throw-away codebase in case your code from branchA is rejected.
Upvotes: 0 <issue_comment>username_2: I think a rebase could work.
checkout BranchB and rebase your BranchB on BranchA with:
```
git rebase BranchA
```
and you will have BranchA's history on your BranchB.
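In full (a sketch; note that rebasing rewrites BranchB's commits, so avoid it if BranchB has already been pushed and shared):

```
git checkout BranchB
git rebase BranchA
```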
Upvotes: 2 <issue_comment>username_3: [cherry-pick](https://git-scm.com/docs/git-cherry-pick) those few commits that fixes the issue:
```
git cherry-pick commit1 commit2 etc…
```
If you need all commits between commit1 and commit2:
```
git cherry-pick commit1~..commit2
```
Note the tilde!
Upvotes: 1 <issue_comment>username_4: Another way is to partially check out the file containing the fix while having **BranchB** checked out:
```
git checkout <revision> -- path/of/file/containing/fix
```
The revision here is a commitish, meaning that it has to point to a commit. So executing:
```
git checkout BranchA -- path/of/file/containing/fix
```
would check out the state of path/of/file/containing/fix based on **BranchA**'s state onto your current working tree. The change will already be staged after the partial checkout, and it is up to you what to do with it from there on.
Keep in mind that if you have any un-staged changes to the file containing the fix before the partial checkout, they will get overwritten. So, depending on what you are comfortable with, either stage and commit, just stage, or stash the changes before performing the partial checkout.
Upvotes: 1
|
2018/03/14
| 903 | 3,403 |
<issue_start>username_0: Let's say I am describing my own UIView, let's call it a HeaderView. I want the HeaderView to have the exact same properties, but only differ in label text. Here is how I currently have it:
```
private let headerView: UIView = {
let screenSize = UIScreen.main.bounds
let screenWidth = screenSize.width
let screenHeight = screenSize.height
let view = UIView()
view.backgroundColor = .white
view.heightAnchor.constraint(equalToConstant: 65).isActive = true
let label = UILabel()
label.font = UIFont.systemFont(ofSize: 30)
label.textAlignment = .left
label.textColor = .black
label.text = "Search"
view.addSubview(label)
label.translatesAutoresizingMaskIntoConstraints = false
label.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 8).isActive = true
label.centerYAnchor.constraint(equalTo: view.centerYAnchor, constant: 5).isActive = true
return view
}()
```
How I'd use it:
```
view.addSubview(headerView)
headerView.translatesAutoresizingMaskIntoConstraints = false
headerView.trailingAnchor.constraint(equalTo: view.trailingAnchor).isActive = true
headerView.leadingAnchor.constraint(equalTo: view.leadingAnchor).isActive = true
headerView.topAnchor.constraint(equalTo: view.topAnchor).isActive = true
```
What if I want 3 of these header views with varying text? How would I make it into a reusable programmatic view?<issue_comment>username_1: Instead of a variable you could have a function:
```
func buildHeaderView(withText text: String) -> UIView {
let screenSize = UIScreen.main.bounds
let screenWidth = screenSize.width
let screenHeight = screenSize.height
let view = UIView()
view.backgroundColor = .white
view.heightAnchor.constraint(equalToConstant: 65).isActive = true
let label = UILabel()
label.font = UIFont.systemFont(ofSize: 30)
label.textAlignment = .left
label.textColor = .black
label.text = text
view.addSubview(label)
label.translatesAutoresizingMaskIntoConstraints = false
label.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 8).isActive = true
label.centerYAnchor.constraint(equalTo: view.centerYAnchor, constant: 5).isActive = true
return view
}
```
Now you could use this function like:
```
let searchHeaderView = buildHeaderView(withText: "Search")
view.addSubview(searchHeaderView)
let otherView = buildHeaderView(withText: "Other")
```
Upvotes: 0 <issue_comment>username_2: You can create a subclass of `UIView` and reuse it anywhere
```
class HeaderView: UIView {
let innerview = UIView()
let innerlabel = UILabel()
override init(frame: CGRect) {
super.init(frame: frame)
sharedLayout()
}
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
sharedLayout()
}
private func sharedLayout() {
self.addSubview(innerview)
self.innerview.backgroundColor = UIColor.red
innerview.translatesAutoresizingMaskIntoConstraints = false
innerview.trailingAnchor.constraint(equalTo:self.trailingAnchor).isActive = true
innerview.leadingAnchor.constraint(equalTo: self.leadingAnchor).isActive = true
innerview.topAnchor.constraint(equalTo: self.topAnchor).isActive = true
innerview.bottomAnchor.constraint(equalTo: self.bottomAnchor).isActive = true
// configure other items here
}
}
```
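A possible usage sketch, assuming `sharedLayout()` also adds and constrains `innerlabel` (the answer leaves that under "configure other items here"); the convenience initializer here is hypothetical:

```
extension HeaderView {
    // hypothetical convenience initializer for the varying text
    convenience init(title: String) {
        self.init(frame: .zero)
        innerlabel.text = title
    }
}

let searchHeader = HeaderView(title: "Search")
let friendsHeader = HeaderView(title: "Friends")
view.addSubview(searchHeader)
```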
Upvotes: 2
|
2018/03/14
| 6,487 | 16,342 |
<issue_start>username_0: I have a local Git repository with three annotated tags: `v0.1.0`, `v0.1.1`, and `v0.1.2`.
When I view my project's history with `gitk` (*Repository → Visualize master's history*), I can see each tag assigned to the proper commit.
[](https://i.stack.imgur.com/H039W.png)
However, when I try to checkout my tags in Git GUI (*Branch → Checkout... → Tags*), the tag for `v0.1.1` doesn't appear.
[](https://i.stack.imgur.com/ZG4xd.png)
When I went to check each tag in gitk, I noticed that the details for `v0.1.0` and `v0.1.2` listed them as `type commit`, while the tag for `v0.1.1` was listed as `type tag`.
[](https://i.stack.imgur.com/N4Djb.png)
It's worth noting that I've rewritten history on this tag to fix a typo. I edited my tag message using [`git tag -f -m ""`](https://stackoverflow.com/a/7813236/3357935).
**Why can't I see my `v0.1.1` tag when checking out with Git GUI? Why does it appear as `type tag`?**<issue_comment>username_1: Tags can [point to any object](https://git-scm.com/docs/git-tag#git-tag-ltobjectgt) in the git repository. If your tag type is "tag", then you have a tag pointing to another tag.
**Lightweight tags *are not* objects**; thus, they have no hash ID of their own and nothing else (like another tag) can point to them. They are literally just easy-to-remember names pointing to some object's hash ID, a little less than a branch name.
However, **annotated tags *are* objects**; they are like commits, with their own message, author, created date and, most importantly, their own hash ID. This means that, somewhat confusingly, they can be tagged.
Sure enough, as you described in [your comment](https://stackoverflow.com/questions/49283734/why-cant-i-checkout-my-tag-from-git-gui#comment85570763_49283734), this is exactly what happened. Acting on the advice found in [How do you rename a Git tag?](https://stackoverflow.com/q/1028649/3357935), you did the following:
```
# avoid this...
git tag new old
```
Since `old` was an annotated tag, the target for the `new` tag will be the `old` tag, not the commit that it was pointing to.
If you want to rename an annotated tag, you should use
```
git tag -a new old^{}
```
`old^{}` will [*dereference the tag recursively until a non-tag object is found*](https://git-scm.com/docs/gitrevisions#gitrevisions-emltrevgtemegemv0998em) (in our case, a commit), and use that as the target object for `new`.
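For completeness, a full rename of an already-published annotated tag might look like this (assuming a remote named `origin`):

```
git tag -a new old^{}              # re-tag the underlying commit
git tag -d old                     # drop the old name locally
git push origin new :refs/tags/old # publish the new tag, delete the old one remotely
```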
---
To further illustrate: let's say you have a repo... oh, like this one: <https://github.com/cyborgx37/sandbox/releases>
In this repo you create an annotated tag like so:
```
> git tag -m "Version 0.1-beat" v0.1
```
Oh shoot... you misspelled "beta" and also you've decided that you want the tag name to be `v0.1-b`. Since this has already been published, you decide to do the [sane thing](https://git-scm.com/docs/git-tag#_on_re_tagging) and just create a new tag. Following [advice you found on the internet](https://stackoverflow.com/q/1028649/3357935), you create the tag you actually wanted (I appended `__tag` for reasons that will become clear) by copying the first tag:
```
> git tag -m "Version 0.1-beta" v0.1-b__tag v0.1
```
Only, these are annotated tags, meaning they are actual objects. So when you created `v0.1-b__tag`, you actually pointed it at `v0.1`. You can see the result clearly using `cat-file` and `show`.
Here's **`v0.1`**:
```bash
> git cat-file -p v0.1
object 5cf4de319291579d4416da8e0eba8a2973f8b0cf
type commit # ⇦ v0.1 is a tag which points to a commit
tag v0.1
tagger username_1 1521058797 -0400
Version 0.1-beat
```
```bash
> git show v0.1
# v0.1 is a tag
# ⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩
tag v0.1
Tagger: username_1
Date: Wed Mar 14 16:19:57 2018 -0400
Version 0.1-beat
# which is pointing directly to a commit
# ⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩
commit 5cf4de319291579d4416da8e0eba8a2973f8b0cf (HEAD -> master, tag: v0.1-b__tag, tag: v0.1, origin/master)
Author: username_1
Date: Tue Oct 10 12:17:00 2017 -0400
add gitignore
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..42d9955
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+file.txt
```
Notice that **`v0.1-b__tag`** is different both in its target type as well as its history:
```bash
> git cat-file -p v0.1-b__tag
object 889b82584b2294486f4956dfea17b05e6224fb7f
type tag # ⇦ v0.1-b__tag is a tag which points to a tag
tag v0.1-b__tag
tagger username_1 1521059058 -0400
Version 0.1-beta
```
```bash
> git show v0.1-b__tag
# v0.1-b__tag is a tag
# ⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩
tag v0.1-b__tag
Tagger: username_1
Date: Wed Mar 14 16:24:18 2018 -0400
Version 0.1-beta
# which is pointing to the v0.1 tag
# ⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩
tag v0.1
Tagger: username_1
Date: Wed Mar 14 16:19:57 2018 -0400
Version 0.1-beat
# which is pointing to the intended target commit
# ⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩⇩
commit 5cf4de319291579d4416da8e0eba8a2973f8b0cf (HEAD -> master, tag: v0.1-b__tag, tag: v0.1, origin/master)
Author: username_1
Date: Tue Oct 10 12:17:00 2017 -0400
add gitignore
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..42d9955
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+file.txt
```
Apparently Git GUI is rather selective about what types of objects can be checked out (commits, not tags), so it's ignoring your tag pointing at another tag.
If you use the `git tag -a new old^{}` approach I suggested above, you can avoid the drama and get what you wanted in the first place. I'll create a new tag, **`v0.1-b__commit`** that points to `v0.1`'s commit, rather than to `v0.1` directly:
```
> git tag -m "Version 0.1-beta" v0.1-b__commit v0.1^{}
```
```
> git cat-file -p v0.1-b__commit
object 5cf4de319291579d4416da8e0eba8a2973f8b0cf
type commit
tag v0.1-b__commit
tagger username_1 1521059039 -0400
Version 0.1-beta
```
```
> git show v0.1-b__commit
tag v0.1-b__commit
Tagger: username_1
Date: Wed Mar 14 16:23:59 2018 -0400
Version 0.1-beta
commit 5cf4de319291579d4416da8e0eba8a2973f8b0cf (HEAD -> master, tag: v0.1-b__tag, tag: v0.1-b__commit, tag: v0.1, origin/master)
Author: username_1
Date: Tue Oct 10 12:17:00 2017 -0400
add gitignore
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..42d9955
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+file.txt
```
Upvotes: 6 [selected_answer]<issue_comment>username_2: I don't normally use any of the Git GUIs so the GUI-specific parts, I can't really answer—but your observation that there's a difference here between *annotated tags* and *lightweight tags* is spot-on, and yes, there should be some warning(s) in some of the answers to [How do you rename a Git tag?](https://stackoverflow.com/q/1028649/1256452)
>
> When I went to check each tag in gitk, I noticed that the tag details were slightly different. The details for `v0.1.0` and `v0.1.2` listed them as `type commit`, while the tag for `v0.1.1` was listed as type tag. I suspect this may be the cause of my problem ...
>
>
>
Let's clear up the difference between these, and talk about the mechanisms behind tags.
In Git, the "true name" of any actual commit is the commit's hash ID. Hash IDs are long, ugly, impossible-to-remember strings, such as the `ca5728b6...` showing in one of your GUI panes. I made a new, empty repository and made one commit in it:
```
$ git init
Initialized empty Git repository in ...
$ echo for testing tags > README
$ git add README
$ git commit -m initial
[master (root-commit) a912caa] initial
1 file changed, 1 insertion(+)
create mode 100644 README
$ git rev-parse HEAD
a912caa83de69ef8e5e3e06c3d74b6c409068572
```
This identifies a commit, and we can see that using `git cat-file -t`, which tells us about the *type* of each internal Git object:
```
$ git cat-file -t a912c
commit
```
You can abbreviate the big ugly IDs as long as the abbreviation is unique and is at least four letters.¹
Anyway, now let's make two different tags, pointing to this same commit:
```
$ git tag -m "an annotated tag" annotag
$ git tag lightweight
```
and use `git for-each-ref` to inspect them:
```
$ git for-each-ref
a912caa83de69ef8e5e3e06c3d74b6c409068572 commit refs/heads/master
dc4695ffede0a877fdc61dc06f5ad5c6d5cfc356 tag refs/tags/annotag
a912caa83de69ef8e5e3e06c3d74b6c409068572 commit refs/tags/lightweight
```
The annotated tag has a different hash ID than the lightweight tag.
The trick here is that the lightweight tag creates *only* a name in the reference database, in this case, `refs/tags/lightweight`. Names in the reference database store hash IDs, so this one stores the hash ID of our single commit.
An annotated tag, on the other hand, exists as an actual repository object, so we can inspect its type and see its contents, using `git cat-file`:
```
$ git cat-file -t dc4695ffede0a877fdc61dc06f5ad5c6d5cfc356
tag
$ git cat-file -p dc4695ffede0a877fdc61dc06f5ad5c6d5cfc356 | sed 's/@/ /'
object a912caa83de69ef8e5e3e06c3d74b6c409068572
type commit
tag annotag
tagger <NAME> 1521059496 -0700
an annotated tag
```
Note that the annotated tag object, in the *repository* database keyed by hash ID and containing object data, contains the hash ID of the commit. There is, in effect, also a "lightweight-like" tag named `refs/tags/annotag` pointing to the annotated tag object. But since it points to an annotated tag object, it's treated as an annotated tag.
When you make a new tag, you can point it to any existing object. Let's take a look at the objects associated with the single commit:
```
$ git cat-file -p HEAD | sed 's/@/ /'
tree 4d73be7092200632865da23347ba0af4ac6c91f7
author <NAME> 1521053169 -0700
committer <NAME> 1521053169 -0700
initial
```
This commit object refers to a tree object, which we can inspect:
```
$ git cat-file -p 4d73be7092200632865da23347ba0af4ac6c91f7
100644 blob 938c7cff87a9b753ae70d91412d3ead5c95ef932 README
```
and the tree points to a *blob* object, which we can also inspect:
```
$ git cat-file -p 938c7cff87a9b753ae70d91412d3ead5c95ef932
for testing tags
```
which is the content of the file `README`. Let's tag that:
```
$ git tag the-file 938c7cff87a9b753ae70d91412d3ead5c95ef932
```
and inspect its type:
```
$ git cat-file -t the-file
blob
```
This is not the normal use of a tag, but it's allowed. Let's try making a lightweight tag for the annotated tag:
```
$ git tag maybe-light annotag
$ git cat-file -t maybe-light
tag
$ git cat-file -p maybe-light | sed 's/@/ /'
object a912caa83de69ef8e5e3e06c3d74b6c409068572
type commit
tag annotag
tagger <NAME> 1521059496 -0700
an annotated tag
```
This `maybe-light` tag points to the annotated tag *object* that belongs to the annotated tag `annotag`. Is `maybe-light` an *annotated tag*? That depends on your point of view, doesn't it? I would say that it both is and isn't: it's a lightweight tag pointing to an annotated tag, but it's not *the* lightweight tag that goes by the same name as the annotated tag object, which claims right inside the object to be / belong-to `annotag`. But I would also say that in a way, `annotag` is both a lightweight and annotated tag: it's a lightweight tag that gives the ID of the annotated tag object. They use the same name so I'd call it an "annotated tag" and refer to `refs/tags/annotag` as the tag name, the same way `refs/tags/maybe-light` is a tag name.
In any case, we can also make more annotated tags pointing to any of these objects. If we make an annotated tag pointing to the other annotated tag, we end up with two annotated tag objects in the repository:
```
$ git tag -m "also annotated" anno2
$ git for-each-ref
a912caa83de69ef8e5e3e06c3d74b6c409068572 commit refs/heads/master
060527046d210f0219170cdc6354afe4834ddc6d tag refs/tags/anno2
dc4695ffede0a877fdc61dc06f5ad5c6d5cfc356 tag refs/tags/annotag
a912caa83de69ef8e5e3e06c3d74b6c409068572 commit refs/tags/lightweight
dc4695ffede0a877fdc61dc06f5ad5c6d5cfc356 tag refs/tags/maybe-light
938c7cff87a9b753ae70d91412d3ead5c95ef932 blob refs/tags/the-file
```
You can see from this that `anno2` has a new object, `0605...`:
```
$ git cat-file -p 0605 | sed 's/@/ /'
object a912caa83de69ef8e5e3e06c3d74b6c409068572
type commit
tag anno2
tagger <NAME> 1521060518 -0700
also annotated
```
Meanwhile, `git for-each-ref` describes the `maybe-light` tag as a `tag` rather than a `commit`: that just tells us that its immediate target object, without following through to further objects, is a tag, not a commit.
Let's make one more annotated tag, for the blob:
```
$ git tag -m "annotated blob" annoblob the-file
```
Since it's an annotated tag, `git for-each-ref` says that its type is `tag` (try it!).
Git calls the process of following a tag to its ultimate object "peeling the tag", and there is a special syntax for that:
```
$ git rev-parse annotag annotag^{} annoblob annoblob^{}
dc4695ffede0a877fdc61dc06f5ad5c6d5cfc356
a912caa83de69ef8e5e3e06c3d74b6c409068572
398b3b89e0377b8942e2f84c97a24afaad0dccb0
938c7cff87a9b753ae70d91412d3ead5c95ef932
```
Note that this is different from just following the tag *once*, as we see if we parse `anno2` this way:
```
$ git rev-parse anno2^{}
a912caa83de69ef8e5e3e06c3d74b6c409068572
```
The `a912...` is the ID of the commit, not the second annotated tag. Compare with:
```
$ git rev-parse anno2 anno2^{tag}
060527046d210f0219170cdc6354afe4834ddc6d
060527046d210f0219170cdc6354afe4834ddc6d
```
The first finds the ID of the object to which `anno2` points; the second verifies that it's a database object of type `tag`. Both are of course the same ID, and it is indeed an object of type `tag`. We can ask specifically for a commit:
```
$ git rev-parse anno2^{commit}
a912caa83de69ef8e5e3e06c3d74b6c409068572
```
but if we do this with the name `annoblob` we get an error:
```
$ git rev-parse annoblob^{commit}
error: annoblob^{commit}: expected commit type, but the object
dereferences to blob type
```
which is why the `^{}` syntax exists: it means *follow tags until you reach a non-tag, whatever that is.*
---
¹ The four-character limit means that if you name a branch `cab`, you're OK. If you name it `face`, though, is that a *branch name* or a *raw hash ID*? What if it could be more than one thing? See [the gitrevisions documentation](https://www.kernel.org/pub/software/scm/git/docs/gitrevisions.html) for hints, but the answer is: *it depends on the command*. If you spell out the reference, `refs/heads/face` or even just `heads/face`, it no longer resembles both a branch name *and* an abbreviated hash ID. Unfortunately `git checkout` demands the unadorned name `face` (but always treats it as a branch name, if it can).
---
### Summary
A *tag name* is simply a name in the `refs/tags/` name-space. The `git tag` command can make new tag names. This name must point to some hash ID; the ID can be the ID of any existing object, or you can have `git tag` make a new tag object.
A *tag object* or *annotated tag object* is an entity in the repository database. It has a unique hash ID, just like a commit. It has type `tag` (vs a commit, which has type `commit`). Its metadata consists of the *target object*, the tagger name, the tag name, any message you like, and an optional PGP signature.
The *target object* of a tag object is any existing object in the repository database. That object needs to exist when creating the tag object. This prevents the annotated tag from pointing to itself, or to a tag object you have not yet created, which prevents cycles in the graph.
Running `git tag` to make a new tag creates either just the tag name pointing to some existing object, or the tag name pointing to a new tag object pointing to some existing object. The existing object, whatever it is, continues existing.
Running `git tag -d` deletes *only the tag name*. The tag *object*, if there is one, remains in the repository. Like commit objects, it will eventually be garbage-collected and discarded if and only if there are no other references by which one can reach the tag object. (This happens some time in the future, when `git gc` runs.)
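Git GUI aside, the peel syntax also works on the command line if you ever need to check out the commit behind a tag-pointing-to-a-tag directly (this detaches `HEAD`):

```
git checkout v0.1.1^{commit}
```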
Upvotes: 2
|
2018/03/14
| 1,125 | 3,725 |
<issue_start>username_0: I have multiple `ul`-s which have the attributes `id`, `data-type` and `data-value`. All `id`-s have the same prefix.
```
some <ul>-s here
. . .
many <ul>-s here
. . .
```
I have a *JavaScript* function where I want to loop through the `ul`-s whose ids start with *'req'* and collect the `data-type` and `data-value` attribute values like this:
```
function collect(){
var data = [];
$.each( uls_starting_with_req, function( key, value ) {
data.push({data_type: 'ul_data_type', data_value: 'ul_data_value'});
});
}
```
So how can I achieve this?<issue_comment>username_1: ```
function collect(){
var data = [];
$('ul').each(function(){
var id = $(this).attr('id');
if(id.startsWith('req') ) {
var dataType = $(this).data('type');
var dataValue = $(this).data('value');
data.push({data_type: dataType, data_value: dataValue})
}
})
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Following is a way to do it:
```
var data = [];
$("ul[id^='req-']").each(function() {
data.push({ data_type: $(this).data('type'), data_value: $(this).data('value') });
});
```
The selector selects all the `ul`s which have ID starting with `req-` and then `each` loops on them. In each iteration, the value of the data attributes can be fetched using jQuery's `data` method, which are then pushed to the array `data`.
Working example:
```js
var data = [];
$("ul[id^='req-']").each(function() {
data.push({
data_type: $(this).data('type'),
data_value: $(this).data('value')
});
});
console.log(data);
```
```html
<ul id="req-1" data-type="type1" data-value="value1"></ul>
<ul id="req-2" data-type="type2" data-value="value2"></ul>
<ul id="other" data-type="type3" data-value="value3"></ul>
```
Upvotes: 1 <issue_comment>username_3: using [attribute starts with selector](http://api.jquery.com/attribute-starts-with-selector/):
```
function collect(){
    var data = [];
    $('ul[id^="req-"]').each(function(){
        data.push({data_type: $(this).data("type"), data_value: $(this).data("value")});
    });
}
```
Upvotes: 1 <issue_comment>username_4: Use jquery attribute selector
```js
function collect() {
var data = [];
var getUL = $('ul[id^="req-"]');
$.each(getUL, function(key, value) {
data.push({
data_type: $(value).data('type'),
data_value: $(value).data('value')
});
});
console.log(data)
}
collect()
```
Upvotes: 1 <issue_comment>username_5: Modified username_4's answer to use map off of the jQuery object
```js
function collect() {
var getUL = $('ul[id^="req-"]');
var data = getUL.map(function(key, value) {
return {
data_type: $(value).data('type'),
data_value: $(value).data('value')
};
});
console.log(data.get())
}
collect()
```
Upvotes: 1 <issue_comment>username_6: jQuery is relevant if you have to deal with browsers' different implementations/incompatibilities, or if what you would like to achieve is quite verbose in vanilla JavaScript.
But if you target recent browsers, you should consider vanilla JavaScript instead since the required code in this case looks pretty the same.
```js
function collect() {
    return Array.prototype.slice.call(document.querySelectorAll('ul[id^="req-"]'))
        .map(function(x) {
            return {data_type: x.dataset.type, data_value: x.dataset.value};
        });
}

// ES6 version
function collectES6() {
    return Array.from(document.querySelectorAll('ul[id^="req-"]'), x => {
        return {data_type: x.dataset.type, data_value: x.dataset.value};
    });
}

console.log('Vanilla version (JavaScript 1.6):', collect());
console.log('ES6 version:', collectES6());
```
Upvotes: 1
|
2018/03/14
| 1,254 | 4,193 |
<issue_start>username_0: I've read through I don't know how many examples and to my knowledge I'm following the example correctly but the columns do not update when I click on a header.
-I have included MatSortModule in my app.module.ts
-I have included matSort on the mat-table
-I have included mat-sort-header on header-cells of the columns I want to be able to sort by (all of them)
-I have included @ViewChild(MatSort) sort: MatSort; and the corresponding
```
ngAfterViewInit() {
this.dataSource.sort = this.sort;
}
```
I don't see what else is left. Here is my code snippet:
```
Product Name
{{item.name}}
```
...
```
defaultData: Array = null;
dataSource = new MatTableDataSource(this.defaultData);
@ViewChild(MatSort) sort: MatSort;
constructor(private getDataService: GetDataService) {
this.getDataService.getData()
.subscribe(data => {
this.defaultData = data;
this.dataSource.data = this.defaultData;
},
err => console.log(err));
}
```
...
```
ngAfterViewInit() {
this.dataSource.sort = this.sort;
}
```
|
2018/03/14
| 626 | 1,923 |
<issue_start>username_0: I'm learning a lot about Natural Language Processing with nltk and can do a lot of things, but I'm not able to find a way to read the Texts from the package. I have tried things like this:
```
from nltk.book import *
text6 #Brings the title of the text
open(text6).read()
#or
nltk.book.text6.read()
```
But it doesn't seem to work, because it has no fileid. No one seems to have asked this question before, so I assume the answer should be easy. Do you know what's the way to read those texts or how to convert them into a string?
Thanks in advance<issue_comment>username_1: Looks like they already break it up into tokens for you.
```
from nltk.book import text6
text6.tokens
```
Upvotes: 2 <issue_comment>username_2: Let's dig into the code =)
Firstly, the `nltk.book` code resides on <https://github.com/nltk/nltk/blob/develop/nltk/book.py>
If we look carefully, the texts are loaded as `nltk.Text` objects, e.g. for `text6` from <https://github.com/nltk/nltk/blob/develop/nltk/book.py#L36> :
```
text6 = Text(webtext.words('grail.txt'), name="Monty Python and the Holy Grail")
```
The `Text` object comes from <https://github.com/nltk/nltk/blob/develop/nltk/text.py#L286> , you can read more about how you can use it from <http://www.nltk.org/book/ch02.html>
The `webtext` is a corpus from `nltk.corpus` so to get to the raw text of `nltk.book.text6`, you could load the webtext directly, e.g.
```
>>> from nltk.corpus import webtext
>>> webtext.raw('grail.txt')
```
The `fileids` comes only when you load a `PlaintextCorpusReader` object, not from the `Text` object (processed object):
```
>>> type(webtext)
>>> for filename in webtext.fileids():
... print(filename)
...
firefox.txt
grail.txt
overheard.txt
pirates.txt
singles.txt
wine.txt
```
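And if all you have is the `Text` object, a rough way to get a single string back is to join its tokens; note the spacing around punctuation will only be approximate:

```
>>> from nltk.book import text6
>>> " ".join(text6.tokens)
```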
Upvotes: 4 [selected_answer]<issue_comment>username_3: To generate sorted tokens:
```
print(sorted(set(text6)))
```
Upvotes: 0
|
2018/03/14
| 950 | 2,587 |
<issue_start>username_0: How to check if a string has 3 or more decimal points using regex.
I only want to use a regex pattern to solve this issue.
```
var string1 = "1.23432 12.123.1231"; // true
var string2 = "1.23432 12123.1231"; // false
```
What I thought would work but doesn't:
```
let regx2 = RegExp(/.{3,}/g);
if(regx2.test(string1)){
output = false;
}
```
Any help would be much appreciated.<issue_comment>username_1: An alternative is using the function `match`:
*This approach only counts the amount of dots regardless of the current format.*
This regex `/(\.)/g` captures the desired groups.
```js
console.log(("1.23432 12.123.1231".match(/(\.)/g) || []).length >= 3);
console.log(("1.23432 12123.1231".match(/(\.)/g) || []).length >= 3);
```
Upvotes: 1 <issue_comment>username_2: You can remove all non `.` characters and them check the length
```js
let strs = [
"11.11 1.111.11.1111",
"1.1.1"
]
for (let s of strs) {
console.log(s.replace(/[^\.]/g, '').length >= 3)
}
```
Upvotes: 0 <issue_comment>username_3: You can use `match` to retrieve the matches when matching a string against a regular expression.
```js
function hasMoreThan3OrMore(s) {
return (s.match(/\./g) || []).length >= 3; // guard against null when there are no dots
}
var string1 = "1.23432 12.123.1231"; // true
var string2 = "1.23432 12123.1231"; // false
console.log(hasMoreThan3OrMore(string1));
console.log(hasMoreThan3OrMore(string2));
```
Upvotes: 1 <issue_comment>username_4: An alternative to using Regex would be to use `String.split()` and check the length, like so:
```js
console.log("1.23432 12.123.1231".split(".").length > 3); // true
console.log("1.23432 12123.1231".split(".").length > 3); // false
```
Upvotes: 1 <issue_comment>username_5: `RegExp.test()` returns a boolean, so you can simply return that. Fixing the typos (escaping the dot, dropping the stateful `g` flag, and closing the parentheses) clears up the error:
```
var string1 = "1.23432 12.123.1231"; // true
var string2 = "1.23432 12123.1231"; // false
let regx2 = RegExp(/(.*\.){3,}/);
console.log(regx2.test(string1)); // true
console.log(regx2.test(string2)); // false
```
Upvotes: 0 <issue_comment>username_6: The regex I was looking for was
```
let regx2 = RegExp(/^(.*\.){3,}\d+$/);
if(regx2.test(string1)){
    console.log('testing 3 or more . :', string1);
    output = false;
}
```
I am trying to improve my knowledge of regular expressions, so I was looking for an answer that only uses a regex without calling extra methods. Thanks for all your help; I actually didn't even think about checking whether the string contains a decimal point and then counting how many times it occurs.
Upvotes: 0
|
2018/03/14
| 878 | 2,553 |
<issue_start>username_0: I have a query: is there any way we can configure Exchange details in a private Git setup? In GitHub we do have this option of configuring it, but I am using a private Git server... Is there any way we can send emails to a specific set of people after each commit?
unfortunately I am not using Jenkins.
My Git server and Outlook is on Windows..
Thanks in Advance.
|
2018/03/14
| 590 | 2,090 |
<issue_start>username_0: For example, I have the following code
```
Click
```
Guys, is it possible to set a default value "true" to the attribute "required"?
Something like this:
```
Click
```<issue_comment>username_1: This is not possible with pure HTML. Some templating languages may have this feature (or something similar).
You could consider using javascript.
```
var inputs = document.getElementsByTagName("input");
for(var i = 0; i < inputs.length; i++){
inputs[i].required = true;
}
```
Upvotes: 2 <issue_comment>username_2: Nope. Unless you catch the form submit event in JavaScript and disable it if any field is unfilled, or add the required attribute with JavaScript like <https://stackoverflow.com/users/7183244/keegan-teetaert> suggested, it's easier just to write `required` for every field. I don't see the pain in that.
Upvotes: 0 <issue_comment>username_3: You can't do it with `html` or `css`, you should use `javascript`, like this:
```
document.querySelectorAll("input[type=text]").forEach(function(e) {
e.setAttribute("required", true);
});
```
The selector selects all inputs of text type.
Use this code after the document has been loaded.
Upvotes: 0 <issue_comment>username_4: I don't think it is possible in HTML, but you can do it with the help of jQuery like this:
HTML code:
```
Click
```
Jquery code:
```
$(function(){
$("input").prop('required',true);
});
```
check : [DEMO](https://jsfiddle.net/13nfxsn1/7/)
Upvotes: 1 <issue_comment>username_5: The required attribute on an input is easily removable by the user by inspecting the source code.
I think the best way to do this is to check all inputs with a JavaScript function call before sending the data, like it's explained in the example below.
<https://www.w3schools.com/js/js_validation.asp>
>
> Edit :
>
>
>
```
..
```
And if you want you can use JQuery, just do :
```
$('#yourFormId').submit(function() {
// Something
return true; // True submit your form.
//return false; // False cancel your form.
});
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 657 | 2,578 |
<issue_start>username_0: I have a module that is installed in `node_modules`, and want to import a file from it.
I also have a locally installed version of this module in a parent directory to the one where my project is.
For some reason, when I do `import { something } from 'my-module/myFile'`, it actually resolves it not by going in `node_modules`, but by taking my local version of that module that is stored somewhere else in my machine, in a parent directory.
Why does TypeScript resolve modules outside of the directory it's being built in? How could it even find my module outside of its own folder and not just find it in node_modules?<issue_comment>username_1: When you import from npm, you can't import by a file name (unless you write the full path starting with ./node\_modules... don't), only by package. It doesn't find one, so it falls back to looking for a local file instead.
So assuming you have an actual package `my-module` in node\_modules which has an entry point (main in package.json), which says `export something`, then `import {something} from 'my-module'` should do the trick.
Upvotes: 1 <issue_comment>username_2: Starting with ECMAScript 2015, JavaScript has a concept of modules. TypeScript shares this concept.
Modules import one another using a module loader. At runtime the module loader is responsible for locating and executing all dependencies of a module before executing it. Well-known modules loaders used in JavaScript are the CommonJS module loader for Node.js and require.js for Web applications.
You mention `node_modules` so I'm guessing you are using Node.js. Make sure `"moduleResolution": "node"` is set in your tsconfig.json file.
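A minimal sketch of the relevant tsconfig.json fragment (illustrative, adjust to your project):

```
{
  "compilerOptions": {
    "module": "commonjs",
    "moduleResolution": "node"
  }
}
```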
From Node.js [docs](https://nodejs.org/api/modules.html#modules_loading_from_node_modules_folders):
>
> If the module identifier passed to require() is not a core module, and
> does not begin with '/', '../', or './', then Node.js starts at the
> parent directory of the current module, and adds /node\_modules, and
> attempts to load the module from that location. Node will not append
> node\_modules to a path already ending in node\_modules.
>
>
>
And for [file modules](https://nodejs.org/api/modules.html#modules_file_modules):
>
> If the exact filename is not found, then Node.js will attempt to load
> the required filename with the added extensions: .js, .json, and
> finally .node.
>
>
> TypeScript overlays the TypeScript source file extensions (.ts, .tsx,
> and .d.ts) over the Node’s resolution logic.
>
>
>
Make sure that you are using it correctly.
Upvotes: 4 [selected_answer]
|
2018/03/14
| 621 | 2,248 |
<issue_start>username_0: What is the difference between `test1`:
```
(define test1
(lambda (x) (* x x)))
```
and `test2`
```
(define (test2 x)
(lambda (x) (* x x)))
```
Aren't both supposed to be the same? When I test `test1` I get a correct answer, but `test2` returns `#<procedure>` or `(lambda (a1) ...)`.
Why is that?
|
2018/03/14
| 658 | 1,728 |
<issue_start>username_0: I am trying to write this program without using Counter. The exercise: write a Python program to combine values in a Python list of dictionaries.
Sample data:
```
[{'item': 'item1', 'amount': 400}, {'item': 'item2', 'amount': 300}, {'item': 'item1', 'amount': 750}]
```
**Expected Output:** `{'item1': 1150, 'item2': 300}`
So far here's my code.
```
a=[{'item': 'item1', 'amount': 400}, {'item': 'item2', 'amount': 300}, {'item': 'item1', 'amount': 750}]
cp={}
val=0
for d in a:
    if d['item'] not in cp:
        cp[d['item']] = d['amount']
print(cp)
```
My output: `{'item1': 400, 'item2': 300}`
How can I combine the totals for 'item1'? Any help is appreciated.<issue_comment>username_1: ```
a=[{'item': 'item1', 'amount': 400}, {'item': 'item2', 'amount': 300}, {'item': 'item1', 'amount': 750}]
cp={}
val=0
for d in a:
    if d['item'] not in cp:
        cp[d['item']] = d['amount']
    else:
        cp[d['item']] += d['amount']
print(cp)
```
Upvotes: 2 <issue_comment>username_2: Here is one way:
```
from collections import defaultdict
lst = [{'item': 'item1', 'amount': 400},
       {'item': 'item2', 'amount': 300},
       {'item': 'item1', 'amount': 750}]

d = defaultdict(int)
for i in lst:
    d[i['item']] += i['amount']

# defaultdict(<class 'int'>, {'item1': 1150, 'item2': 300})
```
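If a plain dict is wanted at the end, it converts directly:

```
dict(d)
# {'item1': 1150, 'item2': 300}
```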
Upvotes: 2 <issue_comment>username_3: ```
d = {}
for a_dict in all_my_dicts:
    d[a_dict['item']] = d.get(a_dict['item'], 0) + a_dict['amount']
```
I guess maybe
Upvotes: 0 <issue_comment>username_4: You can use `defaultdict` here.
```
from collections import defaultdict

# l is the list of dicts from the question
data = defaultdict(int)
for d in l:
    data[d['item']] += d['amount']

data
Out[72]: defaultdict(<class 'int'>, {'item1': 1150, 'item2': 300})
```
Upvotes: 0
|
2018/03/14
| 174 | 515 |
<issue_start>username_0: I'd like to add a new element to an array in a non-mutating way. In JS, I can do this:
```
var new_arr = arr.concat(3)
```
instead of this:
```
arr.push(3)
```
How can I do the same thing in Ruby? The `concat` method in Ruby is mutating.<issue_comment>username_1: As simple as this:
```
new_arr = arr + [3]
```
Upvotes: 5 [selected_answer]<issue_comment>username_2: I'll add another solution using array splats that might seem less awkward:
```
new_arr = [*arr, 3]
```
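Either way the original array is left untouched; a quick check:

```
arr = [1, 2]
new_arr = [*arr, 3]
p new_arr  # => [1, 2, 3]
p arr      # => [1, 2] (unchanged)
```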
Upvotes: 3
|
2018/03/14
| 186 | 616 |
<issue_start>username_0: I want to hide the span tag containing "Prasad" using only JavaScript.
You can see there is a div tag also having the same id, which I don't want to hide at all.
```
div
- Prasad
- Mphasis
function onLoad()
{
var x= document.getElementById("123").getElementsByClassName("alert");
x.style.display="none"; --->> getting error here as undefined
}
```
|
2018/03/14
| 893 | 3,396 |
<issue_start>username_0: I am trying to understand why setting a value does not automatically refresh the UI. If I call binding.setItem, the UI refreshes.
I know the binding object contains the updated values, but the UI is not being refreshed after setting item.name and item.checked.
What am I doing wrong? Do I need to call setItem every time to refresh the UI? I would think that this would be unnecessary, since the UI would automatically update after setting a value.
Item.java:
```
public class Item {
public String name;
public Boolean checked;
}
```
MainActivity.java:
```
public class MainActivity extends AppCompatActivity {
public Item item;
ActivityMainBinding binding;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
item = new Item();
item.checked = true;
item.name = "a";
binding = DataBindingUtil.setContentView(this, R.layout.activity_main);
binding.setItem(item);
}
public void button_onClick(View v) {
item.checked = !item.checked;
item.name = item.name + "a";
///binding.setItem(item); --> ??? this updates UI, do I need to call this to refresh UI everytime???
}
}
```
activity\_main.xml:
```
<?xml version="1.0" encoding="utf-8"?>
...
```
build.gradle:
```
android {
...
dataBinding{
enabled=true
}
}
```<issue_comment>username_1: According to the developer guide:
>
> Any plain old Java object (POJO) may be used for data binding, but modifying a POJO will not cause the UI to update.
>
>
>
You can follow the developer guide's "Observable Objects" section to allow modification of your object to update your UI.
<https://developer.android.com/topic/libraries/data-binding/index.html#data_objects>
Upvotes: 3 [selected_answer]<issue_comment>username_2: As already mentioned use ObservableFields
```
public ObservableField<Boolean> checked = new ObservableField<>();
public ObservableField<String> name = new ObservableField<>();
```
And make changes in MainActivity as
```
item = new Item();
item.checked.set(true);
item.name.set("a");
binding = DataBindingUtil.setContentView(this, R.layout.activity_main);
binding.setItem(item);
}
public void button_onClick(View v) {
item.checked.set(!(item.checked.get()));
item.name.set(item.name.get() + "a");
}
```
By the way, you can look into this series of blog posts on data binding: <https://medium.com/google-developers/android-data-binding-2-way-your-way-ccac20f6313> and <https://medium.com/@fabioCollini/android-data-binding-f9f9d3afc761>
Upvotes: 2 <issue_comment>username_3: In order to update the UI, fields need to be Observable. Just an update with my changes:
```
import android.databinding.BaseObservable;
import android.databinding.Bindable;
public class Item extends BaseObservable {
private String name;
private Boolean checked;
@Bindable
public String getName() {
return this.name;
}
@Bindable
public Boolean getChecked() {
return this.checked;
}
public void setName(String name) {
this.name = name;
notifyPropertyChanged(BR.name);
}
public void setChecked(Boolean checked) {
this.checked = checked;
notifyPropertyChanged(BR.checked);
}
}
```
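With the setters calling `notifyPropertyChanged`, the click handler from the question can simply mutate the fields and the bound views refresh on their own:

```
public void button_onClick(View v) {
    item.setChecked(!item.getChecked());
    item.setName(item.getName() + "a");
    // no need to call binding.setItem(item) again
}
```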
Upvotes: 1
|
2018/03/14
| 423 | 1,733 |
<issue_start>username_0: I have a script which begins with:
```
#Requires -Modules ActiveDirectory, Microsoft.Online.SharePoint.PowerShell
```
which is all fine and dandy except that Sharepoint PS module throws a verb warning:
>
> WARNING: The names of some imported commands from the module 'Microsoft.Online.SharePoint.PowerShell' include
> unapproved verbs that might make them less discoverable. To find the commands with unapproved verbs, run the
> Import-Module command again with the Verbose parameter. For a list of approved verbs, type Get-Verb.
>
>
>
I'd like to use the #Requires -Modules header in the script but suppress the warning.
I know there are ways to suppress all warnings in the shell before running the script, but I wondered if there was a better way to do it within the script.<issue_comment>username_1: A rather sneaky workaround would be to use `#Requires` only for the modules that don't produce warnings, and use
```
if (-not (get-module Microsoft.Online.SharePoint.PowerShell)) {
Import-Module Microsoft.Online.SharePoint.PowerShell -warningaction silentlycontinue
}
```
for the ones not strictly following naming conventions
Upvotes: 1 <issue_comment>username_2: I'm not sure if something like this might work:
```
$OriginalWarningPreference = $WarningPreference
$WarningPreference = 'SilentlyContinue'
#Requires -Modules ActiveDirectory, Microsoft.Online.SharePoint.PowerShell
$WarningPreference = $OriginalWarningPreference
```
Alternately, you can sacrifice some of the functionality of `#Requires` and do this:
```
#Requires -Modules ActiveDirectory
Import-Module -Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking -ErrorAction Stop
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 964 | 3,450 |
<issue_start>username_0: I am using `Flink v1.4.0` and the `DataSet API` (though I don't think this matters).
I am running some heavy-duty transformations on a 12-core VM. I am utilising 2 cores for one `Flink job` in which I am storing some data into `Flink Queryable State`, and am running another `Flink` job with the remaining 10 cores.
When I run the second job with 10 cores, I get the following error:
```
java.io.IOException: Insufficient number of network buffers: required 10, but only 9 available. The total number of network buffers is currently set to 4096 of 32768 bytes each. You can increase this number by setting the configuration keys 'taskmanager.network.memory.fraction', 'taskmanager.network.memory.min', and 'taskmanager.network.memory.max'.
at org.apache.flink.runtime.io.network.buffer.NetworkBufferPool.createBufferPool(NetworkBufferPool.java:257)
at org.apache.flink.runtime.io.network.NetworkEnvironment.registerTask(NetworkEnvironment.java:199)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:618)
at java.lang.Thread.run(Thread.java:745)
```
If I do run it with 8 cores it makes it through just fine. What's causing this and why can't I use the other 2 --> 8+2 = 10 cores?<issue_comment>username_1: Quoting the Apache Flink FAQ:
>
> If you run Flink with a very high parallelism, you may need to increase the number of network buffers.
>
>
> By default, Flink takes 10% of the JVM heap size for network buffers, with a minimum of 64MB and a maximum of 1GB. You can adjust all these values via taskmanager.network.memory.fraction, taskmanager.network.memory.min, and taskmanager.network.memory.max.
>
>
> Please refer to the Configuration Reference for details.
>
>
>
There is a [dedicated section in the docs for how to configure the network buffers](https://nightlies.apache.org/flink/flink-docs-release-1.8/ops/config.html#configuring-the-network-buffers).
In summary, you can configure the number of network buffers in the `./conf/flink-conf.yaml` file by setting the `taskmanager.network.numberOfBuffers` parameter.
The parameter should be set to `#slots-per-TM^2 * #TMs * 4`, where `#slots per TM` are the number of slots per TaskManager and `#TMs` are the total number of task managers.
To support, for example, a cluster of 20 8-slot machines, you should use roughly 5000 network buffers for optimal throughput. Each network buffer has by default a size of 32 KiBytes. In the example above, the system would thus allocate roughly 300 MiBytes for network buffers.
Please refer to the docs for details.
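For example, the keys named in the error message can be raised in `conf/flink-conf.yaml`; the values below are purely illustrative:

```
# conf/flink-conf.yaml
taskmanager.network.memory.fraction: 0.15   # fraction of JVM memory used for network buffers
taskmanager.network.memory.min: 67108864    # 64 MB, in bytes
taskmanager.network.memory.max: 1073741824  # 1 GB, in bytes
```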
Upvotes: 4 [selected_answer]<issue_comment>username_2: I too faced the same error
>
> Caused by: java.io.IOException: Insufficient number of network
> buffers: required 13, but only 7 available. The total number of
> network buffers is currently set to 2048 of 32768 bytes each. You can
> increase this number by setting the configuration keys
> 'taskmanager.memory.network.fraction',
> 'taskmanager.memory.network.min', and
> 'taskmanager.memory.network.max'.
>
>
>
The below code snippet resolved my issue.
```
Configuration cfg = new Configuration();
int defaultLocalParallelism = Runtime.getRuntime().availableProcessors();
cfg.setString("taskmanager.memory.network.max", "1gb");
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(defaultLocalParallelism, cfg);
```
Upvotes: 2
|
2018/03/14
| 378 | 886 |
<issue_start>username_0: I use curl parameters:
```
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_BINARYTRANSFER,true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
```
�PNG IHDRx�zB�d,IIDATx���Ϗ��}؋f����� ���>Y�\f�AӀ/��?�+(���w�����&�ml\_4\_,`<���o��45�� μc�����}����,;�a��H��*��*��$�U��k������w}�ú��;�μ3��;�μ3��;�μ:W{ݭʍfޯ,�r��y�7�̫ʛ�����K͋�^��e���A��R����y߈x�Fys����=����xy���`2�[�0��r�y ލI}����V����c���bE
That is the result I get. Why can't I see the image?
I need to save the image to a file.<issue_comment>username_1: That is an image, but you're in a text-only command line. Save the contents to a file, and you'll have an image file.
Upvotes: 1 <issue_comment>username_2: You can write the body directly to a file:
```
$fp = fopen('image.png', 'w');
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_exec($ch);  // the response body is streamed into image.png
fclose($fp);
```
Upvotes: 1 [selected_answer]
|
2018/03/14
| 788 | 2,674 |
<issue_start>username_0: I use the following Code:
```
using (var package = new ExcelPackage()) {
var worksheet = package.Workbook.Worksheets.Add("Test");
var cell = worksheet.Cells[1, 1];
var r1 = cell.RichText.Add("TextLine1" + "\r\n");
r1.Bold = true;
var r2 = cell.RichText.Add("TextLine2" + "\r\n");
r2.Bold = false;
package.SaveAs(...);
}
```
But in the Excel file the newLines are gone...
I also tried "\n" and "\r", but nothing worked...<issue_comment>username_1: Finally I found the solution. Here is a working sample:
```
using (var package = new ExcelPackage(fileInfo)) {
var worksheet = package.Workbook.Worksheets.Add("Test");
var cell = worksheet.Cells[1, 1];
cell.Style.WrapText = true;
cell.Style.VerticalAlignment = ExcelVerticalAlignment.Top;
var r1 = cell.RichText.Add("TextLine1" + "\r\n");
r1.Bold = true;
var r2 = cell.RichText.Add("TextLine2" + "\r\n");
r2.Bold = false;
package.Save();
}
```
But I think I found a bug in the Lib: This Code is **NOT** working:
```
using (var package = new ExcelPackage(fileInfo)) {
var worksheet = package.Workbook.Worksheets.Add("Test");
var cell = worksheet.Cells[1, 1];
cell.Style.WrapText = true;
cell.Style.VerticalAlignment = ExcelVerticalAlignment.Top;
var r1 = cell.RichText.Add("TextLine1" + "\r\n");
r1.Bold = true;
var r2 = cell.RichText.Add("TextLine2" + "\r\n");
r2.Bold = false;
cell = worksheet.Cells[1, 1];
var r4 = cell.RichText.Add("TextLine3" + "\r\n");
r4.Bold = true;
package.Save();
}
```
When I get the same range again and add new RichText Tokens, the old LineBreaks are deleted... (They are actually converted to "\n" and this is not working in Excel.)
Upvotes: 4 [selected_answer]<issue_comment>username_2: The encoding of a new line in an Excel cell is character 10.
I think you have to do something like this:
```
var r1 = cell.RichText.Add("TextLine1" + ((char)10).ToString());
```
Upvotes: 2 <issue_comment>username_3: ```
using (var package = new ExcelPackage(fileInfo)) {
var worksheet = package.Workbook.Worksheets.Add("Test");
var cell = worksheet.Cells[1, 1];
cell.Style.WrapText = true;
cell.Style.VerticalAlignment = ExcelVerticalAlignment.Top;
var r1 = cell.RichText.Add("TextLine1" + "\n\n");
r1.Bold = true;
var r2 = cell.RichText.Add("TextLine2" + "\n\n");
r2.Bold = false;
package.Save();
}
```
Upvotes: 0
|
2018/03/14
| 1,305 | 4,464 |
<issue_start>username_0: I'm still quite new to VBA and I'm basically self-taught. I've developed a spreadsheet for work and I need a macro that lets customers add information and then copies that information to sheet 2 in descending order. This is the code I am currently using, but when I click the “Save” macro button, the data stops copying over after two entries. Additionally, is there some code I can add to clear the cells so each new customer cannot see what the previous customer entered?
```
Private Sub CommandButton1_Click()
Dim Name As String, Org As String, POCPhone As String, Email As String, TypeofVeh As String, TotPax As String, TotCar As String, Pickup As String, DateReq As String, DateRet As String, Destination As String, YN As String, Remarks As String
Worksheets("TransReq").Select
Name = Range("B4")
Org = Range("C4")
POCPhone = Range("D4")
Email = Range("E4")
TypeofVeh = Range("F4")
TotPax = Range("G4")
TotCar = Range("H4")
Pickup = Range("I4")
DateReq = Range("J4")
DateRet = Range("K4")
Destination = Range("L4")
YN = Range("M4")
Remarks = Range("N4")
Worksheets("TransReqLog").Select
Worksheets("TransReqLog").Range("B3").Select
If Worksheets("TransReqLog").Range("B3").Offset(1, 1) <> "" Then
Worksheets("TransReqLog").Range("B3").End(xlDown).Select
End If
ActiveCell.Offset(1, 0).Select
ActiveCell.Value = Name
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = Org
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = POCPhone
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = Email
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = TypeofVeh
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = TotPax
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = TotCar
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = Pickup
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = DateReq
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = DateRet
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = Destination
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = YN
ActiveCell.Offset(0, 1).Select
ActiveCell.Value = Remarks
Worksheets("TransReq").Select
Worksheets("TransReq").Range("B4").Select
End Sub
```
|
2018/03/14
| 1,096 | 2,777 |
<issue_start>username_0: I have a large data set containing data that resembles
```
df<- Energy Power StartTime Timehour
7.50 10 2017-10-18 09:47:10.073 0.75
```
1) I want to transfer the time hour data into a time
```
df<- Energy Power StartTime Timehour
7.50 10 2017-10-18 09:47:10.073 00:45:00
```
2) Then I want to add the Time Hour value to the starttime
```
df<- Energy Power StartTime Timehour NewTime
7.50 10 2017-10-18 09:45:10.000 00:45:00 2017-10-18 10:30:10.000
```
For 1) I tried to change the time notation with [Convert time from numeric to time format in R](https://stackoverflow.com/questions/28043140/convert-time-from-numeric-to-time-format-in-r)
```
df$Timehour <- as.Date(df$Timehour)
df$Timehour <- format(as.POSIXct(df$Timehour)*86400, origin = "1970-01-01",tz = "UTC", "%H:%M:%OS")
```
But I receive the following message
```
Error in as.POSIXct.numeric(df$Timehour) : 'origin' must be supplied
```
Can someone help me with my 1st and 2nd questions?<issue_comment>username_1: If you convert your time hour into minutes with lubridate, you can just add it to the StartTime column:
```
library(lubridate)
library(tidyverse)
library(readr)
df <- read_table("Energy Power StartTime Timehour
7.50 10 2017-10-18 09:47:10.073 0.75")
df %>% mutate(
minutes = minutes(round(Timehour * 60)),
newTime = StartTime + minutes)
# A tibble: 1 x 6
Energy Power StartTime Timehour minutes newTime
1 7.50 10 2017-10-18 09:47:10 0.750 45M 0S 2017-10-18 10:32:10
```
If needed, first convert your StartTime column to datetime class with
```
df <- df %>% mutate(StartTime = ymd_hms(StartTime))
```
Upvotes: 1 <issue_comment>username_2: A base solution: you need to put your Timehour in seconds to add it directly to a POSIXct.
```
df$Timehour <- df$Timehour*3600
df$StartTime <- as.POSIXct(as.character(df$StartTime),format = "%Y-%m-%d %H:%M:%OS")
df$newtime <- df$StartTime + df$Timehour
Energy Power StartTime Timehour newtime
1 7.5 10 2017-10-18 09:47:10 2700 2017-10-18 10:32:10
```
You can convert your timehour to the desired format this way:
```
df$Timehour <- strftime(as.POSIXct(df$Timehour,origin='1900-01-01 00:00'),format="%H:%M:%S",tz = "GMT")
Energy Power StartTime Timehour newtime
1 7.5 10 2017-10-18 09:47:10 00:45:00 2017-10-18 10:32:10
```
data:
```
df<- read.table(text = "Energy, Power, StartTime, Timehour
7.50, 10 , 2017-10-18 09:47:10, 0.75",header = TRUE ,sep = ",")
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 1,275 | 3,513 |
<issue_start>username_0: I am developing a custom function that will calculate the logarithmic average of a range. The answer for the post [logarithmic averaging question](https://stackoverflow.com/questions/23956551/logarithmic-averaging-preset-function-in-excel-using-ranges-as-input-values) is wrong but it is a starting point. The problem lies in calculating the anti-logs (10^(0.1x)) of range values. This is my first post so please forgive any missteps.
Here is my code:
```
Function logavg(rngValues As Range) As Double
Dim lSumofValues As Double
Dim lCountofValues As Double
Dim lAntilog As Double
Dim rngLoop As Range
lSumofValues = 0
lAntilog = 0
lCountofValues = rngValues.Count 'Get count of values in items in range
'For loop - add the antilogs of the values in the range - does not work
For Each rngLoop In rngValues
lAntilog = WorksheetFunction.Power(10, 0.1 * rngLoop.Value)
lSumofValues = lSumofValues + lAntilog
Next
'Perform calculation - logarithmic average
logavg = 10 * Log(lSumofValues / lCountofValues)
End Function
```
I tried this 'for' loop but it doesn't work:
```
For Each rngLoop In rngValues
lSumofValues = lSumofValues + (10 ^ (0.1 * rngLoop.Value))
Next
```
This code for a simple (arithmetic) average works so I know that the range values are being transferred and used properly:
```
For Each rngLoop In rngValues
lSumofValues = lSumofValues + rngLoop.Value
Next
logavg = lSumofValues / lCountofValues
```
Test data is: 92.8, 79.1, 81.6, 78.3, 89.4, 86.5, 86.9
The arithmetic average is 84.9 and the logarithmic average is 87.6.
Two Excel formulae that calculate the logarithmic average of B2:B8 are:
a) array formula =10\*LOG(SUM(10^(0.1\*B2:B8))/COUNT(B2:B8)), and
b) standard formula =10\*LOG(SUMPRODUCT(10^(0.1\*B2:B8))/COUNT(B2:B8))
Thanks.
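One thing to check in the final line: VBA's `Log` is the natural logarithm, so the result needs a base-10 conversion - a sketch of the corrected line:
```
logavg = 10 * Log(lSumofValues / lCountofValues) / Log(10)
' or equivalently:
logavg = 10 * WorksheetFunction.Log10(lSumofValues / lCountofValues)
```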
|
2018/03/14
| 1,338 | 5,068 |
<issue_start>username_0: I created a simple HTML form, but the email and password entered in the form are not being saved in the MySQL database. What changes do I need to make so that the data is saved in the MySQL database?
**This is my HTML code :**
```
```
**This is php code :**
```
<?php
if( $_POST )
{
$con = mysql_connect("mysqlhostname","username","password","databasename");
if (!$con)
{
die('Could not connect: ' . mysql_error());
}
mysql_select_db("databasename", $con);
$users_email = $_POST['email'];
$users_password = $_POST['password'];
$users_email = mysql_real_escape_string($users_email);
$users_password = mysql_real_escape_string($users_password);
$query = "INSERT INTO pass (`email`, `password`) VALUES ('$users_email', <PASSWORD>');";
mysql_query($query);
mysql_close($con);
}
?>
```<issue_comment>username_1: **In the HTML file: give the submit button a name property**
```
```
**This is php code :**
```
<?php
if( isset( $_POST['Save'] ) )
{
$con = mysql_connect("mysqlhostname","username","password","databasename");
if (!$con)
{
die('Could not connect: ' . mysql_error());
}
mysql_select_db( "databasename", $con );
$users_email = $_POST['email'];
$users_password = $_POST['password'];
$users_email = mysql_real_escape_string($users_email);
$users_password = mysql_real_escape_string($users_password);
$query = "INSERT INTO pass ( email, password ) VALUES ( '$users_email', '$users_password' )";
mysql_query( $query, $con );
mysql_close($con);
}
?>
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: In an offline discussion, we discovered that OP's code is not going into the outer `if` statement. Here's some sample code on how *I would structure it.*
Things to notice:
* give the submit button a name and then check for that in `$_POST`
* Use PDO instead of mysql API
* wrap everything in a try/catch block to see any errors
Gotta run. Good luck!
```
/*****************************************/
/**************** pass.php ***************/
/*****************************************/
<?php
// Please use PDO instead of mysql because mysql is deprecated and officially removed as of PHP 7.
// Read about it here: https://phpdelusions.net/pdo
if(isset($_POST['submit'])) {
// Wrap everything in a try/catch block so you can actually see the error.
try {
/*****************************************/
/************* DB CONNECTION *************/
/*****************************************/
// Change this to match your DB credentials
$host = '127.0.0.1';
$db = 'test';
$user = 'root';
$pass = '';
$charset = 'utf8mb4';
$dsn = "mysql:host=$host;dbname=$db;charset=$charset";
$opt = [
PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
PDO::ATTR_EMULATE_PREPARES => false,
];
// This is your DB connection. Use it below.
$pdo = new PDO($dsn, $user, $pass, $opt);
/*****************************************/
/************ FORM PROCESSING ************/
/*****************************************/
// Get values from form.
// Do any validation you want here.
$email = $_POST['email'];
$password = $_POST['password'];
/*****************************************/
/*************** DB INSERT ***************/
/*****************************************/
$stmt = $pdo->prepare('INSERT INTO pass (email, password) VALUES (?, ?)');
$stmt->execute(array(
$email,
$password
));
/*****************************************/
/*********** OUTPUT ANY ERRORS ***********/
/*****************************************/
} catch (Exception $e) {
var_dump($e->getMessage());
}
}
```
Upvotes: 1 <issue_comment>username_3: In the PHP file, `if($_POST)` should be `if( isset( $_POST['Save'] ) )`.
Try the PHP code below:
```
<?php
if( isset( $_POST['Save'] ) )
{
$servername = "mysqlhostname"; //localhost or your server name
$username = "username"; //root or your username
$password = "<PASSWORD>"; //password or server hasn't password it should be `$password = "";`
$dbname = "databasename"; //your database name
// Create connection
$con = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($con->connect_error) {
die("Could not connect: " . $con->connect_error);
}
$users_email = $_POST['email'];
$users_password = $_POST['password'];
$sql = "INSERT INTO pass (email, password) VALUES ('$users_email', '$<PASSWORD>')";
if ($con->query($sql) === TRUE) {
echo "New record created successfully";
} else {
echo "Error: " . $sql . "
" . $conn->error;
}
$con->close();
}
?>
```
I got result
[](https://i.stack.imgur.com/To4sj.jpg)
Upvotes: 1
|
2018/03/14
| 1,626 | 5,398 |
<issue_start>username_0: I am trying to check if a folder is empty and do the following:
```
import os
downloadsFolder = '../../Downloads/'
if not os.listdir(downloadsFolder):
print "empty"
else:
print "not empty"
```
Unfortunately, I always get "not empty" no matter if I have any files in that folder or not. Is it because there might be some hidden system files? Is there a way to modify the above code to check just for non-hidden files?<issue_comment>username_1: In the Python os module there is a function called `listdir`, which returns a list of the contents of the given directory. So if the list is empty, there are no files in the directory.
Here is a code snippet;
```
import os
path = '../../Downloads'
if not os.listdir(path):
print "Empty"
else:
print "Not empty"
```
Hope this is what you're looking for.
Upvotes: -1 <issue_comment>username_2: Hmm, I just tried your code changing the path to an empty directory and it did print "empty" for me, so there must be hidden files in your downloads. You could try looping through all files and checking if they start with '.' or not.
EDIT:
Try this. It worked for me.
```
if [f for f in os.listdir(downloadsFolder) if not f.startswith('.')] == []:
print "empty"
else:
print "not empty"
```
Upvotes: 4 <issue_comment>username_3: I think you can try this code:
```
directory = r'path to the directory'
from os import walk
f = [] #to list the files
d = [] # to list the directories
for (dirpath, dirnames, filenames) in walk(directory):
f.extend(filenames)
d.extend(dirnames)
break
if len(f) != 0 or len(d) != 0:
print('Not empty')
else:
print('empty')
```
Upvotes: 1 <issue_comment>username_4: Give a try to:
```
import os
dirContents = os.listdir(downloadsFolder)
if not dirContents:
print('Folder is Empty')
else:
print('Folder is Not Empty')
```
Upvotes: 3 <issue_comment>username_5: You can use these two methods from the [os module](https://docs.python.org/3/library/os.html).
First option:
```
import os
if len(os.listdir('/your/path')) == 0:
print("Directory is empty")
else:
print("Directory is not empty")
```
Second option (as an empty list evaluates to `False` in Python):
```
import os
if not os.listdir('/your/path'):
print("Directory is empty")
else:
print("Directory is not empty")
```
However, the `os.listdir()` can throw an exception, for example when the given path does not exist. Therefore, you need to cover this.
```
import os
dir_name = '/your/path'
if os.path.isdir(dir_name):
if not os.listdir(dir_name):
print("Directory is empty")
else:
print("Directory is not empty")
else:
print("Given directory doesn't exist")
```
I hope it will be helpful for you.
Note that `os.listdir(dir_name)` does not include the special entries '.' and '..' even if they are present in the directory.
Source: <http://web.archive.org/web/20180531030548/http://thispointer.com/python-how-to-check-if-a-directory-is-empty>
Upvotes: 6 <issue_comment>username_6: Yet another version:
```
def is_folder_empty(dir_name):
import os
if os.path.exists(dir_name) and os.path.isdir(dir_name):
return not os.listdir(dir_name)
else:
raise Exception(f"Given output directory {dir_name} doesn't exist")
```
Upvotes: 0 <issue_comment>username_7: Since Python 3.5+,
```
with os.scandir(path) as it:
if any(it):
print('not empty')
```
which generally performs faster than `listdir()` since `scandir()` is an iterator and does not use certain functions to get stats on a given file.
Upvotes: 5 <issue_comment>username_8: expanding @username_7's answer with error handling:
you can use `os.scandir`:
```
from os import scandir
def is_non_empty_dir(dir_name: str) -> bool:
"""
Returns True if the directory exists and contains item(s) else False
"""
try:
if any(scandir(dir_name)):
return True
except (NotADirectoryError, FileNotFoundError):
pass
return False
```
Upvotes: 2 <issue_comment>username_9: As mentioned by [muskaya](https://stackoverflow.com/users/12081301/muskaya) [here](https://stackoverflow.com/revisions/70285710/1). You can also use [pathlib](https://docs.python.org/3/library/pathlib.html) since Python 3.4 (standard lib). However, if you only want to check, if the folder is empty or not, you should check out this [answser](https://stackoverflow.com/a/54216885/9620269):
```py
from pathlib import Path
def directory_is_empty(directory: str) -> bool:
return not any(Path(directory).iterdir())
downloadsFolder = '../../Downloads/'
if directory_is_empty(downloadsFolder):
    print("empty")
else:
    print("not empty")
```
Upvotes: 3 <issue_comment>username_10: You can use the following to validate that no files exist inside the input folder:
```
import glob
if len(glob.glob("input\*")) == 0:
print("No files found to Process...")
exit()
```
Upvotes: 1 <issue_comment>username_11: TLDR ( for python 3.10.4 ): `if not os.listdir(dirname): ...`
Here is a quick demo:
```
$ rm -rf BBB
$ mkdir BBB
$ echo -e "import os\nif os.listdir(\"BBB\"):\n print(\"non-empty\")\nelse:\n print(\"empty\")" > empty.py
$ python empty.py
empty
$ echo "lorem ipsum" > BBB/some_file.txt
$ python empty.py
non-empty
$ rm BBB/some_file.txt
$ python empty.py
empty
```
Upvotes: 0
|
2018/03/14
| 311 | 1,185 |
<issue_start>username_0: When declaring a class in C# without specifying if the class is a static or non-static class which will it default to?
EDIT: Here's an article I wrote based on this discussion.
<https://hackernoon.com/c-static-vs-instance-classes-and-methods-50fe8987b231><issue_comment>username_1: A class is always an instance class unless you specify otherwise
Upvotes: 2 <issue_comment>username_2: If the "static" is not specified, it will require an instance of the class to be used (Unless the member itself is specified as static).
>
> If the class is not declared as static, client code can use it by creating objects or instances which are assigned to a variable.
>
>
>
From: <https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/classes>
Upvotes: 2 <issue_comment>username_3: If you don't declare the class as static then its members can be either static or non-static.
A static class can only have static members.
You can invoke non-static members only on instances of the class.
You can invoke static members only on the class itself.
Also, in your class declaration, there are no parentheses.
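A minimal sketch of those rules (hypothetical class):
```
public class Counter               // a non-static (instance) class by default
{
    public int Count;              // instance member: needs an object
    public static int Total;       // static member: belongs to the type itself

    public void Increment() { Count++; Total++; }
}

// Instance members are invoked on an object, static members on the class:
var c = new Counter();
c.Increment();
int total = Counter.Total;
```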
Upvotes: 3 [selected_answer]
|
2018/03/14
| 420 | 1,672 |
<issue_start>username_0: I wrote my own function for rotating a group of objects. I want to make a new one with smooth rotation, so I need a timer. I have tried to call the timer a few times, but it doesn't work. Here's my code:
```cs
public class rotate_around : MonoBehaviour
{
public Transform sphere;
// Update is called once per frame
void Update ()
{
if (Input.GetKeyDown(KeyCode.RightArrow))
{
sphere.RotateAround(new Vector3(3, 3, 3), new Vector3(0, 1, 0), 45);
timer();
}
}
public IEnumerator timer()
{
yield return new WaitForSeconds(5);
// actually I tried to add Debug.Log("blahblahblah") here, but it still didnt output anything
}
}
```<issue_comment>username_1: Try using `StartCoroutine(timer());` instead of just `timer();`
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
StartCoroutine("timer", true);
```
would be the best way to do it!
`timer()` would just call a normal method. `IEnumerator` works differently.
Upvotes: 1 <issue_comment>username_3: As mentioned before, the correct way to start a coroutine is StartCoroutine(timer());
From your code it isn't clear what you are trying to achieve; currently you do nothing after waiting those five seconds.
Also, you should probably implement some mechanism that prevents the user from spamming coroutines. You can set a bool flag, or hold a reference to the coroutine you started - I often do:
```
Coroutine myRoutine; // in global scope
if (myRoutine==null) myRoutine=StartCoroutine(timer()); // in event handler
```
Remember to set myRoutine back to null when the coroutine finishes, so it can be started again.
Upvotes: 0
|
2018/03/14
| 416 | 1,435 |
<issue_start>username_0: I'm wondering why setting the `font-size` CSS attribute on an HTML `input` automatically increases its width?
I understand why it increases the height, but I'd like an `input` to remain the same base width when changing the `font-size`.
The reason I ask, is because it is breaking a flex layout I'm building, in which there is an `input`. When I increase the font-size, the `input` completely breaks out of the layout.
Here's a (react) reproduction:
```
Some text
{' '}
Some text
```
Is there a clean way to solve this?<issue_comment>username_1: You can use 100% width or a fixed width on the inputs. Also, to get this working in IE, you need to remove `alignItems: 'center'` from the outermost div
```
Some text
Some text
```
For firefox you have to wrap the input in a container and apply `flex: 1 1 auto;` to the container.
Hope that helps
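For example, a sketch of that Firefox workaround (the wrapper class name is hypothetical):
```
.input-wrap { flex: 1 1 auto; }
.input-wrap input { width: 100%; }
```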
Upvotes: 3 [selected_answer]<issue_comment>username_2: The `input` tag is one of the [replaced elements](https://developer.mozilla.org/en-US/docs/Web/CSS/Replaced_element) and they often have intrinsic dimensions.
In flexbox you can override the intrinsic size by adding:
```
input {
min-width: 0;
}
```
Or, give it a smaller [`size`](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/input#attr-size) than the default value `20`:
```
```
Then set the desire size by using `flex` / `flex-grow` / `flex-basis`, or `width` as needed.
Upvotes: 3
|
2018/03/14
| 540 | 1,903 |
<issue_start>username_0: Problem: I have a component that needs a boolean value optionally passed to it, as part of a call from within the Vue app and from an external app. When I invoke the router directly I can pass the boolean with no issues, but if I route by using the actual URL I get a parse warning. To prevent the warning, should I be using a string instead and parse it myself?
Router:
```
export default new Router({
routes: [
{
path: '/foo',
name: 'Foo',
component: Foo,
props: (route) => ({ booleanParam: route.query.booleanParam })
}
]
})
```
Component:
```
BooleanParam: {{booleanParam}}
export default {
name: 'Foo',
props: {
booleanParam: {
type: Boolean,
required: false
}
}
}
```
Works:
```
router.push( { name: 'Foo', query: { booleanParam: true } })
```
Generates Warning:
```
http://localhost:8080/foo?booleanParam=true
```
Warning:
>
> [Vue warn]: Invalid prop: type check failed for prop "booleanParam".
> Expected Boolean, got String.
>
>
><issue_comment>username_1: Have you tried this?
```
const router = new VueRouter({
routes: [
{
path: '/foo',
name:"Foo"
component: Foo, props: (route) => ({ query: route.query.q }) }
]
})
OR
{path:'/foo/:booleanParam', name:'Foo', component: Foo }
```
Upvotes: 0 <issue_comment>username_2: If a boolean is required by the component then parse the value as a Boolean before setting this as a prop:
```
props: (route) => ({ booleanParam: (route.query.booleanParam === 'true') })
```
This way the correct type is ensured.
---
**EDIT:**
As noted in the comment by the OP to support both String and Boolean the parameter could be converted to String to ensure both types work:
```
props: (route) => ({ booleanParam: (String(route.query.booleanParam).toLowerCase() === 'true') })
```
Upvotes: 4 [selected_answer]
|
2018/03/14
| 2,085 | 10,326 |
<issue_start>username_0: I'm trying to make a image editor tool to go with my image uploader system. I got most of it working, but I think the box with the image and the box for cropping moves kind of weird on resizing. Especially when trying to resize from left bottom corner and both top corners.
I don't really know if this is a `CSS` issue or some problem with the `jQuery` logic. I would really appreciate it if someone could take a look.
I know `jQuery UI` has `draggable`, `resizable`, etc., but I wanted to make this myself.
[jsfiddle demo](https://jsfiddle.net/3g7zfvdo/30/)
**EDIT:** I got it working a lot better when changing from `resize.width()` and `resize.height()` to my variables `width` and `height`, but the movement is still kind of weird when resizing from left to right. (jsfiddle updated)
```
$(document).ready(function () {
var dragging = null;
var resize = null;
var pos;
var height;
var width;
$(".image-resize__box").on("mousemove", function (e) {
if (dragging) {
dragging.offset({
top: e.pageY,
left: e.pageX
});
}
});
$(".image-resize__box__crop__box").on("mousedown", null, function () {
dragging = $(".image-resize__box__crop");
});
$(".image-resize__box__handle").on("mousedown", null, function () {
resize = $(".image-resize__box");
pos = $(this).attr("data-pos");
height = resize.height();
width = resize.width();
});
$(document.body).on("mousemove", null, function (e) {
if (resize) {
var relX = e.pageX - resize.offset().left;
var relY = e.pageY - resize.offset().top;
if(pos === "top-left") {
resize.css({
"width" : resize.width() - relX,
"height" : resize.height() - relY,
});
}
if(pos === "top-mid") {
resize.css({
"width" : width,
"height" : height - relY,
});
}
if(pos === "top-right") {
resize.css({
"width" : resize.width() + (relX - resize.width()),
"height" : resize.height() - relY,
});
}
if(pos === "mid-right") {
resize.css({
"width" : resize.width() + (relX - resize.width()),
"height" : height,
});
}
if(pos === "mid-left") {
resize.css({
"width" : resize.width() - relX,
"height" : height,
});
}
if(pos === "bottom-left") {
resize.css({
"width" : resize.width() - relX,
"height" : resize.height() + (relY - resize.height()),
});
}
if(pos === "bottom-mid") {
resize.css({
"width" : width,
"height" : resize.height() - (resize.height() - relY),
});
}
if(pos === "bottom-right") {
resize.css({
"width" : resize.width() + (relX - resize.width()),
"height" : resize.height() + (relY - resize.height()),
});
}
}
});
$(document.body).on("mouseup", function () {
resize = null;
dragging = null;
});
});
```<issue_comment>username_1: I think the problem is that you can click somewhere *inside* the handles, but then on mouse move, you jump to the `0,0` position of the handle. This results in the image "jumping" in size.
You can fix this by factoring in the offset of the mouse click from the handle origin, and adding that to the x and y of the onmousemove event.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I think I have gotten most of it working now. There are still some small things to fix - for example, the crop box is currently able to move outside the image box. If anyone is interested, an updated jsfiddle can be found [here](https://jsfiddle.net/3g7zfvdo/70/)
```
(function () {
//-------------------------------------------------
// Image details
//-------------------------------------------------
var imageDetails = function() {
var details = function () {
$("#imageWidth").html($(".image-resize__box").width());
$("#imageHeight").html($(".image-resize__box").height());
$("#cropWidth").html($(".image-resize__box__crop").width());
$("#cropHeight").html($(".image-resize__box__crop").height());
$("#cropOffsetTop").html($(".image-resize__box__crop").position().top);
$("#cropOffsetLeft").html($(".image-resize__box__crop").position().left);
};
return {
getDetails: details
};
}();
//-------------------------------------------------
// Image resizer
//-------------------------------------------------
var imageResize = function() {
$(document).ready(function () {
//-------------------------------------------------
// Global variables
//-------------------------------------------------
var dragging = null;
var resize = null;
var pos;
var height;
var width;
var crop = $(".image-resize__box__crop");
var resizeBox = $(".image-resize__box");
//-------------------------------------------------
// Event handlers
//-------------------------------------------------
// Set dragging active on mouse down
$(".image-resize__box__crop__box").on("mousedown", null, function () {
dragging = crop;
});
// Set resize active on mouse down
$(".image-resize__box__handle").on("mousedown", null, function () {
// Resize crop box
if($(this).parent().hasClass("image-resize__box__crop")) {
resize = crop;
} else { // Resize image
resize = resizeBox;
}
pos = $(this).attr("data-pos"); // Resize handler used
height = resize.height(); // Height of element resize
width = resize.width(); // Width of element to resize
});
// Deactivate resize/dragging
$(document.body).on("mouseup", function () {
resize = null;
dragging = null;
});
//-------------------------------------------------
// Drag (crop box)
//-------------------------------------------------
resizeBox.on("mousemove", function (e) {
if (dragging) {
dragging.offset({
top: e.pageY - (crop.height()/ 2),
left: e.pageX - (crop.width()/ 2),
});
imageDetails.getDetails(); // Update image details
}
});
//-------------------------------------------------
// Resize image
//-------------------------------------------------
$(document.body).on("mousemove", null, function (e) {
if (resize) {
var relX = e.pageX - resize.offset().left; // Mouse position in element left
var relY = e.pageY - resize.offset().top; // Mouse position in element top
//-------------------------------------------------
// Resize handlers
//-------------------------------------------------
if(pos === "top-left") {
resize.css({
"width" : width - relX,
"height" : height - relY,
});
}
if(pos === "top-mid") {
resize.css({
"width" : width,
"height" : height - relY,
});
}
if(pos === "top-right") {
resize.css({
"width" : width + (relX - width),
"height" : height - relY,
});
}
if(pos === "mid-right") {
resize.css({
"width" : width + (relX - width),
"height" : height,
});
}
if(pos === "mid-left") {
resize.css({
"width" : width - relX,
"height" : height,
});
}
if(pos === "bottom-left") {
resize.css({
"width" : width - relX,
"height" : height + (relY - height),
});
}
if(pos === "bottom-mid") {
resize.css({
"width" : width,
"height" : height - (height - relY),
});
}
if(pos === "bottom-right") {
resize.css({
"width" : width + (relX - width),
"height" : height + (relY - height),
});
}
imageDetails.getDetails(); // Update image details
}
});
});
}(imageDetails.getDetails());
}());
```
Upvotes: 0 <issue_comment>username_3: I have fixed the jumping of the crop box as well as the weird resizing. I added a `drag` variable to keep track of the previous `x` and `y` positions.
You can check it here: [Updated Cropbox](https://jsfiddle.net/ruht5vyf/1/)
Upvotes: 0
|
2018/03/14
| 1,605 | 7,746 |
<issue_start>username_0: I am trying to make a bowling simulator with JavaScript. I have created a function that takes a random number between 0 & 10, and at bowling you can throw 2 times per round. I managed to get 2 values back, like 5 and 8, but the problem is that you cannot knock down more than 10 pins per round at bowling. And I can't figure out how to get the value 4, 3, 2, 1 or 0 for the second throw when I throw 6 first.
This is my code in a snippet...
```js
var team1 = ["Jason", "Jake", "Jane", "Joe"];
var team2 = ["John", "Drake", "Nick", "Joseph"];
var rounds = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var pin = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
for (var team1 = 0; team1 < 2; team1++) {
console.log(getRandomPin(team1));
}
for (var team2 = 0; team2 < 2; team2++) {
console.log(getRandomPin(team2));
}
function getRandomPin(min, max) {
min = Math.ceil(0);
max = Math.floor(11);
return Math.floor(Math.random() * (11 - 0)) + 0;
}
```
Thank you in advance.
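One way to keep the second throw legal is to draw it from the pins still standing - a sketch:
```js
function rollFrame() {
    var first = Math.floor(Math.random() * 11);                // 0..10 pins
    var remaining = 10 - first;
    var second = Math.floor(Math.random() * (remaining + 1));  // 0..remaining
    return [first, second];
}
```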
|
2018/03/14
| 1,336 | 7,104 |
<issue_start>username_0: I am trying to push a local change to my remote repository, but it fails with 403 - Permission denied.
```
remote: Permission to rmanivannan/speedometer-jquery-plugin.git denied to mcttvni.
fatal: unable to access 'https://github.com/rmanivannan/
speedometer-jquery-plugin.git/': The requested URL returned error: 403
```
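A 403 on push means the authenticated user (`mcttvni` here) has no write access to `rmanivannan/speedometer-jquery-plugin`. Typical fixes are to ask for collaborator access, or to push to a fork you own instead - a sketch, assuming such a fork exists:
```
git remote set-url origin https://github.com/mcttvni/speedometer-jquery-plugin.git
git push -u origin master
```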
|
2018/03/14
| 487 | 1,547 |
<issue_start>username_0: In a CDS View select statement, given that I have a column of type `DEC`, how do I convert that to type `INT`?
**Work done so far:** According to the [`CAST_EXPR` documentation](https://help.sap.com/doc/abapdocu_751_index_htm/7.51/en-US/abencds_f1_cast_expression.htm), this is not possible with `CAST_EXPR` . According to the [numeric functions documentation](https://help.sap.com/doc/abapdocu_752_index_htm/7.52/en-US/abencds_f1_sql_functions_numeric.htm), math functions like `FLOOR` will return a value of the same type.<issue_comment>username_1: **Update:** the [numeric functions documentation](https://help.sap.com/doc/abapdocu_752_index_htm/7.52/en-US/abencds_f1_sql_functions_numeric.htm) is correct.
The code `floor(fieldname)` will convert a `DEC(X,Y)` (where `Y > 0`) into a `DEC(X,0)`. Essentially, `floor` strips the decimal places from the field without changing its type.
On the other hand, `ceil(fieldname)` will round up to the nearest integer *and* convert a `DEC` to an `INT`
If you want to get an integer from the `floor` function, then you must call `ceil(floor(fieldname))`
On a NetWeaver system, you should be able to find the CDS View `demo_cds_sql_functions_num` and program/report `demo_cds_sql_functions_num` that help demonstrate these concepts. You can use the debugger to view the report's variable `result` and confirm my findings.
Upvotes: 3 [selected_answer]<issue_comment>username_2: `ceil` does it:
```
cast(ceil(fieldname) as abap.int4)
```
Note that `floor` won't.
Upvotes: 1
|
2018/03/14
| 975 | 3,653 |
<issue_start>username_0: I have this situation:
There are a Java class
```
public class A {
public void overrideMe(B param){
//TODO: override me in Kotlin!
}
protected static class B {
}
}
```
and a Kotlin class, which inherits from it and has to override method "overrideMe"
```
class K: A() {
override fun overrideMe(param: B) {
println("Wow!")
}
}
```
But Kotlin doesn't allow this behaviour.
>
> 'public' function exposes its 'protected (in A)' parameter type B
>
>
>
Is there any way how to resolve this one?
P.S. It's not just a synthetic case - I faced this problem when I tried to implement custom [Spring AmqpAppender](https://docs.spring.io/spring-amqp/api/org/springframework/amqp/rabbit/logback/AmqpAppender.html) and to override it's postProcessMessageBeforeSend method.<issue_comment>username_1: There is **no way to resolve this** in Kotlin, and here is why:
The difference is that `protected` actually means something subtly different in Kotlin than in Java.
`protected` in [Kotlin](https://kotlinlang.org/docs/reference/visibility-modifiers.html) means:
>
> **kotlin protected:** same as private (visible inside the file containing the declaration) + visible in subclasses too;
>
>
>
`protected` in [Java](https://docs.oracle.com/javase/tutorial/java/javaOO/accesscontrol.html) means:
>
> **java protected**: the member can only be accessed within its own package (as with package-private) and, in addition, by a subclass of its class in another package.
>
>
>
And with this knowledge the issue should be clear, the `protected static class B` in Kotlin is more like `private static class B` in Java. Therefore the warning is correct.
The [Kotlin-Java Interop](http://kotlinlang.org/docs/reference/java-to-kotlin-interop.html#variant-generics) guide specifically states:
>
> `protected` remains `protected` (note that Java allows accessing protected members from other classes in the same package and Kotlin doesn't, so **Java classes will have broader access to the code**);
>
>
>
**Conclusion:**
This means that Kotlin interprets the `Java-protected` as if it was a Kotlin-`protected` ergo there is no way to implement the `class K` in Kotlin as it is. The least you must do to make it work is create `C extends A` (in Java) that handles all public access of `B` and then extend this class in Kotlin. Like in this issue [Calling protected static methods](https://discuss.kotlinlang.org/t/calling-protected-static-methods/947)
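A minimal sketch of that workaround - a Java shim that hides `B` behind a new overridable method (the shim's method name is hypothetical):
```
// Java
public class C extends A {
    @Override
    public void overrideMe(B param) {
        overrideMeSafely(); // B never appears in the new method's signature
    }
    protected void overrideMeSafely() { }
}
```
The Kotlin class then overrides the `B`-free method instead:
```
class K : C() {
    override fun overrideMeSafely() = println("Wow!")
}
```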
The culprit:
The main problem is Java's behaviour of [static nested classes](https://docs.oracle.com/javase/tutorial/java/javaOO/nested.html), which
>
> interacts with the instance members of its outer class (and other classes) just like any other top-level class. **In effect, a static nested class is behaviorally a top-level class that has been nested in another top-level class for packaging convenience**.
>
>
>
This *convenient* behaviour creates the problem in the first place.
Side note:
Probably the better match for Java-`protected` is Kotlin's `internal`, which provides a better level of encapsulation.
>
> **kotlin internal:** any client inside this module who sees the declaring class sees its internal members;
>
>
>
Upvotes: 2 <issue_comment>username_2: Well, after all, the conclusion is: there is no way to solve this situation in pure Kotlin.
I hope that AmqpAppender.Event will become public in the near future.
Even if Java allows that behaviour, having non-public arguments in public methods seems like bad design to me (and to the developers of Kotlin, apparently).
Upvotes: 1 [selected_answer]
|
2018/03/14
| 1,166 | 4,008 |
<issue_start>username_0: I am using `Flink v.1.4.0`.
I am trying to run a job using the `DataSet API` through `IntelliJ`. Note that If I run the same job through the `Flink UI` the job runs fine. In order to run the job, I need to first specify through environment variables the amount of data that will be processed. When the amount is relatively small, the job runs fine. But as it gets bigger I am beginning to get the following error:
```
ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...
31107 [main] ERROR com.company.someLib.SomeClass - Error executing pipeline
org.apache.flink.runtime.client.JobExecutionException: Couldn't retrieve the JobExecutionResult from the JobManager.
at org.apache.flink.runtime.client.JobClient.awaitJobResult(JobClient.java:300)
at org.apache.flink.runtime.client.JobClient.submitJobAndWait(JobClient.java:387)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:565)
at org.apache.flink.runtime.minicluster.FlinkMiniCluster.submitJobAndWait(FlinkMiniCluster.scala:539)
at org.apache.flink.client.LocalExecutor.executePlan(LocalExecutor.java:193)
at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:91)
at com.ubs.digital.comms.graph.emailanalyser.EmailAnalyserPipeline.lambda$runPipeline$1(EmailAnalyserPipeline.java:120)
at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
at com.ubs.digital.comms.graph.emailanalyser.EmailAnalyserPipeline.runPipeline(EmailAnalyserPipeline.java:87)
at com.ubs.digital.comms.graph.emailanalyser.EmailAnalyserPipeline.main(EmailAnalyserPipeline.java:65)
Caused by: org.apache.flink.runtime.client.JobClientActorSubmissionTimeoutException: Job submission to the JobManager timed out. You may increase 'akka.client.timeout' in case the JobManager needs more time to configure and confirm the job submission.
```
I can see that the advice is:
```
You may increase 'akka.client.timeout' in case the JobManager needs more time to configure and confirm the job submission.
```
but I suspect that the problem goes deeper than that. But in order to get there I need to first configure `akka.client.timeout`. How do I do this **in IntelliJ**? And how long should the timeout be?
Furthermore, what's really causing this? Do I need to increase my heap memory or something? Thanks.<issue_comment>username_1: You can set this property via the flink configuration file. See <https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/config.html#distributed-coordination-via-akka>
So in flink-conf.yaml you would add for example:
```
akka.client.timeout: 10min
```
But it seems like the data is being processed in the wrong place. Do you perhaps load the data in a constructor rather than in a `map` or `run` function?
Upvotes: 0 <issue_comment>username_2: I was able to figure it out and it was not so difficult either. All I had to do was go to `Run > Edit Configurations` and under the `Configucation` tab in the `Program arguments` field, add the following:
```
-Dakka.client.timeout:600s
-Dakka.ask.timeout:600s
```
I should note, however, that this did not solve the problem I was having altogether.
Upvotes: 4 [selected_answer]
|
2018/03/14
| 508 | 1,728 |
<issue_start>username_0: I'm working with this kind of JSON in Scala:
```
{
"ClientBase": [
{
"string#name": "robert",
"int#age": 46,
"string#country": "USA"
},
{
"string#name": "tom",
"int#age": 45,
"string#country": "UK"
}
]
}
```
I use the Json4s library and I would like to add a new field to each client. I know how to do this for one, but is there a quick way to do it for every one?
I would like a result like this :
```
{
"ClientBase": [
{
"string#name": "robert",
"int#age": 46,
"string#country": "USA",
"BLOCK_ID" : "client_base"
},
{
"string#name": "tom",
"int#age": 45,
"string#country": "UK",
"BLOCK_ID" : "client_base"
}
]
}
```
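A minimal sketch of one way to do this with json4s's `transformField` and `merge` (assuming the jackson backend; `input` stands for the JSON string from the question):
```
import org.json4s._
import org.json4s.jackson.JsonMethods._

val json = parse(input)
val updated = json transformField {
  case ("ClientBase", JArray(clients)) =>
    // merge the extra field into every element of the array
    ("ClientBase", JArray(clients.map(_ merge JObject("BLOCK_ID" -> JString("client_base")))))
}
println(pretty(render(updated)))
```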
|
2018/03/14
| 1,448 | 5,479 |
<issue_start>username_0: Data bindings don't get updated if their values are changed after an `await` statement.
```
handle() {
this.message = 'Works'
}
async handle() {
this.message = 'Works'
}
async handle() {
await new Promise((resolve, reject) => {
resolve()
})
this.message = 'Works'
}
async handle() {
await new Promise((resolve, reject) => {
setTimeout(() => resolve(), 3000)
})
this.message = 'Doesn\'t work'
}
handle() {
new Promise((resolve, reject) => {
setTimeout(() => resolve(), 3000)
})
.then(() => this.message = 'Works')
}
```
Why do the last two not behave the same? Aren't they supposed to be the same thing?
Ionic: 3.9.2
Angular: 5.0.3
TypeScript: 2.4.2
EDIT: I came across another problem with this which may be useful to some.
**Changing the values of a binding in the constructor behaves differently to ionViewDidLoad or ngOnInit!**
```
constructor(private zone: NgZone) {
// This will cause the same problems, bindings not updating
this.handle()
}
constructor(private zone: NgZone) {
// Unless you do this...
this.zone.run(() => {
this.handle()
})
}
ionViewDidLoad() {
// But I think this is better/cleaner
this.handle()
}
```<issue_comment>username_1: It's got to do with how change detection works in Angular. See:
<https://stackblitz.com/edit/angular-jajbza?file=app%2Fapp.component.ts>
I'm betting Ionic uses OnPush strategy by default, or you've enabled it, as I did in the Blitz. It's all good, IMO it should be on by default anyway as it forces you to think about these things and write more performant code.
Can't say exactly why your view is updated in your last example when you call .then, though. Maybe in that case CD manages to keep track of the Promise, but not with async functions. Async functions themselves return a Promise even if you don't return one, so basically after transpilation, async handle() returns something like Promise.resolve(null), though the actual JS code probably looks messier than that.
Edit: Another way, perhaps cleaner than calling detectChanges manually would be to run anything that changes the view inside Angular's zone:
```
import { NgZone } from '@angular/core';
constructor (private zone:NgZone){}
// after await somePromise :
this.zone.run(() => {
this.someProperty = 'Something changed';
});
```
Edit2: Interesting, zone.run() actually doesn't change anything. Tested that in the blitz. So manual CD is the only way. One more reason to avoid async/await and try to stick with Observables and NG's async pipe. :)
Upvotes: 1 <issue_comment>username_2: Check your browser version and what it supports; async/await is native with ES2017. If your browser doesn't support that, use ES2016 as your target.
I needed ES2016 for Electron.
Upvotes: 2 <issue_comment>username_3: Angular relies on Zone.js for change detection, and Zone.js provides this by patching every API that can provide asynchronous behaviour.
The problem is in how native `async` functions are implemented. As confirmed in [this question](https://stackoverflow.com/questions/46908575/async-await-native-implementations), they don't just wrap around global `Promise` but rely on internal mechanisms that may vary from one browser to another.
Zone.js patches `Promise` but it's impossible to patch internal promise that is used by `async` functions in current engine implementations (here is [open issue](https://github.com/angular/zone.js/issues/740) for that).
Usually `(async () => {})() instanceof Promise === true`. In case of Zone.js, this isn't true; `async` function returns an instance of native `Promise`, while `Promise` global is zone-aware promise patched by Zone.js.
In order to make native `async` functions work in Angular, change detection should be additionally triggered. This can be done by triggering it explicitly (as another answer already suggests) or by using any zone-aware API. A helper that wraps `async` function result with zone-aware promise will do the trick:
```
function nativeAsync(target, method, descriptor) {
const originalMethod = target[method];
descriptor.value = function () {
return Promise.resolve(originalMethod.apply(this, arguments));
}
}
```
[Here](http://plnkr.co/edit/lSZgKYybrc0s6sAVRivd?p=preview) is an example that uses `@nativeAsync` decorator on `async` methods to trigger change detection:
```
@nativeAsync
async getFoo() {
await new Promise(resolve => setTimeout(resolve, 100));
this.foo = 'foo';
}
```
[Here](http://plnkr.co/edit/lSZgKYybrc0s6sAVRivd?p=preview) is same example that doesn't use additional measures to trigger change detection and expectedly doesn't work as intended.
It makes sense to stick to native implementation in environment that doesn't require transpilation step. Since Angular application is supposed to be compiled any way, the problem can be solved by switching from `ES2017` to `ES2015` or `ES2016` TypeScript `target`.
Upvotes: 5 [selected_answer]<issue_comment>username_4: just like estus said, currently `zone.js` doesn't support native async/await, so you can't compile TypeScript which targets `ES2017`, and I am working on it, <https://github.com/angular/zone.js/pull/795>. I have made a working demo which can run in Node.js, but in the browser (Chrome) it will still take some time, because Chrome doesn't yet expose the JavaScript version of AsyncHooks and PromiseHooks.
Upvotes: 3
|
2018/03/14
| 1,347 | 5,576 |
<issue_start>username_0: I am doing the following, which works perfectly
```
//else proceed with the checks
JsonObjectRequest jsonObjectRequest = new JsonObjectRequest(
Request.Method.GET,
checkauthurl,
null,
new Response.Listener() {
@Override
public void onResponse(String response) {
//do stuff here
}
},
new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
// do stuff here
}
}) {
@Override
public Map getHeaders() throws AuthFailureError {
HashMap headers = new HashMap ();
TokenService tokenservice = new TokenService(ctx);
String accesstoken = tokenservice.getToken(ApiHelper.ACCESS\_TOKEN\_SHARED\_PREF);
headers.put("Authorization", "Bearer " + accesstoken);
return headers;
}
};
// Access the RequestQueue through your singleton class.
ApiSingleton strngle = new ApiSingleton(ctx);
strngle.addToRequestQueue(jsonObjectRequest);
```
For every request, I have to add the request header. How can I set request headers directly in the singleton?
This is my singleton
```
private static ApiSingleton mInstance;
private RequestQueue mRequestQueue;
public static Context mCtx;
private ImageLoader mImageLoader;
public ApiSingleton(Context context) {
mCtx = context;
mRequestQueue = getRequestQueue();
//do stuff
}
public RequestQueue getRequestQueue() {
if (mRequestQueue == null) {
// getApplicationContext() is key, it keeps you from leaking the
// Activity or BroadcastReceiver if someone passes one in.
mRequestQueue = Volley.newRequestQueue(mCtx.getApplicationContext());
}
return mRequestQueue;
}
```
How do I avoid the above code duplication when attaching the bearer token in every request?<issue_comment>username_1: Make an AppController.java file and mention this file name as android:app in manifest tag.
```
public class AppController extends Application {
public static final String TAG = AppController.class.getSimpleName();
private RequestQueue mRequestQueue;
private static AppController mInstance;
private ImageLoader mImageLoader;
@Override
public void onCreate() {
super.onCreate();
mInstance = this;
}
public static synchronized AppController getInstance() {
return mInstance;
}
public RequestQueue getRequestQueue() {
if (mRequestQueue == null) {
mRequestQueue = Volley.newRequestQueue(getApplicationContext());
}
return mRequestQueue;
}
public ImageLoader getImageLoader() {
getRequestQueue();
if (mImageLoader == null) {
mImageLoader = new ImageLoader(this.mRequestQueue, new LruBitmapCache());
}
return this.mImageLoader;
}
public void addToRequestQueue(Request req, String tag) {
req.setTag(TextUtils.isEmpty(tag) ? TAG : tag);
getRequestQueue().add(req);
}
public void addToRequestQueue(Request req) {
req.setTag(TAG);
getRequestQueue().add(req);
}
public void cancelPendingRequests(Object tag) {
if (mRequestQueue != null) {
mRequestQueue.cancelAll(tag);
}
}
}
```
Do the networking code
```
StringRequest strReq = new StringRequest(Request.Method.POST, AppConfig.URL_BUYER_LOGIN,
new Response.Listener() {
@Override
public void onResponse(String response) {
}
}, new Response.ErrorListener() {
@Override
public void onErrorResponse(VolleyError error) {
}
}) {
@Override
protected Map getParams() {
}
};
// Adding request to request queue
AppController.getInstance().addToRequestQueue(strReq, tag\_string\_req);
}
```
Upvotes: -1 <issue_comment>username_2: 1. You can write a "Factory" with a method that takes your `checkauthurl` and `ctx` and returns you an instance of the JsonObjectRequest. Your factory could implement some logic for re-use of objects that have the same auth Url if that makes sense in your case.
2. You can sub-class `JsonObjectRequest` and provide your `checkauthurl` and `ctx` as a parameter to the constructor. Similarly, you can implement a scheme to re-use the objects
The factory would be the suggested approach if you want Dependency Injection.
I would recommend against pre-allocating the `Token` and using it in multiple requests. Tokens expire. If the TokenService is written well, it should know when tokens will expire and refresh as needed (if possible).
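A minimal sketch of option 1, reusing `TokenService` and `ApiHelper` from the question (the factory class and method names are hypothetical):
```
public class AuthedRequestFactory {
    private final Context ctx;

    public AuthedRequestFactory(Context ctx) { this.ctx = ctx; }

    public JsonObjectRequest create(int method, String url, JSONObject body,
                                    Response.Listener<JSONObject> listener,
                                    Response.ErrorListener errorListener) {
        return new JsonObjectRequest(method, url, body, listener, errorListener) {
            @Override
            public Map<String, String> getHeaders() throws AuthFailureError {
                Map<String, String> headers = new HashMap<>();
                // Fetched per request so an expired token can be refreshed in between.
                String accessToken = new TokenService(ctx)
                        .getToken(ApiHelper.ACCESS_TOKEN_SHARED_PREF);
                headers.put("Authorization", "Bearer " + accessToken);
                return headers;
            }
        };
    }
}
```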
Upvotes: 0 <issue_comment>username_3: ```
public class CustomJsonRequest extends JsonRequest<JSONObject> {

    private final Context ctx;

    public CustomJsonRequest(int method, String url, String requestBody, Context ctx,
                             Response.Listener<JSONObject> listener,
                             Response.ErrorListener errorListener) {
        super(method, url, requestBody, listener, errorListener);
        this.ctx = ctx;
    }

    @Override
    protected Response<JSONObject> parseNetworkResponse(NetworkResponse response) {
        try {
            // Decode the body with the charset from the headers, then parse it as JSON.
            String json = new String(response.data,
                    HttpHeaderParser.parseCharset(response.headers, PROTOCOL_CHARSET));
            return Response.success(new JSONObject(json),
                    HttpHeaderParser.parseCacheHeaders(response));
        } catch (UnsupportedEncodingException | JSONException e) {
            return Response.error(new ParseError(e));
        }
    }

    @Override
    public Map<String, String> getHeaders() throws AuthFailureError {
        Map<String, String> headers = new HashMap<>();
        TokenService tokenservice = new TokenService(ctx);
        String accesstoken = tokenservice.getToken(ApiHelper.ACCESS_TOKEN_SHARED_PREF);
        headers.put("Authorization", "Bearer " + accesstoken);
        return headers;
    }
}
You can extend the JsonRequest class and override its getHeaders() method.
Pass an instance of CustomJsonRequest when you add Volley requests to the queue.
```
VolleyUtils.getInstance().addToRequestQueue(customJsonRequest);
```
Upvotes: 1
|
2018/03/14
| 1,162 | 4,617 |
<issue_start>username_0: I am trying to write a test for a new function that I want to add. However, when I run the test the project won't build for a completely unrelated reason. A normal build works fine. As you can see below, the errors are stating that the struct WeatherDataOutput is not declared. It is declared. The test class is empty. I'm not sure why I'm getting this error in Xcode. I restarted Xcode and I cleaned the project, but nothing is working for me. I would appreciate any help on this.
Thanks
[](https://i.stack.imgur.com/GDpzN.png)
[](https://i.stack.imgur.com/xyQU6.png)
[](https://i.stack.imgur.com/TdsXE.png)
|
2018/03/14
| 1,750 | 6,565 |
<issue_start>username_0: I am using DocuSign for my client as a digital-signature solution. I am using the PHP REST API, I have 5 template roles, and everything is working perfectly.
I want to get a request to my server from DocuSign when the envelope status is completed, so that I can update the status on my side as well.
I have one solution in mind:
**1) Solution 1**
I can create a cron job to check the status of the envelope, but 4 cron jobs are already running on my server, so I am avoiding this solution.
My Code is
```
$envelopeApi = new DocuSign\eSign\Api\EnvelopesApi($apiClient);
// Add a document to the envelope
$document = new DocuSign\eSign\Model\Document();
$document->setDocumentBase64(base64_encode(file_get_contents($documentFileName)));
$document->setName($documentName);
$document->setDocumentId("1");
// assign recipient to template role by setting name, email, and role name. Note that the
// template role name must match the placeholder role name saved in your account template.
$templateRole = new DocuSign\eSign\Model\TemplateRole();
$templateRole->setEmail($recipientEmail);
$templateRole->setName($recipientName);
$templateRole->setRoleName("Buyer");
$templateRole->setClientUserId('12345');
$docusignlogs['Recipients'][]=array("Email"=>$recipientEmail,"Name"=>$recipientName,"Role"=>"Buyer");
$templateRole1 = new DocuSign\eSign\Model\TemplateRole();
$templateRole1->setEmail($agentEmail);
$templateRole1->setName($agentName);
$templateRole1->setRoleName("SA");
$docusignlogs['Recipients'][]=array("Email"=>$agentEmail,"Name"=>$agentName,"Role"=>"SA");
//$templateRole1->setClientUserId('12345');
$all_template_roles = array($templateRole,$templateRole1);
$envelop_definition = new DocuSign\eSign\Model\EnvelopeDefinition();
$envelop_definition->setEmailSubject(" E-CONTRACT – {$subname} – {$lotjobnum}");
$envelop_definition->setTemplateId($templateid);
$envelop_definition->setDocuments(array($document));
$envelop_definition->setTemplateRoles($all_template_roles);
// set envelope status to "sent" to immediately send the signature request
$envelop_definition->setStatus("sent");
// optional envelope parameters
$options = new \DocuSign\eSign\Api\EnvelopesApi\CreateEnvelopeOptions();
$options->setCdseMode(null);
$options->setMergeRolesOnDraft(null);
// create and send the envelope (aka signature request)
$envelop_summary = $envelopeApi->createEnvelope($accountId, $envelop_definition, $options);
if(!isset($envelop_summary->errorCode)){
$document=json_decode($envelop_summary);
$envloped=$document->envelopeId;
$viewrequest = new DocuSign\eSign\Model\RecipientViewRequest();
$viewrequest->setUserName($recipientName);
$viewrequest->setEmail($recipientEmail);
$viewrequest->setAuthenticationMethod('email');
$viewrequest->setClientUserId('12345');
$viewrequest->setReturnUrl($ReturnUrl);
$envelopview=$envelopeApi->createRecipientView($accountId,$document->envelopeId,$viewrequest);
$redirecturl=$envelopview->getUrl();
}else{
$message=isset($envelop_summary->message) ? $envelop_summary->message : "unable to create envelope";
$wpdb->update( $wpdb->prefix.'reservation', array('envelope_id'=>$message), array('id'=>$reservation_id));
return builderUX_flash('danger',"Error occurred with connecting to DocuSign please contact us .");
}
```
Thanks in advance.<issue_comment>username_1: The DocuSign Connect service will make an HTTPS POST to your application's server when the envelope status changes, so you don't have to regularly poll envelopes for changes.
A general overview of Connect is available here: <https://www.docusign.com/blog/dsdev-adding-webhooks-application/>
An example listener is available here: <https://github.com/docusign/docusign-soap-sdk/tree/master/PHP/Connect>
You can set up Connect for your entire account or you can request Connect for a specific envelope by including the `eventNotification` object with your Envelopes: create API call.
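To make the shape concrete, here is a minimal sketch of such a listener in Python/Flask (not the official sample linked above; the route path and the acknowledgement behaviour are assumptions):
```
from flask import Flask, request

app = Flask(__name__)

@app.route("/docusign/webhook", methods=["POST"])
def docusign_webhook():
    # DocuSign Connect POSTs the envelope status to this endpoint.
    payload = request.data  # XML or JSON, depending on your Connect configuration
    # ... parse the payload, find the envelope ID and status, update your own records ...
    return "", 200  # return a 2xx so Connect marks the notification as delivered
```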
Upvotes: 2 <issue_comment>username_2: So I found the solution, and it is `webhooks`; you can read about it here: [Docusign Webhooks](https://developers.docusign.com/esign-rest-api/code-examples/webhook-status)
For this you just need to add a few lines of code when you create the envelope.
```
// The envelope request includes a signer-recipient and their tabs object,
// and an eventNotification object which sets the parameters for
// webhook notifications to use from the DocuSign platform
$envelope_events = [
(new \DocuSign\eSign\Model\EnvelopeEvent())->setEnvelopeEventStatusCode("sent"),
(new \DocuSign\eSign\Model\EnvelopeEvent())->setEnvelopeEventStatusCode("delivered"),
(new \DocuSign\eSign\Model\EnvelopeEvent())->setEnvelopeEventStatusCode("completed"),
(new \DocuSign\eSign\Model\EnvelopeEvent())->setEnvelopeEventStatusCode("declined"),
(new \DocuSign\eSign\Model\EnvelopeEvent())->setEnvelopeEventStatusCode("voided"),
(new \DocuSign\eSign\Model\EnvelopeEvent())->setEnvelopeEventStatusCode("sent"),
(new \DocuSign\eSign\Model\EnvelopeEvent())->setEnvelopeEventStatusCode("sent")
];
$recipient_events = [
(new \DocuSign\eSign\Model\RecipientEvent())->setRecipientEventStatusCode("Sent"),
(new \DocuSign\eSign\Model\RecipientEvent())->setRecipientEventStatusCode("Delivered"),
(new \DocuSign\eSign\Model\RecipientEvent())->setRecipientEventStatusCode("Completed"),
(new \DocuSign\eSign\Model\RecipientEvent())->setRecipientEventStatusCode("Declined"),
(new \DocuSign\eSign\Model\RecipientEvent())->setRecipientEventStatusCode("AuthenticationFailed"),
(new \DocuSign\eSign\Model\RecipientEvent())->setRecipientEventStatusCode("AutoResponded")
];
$event_notification = new \DocuSign\eSign\Model\EventNotification();
$event_notification->setUrl($webhook_url);
$event_notification->setLoggingEnabled("true");
$event_notification->setRequireAcknowledgment("true");
$event_notification->setUseSoapInterface("false");
$event_notification->setIncludeCertificateWithSoap("false");
$event_notification->setSignMessageWithX509Cert("false");
$event_notification->setIncludeDocuments("true");
$event_notification->setIncludeEnvelopeVoidReason("true");
$event_notification->setIncludeTimeZone("true");
$event_notification->setIncludeSenderAccountAsCustomField("true");
$event_notification->setIncludeDocumentFields("true");
$event_notification->setIncludeCertificateOfCompletion("true");
$event_notification->setEnvelopeEvents($envelope_events);
$event_notification->setRecipientEvents($recipient_events);
```
Upvotes: 0
|
2018/03/14
| 728 | 2,286 |
<issue_start>username_0: I'm trying to create a program that allows the user to enter 10 values, and then displays the highest value in the array. Every time I run the program, it gives me an error saying the stack was corrupted. Help please? This is my code:
```
#include <iostream>
using namespace std;
int main()
{
const int SIZE = 10;
int number[SIZE];
int count;
int highest;
for (count = 0; count < SIZE; count++)
{
cout << "Enter 10 numbers" << endl;
cin >> number[SIZE];
}
for (count =1; count < SIZE; count++)
{
if (number[count] > highest)
highest = number[count];
}
cout << highest << endl;
return 0;
}
```<issue_comment>username_1: ```
cin >> number[SIZE];
```
Should be
```
cin >> number[count];
```
It looks like you mistyped here. What you're actually doing now is reading all 10 numbers into the same location. That location is one past the last index of the array you own, resulting in undefined behavior. It looks like for you this manifested as a "corrupted stack."
Upvotes: 1 <issue_comment>username_2: firstly
`cin >> number[SIZE];` --> `std::cin >> number[count];`
Secondly `highest` is un-initialized. initialize it with `number[0]` as
```
highest = number[0]; /* first time it should have array 0th element value */
for (count =1; count < SIZE; count++) {
if (number[count] > highest)
highest = number[count];
}
```
Upvotes: 2 <issue_comment>username_3: You made a few mistakes. This would be the corrected version of your code:
```
#include <iostream>
using namespace std;
int main()
{
const int SIZE = 10;
int number[SIZE];
int count;
int highest;
for (count = 0; count < SIZE; count++)
{
cout << "Enter 10 numbers" << endl;
cin >> number[count];
cout << "you entered: " << number[count] << endl;
}
highest = number[0];
for (count = 1; count < SIZE; count++)
{
if (number[count] > highest)
highest = number[count];
}
cout << highest << endl;
return 0;
}
```
But consider the following solution; you can simplify and improve your code if you start learning the STL.
```
#include <iostream>
#include <vector>
#include <algorithm>
int main()
{
 std::cout << "Enter 10 numbers:\n";
 const int size = 10;
 std::vector<int> v(size);
 for (auto& val : v) std::cin >> val;
 std::cout << "max: " << *std::max_element(v.begin(), v.end()) << "\n";
}
```
Upvotes: 1
|
2018/03/14
| 366 | 1,220 |
<issue_start>username_0: I am currently creating a website that uses media queries. I already have a normal navigation; however, when the page shrinks to a mobile size, I would like the navigation to switch to another navigation bar I have created, which includes icons and is easier for mobile users.
```
@media only screen and (min-width : 50px) {
 <a href="Home.html"><img ...></a>
 <a href="Education.html"><img ...></a>
 <a href="Contact Information.html"><img ...></a>
}
```<issue_comment>username_1: You don't need to put HTML inside a media query...
HTML belongs in the HTML page.
You just need to set the CSS below, inside a media query, in your CSS file or in your `<style>` tag:
```
@media only screen and (max-width : 767px) {
 .navbar a img {
 width: 50px !important;
 height: 50px !important;
 }
}
```
Upvotes: 1 <issue_comment>username_2: Use below code to show navigation properly on responsiveness mobile view.
Code:
```
@media only screen and (min-width: 320px) and (max-width: 480px) {
 .nav img {
 height: 100px;
 width: 100px;
 }
}
```
Note: height: 100px and width: 100px are just for reference; please update the height and width with your actual image size.
Upvotes: 0
|
2018/03/14
| 1,756 | 5,570 |
<issue_start>username_0: I have been following a chat bot tutorial and am stuck. I have included the exact step that I am on as a link at the bottom of this post in case you are curious what my code looks like (I was frustrated so I copied his code word for word).
During the execution of my code, it processes just over 26,000 lines before it throws the exception. My code can be found below. As you can see, I have tried various solutions, including replacing \r and \n characters with nothing and adding the `strict=False` argument, which should allow control characters inside strings, but that didn't work either.
```
with open('C:/Python34/stuff/chatbot/{}/RC_{}'.format(timeframe.split('-')[0], timeframe), buffering=1000) as f:
for row in f:
row_counter += 1
if row_counter > start_row:
try:
row = json.loads(row.replace('\n','').replace('\r',''), strict=False)
---------blah blah blah blah------------
except Exception as e:
print("RUH ROH " + str(e))
```
and the exact error message is below:
`RUH ROH Unterminated string starting at: line 1 column 368 (char 367)`
link:
<https://pythonprogramming.net/building-database-chatbot-deep-learning-python-tensorflow/>
**EDIT:**
getting rid of the try catch gave me a little more information when the error is thrown and can be found below:
```
Traceback (most recent call last):
File "C:/Python34/stuff/chatbot/chatbot_db2.py", line 103, in
row = json.loads(row.replace('\n','').replace('\r',''), strict=False)
File "C:\Python34\lib\json\__init__.py", line 331, in loads
return cls(**kw).decode(s)
File "C:\Python34\lib\json\decoder.py", line 343, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Python34\lib\json\decoder.py", line 359, in raw_decode
obj, end = self.scan_once(s, idx)
ValueError: Unterminated string starting at: line 1 column 368 (char 367)
```
**EDIT2:**
Following up on a comment, they suggested I print out the line that the exception was being thrown at. And it did shed some light.
`{"subreddit":"sydney","author_flair_text":null,"id":"cqugtij","gilded":0,"removal_reason":null,"downs":0,"archived":false,"created_utc":"1430439358","link_id":"t3_34e5fd","ups":6,"subreddit_id":"t5_2qkob","name":"t1_cqugtij","score_hidden":false,"author_flair_css_class":null,"parent_id":"t1_cqttsc3","controversiality":0,"score":6,"author":"SilverMeteor9798","body":"As state transport minister almost every press release from Gladys had something in there about how the liberals were \"getting on with the job\" and blaming Labor for something. It wasn't necessarily false, it just got tiresome after a while particular`
while a successful row will look like this:
`{"created_utc":"1430438400","ups":4,"subreddit_id":"t5_378oi","link_id":"t3_34di91","name":"t1_cqug90g","score_hidden":false,"author_flair_css_class":null,"author_flair_text":null,"subreddit":"soccer_jp","id":"cqug90g","removal_reason":null,"gilded":0,"downs":0,"archived":false,"author":"rx109","score":4,"retrieved_on":1432703079,"body":"\u304f\u305d\n\u8aad\u307f\u305f\u3044\u304c\u8cb7\u3063\u305f\u3089\u8ca0\u3051\u306a\u6c17\u304c\u3059\u308b\n\u56f3\u66f8\u9928\u306b\u51fa\u306d\u30fc\u304b\u306a","distinguished":null,"edited":false,"controversiality":0,"parent_id":"t3_34di91"}`
I am honestly more confused now but it does look like it ends in a `"}` for all of the objects. So either it isn't ending, or there is a character that can't be parsed?
**EDIT3 - SOLVED**
I assumed that the file was complete, but I guess there was an error downloading it and the file was cut off with an incomplete JSON Object as the last entry. So just deleting that entry solved the issue.
Thanks to everyone for the help<issue_comment>username_1: As I explained in EDIT2, I printed out the line that was giving me trouble, and saw that it did not end in a `}`, which every JSON Object should. I then went into the file, and checked the exact line that was giving me trouble by using a simple search, and I found that the line was not only truncated, but it was also the last line of my file as well.
There was definitely an error when I was either downloading or extracting this file, and it seemed to cut it short. This in turn threw the error that I got with no solution seeming to work.
**To anyone who is having this error and .replace() solutions are not working:** try to look through your data and make sure that there is in fact something there to replace or edit. In my case there was a truncating error during the download or extraction which made such solutions impossible.
Big thanks to abarnert, <NAME> and <NAME>
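For future readers, here is a minimal Python sketch of the kind of defensive check that would have surfaced this immediately (the file name is a placeholder): parse line by line and report, rather than crash on, a malformed trailing record:
```
import json

with open("RC_2015-05") as f:  # placeholder file name
    for line_no, line in enumerate(f, start=1):
        try:
            row = json.loads(line)
            # ... process row ...
        except ValueError as e:  # json.JSONDecodeError is a subclass of ValueError
            # A failure only on the very last line usually means a truncated download.
            print("Skipping malformed line {}: {}".format(line_no, e))
```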
Upvotes: 3 [selected_answer]<issue_comment>username_2: I discovered the good guys at [Luminoso](https://luminoso.com/) have written a [Library](https://ftfy.readthedocs.io/en/latest/) to sort this kind of issue.
Apparently, *sometimes you might have to deal with text that comes out of other code. where the text has often passed through several different pieces of software, each with their own quirks, probably with Microsoft Office somewhere in the chain* --- [see this blog post](http://blog.conceptnet.io/posts/2012/fixing-common-unicode-mistakes-with-python-after-theyve-been-made/)
This is where [ftfy](https://ftfy.readthedocs.io/en/latest/) comes to the rescue.
```
from ftfy import fix_text
import json
# text = some text source with a potential unicode problem
fixed_text = fix_text(text)
data = json.loads(fixed_text)
```
Upvotes: 2
|
2018/03/14
| 767 | 2,591 |
<issue_start>username_0: I am not quite sure how to explain this question, so let's look at the code below:
```
import numpy as np
def glb_mod(s_np,s):
s_np.shape = (2,-1)
s=2
return s
if __name__=='__main__':
a=1
a_np = np.arange(10)
print ('a, a_np initialized as')
print(a , a_np)
glb_mod(a_np,a)
print ('a, a_np are now')
print (a, a_np)
```
I have two global variables named:
```
a, a_np
```
after running them through the function `glb_mod()`, the results are:
```
a, a_np initialized as
1 [0 1 2 3 4 5 6 7 8 9]
a, a_np are now
1 [[0 1 2 3 4]
[5 6 7 8 9]]
```
Why did "a_np" change but "a" did not? How should I modify the code so that, when passing the global variable "a_np" into the function, "a_np" does not change after running through "glb_mod()"?
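(For reference, a minimal sketch of one way to achieve that, assuming the reshape inside the function is still wanted: work on a copy, since NumPy arrays are passed by reference while rebinding a plain int only affects the local name.)
```
import numpy as np

def glb_mod(s_np, s):
    s_np = s_np.copy()    # private copy: reshaping it no longer touches the caller's array
    s_np.shape = (2, -1)
    s = 2                 # rebinding a local name never changes the caller's variable
    return s

a_np = np.arange(10)
glb_mod(a_np, 1)
print(a_np)               # still [0 1 2 3 4 5 6 7 8 9]
```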
|
2018/03/14
| 631 | 2,373 |
<issue_start>username_0: I've trained a Tensorflow model on Google Colab, and saved that model in ".ckpt" format.
I want to download the model so I tried to do this:
```
from google.colab import files
files.download('/content/model.ckpt.index')
files.download('/content/model.ckpt.meta')
files.download('/content/model.ckpt.data-00000-of-00001')
```
I was able to get the meta and index files. However, the data file is giving me the following error:
>
> "MessageError: Error: Failed to download: Service Worker Response
> Error"
>
>
>
Could anybody tell me how I should solve this problem?
|
2018/03/14
| 804 | 3,111 |
<issue_start>username_0: I want to create an **InlineKeyboardButton** layout for Telegram in Python, like the function below, but I want to build it dynamically. This function groups two buttons in the first line and adds one button to the second line. I want two buttons in each line.
```
def options(bot, update):
keyboard = [[InlineKeyboardButton("Option 1", callback_data='1'),
InlineKeyboardButton("Option 2", callback_data='2')],
[InlineKeyboardButton("Option 3", callback_data='3')]]
reply_markup = InlineKeyboardMarkup(keyboard)
update.message.reply_text('Please choose:', reply_markup = reply_markup)
```
But whatever I try, I get an error message like:
```
data['inline_keyboard'].append([x.to_dict() for x in inline_keyboard])
AttributeError: 'list' object has no attribute 'to_dict'
```
My code is as follows:
```
def create_options(self, gid, Events):
opts = []
o = []
keyboard = []
for i, events in enumerate(Events):
if (events.gid == gid):
o.append([[InlineKeyboardButton(events.etkinlik + u" ", callback_data=i),InlineKeyboardButton(events.etkinlik + u" ", callback_data=i)]])
opts.append(o)
keyboard.append(opts)
return InlineKeyboardMarkup(keyboard)
```
Could you suggest where I am going wrong?
Thanks a lot for your help.
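For what it's worth, here is a minimal sketch of grouping two buttons per row with `python-telegram-bot` (the `events` list and its `etkinlik` field mirror the question and are assumptions):
```
from telegram import InlineKeyboardButton, InlineKeyboardMarkup

def create_options(events):
    # Simplified sketch: the gid filtering from the question is omitted.
    keyboard, row = [], []
    for i, event in enumerate(events):
        row.append(InlineKeyboardButton(event.etkinlik, callback_data=str(i)))
        if len(row) == 2:        # close a row after every second button
            keyboard.append(row)
            row = []
    if row:                      # a trailing row with a single button
        keyboard.append(row)
    # InlineKeyboardMarkup expects a list of rows, each row a list of buttons
    return InlineKeyboardMarkup(keyboard)
```
Note that the keyboard is a list of rows and each row is a list of buttons; wrapping them in extra lists, as in the question, breaks the expected shape.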
|
2018/03/14
| 717 | 2,099 |
<issue_start>username_0: I want a metaclass method that works with a number value, a string value, and a null value,
like the code below; please help me:
```
String.metaClass.formatx = { delegate.toString().replaceAll(/null/, '0.0').toFloat() }
m= "4".formatx()
m2=4.formatx()
m3=null.formatx()
```<issue_comment>username_1: If I were you, I'd do the following:
```
String.metaClass.formatx = { -> delegate.toFloat() }
String a = "3"
String b = null
assert 3.0f == (a?.formatx() ?: 0.0f)
assert 0.0f == (b?.formatx() ?: 0.0f)
```
That is, defend against `null` in your code with `?.` and `?:`
If you *have* to try and catch the `null`, and format it, you *could* do:
```
import org.codehaus.groovy.runtime.NullObject
String.metaClass.formatx = { -> delegate.toFloat() }
NullObject.metaClass.formatx = { -> 0.0f }
String a = "3"
String b = null
assert 3.0f == a.formatx()
assert 0.0f == b.formatx()
```
But adding a method to `NullObject` feels wrong, and I've never done it before
### Edit
This is shorter
```
import org.codehaus.groovy.runtime.NullObject
[String, Integer].each { it.metaClass.formatx = { -> delegate.toFloat() } }
NullObject.metaClass.formatx = { -> 0.0f }
println null.formatx()
println 3.formatx()
println "4".formatx()
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: I put this as an example, but I think I am using too much code.
I repeat metaClass.formatx three times; I don't know if it is possible to use OR sentences instead.
```
import org.codehaus.groovy.runtime.NullObject
String.metaClass.formatx = { -> delegate.toString().replaceAll(/null/, '0.0').toFloat() }
NullObject.metaClass.formatx = { -> delegate.toString().replaceAll(/null/, '0.0').toFloat() }
Integer.metaClass.formatx = { -> delegate.toString().replaceAll(/null/, '0.0').toFloat() }
m2= 4.formatx()
m= "4".formatx()
println null.formatx()
```
### Edit
```
import org.codehaus.groovy.runtime.NullObject
[String, Integer,NullObject].each { it.metaClass.formatx = { -> delegate.toString().replaceAll(/null/, '0.0').toFloat() } }
m2= 4.formatx()
m= "4".formatx()
println null.formatx()
```
Upvotes: 0
|
2018/03/14
| 504 | 1,732 |
<issue_start>username_0: I am trying to order by a column that I do not want to use in my `Group By` clause.
So the idea here is to be able to order by this `FileId`.
```
SELECT FileGuid, FileName, ROW_NUMBER() OVER(Order by FileId) AS rownum
FROM dbo.FileImport
GROUP BY FileGuid, FileName
ORDER BY rownum
```
Right now I am getting this error:
>
> Column 'dbo.FileImport.FileId' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
>
>
>
But if I include that in the group by, I will be getting the record of each file in that associated to that FileGuid, which I don't want to happen. Thanks in advance on any input on how I can get this working.<issue_comment>username_1: You can try this :
```
SELECT * FROM
(SELECT FileGuid, FileName, ROW_NUMBER() OVER(Partition by FileGuid, FileName
Order by FileId) AS rownum
FROM dbo.FileImport) X
WHERE rownum = <>
ORDER BY X.FileGuid,X.FileName,X.rownum
```
With this, any type of filter can be applied on rownum by adding an outer WHERE clause.
Upvotes: 0 <issue_comment>username_2: You have to decide *which* `fileid` you care about. Then you can use an aggregation function:
```
SELECT FileGuid, FileName, ROW_NUMBER() OVER (Order by MIN(FileId)) AS rownum
FROM dbo.FileImport
GROUP BY FileGuid, FileName
ORDER BY rownum;
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can use this SQL statement.
```
Select *
From
(
SELECT T.FileGuid, T.FileName, ROW_NUMBER() OVER(Order by T.FileId) AS rownum
from (
SELECT FileGuid, FileName, Max(FileID) As FileID
FROM dbo.FileImport
GROUP BY FileGuid, FileName
)T
)Temp
order by Temp.rownum
```
Let me know if you have questions!
Upvotes: 0
|
2018/03/14
| 650 | 2,319 |
<issue_start>username_0: I am currently using kops to create AWS EC2 clusters. But it does not seem to have an option to specify 'spot' instances.
Does anybody know how to create instances of type 'spot' with kops or with kubernetes?<issue_comment>username_1: It appears that [gardener/machine-controller-manager](https://github.com/gardener/machine-controller-manager/blob/17181f6b/pkg/driver/driver_aws.go#L107) could be taught about Spot instances fairly easily, and there is [an existing issue](https://github.com/gardener/machine-controller-manager/issues/27) to do just such a thing. I can't recall off-hand if that is *the* Node Controller Manager that I recalled seeing, or it is merely *a* Node Controller Manager and thus there may be other implementations of that idea which already include spot support.
That makes a presumption that you actually meant spot for the *workers*, and not for the whole cluster. If you mean the whole cluster, then you may be much, much happier with something like [kubespray](https://github.com/kubernetes-incubator/kubespray#readme) and use that to lay a functioning cluster on top of existing machines. Just bear in mind that while kubernetes *certainly is* resilient to "damage," including the loss of a master, an etcd member, and without question the loss of a `Node`, it might frown if a huge portion of its machines vanish at once. In other words: using spot *could* mean that you spend more programmer/devops/glucose triaging spot disappearance, or you have to so vastly overprovision replicas that it starts to eat into the savings from spot in the first place.
Upvotes: 0 <issue_comment>username_2: From the docs
<https://github.com/kubernetes/kops/blob/master/docs/instance_groups.md#converting-an-instance-group-to-use-spot-instances>
>
> Follow the normal procedure for reconfiguring an InstanceGroup, but
> set the maxPrice property to your bid. For example, "0.10" represents
> a spot-price bid of $0.10 (10 cents) per hour.
>
>
>
So after `kops create cluster` but before `kops update cluster --yes` run `kops edit ig nodes --name $NAME` and set maxPrice to your max bid.
```
metadata:
creationTimestamp: "2016-07-10T15:47:14Z"
name: nodes
spec:
machineType: t2.medium
maxPrice: "0.01"
maxSize: 3
minSize: 3
role: Node
```
Upvotes: 2
|
2018/03/14
| 796 | 2,770 |
<issue_start>username_0: I have a list in a table with a `StartDate (not null)` and an `EndDate (null)` (both of type `date` without time).
An active entry is one with StartDate <= querydate and EndDate either null or >= querydate.
So my query would basically be like
```
SELECT *
FROM MyDataTable
WHERE StartDate <= mydate
AND (EndDate IS NULL OR EndDate >= mydate )
```
For a statistic in a diagram, I want to know which ones are active in a specific month for the last months. I loop (programatically) over the year and month (2017-10, 2017-11, 2017-12, 2018-1, ... )
How can I select all active entries using the year and month of the program loop, ignoring the day part?
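Since the loop runs in program code, one option is to compute each month's first and last day there and bind them as the query dates; a minimal Python sketch (the overlap condition mirrors the query above):
```
import calendar
from datetime import date

def month_bounds(year, month):
    last_day = calendar.monthrange(year, month)[1]  # number of days in that month
    return date(year, month, 1), date(year, month, last_day)

# An entry is active at some point in the month when
# StartDate <= month_end AND (EndDate IS NULL OR EndDate >= month_start).
month_start, month_end = month_bounds(2017, 11)
```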
|
2018/03/14
| 1,560 | 4,524 |
<issue_start>username_0: I have a Delphi application that I need to rewrite for OS X.
This app writes/reads data to/from a HID device.
I have issues when I'm trying to write a string from the Mac.
Here is the line that I'm writing (from the debugger on Windows): 'Новый комплекс 1',
and this works fine. Meanwhile, if I copy this from the debugger to somewhere else it becomes 'Íîâûé êîìïëåêñ 1'. The device shows it as it was written, in Cyrillic, and that's OK.
When I try to repeat these steps on OS X, the device shows unreadable symbols. But if I hardcode 'Íîâûé êîìïëåêñ 1' from the Windows example, it's OK again.
Please give some hints.
Here is how it works on Windows.
Some code:
```
s:= 'Новый комлекс 1'
s:= AnsiToUtf8(ReplaceNull(s));
```
Here is ReplaceNULL:
```
function ReplaceNull(const Input: string): string;
var
Index: Integer;
Res: String;
begin
Res:= '';
for Index := 1 to Length(Input) do
begin
if Input[Index] = #0 then
Res:= Res + #$12
else
Res:= Res + Input[Index];
end;
ReplaceNull:= Res;
end;
```
this string I put to Tstringlist and then save to file:
```
ProgsList.SaveToFile(Mwork.pathLibs+'stream.ini', TEncoding.UTF8);
```
Another program reads this list and then writes it to the device:
```
Progs:= TStringList.Create();
Progs.LoadFromFile(****);
s:= UTF8ToAnsi(stringreplace(Progs.Strings[i], #$12, #0, [rfReplaceAll, rfIgnoreCase]));
```
And then write it to device.
So the line that gets written looks like this:
```
"'þ5'#0'ÿ'#$11'Новый комплекс 1'#0'T45/180;55;70;85;90;95;100;T45/180'#0'ÿ'"
```
On the Mac I successfully get the same string, but the device can't show it in Cyrillic.<issue_comment>username_1: A Delphi `string` is encoded in UTF-16 on all platforms. There is no need to convert it unless you are interacting with non-Unicode data outside of your app.
That being said, if you have a byte array that is encoded in a particular charset, you can convert it to another charset using Delphi's [`TEncoding.Convert()`](http://docwiki.embarcadero.com/Libraries/en/System.SysUtils.TEncoding.Convert) method. You can use the [`TEncoding.GetEncoding()`](http://docwiki.embarcadero.com/Libraries/en/System.SysUtils.TEncoding.GetEncoding) method to get a [`TEncoding`](http://docwiki.embarcadero.com/Libraries/en/System.SysUtils.TEncoding) object for a particular charset (if different than the standard supported charsets - ANSI, ASCII, UTF-7, UTF-8, and UTF-16 - which have their own property getters in `TEncoding`).
```
var
SrcEnc, DstEnc: TEncoding;
SrcBytes, ConvertedBytes: TBytes;
begin
SrcBytes := ...; // Cyrillic encoded bytes
SrcEnc := TEncoding.GetEncoding('Cyrillic'); // or whatever the real name is...
try
DstEnc := TEncoding.GetEncoding('Windows-1251');
try
ConvertedBytes := TEncoding.Convert(SrcEnc, DstEnc, SrcBytes);
finally
DstEnc.Free;
end;
finally
SrcEnc.Free;
end;
// use ConvertedBytes as needed...
end;
```
**Update**: To encode a Unicode string in a particular charset, simply call the [`TEncoding.GetBytes()`](http://docwiki.embarcadero.com/Libraries/en/System.SysUtils.TEncoding.GetBytes) method, eg:
```
s := 'Новый комлекс 1';
Enc := TEncoding.GetEncoding('Windows-1251');
try
bytes := Enc.GetBytes(s);
finally
Enc.Free;
end;
```
```
s := 'Новый комлекс 1';
bytes := TEncoding.UTF8.GetBytes(s);
```
You can use the [`TEncoding.GetString()`](http://docwiki.embarcadero.com/Libraries/Tokyo/en/System.SysUtils.TEncoding.GetString) to decode bytes in a particular charset back to a String, eg:
```
bytes := ...; // Windows-1251 encoded bytes
Enc := TEncoding.GetEncoding('Windows-1251');
try
s := Enc.GetString(bytes);
finally
Enc.Free;
end;
```
```
bytes := ...; // UTF-8 encoded bytes
s := TEncoding.UTF8.GetString(bytes);
```
Upvotes: 2 <issue_comment>username_2: The answer was as follows: Delphi Berlin 10.1 uses KOI8-R, while my device uses CP1251.
Since I wanted to write Russian (Cyrillic) symbols, I created a table of matches between the KOI8-R and CP1251 symbols.
So, I take the string in KOI8-R and convert it to CP1251.
Simple code:
```
Dict:=TDictionary.Create;
Dict.Add(#$439,#$E9);//'й'
Dict.Add(#$44E,#$FE);//'ю'
Dict.Add(#$430,#$E0);//'а'
```
....
```
function tkoitocp.getCP1251Code(str:string):string;
var i:integer; res,key,val:string; pair:Tpair;
begin
res:='';
for i:=1 to length(str) do
begin
if dict.ContainsKey(str[i]) then
begin
pair:= dict.ExtractPair(str[i]);
res:=res+pair.Value;
dict.Add(pair.Key,pair.Value);
end
else
res:=res+str[i];
end;
Result:=res;
end;
```
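As an aside, the same KOI8-R to CP1251 transcoding can be done with ready-made codec tables instead of a hand-built dictionary; a minimal Python sketch of the idea (Python rather than Delphi, purely to show the concept):
```
# Text held as KOI8-R bytes, re-encoded for a CP1251 device.
koi8_bytes = "Новый комплекс 1".encode("koi8_r")
cp1251_bytes = koi8_bytes.decode("koi8_r").encode("cp1251")
```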
Upvotes: 1 [selected_answer]
|
2018/03/14
| 375 | 1,320 |
<issue_start>username_0: I want to use JQuery to get a table row and its contents as HTML.
```
var updateButton = $( this );
var currentRow = updateButton.closest( 'tr' );
window.console.log( currentRow );
```
but the resulting log doesn't seem to contain the table values input by the user.
I've found answers about how to extract the data, but I want to be able to reproduce the row as html with the existing contents in the cells so I can output a copy of the row.
Can this be done with a simple jQuery function, or do I need to rebuild the row using some combination of the result of my above code and the extracted data?<issue_comment>username_1: Use parent() if the button is inside the row:
```
// markup: the button lives inside a table row, e.g.
// <tr> ... <button class="update-button">update</button> ... </tr>
$(".update-button").on("click", function(){
 console.log($(this).parent("tr"));
});
```
Upvotes: 0 <issue_comment>username_2: ```js
$(".btn-copy").on('click', function(){
var ele = $(this).closest('tr').clone(true);
console.log(ele)
$(this).closest('tr').after(ele);//this will duplicate tr
})
```
```css
table, th, td {
border: 1px solid black;
border-collapse: collapse;
}
```
```html
| Firstname | Lastname | action |
| --- | --- | --- |
| Jill | Smith | Click to copy |
| Eve | Jackson | Click to copy |
```
Hope this helps you
Upvotes: 2 [selected_answer]
|
2018/03/14
| 972 | 4,179 |
<issue_start>username_0: I am trying to track URL changes using Angular's NavigationStart event. I am writing a feature module, and this URL tracking needs to be done in a service, not a component; I don't even have a component. In this URL-tracking handler I also want to find out the hash fragment of the URL. I tried the code below, but NavigationStart never gets fired. Mapped to AngularJS, below is how we used to do it.
**angularjs code**
```
$rootScope.$on('$locationChangeStart', locationChangeHandler);
var locationChangeHandler = function (event, newUrl, oldUrl) {
}
```
**angular Code**
```
export class MyGuard implements CanActivate {
constructor( private router: Router , private activatedRoute: ActivatedRoute) {
this.activatedRoute.fragment.subscribe((fragment: string) => {
console.log("My hash fragment is here => ", fragment)
})
this.router.events
.filter(e => e instanceof NavigationStart)
.pairwise()
.subscribe((e) => {
console.log("inside navigation start");
console.log(e) })
this.router.events.subscribe(event => {
// This didn't work
//if(event instanceof NavigationStart) {
// console.log("navigation started");
// }
else if(event instanceof NavigationEnd) {
console.log("navigation ended");
}
else if(event instanceof NavigationCancel) {
console.log("navigation cancelled");
}
else if(event instanceof NavigationError) {
console.log("navigation errored");
}
else if(event instanceof RoutesRecognized) {
console.log("navigation routes recognized");
}
})
}
```<issue_comment>username_1: You are building a canActivate guard, which is only run *after* the navigation has started. That is why are you not seeing the Navigation start.
You can see the order of the routing events here: <https://angular.io/api/router/Event>
Notice that the `GuardsCheckStart` is when your `canActivate` guard is executed, which is *after* the navigation start.
I normally put this type of code in my app component:
```
import { Component } from '@angular/core';
import { Router, Event, NavigationStart, NavigationEnd, NavigationError, NavigationCancel } from '@angular/router';
@Component({
selector: 'mh-root',
templateUrl: './app.component.html',
styleUrls: ['./app.component.css']
})
export class AppComponent {
loading: boolean = true;
constructor(private router: Router) {
router.events.subscribe((routerEvent: Event) => {
this.checkRouterEvent(routerEvent);
});
}
checkRouterEvent(routerEvent: Event): void {
if (routerEvent instanceof NavigationStart) {
this.loading = true;
}
if (routerEvent instanceof NavigationEnd ||
routerEvent instanceof NavigationCancel ||
routerEvent instanceof NavigationError) {
this.loading = false;
}
}
}
```
Upvotes: 2 <issue_comment>username_1: Since I'm going in another direction here, I'm making it a new answer.
Here is what one of my `CanActivate` guards looks like (see below). Notice that it has a `canActivate` method that provides the route information and route state information. You should be able to use that method to pull off any part of the URL that you need.
```
import { Injectable } from '@angular/core';
import { ActivatedRouteSnapshot, RouterStateSnapshot, Router,
CanActivate } from '@angular/router';
import { AuthService } from './auth.service';
@Injectable()
export class AuthGuard implements CanActivate {
constructor(private authService: AuthService,
private router: Router) { }
canActivate(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): boolean {
return this.checkLoggedIn(state.url);
}
checkLoggedIn(url: string): boolean {
if (this.authService.isLoggedIn()) {
return true;
}
// Retain the attempted URL for redirection
this.authService.redirectUrl = url;
this.router.navigate(['/login']);
return false;
}
}
```
Upvotes: 1
|
2018/03/14
| 482 | 2,069 |
<issue_start>username_0: I've been searching and searching for what the purpose is for a workspace. I've asked this question in stack chats but no one seems to know.
I know workspaces are local copies of solutions and you can switch between them when testing different things on the same projects but with different branches but I can do that with standard folders as well. So I can't figure out what the advantages and disadvantages are of using workspaces over normal folders. Is having different settings for each workspace the only advantage?
The only other obvious difference I see is shown in the screenshot: a workspace is shown as a single "Code Workspace" file with no folder structure (even though it has one), while standard folders show their structure and all contents.
I found this question on Stack Overflow that is somewhat relevant, but it is not as specific and it is unanswered. So instead of setting a bounty I thought I'd ask exactly what I was looking for: [Asking about workspaces with settings vs user settings.](https://stackoverflow.com/questions/44629890/what-is-the-definition-of-a-workspace-in-vs-code?answertab=active#tab-top)<issue_comment>username_1: Two things make a Workspace different from standard folders:
1. Like the other answer you linked to, you can have workspace based settings
2. In one Workspace you can open different folders which are not necessarily in the root folder which you open first.
Upvotes: 2 <issue_comment>username_2: In addition to workspace-based settings, workspaces can act like aliases that can link to a root folder (sort of like Dreamweaver's Sites feature). So you can keep a centralized folder/collection of all your workspaces in one place for easy navigation (a folder named VSC-Workspaces for example), yet they can point to and open work folders that may be saved in different locations on your hard drive, since they might be websites or python files, etc.
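For example, a `.code-workspace` file is just a small JSON document that ties together the folders and any workspace-level settings (the paths and the setting below are made-up examples):
```
{
  "folders": [
    { "path": "C:/sites/client-website" },
    { "path": "D:/scripts/python-tools" }
  ],
  "settings": {
    "editor.tabSize": 2
  }
}
```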
Upvotes: 2
|
2018/03/14
| 1,008 | 3,535 |
<issue_start>username_0: I need to sort an array. I wrote the code below using insertion sort, but for big `n` it works very slowly. How can I optimize my code? Maybe there is another algorithm.
```
public void insertionSort(ArrayList<Integer> arrayList) {
int n = arrayList.size();
int in, out;
for(out = 1; out < n; out++)
{
int temp = arrayList.get(out);
in = out;
while (in > 0 && arrayList.get(in-1) > temp)
{
arrayList.set(in, arrayList.get(in-1));
in--;
}
arrayList.set(in,temp);
}
print(arrayList);
}
```<issue_comment>username_1: You can use counting sort instead of insertion sort, because counting sort takes linear time while insertion sort takes O(n^2) in the worst case.
Here is an example of using counting sort:
```
import java.util.Arrays;
import java.util.Random;
import java.util.Scanner;
public class Main {
public static void print(int []a){
System.out.println(Arrays.toString(a));
}
 public static void countingSort(int []a, int []b, int n, int k){
 int []c = new int[k];
 for(int i = 0; i < n; i++) // count the occurrences of each value
 c[a[i]]++;
 for(int i = 1; i < k; i++) // prefix sums give each value's final position
 c[i] += c[i - 1];
 for(int j = n - 1; j >= 0; j--){ // place elements stably, scanning from the back
 c[a[j]] = c[a[j]] - 1;
 b[c[a[j]]] = a[j];
 }
 for(int i = 0; i < n; i++) // copy the sorted result back into a
 a[i] = b[i];
 }
}
```
Upvotes: 1 <issue_comment>username_2: You should look into QuickSort or MergeSort if you want faster sorting algorithms. Unlike InsertionSort (and SelectionSort), they are recursive, but still fairly easy to implement. You can find many examples if you look around on the internet.
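For instance, here is a compact merge sort; it is written in Python as a language-neutral sketch of the algorithm, not as a drop-in replacement for the Java code above:
```
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # merge the two sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]     # append whichever half remains
```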
Upvotes: 1 <issue_comment>username_3: As Anna stated above, counting sort can be a really good algorithm, considering you don't have a really large data set and the data is not sparse.
For example, an array of size 10k with 100 elements duplicated will have much better space efficiency than an array of size 10k with all unique elements and spread in a sparse fashion.
For example, the following array -> [5,5,4,...,2,2,1,1,5,6,7,8] will need a space of an array of size 8 (1 being the minimum and 8 being the maximum) while,
This array -> [5,100,6004,3248,45890,2384,128,8659,...,3892,128] will need a space of an array at least of size 45886 (5 being the minimum and 45890 being the maximum).
So, I'll suggest you use this algorithm when you know that the data set you have is evenly distributed within an acceptable range which won't make your program run out of memory. Otherwise you can go with something like quicksort or mergesort. That gets the work done just fine.
That being said, Anna's implementation of counting sort seemed a little over coded to me personally, so here's me sharing my implementation.
```
public int[] countSort(int[] nums) {
int min = nums[0], max = nums[0], counterLength, start = 0;
int[] counter;
// To dynamically allocate size to the counter.
// Also an essential step if there are negative elements in the input array.
// You can actively avoid this step if you know:
// 1. That the elements are not going to be negative.
// 2. The upper bound of the elements in the array.
for (int i : nums) {
if (i > max)
max = i;
else if (i < min)
min = i;
}
counterLength = max - min + 1;
counter = new int[counterLength];
for (int i : nums)
counter[i - min]++;
for (int i = 0; i < counterLength; i++) {
if (counter[i] > 0) {
int end = start + counter[i];
Arrays.fill(nums, start, end, i + min);
start = end;
}
}
return nums;
}
```
Upvotes: 0
|
2018/03/14
| 673 | 2,307 |
<issue_start>username_0: I have a text input which I'm using to pass a value through JS, to get filtered data of names from a JSON file using [OData](http://www.odata.org/documentation/) query parameters.
If someone has an apostrophe in their name, I get a bad request when I pass the URL.
I understand that apostrophes should be [escaped by double quoting](https://stackoverflow.com/questions/15517529/apostrophe-issue-in-url-with-odata). I've tried this, and while the request doesn't break, it gets passed with two apostrophes which doesn't match the data (resulting in no results).
Here's a high level of what my code looks like:
**JSON**
```
{
"value": [{
"Title": "<NAME>"
}]
}
```
**HTML**
```
Search
```
**JS**
```
var term = $('input').val();
var searchTerm = term.replace(/'/g, "''");
var serviceURL = "/api/service?&$filter=contains(Title,'" + searchTerm + "')";
$('button').on('click', function(){
$.get( serviceURL, function( data ) {
// Code to display filtered JSON data
});
});
```
When I pass the `serviceURL` above via AJAX, the URL will appear as:
```
/api/service?&$filter=contains(Title,'<NAME>''Smith')
```
The request works, but it comes through with two apostrophes which doesn't match my JSON object.
If I pass it as is - without the replace - I get a `Bad Request`.
I've tried escaping it through other traditional methods as well ('\'', encodeURIComponent).
Any idea what I am doing wrong or how to address this?
Update
======
Unfortunately it appears there was a separate issue or it was heavily cached, as everything above is now working as expected.<issue_comment>username_1: Try swapping each single quote with a double quote and vice versa.
Alternatively, you could just chain together JavaScript's concat() method for strings. Docs here: [concat()](https://www.w3schools.com/jsref/jsref_concat_string.asp)
Upvotes: 1 <issue_comment>username_2: You have a space in the name so you'll need to encode it after doubling the apostrophes:
```
var term = "<NAME>";
var searchTerm = encodeURIComponent(term.replace(/'/g, "''"));
var serviceURL = "/api/service?&$filter=contains(Title,'" + searchTerm + "')";
```
Which will produce:
```
/api/service?&$filter=contains(Title,'John%20O''Smith')
```
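If it helps to see the same two-step escaping outside the browser, here is a minimal Python sketch (note that Python's `quote()` also percent-encodes the doubled apostrophes, which the server decodes back before OData sees them):
```
from urllib.parse import quote

term = "John O'Smith"
search_term = quote(term.replace("'", "''"))  # double apostrophes for OData, then percent-encode
service_url = "/api/service?&$filter=contains(Title,'" + search_term + "')"
# -> /api/service?&$filter=contains(Title,'John%20O%27%27Smith')
```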
Upvotes: 3
|
2018/03/14
| 357 | 1,009 |
<issue_start>username_0: **T(n) = 4T(36n/2) + cn²**
Analyze the recurrence shown above, then use the substitution method to prove the following guess: **T(n) = Θ(n²)**.
Is my answer correct?
```
T(n) <= d(36n/2) + cn²
T(n) <= 4d(36n/2)² + cn²
T(n) = 4d(36n²/4)
T(n) = d36n² + cn²
T(n) = dn² + cn²
= dn²
```
|
2018/03/14
| 1,365 | 5,083 |
<issue_start>username_0: I'm trying to get my Angular Table to refresh after updating the data used in the table.
The docs say "you can trigger an update to the table's rendered rows by calling its renderRows() method." but it is not like a normal child component where I can use something "@ViewChild(MatSort) sort: MatSort;" since I do not import it.
If I do import it and try something like @ViewChild('myTable') myTable: MatTableModule; then I get an error that says that renderRows() does not exist on that type.
How can I call this method? Thanks!
My table code snippet:
```
```<issue_comment>username_1: >
> `@ViewChild('myTable') myTable: MatTableModule`
>
>
>
You're not supposed to query for the string. That syntax queries a template reference variable (defined like `#myTable` on the element in the template). Also the type is wrong: you're not grabbing a module from the view, you're grabbing a component.
You should import the component you want to query and do the following (change according to which component exactly you need to query):
```
@ViewChild(MatTable) matTable: MatTable<any>
```
The argument in the `ViewChild` decorator is the component you want to query and the type is just for your convenience -- you could omit it or say `any`, but you won't have any help from TypeScript if you do not do it so it's recommended to leave it.
Upvotes: 3 <issue_comment>username_2: Make sure you import ViewChild and MatTable:
```
import {Component, ViewChild} from '@angular/core';
import {MatTable} from '@angular/material';
```
Then you can get a reference to the table using ViewChild (note that a type T is required on MatTable; I just used any, but if you have a typed table, you will need to use that type):
```
@ViewChild(MatTable) table: MatTable<any>;
```
Then when you modify the table in any way you will need to call the renderRows() method.
```
delete(row: any): void {
/* delete logic here */
this.table.renderRows();
}
```
Here is a very simple working example:
<https://stackblitz.com/edit/angular-bxrahf>
Some sources I found when solving this issue myself:
* <https://material.angular.io/cdk/table/api#CdkTable>
* <https://stackoverflow.com/a/49121032/8508548>
Upvotes: 8 [selected_answer]<issue_comment>username_3: You can use
```
import {Component, ViewChild} from '@angular/core';
import {MatTable} from '@angular/material';
@ViewChild(MatTable) table: MatTable<any>;

anyFunction(): void {
  this.table.renderRows();
}
```
As mention on another's answers.
Or you can pass the service to application state(@ngrx/store)
For example:
```
import { Component, OnInit } from '@angular/core';
import { MatTableDataSource } from '@angular/material/table';
import { Store } from '@ngrx/store';
import * as reducer from '../../app.reducer';

export class Test implements OnInit {
  dataSource = new MatTableDataSource();

  constructor(private _store: Store, private _someService: any) {}

  ngOnInit(): void {
    this._store.select(reducer.getYourSelectorCreated)
      .subscribe((res) => {          // subscribe to listen for changes on your "table data" state
        this.dataSource.data = res;  // set the new values on the table
      });
    this._someService.fetchYourDataFromStateFunction();  // service call that changes the state
  }
}
```
Upvotes: 2 <issue_comment>username_4: This table is not very user friendly, and it forces you to manually update, which misses the point of using Angular for the bindings. It is stated in the [documentation](https://material.angular.io/components/table/overview) that:
>
> Since the table optimizes for performance, it will not automatically check for changes to the data array. Instead, when objects are added, removed, or moved on the data array, you can trigger an update to the table's rendered rows by calling its renderRows() method.
>
>
>
To call a method on the material table component from the TypeScript code you need to do it through a `ViewChild` reference to the table. First add a hashtagged name to the table in the template:
```
<table mat-table #myTable [dataSource]="dataSource">
  ...
</table>
```
Then on your Typescript file, declare a public member with the same name you put after the hashtag in the template, and decorate it with `ViewChild` so that Angular injects it (I wont be showing the imports):
```
export class SomeComponent implements OnInit {
@ViewChild(MatTable) myTable!: MatTable<any>;
```
(The "!" is needed in newer versions of Angular to convince TypeScript that the member will always be non-null. It turns out it won't always be. Keep reading)
So now you could do:
```
this.myTable.renderRows();
```
And it would work **unless the table or any of the parent is inside an `*ngIf` directive**. When that directive is working, the table is not present in the DOM, and the member annotated with `ViewChild` will be undefined, so you can't call anything on it. This is not a problem of the material table in particular, it is how Angular is designed. Check [this question](https://stackoverflow.com/questions/34947154/angular-2-viewchild-annotation-returns-undefined) for solutions. My favourite is to replace the `*ngIf` with `[hidden]`. That is ok if the directive was in the table, but becomes messy when it is in the parents.
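For instance, a minimal sketch (assuming a hypothetical `showTable` flag on the component): instead of

```
<table mat-table #myTable *ngIf="showTable" [dataSource]="dataSource"></table>
```

you would write

```
<table mat-table #myTable [hidden]="!showTable" [dataSource]="dataSource"></table>
```

so the table stays in the DOM and the `ViewChild` reference remains defined.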
Upvotes: 4
|
2018/03/14
| 1,523 | 7,018 |
<issue_start>username_0: Need advise on the following:
In my company I am developing .NET class libraries to be used by external stakeholders, and I am sharing them on a private NuGet feed to which the stakeholders have access.
The libraries are dependent on one core library as shown below. Currently I'm maintaining these libraries in a single VS solution, obviously into one Git repo. I'm following SemVer to version them.
```
CoreLibrary
CoreLibrary.Extension1 (references CoreLibrary as project reference)
CoreLibrary.Extension2 (references CoreLibrary as project reference)
...
There might be more of these extension libraries in the near future
```
Up until now I have always manually versioned them from within Visual Studio (manually setting the correct values in AssemblyInfo.cs whenever I need to bump a version) and used a batch file in each project to pack and push that specific library to NuGet.
But now I recently started looking into VSTS and my next undertaking is to automate builds and releases as much as possible, also with automatic versioning. I stumbled upon GitVersion which I think can help greatly with that last part.
But here's the catch. If I understand correctly, GitVersion operates at the repository level, so a given semver version calculated by GitVersion applies to ALL assemblies/libraries in a repository, right?
So what are your suggestions for release and versioning strategies for these libraries?
1) Keep everything as it is now, in one solution and repo, and whenever there is a change in one package that needs to be published (say CoreLibrary.Extension1), generate a NuGet package for all the other libraries too. So if all packages are on 1.0.0 and there is one small change in one of them, they all get bumped to v1.1.0 and all get packed and pushed?
2) Create a git repo per project and let GitVersion work its magic on each repo separately. Reference the CoreLibrary via NuGet in the Extension libraries.
Option 2 is the cleanest I think and has the advantage that versioning and packaging is handled per project, but might be quite an overkill on the workload in terms of maintaining them, especially given the fact that I am the sole developer of these libraries.
What are your thoughts? Which option would you opt for given the context? Have I missed any other options that might be worth considering?
I'm taking my first steps in automated builds and releases, so any advice is more than welcome.
Thanks<issue_comment>username_1: I would strongly suggest against using any nuget package here. Nuget was intended as a way to manage third-party dependencies of a software project, not as a way to manage your interconnected code you build on a regular basis. For that, you'd do better to have a single repository, one Visual Studio solution and a single build stack you build/deploy from.
If you must use Nuget, just make a single package with a single version.
Upvotes: -1 <issue_comment>username_2: You must consider at least these two factors:
1. Effort required to maintain multiple repo's/branches and release streams.
2. Potential effort demanded of your customers when you make minor or major changes to just one or a small fraction of the libraries.
When a project is small and there's only one developer working on it, it's often easier to work entirely from within a single repository and cut a single package for all releases. Even for a single developer however, this will not scale well with the number of libraries. For one thing, VS is not particularly good at handling large projects, though it does continue to improve on this front, over time. Even so, there's a practical limit, but a single developer is unlikely to be productive enough to reach that limit in less than a few years.
Eventually, even a single developer, generally hopes to be successful enough to have a need for scale, either on the development side, or supporting customers. Consider the effect of being half-way through some major work on your extension2 when an important customer discovers a bug in extension1. Now you need to check-out an earlier commit, produce a quick fix, cut a new package, test, rinse/repeat, publish, return to the head of your repo, cherry-pick the bug fix, and continue. Had you been working in separate repo's or even branches in the same repo, the task is much easier to organize and get it right on the first try.
Now here's where I have a bone to pick regarding your assumption that you need separate repo's per Nuget package, or that GitVersion "operates at the repository level". Generally, most of the documentation and nearly all of the examples you'll find, definitely cater to the single product/package/repo format, and I'd probably recommend that in most cases, but it's not in fact a hard requirement. GitVersion functions at the branch level, and you should definitely be doing development for different libraries in different branches, if not different repo's. Having a master branch to merge everything, is probably also a good idea, but even master isn't required by Git or GitVersion. You can have multiple VS solutions and projects in a single git repo and you can produce multiple nuget packages therefrom. I am not saying you should, I am saying it is not an uncommon practice, at least for non-trivial product lines.
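To make that concrete, a purely hypothetical layout (all names invented) for one repo producing several packages could look like:

```
product-repo/
├── GitVersion.yml
├── CoreLibrary/
│   └── CoreLibrary.csproj              -> CoreLibrary.nupkg
├── CoreLibrary.Extension1/
│   └── CoreLibrary.Extension1.csproj   -> CoreLibrary.Extension1.nupkg
└── CoreLibrary.Extension2/
    └── CoreLibrary.Extension2.csproj   -> CoreLibrary.Extension2.nupkg
```

with development for each library happening on its own branch and each project packed into its own NuGet package.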
On the customer front, it really depends a lot on the nature of your products. Are all extensions necessary or even useful to every customer? Will you produce alpha/beta releases to early adopters? Is your code used on servers? Is it used on mobile devices? Generally, you don't want long upgrade times for any code that is run on servers, so you should think seriously about breaking up your product line into multiple, independently versioned packages. Client side has not been as critical in this areas in the past, but on mobile devices, download times become a major factor. Not everybody has unlimited free download bandwidth to work with either, so cost can be a major factor.
If you're going to produce a lot of prereleases, you'll probably want to break the product line out into separate packages. You can still produce the one package of packages, but you'll need the option eventually, to send out updates at the module level.
Once you go down the one repo/project/package/release-stream trail, you'll quickly develop the automation you need to help maintain it. It's like clean coding practices, a little painful when you first get started, but it eventually becomes second nature and the pay-off down the line can be enormous. Keep in mind that there isn't any reason why you can't have multiple repo's with their own solutions and projects and also have a VS solution at the parent directory one level above those, that gives you a fully integrated build. Just think of it as a hierarchy of build systems. You'll just need to learn how to use your `.gitignore` file or git sub-modules (I prefer the former, as the tooling for the latter is weak).
Upvotes: 3 [selected_answer]
|
2018/03/14
| 1,950 | 6,450 |
<issue_start>username_0: I have a set of columns in my input data on which I am pivoting based on multiple columns.
I am facing issues with the column headers after the pivoting is done.
**Input data**
[](https://i.stack.imgur.com/NA9cr.jpg)
**Output Generated by my approach -**
[](https://i.stack.imgur.com/5b1bY.png)
**Expected Output Headers:**
**I need the headers of the output to look like -**
[](https://i.stack.imgur.com/j830f.jpg)
Steps done so far to achieve the Output I am getting -
```
// *Load the data*
scala> val input_data =spark.read.option("header","true").option("inferschema","true").option("delimiter","\t").csv("s3://mybucket/data.tsv")
// *Filter the data where residentFlag column = T*
scala> val filtered_data = input_data.select("numericID","age","salary","gender","residentFlag").filter($"residentFlag".contains("T"))
// *Now we will the pivot the filtered data by each column*
scala> val pivotByAge = filtered_data.groupBy("age","numericID").pivot("age").agg(expr("coalesce(first(numericID),'-')")).drop("age")
// *Pivot the data by the second column named "salary"*
scala> val pivotBySalary = filtered_data.groupBy("salary","numericID").pivot("salary").agg(expr("coalesce(first(numericID),'-')")).drop("salary")
// *Join the above two dataframes based on the numericID*
scala> val intermediateDf = pivotByAge.join(pivotBySalary,"numericID")
// *Now pivot the filtered data on Step 2 on the third column named Gender*
scala> val pivotByGender = filtered_data.groupBy("gender","numericID").pivot("gender").agg(expr("coalesce(first(numericID),'-')")).drop("gender")
// *Join the above dataframe with the intermediateDf*
scala> val outputDF= pivotByGender.join(intermediateDf ,"numericID")
```
How to rename the columns generated after pivoting?
Is there a different approach I can take for Pivoting the data set based on multiple columns (nearly 300)?
Any optimizations/suggestions for improving performance?<issue_comment>username_1: You could do something like this and use regex to simplify
```
var outputDF = pivotByGender.join(intermediateDf, "numericID")
val cols: Array[String] = outputDF.columns

cols.foreach { cl =>
  cl match {
    case "F" | "M" =>
      outputDF = outputDF.withColumnRenamed(cl, s"gender_${cl}")
    case c if c.matches("""\d{2}""") =>
      outputDF = outputDF.withColumnRenamed(cl, s"age_${cl}")
    case _ => // leave remaining columns (e.g. numericID) untouched
  }
}
```
Upvotes: 1 <issue_comment>username_2: You can consider using [foldLeft](http://www.scala-lang.org/api/current/scala/collection/TraversableOnce.html#foldLeft[B](z:B)(op:(B,A)=>B):B) to traverse the list of to-pivot columns to successively create pivot dataframe, rename the generated pivot columns, followed by the cumulative join:
```
val data = Seq(
(1, 30, 50000, "M"),
(1, 25, 70000, "F"),
(1, 40, 70000, "M"),
(1, 30, 80000, "M"),
(2, 30, 80000, "M"),
(2, 40, 50000, "F"),
(2, 25, 70000, "F")
).toDF("numericID", "age", "salary", "gender")
// Create list pivotCols which consists columns to pivot
val id = data.columns.head
val pivotCols = data.columns.filter(_ != "numericID")
// Create the first pivot dataframe from the first column in list pivotCols and
// rename each of the generated pivot columns
val c1 = pivotCols.head
val df1 = data.groupBy(c1, id).pivot(c1).agg(expr(s"coalesce(first($id),'-')")).drop(c1)
val df1Renamed = df1.columns.tail.foldLeft( df1 )( (acc, x) =>
acc.withColumnRenamed(x, c1 + "_" + x)
)
// Using the first pivot dataframe as the initial dataframe, process each of the
// remaining columns in list pivotCols similar to how the first column is processed,
// and cumulatively join each of them with the previously joined dataframe
pivotCols.tail.foldLeft( df1Renamed )(
(accDF, c) => {
val df = data.groupBy(c, id).pivot(c).agg(expr(s"coalesce(first($id),'-')")).drop(c)
val dfRenamed = df.columns.tail.foldLeft( df )( (acc, x) =>
acc.withColumnRenamed(x, c + "_" + x)
)
dfRenamed.join(accDF, Seq(id))
}
)
// +---------+--------+--------+------------+------------+------------+------+------+------+
// |numericID|gender_F|gender_M|salary_50000|salary_70000|salary_80000|age_25|age_30|age_40|
// +---------+--------+--------+------------+------------+------------+------+------+------+
// |2 |2 |- |2 |- |- |- |2 |- |
// |2 |2 |- |2 |- |- |2 |- |- |
// |2 |2 |- |2 |- |- |- |- |2 |
// |2 |2 |- |- |2 |- |- |2 |- |
// |2 |2 |- |- |2 |- |2 |- |- |
// |2 |2 |- |- |2 |- |- |- |2 |
// |2 |2 |- |- |- |2 |- |2 |- |
// |2 |2 |- |- |- |2 |2 |- |- |
// |2 |2 |- |- |- |2 |- |- |2 |
// |2 |- |2 |2 |- |- |- |2 |- |
// |2 |- |2 |2 |- |- |2 |- |- |
// |2 |- |2 |2 |- |- |- |- |2 |
// |2 |- |2 |- |2 |- |- |2 |- |
// |2 |- |2 |- |2 |- |2 |- |- |
// |2 |- |2 |- |2 |- |- |- |2 |
// |2 |- |2 |- |- |2 |- |2 |- |
// |2 |- |2 |- |- |2 |2 |- |- |
// |2 |- |2 |- |- |2 |- |- |2 |
// |1 |- |1 |- |1 |- |1 |- |- |
// |1 |- |1 |- |1 |- |- |- |1 |
// ...
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 474 | 1,739 |
<issue_start>username_0: I know this is probably a really beginner's question and it most definitly stems from my inexperience in PHP. So don't hate me SO.
I've been learning Laravel for the last weeks. In their docs for mass assignment they talk about the `create` method. As per their example -
`$flight = App\Flight::create(['name' => 'Flight 10']);`
Yet as much as I try to search in the API docs or the source, I can't seem to find the method its referencing, at least not in the `Model` class. The only plausible thing I found was a method from the `Builder` class in Eloquent.
I'm not expecting anyone to search through the docs for me. I'm much more interested in knowing what would be the best practice to find the inheritance chain for the particular method and log it or dump it.<issue_comment>username_1: It is coming from the `Builder` class. You can see the code here:
```
/**
* Save a new model and return the instance. Allow mass-assignment.
*
* @param array $attributes
* @return \Illuminate\Database\Eloquent\Model|$this
*/
public function forceCreate(array $attributes)
{
return $this->model->unguarded(function () use ($attributes) {
return $this->newModelInstance()->create($attributes);
});
}
```
<https://github.com/laravel/framework/blob/5.5/src/Illuminate/Database/Eloquent/Builder.php#L754>
Upvotes: 0 <issue_comment>username_2: The `create()` method is located in `Illuminate\Database\Eloquent\Builder`:
<https://github.com/laravel/framework/blob/5.6/src/Illuminate/Database/Eloquent/Builder.php#L754>
You can find it by looking at `Model::__callStatic()` → `__call()`.
These are magic methods (<http://php.net/manual/en/language.oop5.magic.php>).
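Roughly, the forwarding chain looks like this (a simplified sketch, not the exact framework source):

```
// On Illuminate\Database\Eloquent\Model (simplified):
public static function __callStatic($method, $parameters)
{
    // Flight::create([...]) lands here, on a fresh model instance
    return (new static)->$method(...$parameters);
}

public function __call($method, $parameters)
{
    // ... and is then forwarded to an Eloquent\Builder, which defines create()
    return $this->newQuery()->$method(...$parameters);
}
```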
Upvotes: 2 [selected_answer]
|
2018/03/14
| 447 | 1,309 |
<issue_start>username_0: I'm creating a packer image using these two links:
<https://www.packer.io/docs/builders/azure.html>
<https://learn.microsoft.com/en-us/azure/virtual-machines/linux/build-image-with-packer>
I want to use a 'private image' rather than a marketplace image to build from:
```
"managed_image_resource_group_name": "myResourceGroup",
"managed_image_name": "myPackerImage",
"os_type": "Linux",
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04-LTS",
```
How do I reference my own images in "image_publisher", "image_offer", "image_sku" etc?
```
"os_type": "Linux",
"image_publisher": "myPrivateRepo",
"image_offer": "UbuntuWeb",
"image_sku": "16-1.0",
```
Thanks in advance<issue_comment>username_1: I have never tried this myself, but it seems to work like [this](https://github.com/hashicorp/packer/issues/3785#issuecomment-238301845).
Instead of
```
"image_publisher": "Canonical",
"image_offer": "UbuntuServer",
"image_sku": "16.04.0-LTS",
```
write
```
"image_url": "https://my-storage-account.blob.core.windows.net/path/to/your/custom/image.vhd",
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: I used the `custom_managed_image_name` and `custom_managed_image_resource_group_name` properties to get this to work.
Upvotes: 0
|
2018/03/14
| 448 | 1,438 |
<issue_start>username_0: I am wondering how to add a `b:includable` outside of a widget and reference it with `b:include` in a Blogger template.<issue_comment>username_1: >
> Includes are most useful if you have a section of code that you want
> to re-use in several different places, or only include in certain
> circumstances.
>
>
> To do this, write the content inside a `b:includable`, then use
> `b:include` wherever you want it to appear.
>
>
>
Source: [Widget Tags for Layouts](https://support.google.com/blogger/answer/46995?hl=en&ref_topic=6321969)
Upvotes: 0 <issue_comment>username_2: You can use this reference on the `b:defaultmarkup` tag:
<https://bloggercode-blogconnexion.blogspot.com.eg/2017/05/tag-b-defaultmarkups.html>
Upvotes: 2 <issue_comment>username_3: Based on the link shared in `username_2`'s answer.
Firstly you need to define the `b:includable` in the template (best would be within a `b:defaultmarkup` region) -
```
<b:defaultmarkup type='Common'>
  <b:includable id='myContent'>
    Content I am including
  </b:includable>
</b:defaultmarkup>
```
The type is set to `Common` so that it can be referenced from anywhere in the template. Now you simply reference it using `b:include` -
```
<b:include name='myContent'/>
```
You can create as many `b:includable` as you require within the Common type `b:defaultmarkup` tag -
```
<b:defaultmarkup type='Common'>
  <b:includable id='myContent'>
    Content I am including
  </b:includable>
  <b:includable id='moreContent'>
    More Content I am including
  </b:includable>
</b:defaultmarkup>
```
This functionality was introduced in the `v3` template engine ( <https://blogger.googleblog.com/2017/03/share-your-unique-style-with-new.html> ) in Blogger but now works with older `v2` templates as well
Upvotes: 2
|
2018/03/14
| 537 | 2,270 |
<issue_start>username_0: I am reviewing how ngrx is implemented in the example app on GH. Can someone please explain why there are two modules in one file at the link below? What reasoning did the developer have to use two modules in place of one?
[Github link - auth module in ngrx library example app](https://github.com/ngrx/platform/blob/master/example-app/app/auth/auth.module.ts)
edit - the same functionality could be implemented with one module itself. why create two modules? what's the functionality split here?
edit-2 - I found the reason for the two Auth modules. The developer needs to export the services, such as the auth service and auth guard, for use by the app module and other modules. For that, he created the AuthModule, which has a static forRoot method for exporting the services.
The other RootAuthModule has the components for html form for getting user input and dispatching the action for authentication. This module is lazy loaded and need not export any components or any services for use by other modules.<issue_comment>username_1: Same file
=========
Both modules are small and the developer did not see the need to split it into two separate files. They are pretty similar to each other so it makes sense to have them in the same file. I don't think there's any special reason, as it does not change the big picture at all. If developer feels that at one point they've become too large, they can be separated later.
Two modules
===========
As for why two modules: one is used at the root-level and the other at children level. The most common example of this distinction is using `.forRoot` and `.forChildren` with the official `RouterModule` from `@angular/router` package. Here, instead of the two static methods, NGRX team has decided to go with a module with `forRoot` method and another one for children (since it does not need any arguments).
You can read more about [the difference between `forRoot` and `forChild` here](https://stackoverflow.com/questions/40498081/routermodule-forrootroutes-vs-routermodule-forchildroutes).
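As a rough sketch of the pattern (simplified, service names are placeholders, not the exact example-app code):

```
import { NgModule, ModuleWithProviders } from '@angular/core';

@NgModule({ /* declarations, child-level imports */ })
export class AuthModule {
  static forRoot(): ModuleWithProviders {
    // providers registered once, at the root injector
    return { ngModule: RootAuthModule, providers: [AuthService, AuthGuard] };
  }
}

@NgModule({ imports: [AuthModule] })
export class RootAuthModule {}
```

The root AppModule imports `AuthModule.forRoot()` once, while child/feature modules import plain `AuthModule`.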
Upvotes: 0 <issue_comment>username_2: You can organize your files as you want but remember that placing two modules in one file is not recommended : <https://angular.io/guide/styleguide#rule-of-one>
Upvotes: 2
|
2018/03/14
| 1,083 | 3,849 |
<issue_start>username_0: I have the following context:
4 models:
* Project
* Investor
* Subscription
* ExternalSubscription
A `project` shall have many `investors` through either `subscriptions` or `external_subscriptions`.
I currently have a method that does something like this: `Investor.where(id: (subscription_ids + external_subscription_ids))`.
My goal is to have a `has_many` relationship (and precisely use the `has_many` activerecord feature) to get the same result.
How can I acheive this? Is it even possible?
Thanks!
```
Project
[Associations]
has_many :subscriptions
has_many :external_subscriptions
[Table description]
create_table "projects", force: :cascade do |t|
t.string "name"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
```
---
```
Investor
[Associations]
has_many :subscriptions
has_many :external_subscriptions
[Table description]
create_table "investors", force: :cascade do |t|
t.string "name"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
end
```
---
```
Subscription
[Associations]
belongs_to :project
belongs_to :investor
[Table description]
create_table "subscriptions", force: :cascade do |t|
t.integer "project_id"
t.integer "investor_id"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
t.index ["investor_id"], name: "index_subscriptions_on_investor_id"
t.index ["project_id"], name: "index_subscriptions_on_project_id"
end
```
---
```
ExternalSubscription
[Associations]
belongs_to :project
belongs_to :investor
[Table description]
create_table "external_subscriptions", force: :cascade do |t|
t.integer "project_id"
t.integer "investor_id"
t.datetime "created_at", null: false
t.datetime "updated_at", null: false
t.index ["investor_id"], name: "index_external_subscriptions_on_investor_id"
t.index ["project_id"], name: "index_external_subscriptions_on_project_id"
end
```
I'm on rails 5.0.x
Edit
====
My real models are more complex than that. Here, I'm just showing the relationships to make it easy to discuss, but I can't merge `subscriptions` and `external_subscriptions` into the same model.<issue_comment>username_1: As it seems you don't need different information in your subscriptions and external\_subscriptions (both tables have the same fields), I would use only one model and table, and classify subscriptions based on a new field in the table. By using appropriate scopes you can access all associated models easily.
**Project**
```
class Project < ApplicationRecord
has_many :subscriptions
has_many :external_subscriptions, -> { external }, class_name: "Subscription"
has_many :normal_subscriptions, -> { normal }, class_name: "Subscription"
has_many :investors, through: :subscriptions
has_many :external_investors, through: :external_subscriptions, :source => :investor
has_many :normal_investors, through: :normal_subscriptions, :source => :investor
end
```
**Investor**
```
class Investor < ApplicationRecord
has_many :subscriptions
has_many :projects, through: :subscriptions
end
```
**Subscription**
```
class Subscription < ApplicationRecord
belongs_to :project
belongs_to :investor
enum type: [ :external, :normal ]
scope :external, -> { where(type: :external) }
scope :normal, -> { where(type: :normal) }
end
```
Then you can access different project investors as:
```
project = Project.first
project.investors #all
project.external_investors #only external
project.normal_investors #only normal
```
Upvotes: 1 <issue_comment>username_2: I resolved my issue with an SQL view doing a UNION thanks to @MrYoshiji's comment.
Here's the POC: <https://github.com/username_2/rails-has_many-through-view>
Upvotes: 1 [selected_answer]
|
2018/03/14
| 1,204 | 4,977 |
<issue_start>username_0: I am calling getReverseGeocodingData() from callback function of getCurrentPosition. getting the error below
function is not defined?
```
getLocation() {
if (navigator.geolocation) {
navigator.geolocation.getCurrentPosition(function(position){
console.log(position.coords.latitude);
console.log(position.coords.longitude);
getReverseGeocodingData(position.coords.latitude,position.coords.longitude);
alert('Ahmad');
});
} else {
console.error("Geolocation is not supported by this browser.");
}
}
getReverseGeocodingData(lat, lng){
var latlng = new google.maps.LatLng(lat, lng);
// This is making the Geocode request
var geocoder = new google.maps.Geocoder();
geocoder.geocode({ 'latLng': latlng }, function (results, status) {
if (status !== google.maps.GeocoderStatus.OK) {
alert(status);
}
// This is checking to see if the Geoeode Status is OK before proceeding
if (status == google.maps.GeocoderStatus.OK) {
console.log(results);
var address = (results[0].formatted_address);
}
});
}
```
I am using Meteor with React; the above code is inside one of my React components.
I tried to call it with the this keyword, with parentheses and without.
I tried the following options, but they didn't help;
I am getting "getReverseGeocodingData is not a function"!!
```
getReverseGeocodingData(position.coords.latitude,position.coords.longitude).bind(this);
this.getReverseGeocodingData(position.coords.latitude,position.coords.longitude).bind();
this.getReverseGeocodingData(position.coords.latitude,position.coords.longitude);
```
screenshot of console error
[](https://i.stack.imgur.com/kqypu.png)
I updated the code as the below but I am still the issue below
Cannot read property 'getReverseGeocodingData' of undefined
```
export default class AddUser extends React.Component {
constructor(props){
super(props);
this.getReverseGeocodingData = this.getReverseGeocodingData.bind(this);
}
getReverseGeocodingData(lat, lng) {
var latlng = new google.maps.LatLng(lat, lng);
// This is making the Geocode request
var geocoder = new google.maps.Geocoder();
geocoder.geocode({ 'latLng': latlng }, function (results, status) {
if (status !== google.maps.GeocoderStatus.OK) {
alert(status);
}
// This is checking to see if the Geoeode Status is OK before proceeding
if (status == google.maps.GeocoderStatus.OK) {
console.log(results);
var address = (results[0].formatted_address);
}
});
}
getLocation() {
if (navigator.geolocation) {
navigator.geolocation.getCurrentPosition((position) => {
console.log(position.coords.latitude);
console.log(position.coords.longitude);
this.getReverseGeocodingData(position.coords.latitude, position.coords.longitude);
});
} else {
console.error("Geolocation is not supported by this browser.");
}
}
```
Added a screenshot for the last change.
[](https://i.stack.imgur.com/RQCjC.png)
The issue was fixed by adding bind to the call inside the HTML.
[](https://i.stack.imgur.com/Vrd5H.png)
Thanks a lot.
`this.getReverseGeocodingData(position.coords.latitude,position.coords.longitude);`
Upvotes: 0 <issue_comment>username_2: You should use `this.getReverseGeocodingData(...)`. Also, you should explicitly bind `this` if you are passing `getLocation` to some other components (otherwise you will lose the `this` reference to the current component)
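For example, a common pattern is to bind once in the constructor (a sketch, assuming a standard class component):

```
constructor(props) {
  super(props);
  // keep `this` pointing at the component wherever getLocation is passed
  this.getLocation = this.getLocation.bind(this);
}
```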
Upvotes: 2 <issue_comment>username_3: The best way I've found is to bind all of your functions needed in the constructor:
```
export default class AddUser extends React.Component {
constructor(props) {
super(props);
this.getReverseGeocodingData = this.getReverseGeocodingData.bind(this);
}
someFunction() {
this.getReverseGeocodingData(...)
...
}
}
```
This way you don't have to bind `this` on every render.
Then you need to use an arrow function to maintain the scope of `this`:
```
navigator.geolocation.getCurrentPosition(position => {
this.getReverseGeocodingData(...);
...
});
```
Upvotes: 1
|
2018/03/14
| 867 | 2,918 |
<issue_start>username_0: I am trying to apply the Jaccard coefficient as customised loss function in a Keras LSTM, using Tensorflow as backend.
I know that I have to call the following:
```python
model.compile(optimizer='rmsprop', loss=[jaccard_similarity])
```
where the jaccard_similarity function should be the keras.backend implementation of the below:
```python
def jaccard_similarity(doc1, doc2):
intersection =set(doc1).intersection(set(doc2))
union = set(doc1).union(set(doc2))
return len(intersection)/len(union)
```
The problem is that I cannot find methods to implement intersection and union functions on tensors using tensorflow as backend.
Any suggestion?<issue_comment>username_1: As I see, there is a [sets section](https://www.tensorflow.org/api_docs/python/tf/sets) in TF v1.6 docs. Probably that could help. It contains a couple of functions that compute set intersection, difference and union.
Not sure though when it was introduced and if it is possible to somehow "hack" Keras models to use a custom metric with these functions.
Upvotes: 0 <issue_comment>username_2: I've used the jaccard distance to train a semantic segmentation network in keras. The loss function I used is identidical to [this one](https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py). I'll paste it here:
```
from keras import backend as K
def jaccard_distance(y_true, y_pred, smooth=100):
intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
sum_ = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1)
jac = (intersection + smooth) / (sum_ - intersection + smooth)
return (1 - jac) * smooth
```
Notice that this one minus the jaccard similarity (which you want to maximize). In fact, it's a continuous approximation of the jaccard distance, so it's derivative is well defined.
Upvotes: 4 [selected_answer]<issue_comment>username_3: Careful with [Artur's](https://stackoverflow.com/a/49290815/7109848) answer!
>
>
> ```
> intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
> sum_ = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1)
>
> ```
>
>
The loss function in the link is incorrect!
|X| denotes the cardinality of the set, not the absolute value! Also, the summation there runs over the class axis instead of the spatial dimensions.
The corrected version should look something like this: (tensorflow version, not tested yet)
```
def jaccard_distance(y_true, y_pred, smooth=100):
""" Calculates mean of Jaccard distance as a loss function """
intersection = tf.reduce_sum(y_true * y_pred, axis=(1,2))
sum_ = tf.reduce_sum(y_true + y_pred, axis=(1,2))
jac = (intersection + smooth) / (sum_ - intersection + smooth)
jd = (1 - jac) * smooth
return tf.reduce_mean(jd)
```
Inputs are image tensors of shape (batch, width, height, classes).
Calculates the jaccard distance for each batch and class (shape=(batch, classes)) and returns mean value as a loss scalar.
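As a usage sketch (assuming a compiled Keras `model` as in the question):

```python
model.compile(optimizer='rmsprop', loss=jaccard_distance)
```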
Upvotes: 4
|
2018/03/14
| 786 | 2,326 |
<issue_start>username_0: I'm not sure how to do this so any help would be appreciated. I have two objects I want to combine into one object. I've used the spread operator to do this:
```
newObj = {...obj1, ...obj2};
```
This, for example, gives me this:
```
{
[
obj1A{
"item": "stuff",
"item": "stuff"
},
obj1B{
"item": "stuff",
"item": "stuff"
}
],
[
obj2A{
"item": "stuff",
"item": "stuff"
},
obj2B{
"item": "stuff",
"item": "stuff"
}
]
}
```
But what I want is this:
```
{
[
obj1A{
"item": "stuff",
"item": "stuff"
},
obj1B{
"item": "stuff",
"item": "stuff"
},
obj2A{
"item": "stuff",
"item": "stuff"
},
obj2B{
"item": "stuff",
"item": "stuff"
}
]
}
```
Anyone know how to do this?<issue_comment>username_1: With proper unique names, you could use [`Object.assign`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/assign) and create a new object.
```js
var object1 = { obj1A: { item1: "stuff", item2: "stuff" }, obj1B: { item1: "stuff", item2: "stuff" } },
object2 = { obj2A: { item1: "stuff", item2: "stuff" }, obj2B: { item1: "stuff", item2: "stuff" } },
combined = Object.assign({}, object1, object2);
console.log(combined);
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you're using jQuery you could do something similar to this <https://jsfiddle.net/d11kfd4d/>:
```
var a = {
propertyOne: 'One',
propertyTwo: 'Two'
};
var b = {
propertyThree: 'Three',
propertyFour: 'Four'
}
var c = $.extend(a, b);
console.log(c);
```
Upvotes: 0 <issue_comment>username_3: Assuming you have a typo (changing this `[]` to this `{}`) and the keys are valid (not repeated), you can use the Spread syntax.
```js
var obj1 = { obj1A: { item: "stuff", item2: "stuff" }, obj1B: { item: "stuff", item2: "stuff" } },
obj2 = { obj2A: { item: "stuff", item2: "stuff" }, obj2B: { item: "stuff", item2: "stuff" } }
console.log({...obj1, ...obj2});
```
```css
.as-console-wrapper { max-height: 100% !important; top: 0; }
```
Upvotes: 0
|
2018/03/14
| 2,448 | 4,873 |
<issue_start>username_0: I want to merge two pandas dataframe.
```
df1 =
A B
2 11
2 13
2 15
2 19
2 25
2 35
2 41
2 47
2 46
2 51
3 9
3 15
3 17
3 23
3 25
3 29
5 4
5 23
5 28
```
with another dataframe.
```
df2 =
A B C
2 11 abc
2 13 cdd
2 35 cdd
2 41 cdd
2 47 cdd
3 9 cdd
3 15 cdd
3 17 cdd
3 23 cdd
```
Both dataframes are sorted by "A" and then "B". I want to merge on columns `['A', 'B']`; for column "C", where the data are missing, I want to fill them with `na`, using a unique number (`na_01`, `na_02`, ...) for each missing block.
**How can I updated this merge method:**
```
data_frames = [df1, df2]
df_update = reduce(lambda left,right: pd.merge(
left, right, on=['A', 'B'], how='outer'), data_frames).fillna('na')
```
**Note:** The code should update `na` with unique values only in "C" in the situation other column are present.
**Expected output:**
```
df2 =
A B C
2 11 abc
2 13 cdd
2 15 na_01
2 19 na_01
2 25 na_01
2 35 cdd
2 41 cdd
2 47 cdd
2 46 na_02
2 51 na_02
3 9 cdd
3 15 cdd
3 17 cdd
3 23 cdd
3 25 na_03
3 29 na_03
5 4 na_04
5 23 na_04
5 28 na_04
```
Thanks,<issue_comment>username_1: IIUC
```
New = df_update[df_update.C == 'na']
s=New.reset_index().groupby('A').apply(lambda x : x['index'].diff().ne(1)).cumsum()
df_update.loc[df_update.C == 'na','C']+='_'+s.astype(str).str.pad(2,fillchar='0').values
df_update
Out[124]:
A B C
0 2 11 abc
1 2 13 cdd
2 2 15 na_01
3 2 19 na_01
4 2 25 na_01
5 2 35 cdd
6 2 41 cdd
7 2 47 cdd
8 2 46 na_02
9 2 51 na_02
10 3 9 cdd
11 3 15 cdd
12 3 17 cdd
13 3 23 cdd
14 3 25 na_03
15 3 29 na_03
16 5 4 na_04
17 5 23 na_04
18 5 28 na_04
```
Upvotes: 2 <issue_comment>username_2: **Attempt 1**
```
def labels(d):
mask = d.C.isnull().values
a = d.A.values
c = d.C.values.copy()
i = np.flatnonzero(mask)
f, u = pd.factorize([
(a_, c_) for a_, c_ in zip(a[mask], (~mask).cumsum()[mask])
])
c[i] = [f'na_{g+1:02d}' for g in f]
return c
df1.merge(df2, 'left').assign(C=labels)
A B C
0 2 11 abc
1 2 13 cdd
2 2 15 na_01
3 2 19 na_01
4 2 25 na_01
5 2 35 cdd
6 2 41 cdd
7 2 47 cdd
8 2 46 na_02
9 2 51 na_02
10 3 9 cdd
11 3 15 cdd
12 3 17 cdd
13 3 23 cdd
14 3 25 na_03
15 3 29 na_03
16 5 4 na_04
17 5 23 na_04
18 5 28 na_04
```
---
**Attempt 2**
also Python 3.6
```
def labeler():
tracker = {}
return lambda k: tracker.setdefault(k, len(tracker) + 1)
def fill(d):
c_ = labeler()
return [
f'na_{c_((a, g)):02d}' if pd.isna(c) else c
for a, c, g in zip(d.A, d.C, d.C.notna().cumsum())
]
df1.merge(df2, 'left').assign(C=fill)
A B C
0 2 11 abc
1 2 13 cdd
2 2 15 na_01
3 2 19 na_01
4 2 25 na_01
5 2 35 cdd
6 2 41 cdd
7 2 47 cdd
8 2 46 na_02
9 2 51 na_02
10 3 9 cdd
11 3 15 cdd
12 3 17 cdd
13 3 23 cdd
14 3 25 na_03
15 3 29 na_03
16 5 4 na_04
17 5 23 na_04
18 5 28 na_04
```
---
**Attempt 3**
Another alternative. Not sure which I like better.
```
def labeler(d):
mask = d.C.notna()
csum = mask.cumsum()
tups = list(zip(d.A, csum, d.C, ~mask))
trac = dict(map(reversed, enumerate(
pd.unique([t[:2] for t in tups if t[-1]]), 1
)))
return list(map(
lambda t: f'na_{trac.get(t[:2]):02d}' if t[:2] in trac else t[2], tups
))
df1.merge(df2, 'left').assign(C=labeler)
A B C
0 2 11 abc
1 2 13 na_01
2 2 15 na_01
3 2 19 na_01
4 2 25 na_01
5 2 35 cdd
6 2 41 cdd
7 2 47 na_02
8 2 46 na_02
9 2 51 na_02
10 3 9 cdd
11 3 15 cdd
12 3 17 cdd
13 3 23 na_03
14 3 25 na_03
15 3 29 na_03
16 5 4 na_04
17 5 23 na_04
18 5 28 na_04
```
Upvotes: 2 <issue_comment>username_3: You can [`merge`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) first both `DataFrame`s by left join and then for each group `A` count `NaN`s, which are replaced by `fillna`:
```
df = df1.merge(df2, how='left')
isna = df['C'].isnull()
count_nans =(isna.ne(isna.groupby(df['A']).shift()) & isna).cumsum().astype(str).str.zfill(2)
df['C'] = df['C'].fillna('na_' + count_nans)
print (df)
A B C
0 2 11 abc
1 2 13 cdd
2 2 15 na_01
3 2 19 na_01
4 2 25 na_01
5 2 35 cdd
6 2 41 cdd
7 2 47 cdd
8 2 46 na_02
9 2 51 na_02
10 3 9 cdd
11 3 15 cdd
12 3 17 cdd
13 3 23 cdd
14 3 25 na_03
15 3 29 na_03
16 5 4 na_04
17 5 23 na_04
18 5 28 na_04
```
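To see the consecutive-block numbering trick in isolation (toy data, no grouping):

```
import pandas as pd

s = pd.Series(['x', None, None, 'y', None])
isna = s.isnull()
# True exactly at the first NaN of each consecutive NaN run
starts = isna.ne(isna.shift()) & isna
print(starts.cumsum())  # rows of the first NaN run get 1, the second run 2, ...
```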
Upvotes: 2
|
2018/03/14
| 409 | 1,367 |
<issue_start>username_0: How can I return nothing back to the twilio process after processing the sms message?
The examples in the api use the MessagingResponse and when that is used, it comes in as a Direction='reply' in the message log on the console. This incurs an additional charge (inbound + reply). Simply put, I want to do this..
```
app.post('/sms', (req, res) =>
{
console.log('hello world');
});
```
without getting a 11200 error.<issue_comment>username_1: Try an empty string as the response message (Twilio needs you to return valid XML).
```
const http = require('http');
const express = require('express');
const MessagingResponse = require('twilio').twiml.MessagingResponse;
const app = express();
app.post('/sms', (req, res) => {
const twiml = new MessagingResponse();
// twiml.message('');
res.writeHead(200, {'Content-Type': 'text/xml'});
res.end(twiml.toString());
});
http.createServer(app).listen(1337, () => {
console.log('Express server listening on port 1337');
});
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Answer from Twilio docs:
>
> In order to receive incoming messages, without sending an auto-response, the Twilio app should respond with a simple empty Response - ``.
>
>
>
<https://support.twilio.com/hc/en-us/articles/223134127-Receive-SMS-and-MMS-Messages-without-Responding>
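Equivalently, a minimal sketch without the helper library (assuming Express, as in the question):

```
app.post('/sms', (req, res) => {
  // acknowledge the inbound SMS without sending a reply
  res.type('text/xml').send('<Response/>');
});
```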
Upvotes: 2
|
2018/03/14
| 1,371 | 4,611 |
<issue_start>username_0: I am trying to create a network load balancer with Terraform and it's important it is associated with elastic IPs that are protected from being destroyed.
I have code something like the following:
```
resource "aws_lb" "balancer" {
name = "${var.name}-nlb"
internal = "${var.internal}"
load_balancer_type = "network"
subnets = ["${data.aws_subnet_ids.selected.ids}"]
subnet_mapping {
subnet_id = "someid"
allocation_id = "someid"
}
subnet_mapping {
subnet_id = "someid"
allocation_id = "someid"
}
subnet_mapping {
subnet_id = "someid"
allocation_id = "someid"
}
tags = "${merge(var.tags,
map("Terraform", "true"),
map("Environment", var.environment))}"
}
```
What I am after is to make the `subnet_mapping` blocks dynamically as this code sits in a module and I want to create the number of mappings dependant on the amount of subnets passed in. Either that or pass in the blocks predefined.
Is there a way to do this? The important thing for me is that the elastic IPs associated need to stick around.<issue_comment>username_1: As mentioned in [Hendrik's answer](https://stackoverflow.com/a/57760953/2291321) this is now doable in Terraform 0.12 by using the [`dynamic` blocks](https://www.terraform.io/docs/configuration/expressions.html#dynamic-blocks) feature:
A simpler example than the above linked answer is shown in the documentation :
```
resource "aws_security_group" "example" {
name = "example" # can use expressions here
dynamic "ingress" {
for_each = var.service_ports
content {
from_port = ingress.value
to_port = ingress.value
protocol = "tcp"
}
}
}
```
---
Original:
Terraform doesn't currently allow you to use the `count` meta parameter on resource stanzas/sub resources.
There is an [issue tracking this on Github](https://github.com/hashicorp/terraform/issues/7034) but there isn't currently any work being done on it AFAIK.
The Hashicorp employee responding in that thread ([apparentlymart](https://github.com/apparentlymart)) is currently working on a new version of HCL which may in future support something like this.
Upvotes: 2 <issue_comment>username_2: A pretty ugly solution might be to create a resource for each of the possible numbers of AZ's. ex (code untested):
```
data "aws_availability_zones" "available" {}
resource "aws_lb" "lb_2_azs" {
count = "${length(data.aws_availability_zones.available.names) == 2 ? 1 : 0 }"
... all the rest of the stuff here ...
}
resource "aws_lb" "lb_3_azs" {
count = "${length(data.aws_availability_zones.available.names) == 3 ? 1 : 0 }"
... all the rest of the stuff here ...
}
```
And then in your module outputs something like this might work:
```
output "lb_id" {
value = "${element(concat(aws_lb.lb_2_azs.*.id, aws_lb.lb_3_azs.*.id, list("")), 0)}"
}
```
How to handle the listener and the other resources that might be needed for the LB:
```
resource "aws_lb_listener" "listener" {
count = "${length(concat(aws_lb.lb_2_azs.*.id, aws_lb.lb_3_azs.*.id))}"
load_balancer_arn = "${element(concat(aws_lb.lb_2_azs.*.id, aws_lb.lb_3_azs.*.id, list("")), 0)}"
... rest of the resource settings ...
}
```
I haven't tested the previous code, but here's some that I know works. I have a module for consul that creates an NLB if it's not being used for vault:
```
resource "aws_lb" "consul" {
name = "${var.lb_name}"
count = "${var.for_vault ? 0 : 1}"
internal = true
subnets = ["${var.subnet_ids}"]
load_balancer_type = "network"
idle_timeout = 60
}
resource "aws_lb_listener" "consul" {
count = "${var.for_vault ? 0 : 1}"
load_balancer_arn = "${aws_lb.consul.arn}"
port = 8500
protocol = "TCP"
default_action {
target_group_arn = "${aws_lb_target_group.consul.arn}"
type = "forward"
}
}
```
You can use the same count trick for aws\_lb\_target\_group and any other resources you need that refer to whichever of the aws\_lb resources.
Upvotes: 0 <issue_comment>username_3: You can use the dynamic blocks feature from Terraform 0.12 for that.
```
resource "aws_lb" "balancer" {
name = "${var.name}-nlb"
load_balancer_type = "network"
dynamic "subnet_mapping" {
for_each = aws_subnet.public.*.id
content {
subnet_id = subnet_mapping.value
allocation_id = aws_eip.lb[subnet_mapping.key].id
}
}
}
```
Upvotes: 4 [selected_answer]
|
2018/03/14
| 1,245 | 3,305 |
<issue_start>username_0: Given two numbers n and k, find a string s of lowercase alphabets such that sum of values of all the elements in the string is equal to k.
If there exist many such strings, find the one which is lexicographically the smallest.
The value of ith lowercase alphabet is i, for example, value of a is 1, b is 2, and so on.
Input format:
First line of input contains a single integer t, denoting the number of test cases.
t lines follow, each containing two space-separated integers n and k.
Output format:
Print the lexicographically smallest string of length n and having string sum value equal to k.
Constraints:
```
1 <= t <= 50
1 <= n <= 2*10^5
n <= k <= 26*n
```
Sample Input:
```
2
5 42
3 25
```
Sample Output:
```
aaamz
aaw
```
Explanation:
string value of aaamz is 42 also, it is smallest lexicographical string such that it contains 5 characters and has string value 42.<issue_comment>username_1: ```
void generateLexicoString(int n, int k){
    char[] a = new char[n];
    // Initially fill the array with character 'a'
    for(int i = 0; i < n; i++)
        a[i] = 'a';
    // x is the value still to distribute on top of the all-'a' baseline
    int x = k - n;
    for(int i = n - 1; i >= 0; i--){
        if(x > 26){
            a[i] = 'z';           // 'a' -> 'z' consumes 25 extra, hence (x-26)+1
            x = (x - 26) + 1;
        }
        else if(x > 0){
            a[i] = (char) (a[i] + x);
            x = 0;
        }
        if(x == 0)
            break;
    }
    String op = "";
    for(int i = 0; i < n; i++)
        op += a[i];
    System.out.println(op);
}
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: you can avoid string append by using "System.out.print(a[i]);" as below.
`for(int i = 0; i < n; i++) System.out.print(a[i]);`
Upvotes: 1 <issue_comment>username_3: Sorry, But I think accepted solution here is missing one case:
if n = 2, k=28.
then the solution should be "bz".
```
void generateLexicoString(int n, int k){
    char[] a = new char[n];
    // Initially fill the array with character 'a'
    for(int i = 0; i < n; i++)
        a[i] = 'a';
    int x = k - n;   // value left to distribute on top of the all-'a' baseline
    for(int i = n - 1; i >= 0; i--){
        if(x > 26){
            a[i] = 'z';
            x = (x - 26) + 1;
        }
        else if(x == 26){
            // 'a' + 26 would overflow past 'z': use 'z' and carry 1 to the next slot
            a[i] = (char) (a[i] + 25);
            x = 1;
        }
        else if(x > 0){
            a[i] = (char) (a[i] + x);
            x = 0;
        }
        if(x == 0)
            break;
    }
}
```
Char Array **a** will be our answer.
Upvotes: 1 <issue_comment>username_4: In the solution provided above, it missed the case when x will be exactly 26. (This will fail for, say, n = 2 and k = 28).
So for that, a bit of modification is required.
```
void generateLexicoString(int n, int k){
    char[] a = new char[n];
    // Initially fill the array with character 'a'
    for(int i = 0; i < n; i++)
        a[i] = 'a';
    int x = k - n;   // value left to distribute on top of the all-'a' baseline
    for(int i = n - 1; i >= 0; i--){
        if(x > 26){
            a[i] = 'z';
            x = (x - 26) + 1;
        }
        else if(x == 26){
            // 'a' + 26 would overflow past 'z': use 'z' and carry the rest forward
            a[i] = (char) (a[i] + 25);
            x = x - 25;
        }
        else if(x > 0){
            a[i] = (char) (a[i] + x);
            x = 0;
        }
        if(x == 0)
            break;
    }
    String op = String.valueOf(a);
    System.out.println(op);
    // return op;
}
```
Upvotes: 1 <issue_comment>username_5: ```
#include <bits/stdc++.h>
using namespace std;
int main()
{
int t;
cin>>t;
while(t--)
{
int n,k;
cin>>n>>k;
string s="";
for(int i=n;i>0;i--)
{
if(k - (i-1) > 26)
{
s = 'z' + s;
k-=26;
}
else
{
char c = 'a' - 1 + k - (i-1);
s = c + s;
k=i-1;
}
}
cout << s << "\n";
}
}
```
Upvotes: 0 <issue_comment>username_6: ```java
char arr[] = new char[n];
for (int i = n - 1; i >= 0; i--) {
if(k>0){
if(k - i > 26){ // must leave at least 1 ('a') for each of the i remaining positions
k = k-26;
arr[i] = 'z';
}else{
int temp = k-i;
arr[i] = (char)(temp + 97 - 1);
k -= temp;
}
}else{
break;
}
}
return String.valueOf(arr);
```
Upvotes: 0
|