2018/03/14 | 962 tokens | 3,167 chars
<issue_start>username_0: When I startup my Oracle instance, is it possible to get Oracle to run some PL/SQL during the initialization?
I have tried to find something on the web but my searches have not been fruitful.
More specifically,
If I issue
```
$ sqlplus ....
startup mount;
alter database open;
```
Can I get it to then run a PL/SQL procedure?<issue_comment>username_1: You are looking for an AFTER STARTUP trigger.
ref: <https://docs.oracle.com/database/121/LNPLS/create_trigger.htm#LNPLS01374>
```
CREATE TRIGGER my_on_open_trigger
AFTER STARTUP ON DATABASE
BEGIN
<<< DO SOMETHING >>>
END my_on_open_trigger;
/
```
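For illustration, a filled-in version could look like the following; `refresh_caches` is a made-up procedure standing in for whatever PL/SQL you need to run after the database opens:
```
CREATE TRIGGER run_startup_jobs
  AFTER STARTUP ON DATABASE
BEGIN
  my_schema.refresh_caches;  -- hypothetical procedure call
END run_startup_jobs;
/
```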
Upvotes: 4 [selected_answer]<issue_comment>username_2: username_1 said (while discussing triggers' firing order in comments):
>
> I don't think the order can be set which is very real reason to fold
> into 1 if possible
>
>
>
You think wrong :)
Since Oracle 11g, there's the FOLLOWS option which enables you to set such a thing. Here's an example:
```
SQL> create table test (id varchar2(10));
Table created.
SQL> create or replace trigger t1
2 before insert on test
3 for each row
4 begin
5 dbms_output.put_Line('trigger 1');
6 end;
7 /
Trigger created.
SQL> create or replace trigger t2
2 before insert on test
3 for each row
4 follows t3
5 begin
6 dbms_output.put_Line('trigger 2');
7 end;
8 /
Warning: Trigger created with compilation errors.
SQL> create or replace trigger t3
2 before insert on test
3 for each row
4 follows t1
5 begin
6 dbms_output.put_Line('trigger 3');
7 end;
8 /
Trigger created.
SQL> insert into test values ('A');
trigger 1
trigger 3
trigger 2
1 row created.
SQL>
```
A side note: trigger T2 was created with errors because I declared that it follows T3, which didn't exist yet at the time of T2's creation. In the end, all of them are valid:
```
SQL> select object_name, status
2 from user_objects
3 where object_name in ('T1', 'T2', 'T3')
4 and object_type = 'TRIGGER';
OBJECT_NAME STATUS
-------------------- -------
T3 VALID
T2 VALID
T1 VALID
SQL>
```
**[EDIT, regarding AFTER STARTUP triggers]**
I created triggers in SYS schema, similar to those 3 I posted previously:
```
SQL> create table test (id number);
Table created.
SQL> create or replace trigger t1
2 after startup on database
3 begin
4 insert into test values (1);
5 end;
6 /
Trigger created.
SQL> create or replace trigger t2
2 after startup on database
3 follows t3
4 begin
5 insert into test values (2);
6 end;
7 /
Warning: Trigger created with compilation errors.
SQL> create or replace trigger t3
2 after startup on database
3 follows t1
4 begin
5 insert into test values (3);
6 end;
7 /
Trigger created.
```
Shut my XE down, started it up, checked what's in there:
```
SQL> show user
USER is "SYS"
SQL> select * From test;
ID
----------
1
2
3
```
Blah! Really, it doesn't work as I thought it would.
Oh well, I was wrong & apologize to all of you.
Upvotes: 1
---
2018/03/14 | 1,424 tokens | 5,882 chars
<issue_start>username_0: **Context**
We're currently using [Microsoft.Azure.ServiceBus.EventProcessorHost](https://www.nuget.org/packages/Microsoft.Azure.ServiceBus.EventProcessorHost/) for data ingestion from an Azure Event-Hub. We run on **.NET framework**, not **.NET Core**.
This Microsoft [announcement](https://azure.microsoft.com/en-us/blog/event-hubs-dotnet-standard-client-reaches-ga/) (published Feb 2, 2017) suggests the newer [Microsoft.Azure.EventHubs.Processor](https://www.nuget.org/packages/Microsoft.Azure.EventHubs.Processor) is the way to go in the future, regardless of the .NET runtime you use, as they will maintain a single code base.
Meanwhile, the official Microsoft [Event-Hub samples](https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-samples) updated recently still suggest the older library (Microsoft.Azure.ServiceBus.EventProcessorHost) for the .NET framework.
Both libraries have been updated and evolved since the announcement.
**Question**
Since we're on **.NET framework**, which library should we use going forward?
Should we migrate to the newer **Microsoft.Azure.EventHubs.Processor**, in order to benefit from the latest development, improvements and bug fixes? Or is the old one kept up to speed?
The question is relevant to newcomers as well: which library should they pick to get started with Azure Event-Hubs?<issue_comment>username_1: I may be partial in my answer, but I would prefer the newer `Microsoft.Azure.EventHubs` library. This is for the following reasons:
* It is open source. You can see the roadmap, current issues, file your own issues, and in case you need it, you have access to the code with an MIT license.
* It uses .NET Standard. In case your solution ever changes to .NET Core, or any of the other .NET runtimes, you can continue to use your Event Hubs code.
* This is the library that Microsoft will be emphasizing in the future. While that is my assumption, here's why I think that:
+ The learn.microsoft.com getting started examples use the new library. <https://learn.microsoft.com/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send>
+ Microsoft has seen the benefits of open source development, and nearly all of their Azure libraries are being open sourced. i.e. Service Fabric just went open source as well: <https://blogs.msdn.microsoft.com/azureservicefabric/2018/03/14/service-fabric-is-going-open-source/>
+ The newer library was created with the intent to replace the older library. While Microsoft is very good at supporting old / legacy libraries, the ultimate intent was to replace the older library. <https://azure.microsoft.com/blog/event-hubs-dotnet-standard-client-reaches-ga/>
Upvotes: 3 [selected_answer]<issue_comment>username_2: Adding a bit of context to modernize this guidance, since Microsoft has since launched a new initiative around the Azure SDKs, which includes a new Event Hubs client library. For new development, we recommend using the `Azure.Messaging.EventHubs` family of packages and the `Azure.[[ AREA ]].[[ SERVICE ]]` packages for working with other Azure services.
[Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/README.md) is the current generation Event Hubs client library and will be the focal point of development, with improvements and new features being released on a regular cadence. It supports the `netstandard2.0` platform, allowing it to be used with a wide range of host environments which include .NET 5, .NET Core, and the full .NET Framework. The library retains high-level feature parity with the previous generation, `Microsoft.Azure.EventHubs`, but with a more discrete client hierarchy and improved API.
The `Azure.Messaging.EventHubs` library is part of the initiative to improve the development experience across Azure services. To this end, it follows a set of uniform [design guidelines](https://azure.github.io/azure-sdk/general_introduction.html) intended to drive a consistent experience across development languages and employ established API patterns for all Azure services. The library also follows a set of [.NET-specific guidelines](https://azure.github.io/azure-sdk/dotnet_introduction.html) to ensure that the .NET SDK has a natural and idiomatic feel that mirrors that of the .NET base class libraries.
The `Azure.Messaging.EventHubs` library also provides the ability to share in some of the cross-service improvements made to the Azure development experience, such as a unified diagnostics pipeline offering a common view of the activities across each of the client libraries. Another key improvement is a streamlined and simplified experience for authentication using the new [Azure.Identity](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/identity/Azure.Identity/README.md) library to share credentials between the clients for different Azure services.
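As a quick taste of the modern API, publishing a few events only takes a handful of lines. This is a minimal sketch; the connection string and hub name are placeholders to fill in:
```csharp
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class Program
{
    static async Task Main()
    {
        // Placeholders; substitute your own connection string and Event Hub name.
        await using var producer = new EventHubProducerClient(
            "<< CONNECTION STRING >>", "<< EVENT HUB NAME >>");

        // Batch events so they are sent in a single, size-validated operation.
        using EventDataBatch batch = await producer.CreateBatchAsync();
        batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("First event")));
        batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Second event")));

        await producer.SendAsync(batch);
    }
}
```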
While we believe that there is significant benefit to adopting the modern version of the Event Hubs client library, it is important to be aware that the legacy versions have not been officially deprecated. They will continue to be supported with security and bug fixes. However, new features are unlikely to be added and there is no guarantee of feature parity between the modern and legacy versions going forward.
More information on the `Azure.Messaging.EventHubs` library is available in its [README](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/README.md) and [Samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs/samples). There is also a [Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md) available to assist with upgrading from the legacy versions.
Upvotes: 0
---
2018/03/14 | 480 tokens | 1,613 chars
<issue_start>username_0: This probably has a simple solution; it's just been a while since I've used Vue:
I'm trying to pass data that's pulled from a list of colors into inline css to change the background color of each item, here's an example:
```
<ul id="main">
  <li v-for="color in colors">{{ color.value }}</li>
</ul>
new Vue({
el: '#main',
data: {
colors: [
{ value: '#00205b'},
{ value: '#0033a0'},
{ value: '#0084d4'}
],
}
})
```
I'd like to use the data pulled from `color.value` and place it into something like `v-bind:style="background-color: { color.value }"` but I can't figure out how to get this to work.<issue_comment>username_1: You can use like this: (See [style binding](https://v2.vuejs.org/v2/guide/class-and-style.html#Object-Syntax-1))
```
<li v-for="color in colors" v-bind:style="{ 'background-color': color.value }">{{ color.value }}</li>
```
Or,
```
:style="{'background-color': color.value}"
```
Upvotes: 4 [selected_answer]<issue_comment>username_2: You can create a computed property and extend each `color` object in the `colors` array with a style property. This style property will contain the styling for each element.
Finally, use the computed property in the `v-for` directive instead of the original array.
```js
new Vue({
el: '#list',
data() {
return {
colors: [{
value: '#00205b'
},
{
value: '#0033a0'
},
{
value: '#0084d4'
}
],
};
},
computed: {
extendedColors() {
return this.colors.map(color => ({ ...color,
style: {
backgroundColor: color.value
}
}));
},
}
})
```
```html
<ul id="list">
  <li v-for="color in extendedColors" :style="color.style">{{ color.value }}</li>
</ul>
```
Upvotes: 0
---
2018/03/14 | 508 tokens | 1,885 chars
<issue_start>username_0: Let's say I have a utility script:
`utility.py`
```
def func1(input):
# do stuff to input
return stuff
def func2(input):
# do more stuff to input
return more_stuff
def main(argv):
stuff = func1(argv[1])
more_stuff = func2(argv[1])
all_stuff = stuff + more_stuff
# write all_stuff to stdout or file
if __name__ == "__main__":
main(argv)
```
And this script is called by another rollup script, like so:
`rollup.py`
```
import utility
def main(argv):
# assuming a list of of inputs is in argv[1]
inputs = argv[1]
for input in inputs:
utility.main(['utility.py', input])
if __name__ == "__main__":
main(argv)
```
Now let's say I want the rollup script to print or write to file **a combined output of all the stuff generated by each instance of utility.py**.
Is there any way to do this by **returning the output from utility.py to the rollup script?** Or does the rollup script have to read in all the output or output files that utility.py generated?
Note that I am *not* asking if this is considered best practices or if it's "allowed" in programming circles. I am aware that main is supposed to only return success or fail codes. I'm merely asking if it is **possible.**<issue_comment>username_1: In your code the `main` function is nothing more than a regular function named `main`. So if you're asking if you **can** use `return` in your `main` function, then absolutely yes. You most certainly can
Upvotes: 4 [selected_answer]<issue_comment>username_2: It is possible. You could, for instance, return a dictionary (or list, or tuple, or whatever) in `utility.main()` and then grab all the dictionaries and store them in a list in `rollup.main()`, inside the `for` loop, for further elaboration.
Your `utility.main()` is nothing else than a normal function in this case.
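A minimal sketch of that approach, reusing the names from the question (`func1`/`func2` are the functions already defined in `utility.py`):
```
# utility.py - main() now returns the combined result instead of writing it out
def main(argv):
    stuff = func1(argv[1])
    more_stuff = func2(argv[1])
    return stuff + more_stuff

# rollup.py - collect each call's return value for further processing
import utility

def main(argv):
    results = []
    for input in argv[1]:
        results.append(utility.main(['utility.py', input]))
    print(results)  # or write the combined output to a file
```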
Upvotes: 0
---
2018/03/14 | 360 tokens | 1,161 chars
<issue_start>username_0: I need some help with a regex.
My problem is to extract the version number from a jar/war artefact so I can run an mvn deploy-file of my legacy to nexus
The lines are just like below
```
-DartifactId=my-jarfile-1.2.1 -Dversion= -Dpackaging=war
```
My task is to make it something like that
```
-DartifactId=my-jarfile -Dversion=1.2.1 -Dpackaging=war
```
I am trying the following but I am not having success:
```
^(?:-DartifactId=.)$(\d+)(\.\d+)(\.\d+)$|^(\d+)(\.\d+)$|^(\d+)$|^(\d+)(\.\d+)(-.+)$|^(\d+)(-.+)$
```
---
2018/03/14 | 536 tokens | 2,241 chars
<issue_start>username_0: I have a Microsoft Master Data Services installation, SQL Server 2014. I need to provide a REST interface to allow an external system to push data into our MDS staging area. MDSS has a WCF API available out of the box but we specifically need to expose a REST service for an external system to utilise.
I'm considering creating an independent REST API to provide this access but would like to know if other options/approaches exist before I put in the effort.<issue_comment>username_1: I'm looking to implement something similar, however there is no simple solution.
In my view the only way to do this is to create a Wrapper service (ASP.NET Web API / MVC) that then invokes the out of the box MDS WCF services.
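To make the idea concrete, a skeleton of such a wrapper might look like the sketch below. Note that `MdsClient` and `CreateStagingMember` are hypothetical stand-ins for the WCF proxy you would generate from the MDS service reference; the real contract and types differ:
```csharp
using System.Web.Http;

public class StagingController : ApiController
{
    // POST api/staging/{entityName}
    [HttpPost]
    public IHttpActionResult Push(string entityName, [FromBody] string memberJson)
    {
        // MdsClient is a hypothetical WCF proxy class, shown for shape only.
        using (var client = new MdsClient())
        {
            client.CreateStagingMember(entityName, memberJson); // hypothetical call
            return Ok();
        }
    }
}
```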
Although I've not found anything that matches exactly this requirement, there are some good reference open source projects. Take a look at:
* [mdsutilities](https://archive.codeplex.com/?p=mdsutilities)
* [MDSManager](https://archive.codeplex.com/?p=mdsmanager)
There are a few challenges that you'll have to think about though:
1. **Metadata update:** When a new entity gets added / modified / attributes modified, how does your wrapper service change its code? At runtime, or do you run a separate small utility to update the DLLs, etc.?
2. **Security:** it is best to keep security within MDS.. the list of who has access to what. So basically you'd want the users to use your web service / app which in turn will invoke the MDS WCF. if you're using AD based security, this isn't that easy because your wrapper will need to Impersonate the end user, hence invoke the MDS WCF via the end user's credentials so that correct access control can be applied. I know there are ways to handle this but requires some DC level changes for the server/ID that your wrapper service will be running on.
If you do go ahead with this, do make sure to share your insights & experience with us all.
Best of luck !
Upvotes: 3 [selected_answer]<issue_comment>username_2: Althoug the samples are still there codeplex isn't active anymore, unfortunately.
You can also find samples on Github:
[SQL MDS Samples](https://github.com/Microsoft/sql-server-samples/tree/master/samples/features/master-data-services)
Upvotes: 1
---
2018/03/14 | 485 tokens | 1,822 chars
<issue_start>username_0: I stumbled upon this infinite loop. I was supposed to check that the user's input is not of any type other than integer and also not a negative number.
Why am I getting this infinite loop and more importantly how do I impose these restrictions on the input?
```
#include <iostream>
using namespace std;
int main(){
long long int m, k;
cin >> m >> k;
while (cin.fail() || m < 0 || k < 0){
cin.clear();
cout << "please enter another input";
cin >> m >> k;
}
return 0;
}
```<issue_comment>username_1: If `cin.fail()` is true, the loop condition holds and we go around again.
After the first attempt at reading two numbers fails, you clear the status and try again. This is without reading the offending item. It is bound to fail again, as the data is left in the stream.
You need to add logic so that on a failure to read, you take the appropriate action, like reading and discarding the offending data.
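Applied to the loop from the question, the fix could look like this (a sketch based on the approach above: clear the fail bit, then discard the rest of the bad line):
```
#include <iostream>
#include <limits>

int main() {
    long long int m, k;
    std::cin >> m >> k;
    while (std::cin.fail() || m < 0 || k < 0) {
        std::cin.clear();  // unset the fail bit
        // discard the rest of the bad line so the next read starts fresh
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
        std::cout << "please enter another input";
        std::cin >> m >> k;
    }
    return 0;
}
```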
Upvotes: 1 <issue_comment>username_2: If operator `>>` fails due to input not matching the target type, the characters remain in the stream (even if you clear the fail bit).
So if you repeat the same read operation again and again, it will fail again and again. Usually you will skip/ignore characters in such a case. The following example is directly taken from [cppreference/ignore](http://en.cppreference.com/w/cpp/io/basic_istream/ignore):
```
#include <iostream>
#include <limits>
#include <sstream>

int main()
{
std::istringstream input("1\n"
"some non-numeric input\n"
"2\n");
for(;;) {
int n;
input >> n;
if (input.eof() || input.bad()) {
break;
} else if (input.fail()) {
input.clear(); // unset failbit
input.ignore(std::numeric_limits<std::streamsize>::max(), '\n'); // skip bad input
} else {
std::cout << n << '\n';
}
}
}
```
Upvotes: 2
---
2018/03/14 | 341 tokens | 1,186 chars
<issue_start>username_0: So after running this code
```
'use strict';
var myNewArray = Array(...[,,]);
console.log(myNewArray);
```
I get this
```
[undefined, undefined]
```
Can anyone explain to me why I get only 2 undefined?<issue_comment>username_1: The array `[,,]` consists of elided elements, which read back as `undefined`.
The last item is not counted because JavaScript considers the final comma a trailing comma, so the trailing elided element is simply dropped from the array.
```
[ , , ]
// ^^ -> Here is nothing, so array currently has 2 items with `undefined`
```
If you will give a value after it, it will be added in the result array.
```js
const myNewArray = Array(...[,,1]);
console.log(myNewArray);
```
Upvotes: 3 <issue_comment>username_2: In addition to Suren's answer, you can clearly see in the [specification](https://www.ecma-international.org/ecma-262/8.0/index.html#sec-literals-runtime-semantics-evaluation) that trailing elided elements are not counted into the length of the array.
>
> Elided array elements are not defined. If an element is elided at the end of an array, that element does not contribute to the length of the Array.
>
>
>
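A quick way to see that rule in action (the results follow from the spec text quoted above):
```js
console.log([,,].length); // 2 - the last comma is trailing, so it adds nothing
console.log([,,,].length); // 3
console.log([1, , 2].length); // 3 - a hole in the middle still contributes
```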
Upvotes: 1
---
2018/03/14 | 712 tokens | 2,448 chars
<issue_start>username_0: I have an XML file, let's call it Project.params with the following content
```
xml version="1.0"?
{8ff4ab3f-e607-4ccc-adc0-bececa310d17}
0
0
0
ZP
18
```
I need to update the "MyPassword" property with a string that has an ampersand. I tried the following code
```
$sourcePath = "C:\Project.params"
$Parameters = @{}
$key = "MyPassword"
#$value = "aabbcc"
$value = "aabbcc&"
$Parameters.add($key, $value)
[xml]$paramxml = Get-Content $sourcePath
$paramxml.Parameters.Parameter | ? { $_.Name -eq $key} |
% {
$parameterxmlnode = $_
$parameterxmlnode.Properties.Property | ? {$_.Name -eq "Value"} |
% {
$oldval = $_
if ($oldval.InnerText -ne $value)
{
Write-Host "Updating property" $key "..."
$oldval.set_InnerXML($value)
$ismodified = $true
}
}
}
```
I get the following error
```
Exception calling "set_InnerXml" with "1" argument(s): "Unexpected end of file has occurred. Line 1, position 7."
At line:22 char:25
+ $oldval.set_InnerXML($value)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], MethodInvocationException
+ FullyQualifiedErrorId : DotNetMethodException
```
However, if I just remove the ampersand from the `$value` variable, everything works as expected. Can someone please help?<issue_comment>username_1: is this what you are looking for:
```
[xml]$xml = @'
xml version="1.0"?
{8ff4ab3f-e607-4ccc-adc0-bececa310d17}
0
0
0
ZP
18
'@
$xml.Parameters.Parameter.Name
$xml.Parameters.Parameter.Name = '&MyPassword'
$xml.Parameters.Parameter.Name
```
Upvotes: 0 <issue_comment>username_2: `&` needs to be escaped as `&`, as does quotes and angle brackets.
You can use [`SecurityElement.Escape()`](https://msdn.microsoft.com/en-us/library/system.security.securityelement.escape(VS.80).aspx) to escape the entire string for you:
```
$escapedValue = [System.Security.SecurityElement]::Escape($value)
$oldval.InnerXML = $escapedValue
```
... or, as @PetSerAl suggests, use the appropriate property `InnerText` (rather than `InnerXml`), this will take care of the escaping automatically:
```
$oldval.InnerText = $value
```
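For illustration, here is what `Escape()` would produce for the value from the question (output based on the standard XML escaping rules):
```
PS> [System.Security.SecurityElement]::Escape('aabbcc& "<>')
aabbcc&amp; &quot;&lt;&gt;
```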
Upvotes: 2 [selected_answer]
---
2018/03/14 | 872 tokens | 2,603 chars
<issue_start>username_0: I was "playing" around with async calls on javascript and I am trying to make the next example work, but can't:
```
// calls function f on next event loop
function callAsync(f, ...args) {
setTimeout(() => f(...args), 0);
}
var f = (a, b, cb) => cb(null, a + b);
f2 = () => {
a = 2;
b = 3;
setTimeout(() =>
f(a, b, (err, res) => {
if (!err) console.log(res);
}), 0);
callAsync(f, a, b, (err, res) => {
if (!err) console.log(res);
});
a = 10;
b = -10;
}
f2();
```
So, by running this, you get as a result:
```
0
5
```
Why callAsync does not work - that is my question!<issue_comment>username_1: The difference is the time at which `a` and `b` are evaluated. When you pass a closure to `setTimeout`, the variables are getting read when the closure that uses them is executed - after you had reassigned them. When you pass `a` and `b` to `callAsync`, they are evaluated as part of the call - immediately. The closure inside `callAsync` will only see the value `args = [2, 3, (…)=>{…}]`.
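A minimal illustration of that difference (this snippet is mine, not from the question):
```js
let x = 1;
function callLater(f, ...args) { setTimeout(() => f(...args), 0); }

setTimeout(() => console.log('closure sees', x), 0); // x is read when the callback runs
callLater(v => console.log('argument was', v), x); // x is copied right now
x = 99;
// logs "closure sees 99" first, then "argument was 1"
```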
Upvotes: 2 <issue_comment>username_2: This might make it clear. Here, approximately, are the steps taken:
```
define callAsync
define f
define f2
f2()
a = 2
b = 3
+------- setTimeout(() => f(a, b, callbackFn1), 0)
| callAsync(f, 2, 3, callbackFn2)
| +------- setTimeout(() => f(2, 3, callbackFn2), 0)
| | a = 10
| | b = -10
| | end f2
+----> f(10, -10, callbackFn1)
| callbackFn1(null, 10 + -10)
| if (!err) console.log(res)
| console.log(0) //~> 0
| end f
+--> f(2, 3, callbackFn2)
callbackFn2(null, 2 + 3)
if (!err) console.log(res)
console.log(5) //~> 5
end f
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: Your callAsync is completely async, but the values of `a` and `b` are copied at the time you call `callAsync`. It is equivalent to:
```
setTimeout((a,b) =>
f(a, b, (err, res) => {
if (!err) console.log(res);
}), 0, a, b);
```
To achieve what you want, you could use an object: since objects are reference types, the underlying object is not copied when you pass it as an argument:
```js
// calls function f on next event loop
function callAsync(f, ...args) {
setTimeout(() => f(...args), 0);
}
var f = (data, cb) => cb(null, data.a + data.b);
var data = {
a: 2,
b: 3,
};
callAsync(f, data, (err, res) => {
if (!err) console.log(res);
});
data.a = 10;
data.b = -10;
```
Upvotes: 1
---
2018/03/14 | 1,601 tokens | 4,921 chars
<issue_start>username_0: Can't figure out how to filter a nested `belongs_to` association while searching in rails_admin.
I have the following code in my rails_admin config in the model.
```
class Painting < ApplicationRecord
...
rails_admin do
configure :translations, :globalize_tabs
create do
field :author do
searchable [:last_name]
queryable [:last_name]
filterable true
end
field :genre
field :technique
field :material
field :active
field :painting_images
field :author_preview
field :translations
end
end
```
[](https://i.stack.imgur.com/QQhZd.png)
I tried every configuration in model. But nothing works. Whatever I enter in this dropdown input is being ignored.
I have the following logs in puma server.
```
Started GET "/admin/author?associated_collection=author&compact=true¤t_action=create&source_abstract_model=painting&query=George" for 127.0.0.1 at 2018-03-14 22:07:11 +0300
Processing by RailsAdmin::MainController#index as JSON
Parameters: {"associated_collection"=>"author", "compact"=>"true", "current_action"=>"create", "source_abstract_model"=>"painting", "query"=>"George", "model_name"=>"author"}
Admin Load (0.3ms) SELECT "admins".* FROM "admins" WHERE "admins"."id" = $1 ORDER BY "admins"."id" ASC LIMIT $2 [["id", 1], ["LIMIT", 1]]
Painting Load (0.3ms) SELECT "paintings".* FROM "paintings" WHERE "paintings"."id" IS NULL ORDER BY "paintings"."id" DESC LIMIT $1 [["LIMIT", 1]]
(0.3ms) SELECT COUNT(*) FROM "authors"
Author Load (0.4ms) SELECT "authors".* FROM "authors" ORDER BY authors.id desc LIMIT $1 [["LIMIT", 30]]
Author::Translation Load (0.6ms) SELECT "author_translations".* FROM "author_translations" WHERE "author_translations"."author_id" IN (114, 113, 112, 111, 110, 109, 108, 107, 106, 105, 104, 103, 102, 101, 100, 99, 98, 97, 96, 95, 94, 93, 92, 91, 90, 89, 88, 87, 86, 85)
Completed 200 OK in 61ms (Views: 1.4ms | ActiveRecord: 1.9ms)
```
The thing is, everything I enter in the input is passed through in the parameters but never gets evaluated. So it's just ignored; it doesn't filter the associated collection by `params[:query]`.
I use the `globalize` gem for translations and don't know how to properly configure rails_admin to use its filters.
I did some searching in the original repo.
I think [this](https://github.com/sferik/rails_admin/blob/master/app/controllers/rails_admin/main_controller.rb#L128) is somehow wrong:
```
def get_collection(model_config, scope, pagination)
...
options = options.merge(query: params[:query]) if params[:query].present?
...
model_config.abstract_model.all(options, scope)
end
```
This query is available here, but it's ignored there.
I tried to use
`model_config.abstract_model.all(options, scope).where(last_name: params[:query])`
And this is working. But this is definitely wrong. Can someone help me with a more generic approach to this problem?
---
2018/03/14 | 688 tokens | 2,133 chars
<issue_start>username_0: So I just opened a fresh project in angular CLI and I wanted to install Material with:
>
> npm install --save @angular/material @angular/cdk
>
>
>
But it gave me the following errors:
```
C:\Users\TijlD\Desktop\projects\untitled46>npm install --save @angular/material @angular/cdk
npm WARN @angular/animations@5.2.8 requires a peer of @angular/core@5.2.8 but none was installed.
npm WARN @angular/cdk@5.2.4 requires a peer of @angular/core@^5.0.0 but none was installed.
npm WARN @angular/cdk@5.2.4 requires a peer of @angular/common@^5.0.0 but none was installed.
npm WARN @angular/material@5.2.4 requires a peer of @angular/core@^5.0.0 but none was installed.
npm WARN @angular/material@5.2.4 requires a peer of @angular/common@^5.0.0 but none was installed.
```
Does anyone know what this means and how to get rid of it?
**Edit: when I check the Angular version**
```
Angular CLI: 1.7.3
Node: 9.3.0
OS: win32 x64
Angular: 5.2.8
... animations, common, compiler, compiler-cli, core, forms
... http, language-service, platform-browser
... platform-browser-dynamic, router
@angular/cli: 1.7.3
@angular-devkit/build-optimizer: 0.3.2
@angular-devkit/core: 0.3.2
@angular-devkit/schematics: 0.3.2
@ngtools/json-schema: 1.2.0
@ngtools/webpack: 1.10.2
@schematics/angular: 0.3.2
@schematics/package-update: 0.3.2
typescript: 2.5.3
webpack: 3.11.0
```<issue_comment>username_1: NPM is telling you that Angular Material requires Angular 5.2 or a newer version. Probably you are using Angular 4 or lower.
Angular Material does not install Angular, it requires you to install it (the same way Angular requires you to have RxJS).
Check your package.json to see what version of Angular you have.
Also, update your global angular-cli package, probably you've got an older one.
You can check with `ng --version`, and can update it with `npm update -g @angular/cli`
Upvotes: 4 [selected_answer]<issue_comment>username_2: You are trying to install the latest version of Angular Material, which does not support Angular 5.2.
Please update your Angular version, or try installing a lower version of Angular Material.
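For example, pinning both packages to a specific release whose peer range matches your installed Angular could look like this (versions are illustrative; pick the ones matching your project):
```
npm install --save @angular/material@5.2.4 @angular/cdk@5.2.4
```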
Upvotes: 0
---
2018/03/14 | 269 tokens | 1,187 chars
<issue_start>username_0: I am using Alibaba-cloud. Using its Object Storage Service Image processing, I need to generate square thumbnails of images. But those thumbnails must be resized in a way that it fits the square perfectly. **They shouldn't lose its aspect ratio** and the extra portions must get cropped. The thumbnails should look just like the thumbnail images that are shown in any gallery app on phones. How should I achieve this?
---
2018/03/14 | 479 tokens | 1,548 chars
<issue_start>username_0: I have below data
```
empid date amount
1 12-FEB-2017 10
1 12-FEB-2017 10
1 13-FEB-2017 10
1 14-FEB-2017 10
```
I need a query to return the total amount for a given id and date, i.e. the below result set:
```
empid date amount
1 12-FEB-2017 20
1 13-FEB-2017 10
1 14-FEB-2017 10
```
but the thing is, from the UI I will be getting the date as input: if they pass the date, return the result for that date; if they don't pass a date, return the result for the most recent date.
Below is the query that I wrote, but it is only partially working:
```
SELECT sum(amount),empid,date
FROM employee emp,
where
((date= :ddd) OR aum_valutn_dt = (select max(date) from emp))
AND emp.id = '1'
group by (empid,date)
```
Please help..
---
2018/03/14 | 827 tokens | 2,827 chars
<issue_start>username_0: I've looked all over the official Dart site. They go into great depth about 'dev' channels and 'stable' channels but give no clue whatsoever about how to actually install a version of the SDK. There is no information about the current stable and dev versions either.
Clearly I have missed something hiding in plain sight. I want to try out some official AngularDart component libraries, but the builder requires Dart version 2 and I have no idea how to get that.
Can someone put me out of my misery, and tell me how I'm being an idiot
Thanks<issue_comment>username_1: Ahhhhhhh. If you go to <https://www.dartlang.org/install/archive>
And then wait for about 10 minutes, you suddenly get some links. - Problem solved.
Upvotes: 0 <issue_comment>username_2: Installation instructions for dart v2 (which currently only has a dev channel release) can be found [here](https://www.dartlang.org/install#about-sdk-release-channels-and-version-strings). To summarize,
Mac Installation with homebrew
------------------------------
```
brew tap dart-lang/dart
brew install dart --devel
```
**Edit:** If you are upgrading from Dart 1 see, [How to upgrade to Dart 2?](https://stackoverflow.com/a/50969837/3046255)
Windows Installation
--------------------
Visit [here](http://www.gekorm.com/dart-windows/) for the graphical installer or if you have Chocolatey installed do:
```
choco install dart-sdk -version 2
```
Linux
-----
```
sudo apt-get update
sudo apt-get install apt-transport-https
sudo sh -c 'curl https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -'
sudo apt-get update
export PATH=/usr/lib/dart/bin:$PATH
sudo sh -c 'curl https://storage.googleapis.com/download.dartlang.org/linux/debian/dart_unstable.list > /etc/apt/sources.list.d/dart_unstable.list'
sudo apt-get install dart
```
Upvotes: 4 [selected_answer]<issue_comment>username_3: The latest version in the `dev` channel is Dart 2 (2.0.0-alpha.36).
There is no released Dart 2 yet.
This VM and tools can still process most Dart 1 code.
There are several flags to enable Dart 2 features for the VM or tools
```
--preview-dart-2
--reify-generic-functions
--reify-generics
--limit-ints-to-64-bits
--sync-async
```
these options are highly experimental and in flux.
Not all tools support the same set of options.
New options might be added or existing ones removed when the behavior becomes the default without previous anouncement.
I found above options in <https://github.com/dart-lang/sdk/blob/b0a2e6b9c99d8a13ecd59d1636d5201adc77fc07/tools/testing/dart/test_suite.dart>
Upvotes: 0 <issue_comment>username_4: For Windows, go to dart.dev to install Dart.
If you want to check whether Dart is installed on your PC:
1. Open a terminal and type the command `dart --version`, which will print the Dart version,
e.g. 2.28 (sdkDart)
Upvotes: 0
---
2018/03/14 | 1,704 tokens | 5,522 chars
<issue_start>username_0: I was editing some batch files that use doff.exe to set a date variable, then search through the subfolders of a folder named for the date to find a particular file, copy it, rename it, then process it. All of a sudden, without changing the way it was written, the for /r loop is adding a period to the file path. So, after using doff and navigating to the date-named directory this is my code:
>
> for /r %%d in (.) do copy %%d\FILE.TXT Y:\Folder\subfolder\other_folder\
>
>
>
It used to output this:
```
X:\Stuff\other_stuff\20180314>copy X:\Stuff\Other_stuff\20180314\FILE.TXT Y:\Folder\subfolder\other_folder
The system cannot find the file specified.
```
until it found the file, then it would copy it. Now, instead, it returns this (note the extra .\ in front of the filename):
```
X:\Stuff\other_stuff\20180314>copy X:\Stuff\Other_stuff\20180314\.\FILE.TXT Y:\Folder\subfolder\other_folder
The system cannot find the file specified.
```
And never finds the file, because of the extra .\ in the file path. I haven't changed anything that I know of, we use this code string in tons of places, I feel like I'm taking crazy pills. In case it's relevant, here's the full script to that point:
```
for /f "tokens=1-3 delims=/ " %%a in ('doff yyyymmdd') do (
set mm2=%%a
set dd2=%%b
set yyyy2=%%c)
X:
cd Stuff
cd Other_stuff
cd %mm2%
for /r %%d in (.) do copy %%d\FILE.TXT Y:\Folder\subfolder\other_folder\
<issue_comment>username_1: The trailing period has **always** been displayed when a single period is used for the set with `FOR /R`. If you want to get rid of it, then do this:
```
for /r %%G in (.) do copy %%~fG\FILE.TXT Y:\Folder\subfolder\other_folder\
```
Upvotes: 0 <issue_comment>username_2: First of all, commands do not usually change their behaviour all of a sudden without anything having been changed. I can assure you `for` does not change just like that.
Furthermore, an additional `.` in a file path does no harm at all, because `.` just means the current directory. Therefore `D:\a\b\c.ext` points to the same target as `D:\a\.\b\.\c.ext`.
(Just for the sake of completeness, there is also `..` which points to the parent directory, so it goes one level up, hence `D:\a\b\c.ext` points to the same item as `D:\a\b\d\..\c.ext`.)
Anyway, to understand what is happening and where the extra `.` comes from, you need to know that the [`for` command](http://ss64.com/nt/for.html "FOR") (without and with the [`/R` option](http://ss64.com/nt/for_r.html "FOR /R")) does not access the file system always but only when needed.
The plain `for` command accesses the file system only when at least a [wildcard](https://ss64.com/nt/syntax-wildcards.html "Wildcards") like `*` or `?` is used in the set (the set is the part in between the parentheses following the loop variable reference; type `for /?`). You can easily confirm that with the following code snippet typed into a command prompt window (to use this in a batch file, double the `%`-signs):
```cmd
>>> rem /* List all `*.txt` and `*.bin` files in current directory;
>>> rem the example output below lists two files: */
>>> for %I in (*.txt *.bin) do @echo %I
sample.txt
data.bin
>>> rem // Print all items even if there are no such files:
>>> for %I in (just/text no/file) do @echo %I
just/text
no/file
```
The first `for` loop returns existing files only, so the file system needs to be accessed. The second loop just returns the given items without accessing the file system. Both items contain a character (`/`) that is not allowed in file names, so there cannot exist such files.
For the set of `for /R` (that is again the part in between `()`) it is exactly the same. However, to retrieve the recursive directory tree, the file system has to be accessed of course. So check out the following code snippet (given the current directory is `C:\Test` that contains 3 sub-directories):
```cmd
>>> rem /* List all `*.txt` and `*.bin` files in current directory tree;
>>> rem the example output below lists some files: */
>>> for /R %I in (*.txt *.bin) do @echo %I
C:\Test\sample.txt
C:\Test\sub1\any.bin
C:\Test\sub1\data.bin
C:\Test\sub2\text.txt
>>> rem // Print all items even if there are no such files:
>>> for /R %I in (just/text no/file) do @echo %I
C:\Test\just/text
C:\Test\no/file
C:\Test\sub1\just/text
C:\Test\sub1\no/file
C:\Test\sub2\just/text
C:\Test\sub2\no/file
C:\Test\sub3\just/text
C:\Test\sub3\no/file
```
As you can see in the first example the file system is accessed and only matching files are returned. In the second example, the directory tree is retrieved from the file system, but the items in the set are just appended and not checked against the file system.
The `.` is nothing special to `for` or `for /R`, it is just a string without a wildcard, so the files system is not checked. Therefore in our test scenario the output would be:
```cmd
>>> rem // `.` is the only character in the set:
>>> for /R %I in (.) do @echo %I
C:\Test\.
C:\Test\sub1\.
C:\Test\sub2\.
C:\Test\sub3\.
```
I hope this explains the behaviour you are facing!
Just for the sake of completeness:
If you want to get rid of the appended `.` for cosmetic reasons, nest another (standard) `for` loop and use the [`~f` modifier](https://ss64.com/nt/syntax-args.html "Command Line arguments (Parameters)"):
```cmd
>>> rem // `.` is the only character in the set:
>>> for /R %I in (.) do @for %J in (%I) do @echo %~fJ
C:\Test
C:\Test\sub1
C:\Test\sub2
C:\Test\sub3
```
Upvotes: 1
---
2018/03/14 | 1,121 tokens | 3,869 chars
<issue_start>username_0: The following code doesn't work for me although a similar one works (<http://85.255.14.137>). The CHANGE\_ME text should get replaced by a a value on each change of the input field. How can I correct that?
```html
$(document).on('change', '.input', (function(){
var par = $(this);
var val = $(this).val();
par.find("span.value").hmtl(val));
});
<p id="A"><input class="input" type="text"></p>
<p id="A"><span class="value">CHANGE_ME</span></p>
```<issue_comment>username_1: There are a few issues with your code. For one thing you spelled `html` wrong, another issue is that you are looking for `span` with the class of `value` *within* the `input`, but it's not inside the input. You can write it like this instead:
```js
$('.input').change(function() {
var val = $(this).val();
$("span.value").html(val);
});
```
```html
<input class="input" type="text">
<span class="value">CHANGE_ME</span>
```
Another issue, unrelated to this problem, is that you have two elements with the `ID` of `A`, which is not valid. `ID`s must be unique.
Upvotes: 0 <issue_comment>username_2: In addition to the `hmtl` typo, you're traversing/searching for a `span` with class `value` inside the input using the `.find` method. Unfortunately, it's not present there.
Instead of traversing, you can directly access the `span` with `$(".value").html(val);`
Upvotes: -1 <issue_comment>username_3: First: as [Taplar](https://stackoverflow.com/users/1586174/taplar) said, your supplied code has an extra `)` after you call the `.html()` method. As [username_1](https://stackoverflow.com/users/2827407/apad1) said, you also shouldn't have two elements with the same ID, but that's not causing your issue.
Second: this problem is a great opportunity to think carefully about DOM structure. As it stands, you can't `find()` from `$(this)` because `$(this)` is the input element that fired the `change` event. The input has no children, so `.find()` will never return anything. Because your input is inside a paragraph tag and your target span is in a paragraph that is the sibling of that parent, you have to do some complicated traversals up the dom, over, and down, like so:
```js
$(document).on('change', '.input', function() {
var par = $(this);
var val = $(this).val();
par.closest('p').siblings('p').find('span.value').text(val);
});
```
```html
<p id="A"><input class="input" type="text"></p>
<p id="A"><span class="value">CHANGE_ME</span></p>
```
We can avoid this problem if we approach our DOM structure more meaningfully, and think a bit differently about how we handle the change event.
First, let's use more semantic tags in our form. [Inputs need labels](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/label); and also keep in mind that [the output element](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/output) is a better tag for displaying content that comes out of using an input:
```html
<form>
  <label>Pick a number <input type="number"></label>
  <output>CHANGE_ME</output>
</form>
```
With that structure, DOM traversal is a lot easier!
```js
$(document).on('change', 'form', function(e) {
// the form whose input fired `change`
const $self = $(e.currentTarget);
const $output = $self.find('output');
const val = $self.find('input').val();
  $output.text(val);
});
```
(Note that we [don't use `this`](http://fizzy.school/replace-jquery-this) to reference the form element. `this` can be unclear, and we have better ways of referencing the element jQuery is interacting with, like the `event.currentTarget` or the `event.target`. The two are [subtly different](https://stackoverflow.com/a/5921615/6798240)!)
And here's the snippet with these changes together, for the lazy: :)
```js
$(document).on('change', 'form', function(e) {
// the form whose input fired `change`
const $self = $(e.currentTarget);
const $output = $self.find('output');
const val = $self.find('input').val();
$output.text(val);
});
```
```css
* {
box-sizing: border-box;
}
output {
display: block;
}
```
```html
<form>
  <label>Pick a number <input type="number"></label>
  <output>CHANGE_ME</output>
</form>
```
Upvotes: 2 [selected_answer]
---
2018/03/14 | 1,688 tokens | 6,311 chars
<issue_start>username_0: In my Javascript client, I'm using Fetch API to call the server to retrieve a server-generated file. I'm using the following client-side code:
```
var _url = "";
var initParms = {
method: "GET",
mode: 'cors'
}
fetch(_url, initParms)
.then(response => {
if(response.ok){
alert(response.headers.get("content-disposition"));
return response.blob();
}
throw new Error("Network response was not OK.");
})
.then(blob => {
var url = new URL.createObjectURL(blob);
})
```
This actually works just fine. However, the server generates a `filename` for the file and includes it in the response as part of the `content-disposition` header.
I need to save this file to the user's machine using the `filename` generated by the server. In Postman, I can actually see that the `content-disposition` header of the response is set to: `Content-Disposition: attachment;filename=myfilename.txt`.
I made an attempt to read the `content-disposition` from the response (see the alert in my JS code), but I always get `null` (even though the same response shows the `content-disposition` in Postman).
Am I doing something wrong? Is there a way to retrieve the `filename` using the fetch response? Is there a better way to get the `filename` from the server along with the file?
P.S. This is my server-side code for returning the file:
**Controller Action**
```
public IHttpActionResult GetFile(){
return new FileResult("myfilename.txt","Hello World!");
}
```
**FileResult Class**
```
public class FileResult : IHttpActionResult
{
private string _fileText = "";
private string _fileName = "";
private string _contentType = "";
public FileResult(string name, string text)
{
_fileText = text;
_fileName = name;
_contentType = "text/plain";
}
    public Task<HttpResponseMessage> ExecuteAsync(CancellationToken token)
    {
        Stream _stream = null;
        if (_contentType == "text/plain")
        {
            var bytes = Encoding.Unicode.GetBytes(_fileText);
            _stream = new MemoryStream(bytes);
        }
        return Task.Run(() =>
        {
            var response = new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StreamContent(_stream),
            };
            response.Content.Headers.ContentType =
                new MediaTypeHeaderValue(_contentType);
            response.Content.Headers.ContentDisposition =
                new ContentDispositionHeaderValue("attachment")
                {
                    FileName = _fileName
                };
            return response;
        }, token);
    }
}
```
**Edit**
My question was specifically about the fetch not the ajax api. Also, in my code, I showed that I was already reading the header from the response exactly like the accepted answer demonstrated on the suggested answer. However, as stated in my post, this solution was not working with fetch.<issue_comment>username_1: So, shortly after posting this question, I ran across [this issue on Github](https://github.com/matthew-andrews/isomorphic-fetch/issues/67). It apparently has to do with using CORS.
The suggested work around was adding `Access-Control-Expose-Headers:Content-Disposition` to the response header on the server.
This worked!
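For the Web API code in the question, one place this could go is the `HttpResponseMessage` built in `ExecuteAsync` (a sketch; if you use a CORS middleware or the `EnableCors` attribute, configuring it there centrally is cleaner):
```csharp
var response = new HttpResponseMessage(HttpStatusCode.OK)
{
    Content = new StreamContent(_stream),
};
// Allow cross-origin JavaScript to read the Content-Disposition header:
response.Headers.Add("Access-Control-Expose-Headers", "Content-Disposition");
```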
Upvotes: 6 [selected_answer]<issue_comment>username_2: You can extract the filename from the content-disposition header like this:
```
let filename = '';
fetch(`/url`, { headers: { Authorization: `Bearer ${token}` }}).then((res) => {
const header = res.headers.get('Content-Disposition');
const parts = header!.split(';');
filename = parts[1].split('=')[1];
return res.blob();
}).then((blob) => {
// Use `filename` here e.g. with file-saver:
// saveAs(blob, filename);
});
```
Upvotes: 4 <issue_comment>username_3: Decided to post this, as the accepted answer (although helpful to many people) does not actually answer the original question as to:
>
> *"How to get the filename from a file downloaded using JavaScript Fetch API?"*.
>
>
>
One can read the `filename` (as shown in the example below), and download the file using a similar approach to [this](https://stackoverflow.com/a/42274086/17865804) (the recommended `downloadjs` library in that post is no longer being maintained; hence, I wouldn't suggest using it). The below also takes into account scenarios where the `filename` includes unicode characters (i.e.,`-, !, (, )`, etc.) and hence, comes (`utf-8` encoded) in the form of, for instance, `filename*=utf-8''Na%C3%AFve%20file.txt` (see [here](https://stackoverflow.com/questions/93551/how-to-encode-the-filename-parameter-of-content-disposition-header-in-http) for more details). In such cases, the [`decodeURIComponent()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/decodeURIComponent) function is used to decode the `filename`.
Working Example
---------------
```js
const url = 'http://127.0.0.1:8000/';
let filename = '';
fetch(url)
.then(res => {
const disposition = res.headers.get('Content-Disposition');
filename = disposition.split(/;(.+)/)[1].split(/=(.+)/)[1];
if (filename.toLowerCase().startsWith("utf-8''"))
filename = decodeURIComponent(filename.replace(/utf-8''/i, ''));
else
filename = filename.replace(/['"]/g, '');
return res.blob();
})
.then(blob => {
var url = window.URL.createObjectURL(blob);
var a = document.createElement('a');
a.href = url;
a.download = filename;
document.body.appendChild(a); // append the element to the dom
a.click();
a.remove(); // afterwards, remove the element
});
```
If you are doing a [cross-origin](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS) request, make sure to set the [`Access-Control-Expose-Headers`](https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#access-control-expose-headers) response header on the server side, in order to expose the [`Content-Disposition`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition) header; otherwise, the `filename` won't be accessible on the client side through JavaScript (see further documentation [here](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers)). For instance:
```py
headers = {'Access-Control-Expose-Headers': 'Content-Disposition'}
return FileResponse("Naïve file.txt", filename="Naïve file.txt", headers=headers)
```
Upvotes: 4
---
2018/03/14 | 491 tokens | 1,425 chars
<issue_start>username_0: ```
def pascalstriangle(n):
list = [1]
for i in range(n):
print(list)
newlist = []
newlist.append(list[0])
for i in range(len(list) - 1):
newlist.append(list[i] + list[i + 1])
newlist.append(list[-1])
list = newlist
print(pascalstriangle(3))
```
I want to see this print format `[1,1,1,1,2,1]` instead of
```
[1]
[1, 1]
[1, 2, 1]
```<issue_comment>username_1: You can combine two lists in Python like this:
```
print([1, 2] + [3])
> [1, 2, 3]
```
I hope that helps!
Upvotes: 0 <issue_comment>username_2: Just merge two lists :
```
final_list = list1 + list2
```
Upvotes: 0 <issue_comment>username_3: To fix the above code, whenever you are printing `list` you can add those to other list (*in below code adding them to `final_list` with `extend`*) and finally return it at end of the function:
```
def pascalstriangle(n):
list = [1]
final_list = []
for i in range(n):
final_list.extend(list)
newlist = []
newlist.append(list[0])
for i in range(len(list) - 1):
newlist.append(list[i] + list[i + 1])
newlist.append(list[-1])
list = newlist
return final_list
print(pascalstriangle(3))
```
Result:
```
[1, 1, 1, 1, 2, 1]
```
*One thing you may want to consider is not using `list` as a variable name, and using something like `my_list` instead.*
Upvotes: 1
---
2018/03/14 | 2,090 tokens | 7,671 chars
<issue_start>username_0: Why do I get this error? I tried to clean and rebuild the
application and set the application to a release build, and I get the same error:
>
> Error:Execution failed for task ':app:lintVitalRelease'.
> java.lang.IllegalStateException: Expected BEGIN\_ARRAY but was STRING at line 1 column 1 path $
>
>
>
```
apply plugin: 'com.android.application'
android {
compileSdkVersion 25
buildToolsVersion '26.0.2'
useLibrary 'org.apache.http.legacy'
defaultConfig {
applicationId "x.x.x"
minSdkVersion 15
targetSdkVersion 25
versionCode 95
versionName '5'
multiDexEnabled true
}
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
productFlavors {
}
}
```<issue_comment>username_1: Based off this Post
**Edit:**
I removed the link because the thread is no longer there
What you need to do is add this chunk of code to your build.gradle file
in the android{} section
```
lintOptions {
checkReleaseBuilds false
}
```
So like this
```
android {
...
lintOptions {
checkReleaseBuilds false
}
}
```
**Update:**
Here is another post talking about a similar problem. It seems like there are various reasons this error can occur. While disabling `checkReleaseBuilds` will work, it's recommended to find what the problem is and fix it. The most common error seems to be missing translations in the `strings.xml` file.
I recommend checking out this post for more help
[Error when generate signed apk](https://stackoverflow.com/questions/24098494/error-when-generate-signed-apk)
Upvotes: 7 <issue_comment>username_2: ```
lintOptions {
checkReleaseBuilds false
abortOnError false
}
```
Upvotes: 4 <issue_comment>username_3: To find out why lint fails do this:
1. Run lintVitalRelease
You can do it from Gradle window
[](https://i.stack.imgur.com/XM39c.png)
2. Under Run tab, you will see error logs
[](https://i.stack.imgur.com/Twg1T.png)
For me, it was wrong constraint in ConstraintLayout XML.
[](https://i.stack.imgur.com/lM6cQ.png)
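If you prefer the command line over the Gradle window, running the task directly should surface the same errors (the `app` module name is taken from the error message; `--stacktrace` is optional):
```
./gradlew :app:lintVitalRelease --stacktrace
```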
Upvotes: 7 <issue_comment>username_4: The error report is saved to `[app module]/build/reports/lint-results-yourBuildName-fatal.html`. You can open this file in a browser to read about the errors.
src: <https://stackoverflow.com/a/50239165/365229>
Upvotes: 5 <issue_comment>username_5: Make sure to have
`jcenter()`
in both buildscript and allprojects in your build.gradle android{} section
Upvotes: 0 <issue_comment>username_6: Open your build.gradle file and add the code below under android:
```
android {
    lintOptions {
        checkReleaseBuilds false
    }
}
```
Upvotes: 5 <issue_comment>username_7: I was also getting an error like
```
> Failed to transform '/Users/michaelbui/Projects/counter/build/app/intermediates/flutter/debug/libs.jar' using Jetifier. Reason: FileNotFoundException, message: /Users/michaelbui/Projects/counter/build/app/intermediates/flutter/debug/libs.jar (No such file or directory). (Run with --stacktrace for more details.)
```
while building the app after the latest update, but then I disabled the lint task in android/app/build.gradle
```
lintOptions {
disable 'InvalidPackage'
checkReleaseBuilds false
}
```
and this worked for me.
Upvotes: 1 <issue_comment>username_8: File > Invalidate Caches/Restart... worked for me.
Upvotes: 3 <issue_comment>username_9: In my opinion don't use "checkReleaseBuilds false". This broke my whole release build and I only saw a white screen.
A current workaround is running:
```
flutter run --profile
flutter run --debug
flutter run --release
```
Currently that's the only thing that worked for me.
There is an open issue here: <https://issuetracker.google.com/issues/158753935>
Upvotes: 1 <issue_comment>username_10: **Simple and Working**
*Solution 1*
```
android {
lintOptions {
checkReleaseBuilds false
// Or, if you prefer, you can continue to check for errors in release builds,
// but continue the build even when errors are found:
abortOnError false
}
}
```
*Solution 2*
```
android {
lintOptions {
checkReleaseBuilds false
abortOnError false
}
}
```
*Solution 3*
```
android {
lintOptions {
disable 'MissingTranslation'
abortOnError false
}
}
```
>
> Notes: there are two ways to use the option ->
>
>
>
*1*
```
lintOptions {
//TODO
}
```
*2*
```
android {
lintOptions {
// TODO
}
}
```
**Thank You**
Upvotes: 4 <issue_comment>username_11: In my case there was some typos in the manifest that caused the error, just corrected them and it compiled
Upvotes: 0 <issue_comment>username_12: In my case it was happening because I was using an invalid(not existing) id to constraint a view, and when I tried to create a build, it gave me the error. What you can do is execute task,
```
./gradlew assembleRelease --stacktrace
```
This will give you the following options,
To proceed, either fix the issues identified by lint, or modify your build script as follows:
```
...
android {
lintOptions {
checkReleaseBuilds false
// Or, if you prefer, you can continue to check for errors in release builds,
// but continue the build even when errors are found:
abortOnError false
}
}
```
If you choose,
```
checkReleaseBuilds false
```
It will not check for errors in the release build and create a build,
and if you choose,
```
abortOnError false
```
It will create the build, and also list the error(s).
This way you can actually correct the errors.
Upvotes: 1 <issue_comment>username_13: I had some missing jars, and being behind a corporate firewall there was a blocked attempt to download them during `lintVitalRelease`. I disconnected from the VPN and re-ran the build, which downloaded the missing jars, and all returned to normal.
I knew it had to be something like that because other answers mention where error reports are generated, but nothing was generated there for me, so the build was getting stuck somewhere earlier.
Upvotes: 0 <issue_comment>username_14: If you have tried everything on Google but still can't build the release. You can follow my approach:
Firstly, if the Android Studio error log contains the phrase "debug/libs.jar", then you need to build --debug and then --release:
```
flutter build apk --debug
flutter build apk --release
```
What if "profile/libs.jar" appears
, build --profile then --release:
```
flutter build apk --profile
flutter build apk --release
```
If there is an error when building debug or profile, then you continue to search for that error on Google and fix it. In my case I had to remove the `permission_handler` library because it was causing the error.
Upvotes: 2 <issue_comment>username_15: Cleaning and rebuilding worked for me
```
flutter clean
flutter build apk --release
```
Upvotes: 0 <issue_comment>username_16: I have solved this by updating my `gradle-wrapper.properties` and `build.gradle`.
in `gradle-wrapper.properties` update `distributionUrl`:
```
distributionUrl=https\://services.gradle.org/distributions/gradle-7.2-all.zip
```
and in `android/build.gradle` change the Android Gradle Plugin version from 4.1.0 to 7.1.2:
```
classpath 'com.android.tools.build:gradle:7.1.2'
```
Hope it will help.
Upvotes: 3
|
2018/03/14
| 1,602 | 5,409 |
<issue_start>username_0: I am trying to develop some JS that will allow a form to change the value of a submitted radio button based on the user's age. Essentially, if the user is 13yo or younger AND selected the radio1 value '100', it should change the '100' to a blank.
Below is the code. No matter what I do, it seems to always see the user as being over the age of 13. Any help is appreciated!
```
var ofAge = "N"
function checkAge() {
var monInput = document.querySelector("#i_date_of_birth_month").value
var dayInput = document.querySelector("#i_date_of_birth_day").value
var yearInput = document.querySelector("#i_date_of_birth_year").value
var dateString = monInput + ", " + dayInput + ", " + yearInput
var birthDate = new Date(dateString);
var todayDate = new Date()
var age = todayDate.getFullYear() - birthDate.getFullYear();
var m = birthDate.getMonth() - todayDate.getMonth()
if (m < 0) {
age--
}
if (age > 13) {
ofAge = "Y"
} else if (age == 13) {
var d = birthDate.getDate() - todayDate.getDate()
if (d >= 0) {
ofAge = "Y"
} else {
ofAge = "N"
}
}
if (ofAge == "N") {
document.querySelector("#radio1").value = ""
} else {
document.querySelector("#radio1").value = "100"
}
}
document.getElementById("i_date_of_birth_day").addEventListener("change", checkAge)
document.getElementById("i_date_of_birth_year").addEventListener("change", checkAge)
document.getElementById("i_date_of_birth_month").addEventListener("change", checkAge)
```<issue_comment>username_1: The date string you are creating (with commas) is not in the correct format. But it would be better to set the day, month, and year directly
```
var monInput = document.querySelector("#i_date_of_birth_month").value
var dayInput = document.querySelector("#i_date_of_birth_day").value
var yearInput = document.querySelector("#i_date_of_birth_year").value
```
This is where your problem is I believe.
```
var dateString = monInput + ", " + dayInput + ", " + yearInput
```
Instead, you should create a default Date(), and use setFullYear, setMonth (warning: months in javascript are 0 based) and setDate
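For instance, a minimal sketch along those lines (reusing the question's input variables; note that `setFullYear` optionally accepts the month and day too):

```
var year = parseInt(yearInput, 10);
var month = parseInt(monInput, 10) - 1; // JavaScript months are 0-based
var day = parseInt(dayInput, 10);

var birthDate = new Date();
birthDate.setFullYear(year, month, day); // sets all three fields in one call
```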
Upvotes: 1 <issue_comment>username_2: You need to add `else` statement if age is less than 13.
`} else {
ofAge = "N";
}`
```js
var ofAge = "N"
function checkAge() {
var monInput = document.querySelector("#i_date_of_birth_month").value
var dayInput = document.querySelector("#i_date_of_birth_day").value
var yearInput = document.querySelector("#i_date_of_birth_year").value
var dateString = monInput + ", " + dayInput + ", " + yearInput
var birthDate = new Date(dateString);
var todayDate = new Date()
var age = todayDate.getFullYear() - birthDate.getFullYear();
var m = birthDate.getMonth() - todayDate.getMonth()
if (m < 0) {
age--
}
age = parseFloat(age);
if (age > 13) {
ofAge = "Y"
} else if (age == 13) {
var d = birthDate.getDate() - todayDate.getDate()
if (d >= 0) {
ofAge = "Y"
} else {
ofAge = "N"
}
} else {
ofAge = "N";
}
if (ofAge == "N") {
document.querySelector("#radio1").value = ""
} else {
document.querySelector("#radio1").value = "100"
}
}
document.getElementById("i_date_of_birth_day").addEventListener("change", checkAge)
document.getElementById("i_date_of_birth_year").addEventListener("change", checkAge)
document.getElementById("i_date_of_birth_month").addEventListener("change", checkAge)
```
```html
```
Upvotes: 2 <issue_comment>username_3: Building on [@username_2's answer](https://stackoverflow.com/a/49286446/5463636), you also need to change the way you're processing the date. Instead of commas and spaces, use a hyphen (`-`) in between each field to build the date object.
See the code below for @username_2's answer modified to correctly build the date. The `console.log()` calls are in there so you can see what's being generated and changed.
```js
var ofAge = "N"
function checkAge() {
var monInput = document.querySelector("#i_date_of_birth_month").value
var dayInput = document.querySelector("#i_date_of_birth_day").value
var yearInput = document.querySelector("#i_date_of_birth_year").value
var dateString = monInput + "-" + dayInput + "-" + yearInput
console.log("dateString: " + dateString);
var birthDate = new Date(dateString);
console.log("birthDate: " + birthDate);
var todayDate = new Date()
var age = todayDate.getFullYear() - birthDate.getFullYear();
var m = birthDate.getMonth() - todayDate.getMonth()
if (m < 0) {
age--
}
age = parseFloat(age);
if (age > 13) {
ofAge = "Y"
} else if (age == 13) {
var d = birthDate.getDate() - todayDate.getDate()
if (d >= 0) {
ofAge = "Y"
} else {
ofAge = "N"
}
} else {
ofAge = "N";
}
if (ofAge == "N") {
document.querySelector("#radio1").value = ""
} else {
document.querySelector("#radio1").value = "100"
}
console.log("radio1 value = " + document.querySelector("#radio1").value);
}
document.getElementById("i_date_of_birth_day").addEventListener("change", checkAge)
document.getElementById("i_date_of_birth_year").addEventListener("change", checkAge)
document.getElementById("i_date_of_birth_month").addEventListener("change", checkAge)
```
```html
```
Upvotes: 2 [selected_answer]
|
2018/03/14
| 1,280 | 4,316 |
<issue_start>username_0: Please tell me where to specify the upload_preset, and also help with the image uploading part.
```
let options : FileUploadOptions = {
params:{'upload_preset':'preset1'},
}
const fileTransfer: FileTransferObject = this.transfer.create();
fileTransfer.upload('assets/demo.jpg',
'https://api.cloudinary.com/v1_1/dvkvdp0bp/image/upload', options,true)
.then((data) => {
alert('DONE!');
}, (err) => {
// error
})
```
|
2018/03/14
| 777 | 2,718 |
<issue_start>username_0: I have a HashMap where the key is a bird species and the value is the number of perceptions. Here is my code:
```
public class Program {
public static void main(String[] args) {
HashMap<String, Integer> species = new HashMap<>();
Scanner reader = new Scanner(System.in);
species.put("hawk (buteo jamaicensis)", 2);
species.put("eagle (aquila chrysaetos)", 4);
species.put("sparrow (passeridae)", 5);
System.out.println("What specie?"); //output "chicken"
String specie = reader.nextLine();
for (HashMap.Entry<String, Integer> entry: species.entrySet()) {
if (entry.getKey().contains(specie)) {
System.out.println(entry.getKey()+" : "+entry.getValue()+" perceptions");
} else {
System.out.println("Not in database!");
}
}
}
}
```
How can I check if the species exists in the HashMap? For example, if the entered species is "chicken" and it's not in the database, then the program should print "Not in database!" only once. Now the output is:
```
Not in database!
Not in database!
Not in database!
```
And my goal output is:
```
Not in database!
```<issue_comment>username_1: Use a boolean flag for this:
```
boolean found = false;
for (HashMap.Entry entry: species.entrySet()) {
if (entry.getKey().contains(specie)) {
System.out.println(entry.getKey()+" : "+entry.getValue()+" perceptions");
found = true;
}
} //Loop ends
if (!found) {
System.out.println("Not in database!");
}
```
Upvotes: 2 <issue_comment>username_2: You can also use [Java 8 Streams](https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html) for this: (although I'd probably just recommend a for-loop instead)
```
Optional<Map.Entry<String, Integer>> match =
 species.entrySet().stream().filter(entry -> entry.getKey().contains(specie)).findAny();
if (match.isPresent())
System.out.println(match.get().getKey()+" : "+match.get().getValue()+" perceptions");
else
System.out.println("Not in database!");
```
`.stream` turns the entry set into a stream.
`.filter` removes all but the element we're looking for.
`.findAny` returns an element if one exists.
---
Although, if you're looping through a Map to find what you're looking for, that sort of undermines the purpose of a Map and you might want to opt for a List of some custom class, splitting the String into 2 and then having the key be the common English name (as laune recommended in the comments) or using a more complex data structure that allows for efficient substring lookup like a [suffix tree](https://en.wikipedia.org/wiki/Suffix_tree).
Upvotes: 0 <issue_comment>username_3: I think streams are overkill for that.
```
if(species.containsKey(selectedSpecie)) {
return species.get(selectedSpecie);
} else {
throw new IllegalStateException("Not in database!");
}
```
Upvotes: 0
|
2018/03/14
| 988 | 3,076 |
<issue_start>username_0: I want to output a sentence. When the sentence appears all individual letters should appear by rotating and getting larger.
My approach was the following:
What I did was to split up the sentence and put every letter in an individual `span` element. Then I transition the letters by using the CSS transform `-webkit-transform`. However, this doesn't work in Chrome unless you set `display: inline-block`. Then the rotation works as intended; however, the spaces are not displayed any more. This results in all words being tied together.
See below or [here](https://jsfiddle.net/ee2todev/vwze2ad4/19/)
```js
let containerDiv = "div.chart";
function displayText(_textArray) {
let sel = d3.select(containerDiv);
// add headers for all strings but the last one
for (let i = 0; i < _textArray.length - 1; i++) {
sel.append("div")
.attr("class", "header h" + i)
.append("h1")
.attr("class", "trans")
.text(_textArray[i]);
}
// add last string by wrapping each letter around a span
// which can be styled individually
let sel2 = sel.append("div")
.attr("class", "header h" + (_textArray.length - 1))
.append("h1")
.style("opacity", 1)
.attr("class", "trans");
const lastString = _textArray[_textArray.length-1];
for (let i = 0; i < lastString.length; i++) {
sel2.append("span")
.attr("class", "color-" + (i % 5))
.text(lastString[i]);
}
}
function transitionLetters(_selection){
_selection
.transition()
.duration(2000)
.delay((d,i) => i * 200)
.style("opacity", 1)
.style("-webkit-transform", "rotate(-720deg) scale(1)");
}
let myText = ["I like to eat", "ham and eggs"];
displayText(myText);
d3.selectAll("span").call(transitionLetters);
```
```css
div.header {
margin: 0 auto;
text-align: center;
width: max-content;
}
h1 span {
opacity: 0;
display: inline-block;
-webkit-transform: rotate(-0deg) scale(0.001);
}
.color-0 { color: rgb(255, 0, 171); }
.color-1 { color: rgb(0, 168, 255); }
.color-2 { color: rgb(171, 0, 255); }
.color-3 { color: rgb(255, 171, 0); }
.color-4 { color: rgb(168, 255, 0); }
```
```html
```<issue_comment>username_1: The spaces are there, they just aren't shown because a span won't show content that is only white space.
```
span { /* or some custom class for those spans */
white-space:pre;
}
```
from this answer:
<https://stackoverflow.com/a/19742967/7355135>
Upvotes: 3 [selected_answer]<issue_comment>username_2: To preserve the whitespaces, try adding the **white-space:pre** declaration to your h1 span selector.
```
h1 span {
opacity: 0;
display: inline-block;
-webkit-transform: rotate(-0deg) scale(0.001);
white-space:pre;
}
```
This post provides a summary and links for different options regarding how to deal with white space in CSS: [how to make space within span show up](https://stackoverflow.com/questions/19742946/how-to-make-space-within-span-show-up)
Upvotes: 1
|
2018/03/14
| 335 | 1,259 |
<issue_start>username_0: When creating a new empty partitioned table schema in the BigQuery web GUI, you can set the **Require Partition Filter** option.
How can I set the same option when creating a table using the bq command-line tool? Right now my command is:
```
bq mk --table --time_partitioning_field event_time my_dataset.events event_id:INTEGER,event_time:TIMESTAMP
```
The command successfully creates the Partitioned Table, but I have not seen a flag for setting the Require Partition filter.
How can I edit the Option later after creating the table?<issue_comment>username_1: If you are using a bq command line version >= 2.0.30 you should see a --require\_partition\_filter option in the mk command. Please let us know if this is not the case. Thanks!
Upvotes: 4 [selected_answer]<issue_comment>username_2: To answer your second question:
```
bq update --require_partition_filter --time_partitioning_field=event_time my_dataset.events
```
Or with a fully qualified table name:
```
bq update --require_partition_filter --time_partitioning_field=event_time project-id:my_dataset.events
```
You can also disable with:
```
bq update --norequire_partition_filter --time_partitioning_field=event_time project-id:my_dataset.events
```
Upvotes: 3
|
2018/03/14
| 790 | 2,589 |
<issue_start>username_0: I have 3 tables: Users, Community & Posts.
I want to retrieve the posts of users that Joe `(id_user=1)` doesn't follow.
**Users**
```
id_user | name
1 Joe
2 Doe
3 Moe
4 Roe
5 Clin
```
**Community**
```
id_follower | id_followed
1 3
1 5
```
**Posts**
```
id_post | id_user | post
24 4 hi
25 5 hello
26 1 how are you
27 3 come on
28 4 let go
29 2 get out
```
What I'm expecting to retrieve is
```
24 4 hi - by Roe
28 4 let go - by Roe
29 2 get out - by Doe
```
I tried this but it's not working:
```
SELECT p.*
FROM community as c
LEFT JOIN users as u ON u.id_user=c.id_followed
LEFT JOIN posts as p ON p.id_user!=c.id_followed
WHERE c.id_follower=1 AND u.id_user!=1
```<issue_comment>username_1: you could use an inner join on a not in
```
select p.*
from Posts p
inner join (
select id_user from community
where id_user not in (
select id_followed
from community
where id_follower =1
)
) t on p.id_user = t.id_user and p.id_user <>1
```
Upvotes: 0 <issue_comment>username_2: This should work:
```
select * from Posts
where id_user not in
(select id_followed from Community where id_follower = 1)
```
Upvotes: 2 [selected_answer]<issue_comment>username_3: No point just giving an answer; let's step through the logic. There are other methods of doing this, but since you asked about left joins, I will use a left join. Start at posts...
```
from posts p
```
lets then left join to community
```
left join community c on c.id_followed = p.id_user and ID_follower = 1
```
This now gives you a list of all posts along with a community id_followed record... if the record from community is null, then it's from a user Joe does not follow. Since we only want the records Joe does not follow
```
where c.id_followed is null
```
We will add a join to users to grab the poster name and put it all together
```
from posts p
left join community c on c.id_followed = p.id_user and ID_follower = 1
left join users u on u.id_user = p.id_user
where c.id_followed is null
```
and finally put together the select line to get the fields you want, using concat to get the post by name format you want
```
select p.id_post, p.id_user, concat(p.post , ' by ', u.name)
```
put it together and run!
Upvotes: 1
|
2018/03/14
| 898 | 3,208 |
<issue_start>username_0: So, I keep getting this non-passive event listener violation for my on change event. It only started this nonsense recently. After I moved the .js file into its own folder, this function stopped working completely. The function will not even be executed; I just get the warning instead. Every function in the .js file works except for this one.
The error:
>
> [Violation] Added non-passive event listener to a scroll-blocking >'mousewheel'
> event. Consider marking event handler as 'passive' to make the page more
> responsive. See <https://www.chromestatus.com/feature/5745543795965952>
>
>
>
The Code:
```
//Put the music into the list for the user to select from
$('#songs').on('change', function() {
var $selected = $('#songs').find(':selected').text();
$('#audio').attr('src', './../music/' + $selected);
});
Audio not supported
```
Everything I've seen about non-passive problems deals with functions that call .preventDefault() or modify the page based on scroll behavior. I don't know why my on change function is triggering it, so I don't know how to make it stop triggering it either.
--edit--
some more information about the nature of the bug:
It only stops the change from being transmitted to the audio element.
The warning is sent the moment the selector is clicked.
This code worked for months; the error came later without any changes in this code preceding it.
none of my other code is being affected by the bug.
--Edit 2--
now that the code is working (by removing jQuery use) I have only one question:
**why is the warning still there?**
|
2018/03/14
| 473 | 1,685 |
<issue_start>username_0: I have the following code. It is a hash function that turns a word into a hashed number:
```
def myHash (string):
solution = 5
alphabet = 'abcdegjkmnoprstuvwyz'
for carac in string:
solution = solution * 21 + alphabet.find(carac)
return solution
test = 'mytest'
print (myHash(test))
```
How would I do it the opposite way: given a hashed number, how can I recover the string?<issue_comment>username_1: *Since this looks like homework, as pointed out in the comments, I'm leaving writing the code to you.*
You cannot invert this hash function due to collisions. Namely, since not all lowercase letters are present in `alphabet`, `str.find` will return `-1` for characters such as *f*, *h*, *i*, *l* and *q*.
So if you receive, for example, the hash `2204`, you cannot know if the input string was `'af'`, `'ah'`, `'ai'`, `'al'` or `'aq'`.
Although, this is a weird behaviour and almost seems like a mistake, so let's discuss the case where `alphabet = 'abcdefghijklmnopqrstuvwxyz` and `solution = solution * 26 + alphabet.find(carac)`.
Then the index of the trailing character in `alphabet` is `num % 26`. You can then get the corresponding hash for the remaining characters by substraction and division and recursively obtain your string.
Upvotes: 0 <issue_comment>username_2: I think you could try this logic:
```
'''
Given all the variables in your original code
'''
# 1. Define the variables (your decoded string starts empty):
testF, Hashed = "", myHash(test)
# 2. Loop, peeling one character off the end each time.
#    Stop condition: 5 is the initial value of `solution` in myHash.
#    (This assumes every input character is present in `alphabet`.)
while Hashed > 5:
    testF = alphabet[int(Hashed % (len(alphabet) + 1))] + testF
    Hashed = int((Hashed - Hashed % (len(alphabet) + 1)) // (len(alphabet) + 1))
```
Upvotes: -1
|
2018/03/14
| 1,306 | 5,140 |
<issue_start>username_0: I want to be able to use the result of a target created in one rule in the prerequisites of another rule in GNU make. For example:
```
PREREQ = $(shell echo "reading target1" >&2; cat target1)
target1:
echo "prereq" > $@
target2: target1 $(PREREQ)
echo foo > $@
```
`target2` should depend on `prereq` as read from the `target1` file, but that is not in the file until the `target1` recipe is executed.
Granted, this is a very contrived example, and I am sure there are lots of suggestions about how to refactor it, but I'm not looking to refactor this example. It is just a simplified version of my more complicated problem, where I need to derive prerequisites from the contents of a file that is not created until a recipe in the `Makefile` is executed.
The question is: how can I defer the expansion of `$(PREREQ)` (and therefore the execution of `$(shell cat target1)`) until after the `target1` rule has actually been executed?
Update: I tried `.SECONDEXPANSION:` but that doesn't seem to do the job:
```
$ make -d target2
...
reading target1
cat: target1: No such file or directory
...
Updating goal targets....
Considering target file 'target2'.
File 'target2' does not exist.
Considering target file 'target1'.
File 'target1' does not exist.
Finished prerequisites of target file 'target1'.
Must remake target 'target1'.
echo "prereq" > target1
[ child management ]
Successfully remade target file 'target1'.
Finished prerequisites of target file 'target2'.
Must remake target 'target2'.
echo foo > target2
[ child management ]
Successfully remade target file 'target2'.
```
As you can see, "reading target" was only printed once at the very beginning demonstrating that `PREREQ` is not expanded again due to the `.SECONDEXPANSION:` and the list of targets *Consider*ed for `target2` did not include `prereq`.<issue_comment>username_1: You could write the complete rule for `target2` to a separate file and `-include` it:
* [Including Other Makefiles](https://www.gnu.org/software/make/manual/html_node/Include.html)
* [How Makefiles Are Remade](https://www.gnu.org/software/make/manual/html_node/Remaking-Makefiles.html)
The exact mechanics will depend on your specific use case, and it may well be impossible to achieve what we need using this approach, but it supports a variety of styles for automated dependency generation.
Upvotes: 0 <issue_comment>username_2: Deferring the expansion of the prerequisite `$(PREREQ)` can be achieved by conditionally creating the `target2` and relying on recursion:
```
ifndef expand-prereq
target2: target1
$(MAKE) --no-print-directory -f $(lastword $(MAKEFILE_LIST)) $@ expand-prereq=y
else
target2: target1 $(PREREQ)
echo foo > $@
endif
```
The first time `make` runs for this makefile, the variable `expand-prereq` is not defined and therefore the first `target2` rule is generated as a result of the conditional. This kind of dummy rule makes it possible to update `target1` without expanding `$(PREREQ)`.
Matching this rule results in `target1` being updated (since `target1` is a prerequisite of this rule) and `make` being called *recursively* for the same makefile and with `target2` as target.
The second time `make` is (recursively) invoked, the variable `expand-prereq` was defined by means of the command-line argument `expand-prereq=y`, so the second `target2` rule is generated as a result of the `else` branch this time. This rule is the one that actually produces the target `target2`. Note that before this rule can be matched, `target1` has been already created as a side effect of the first dummy rule, so the expansion of `$(PREREQ)` happens after `target1` has been created (what you were looking for).
Upvotes: 2 <issue_comment>username_3: There are several solutions:
1. GNU make 4.4 has been released! I haven't tried it yet, but the release notes claim that secondary expansion only expands the prerequisites when they're considered. Furthermore, you can delay execution with the .WAIT special prerequisite; that works fine, I tested it. If .WAIT really delays the second expansion of the prerequisites placed after .WAIT, you're good.
2. Recursive make. Restart make after the prerequisites for the second rule have been updated. This is a bit of a lame solution; I can't recommend it.
3. Produce the prerequisites into make include file(s). Make automatically restarts after updating include files (re-exec). I'm currently using this method, and it works great. Better than recursive make but still slow, as all the makefiles have to be parsed again. A possible improvement is comparing the old and new prerequisites, and only updating the include file if its content changed. The second rule also needs to be modified to do nothing if the content changed. Make will run all rules before restarting, but if they don't update their targets, after the restart they'll be executed again, now with the proper prerequisites. An interesting feature of make is that you can define make variables inside a recipe, and use them in other recipes (but not in the prerequisite list, unless #1 above really works).
Upvotes: 0
|
2018/03/14
| 1,815 | 6,468 |
<issue_start>username_0: I am trying to understand c++'s `const` semantics more in depth but I can't fully understand what really the constness guarantee worth is.
As I see it, the constness guarantees that there will be no mutation, but consider the following (contrived) example:
```
#include <iostream>
class A {
public:
int i{0};
void foo() {
i = 42;
};
};
class B {
public:
A \*a1;
A a2;
B() {
a1 = &a2
}
void bar() const {
a1->foo();
}
};
int main() {
B b;
std::cout << b.a2.i << std::endl; // output is 0
b.bar();
std::cout << b.a2.i << std::endl; // output is 42
}
```
Since `bar` is `const`, one would assume that it wouldn't mutate the object `b`. But after its invocation `b` is mutated.
If I write the method `foo` like this
```
void bar() const {
a2.foo();
}
```
then the compiler catches it as expected.
So it seems that one can fairly easily circumvent the compiler with pointers. I guess my main question is, how or if I can be 100% sure that `const` methods won't cause any mutation to the objects they are invoked with? Or do I have completely false expectations about `const`?
**Why does c++ allow invocation of non-const methods over pointers in `const` methods?**
EDIT:
thanks to Galik's comment, I now found this:
<http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/n4372.html>
Well, this was exactly what I was looking for! Thanks!
And I find Yakk's answer also very helpful, so I'll accept his answer.<issue_comment>username_1: This is a correct observation. In const-qualified functions (`bar` in your example) all data members of the class are behaving as if they are const data members *when accessed from this function*. With pointers, it means that the pointer itself is constant, but *not* the object it points to. As a matter of fact, your example can be very much simplified into:
```
int k = 56;
int* const i = &k;
*i = 42;
```
There is a big difference between pointer to constant object and constant pointer, and one needs to understand it, so that 'promises', which were not given in the first place, would not seem to be broken.
Upvotes: -1 <issue_comment>username_2: `const` tells the caller "this shouldn't mutate the object".
`const` helps the implementor with some errors where accidentally mutating state generates errors unless the implementor casts it away.
`const` data (not references, actual data) provides guarantees to the compiler that anyone who modifies this data is doing undefined behaviour; thus, the compiler is free to assume that the data is never modified.
`const` in the `std` library makes certain guarantees about thread safety.
All of these are uses of `const`.
---
If an object isn't `const`, anyone is free to `const_cast` away `const` on a reference to the object and modify it.
If an object is `const`, compilers will not reliably diagnose you casting away `const`, and generating undefined behavior.
If you mark data as `mutable`, even if it is also marked as `const` it won't be.
The guarantees that the `std` provides based off `const` are limited by the types you in turn pass into `std` following those guarantees.
---
`const` doesn't *enforce* much on the programmer. It simply tries to help.
No language can make a hostlie programmer friendly; `const` in C++ doesn't try. Instead, it tries to make it *easier* to write `const`-correct code than to write `const`-incorrect code.
Upvotes: 4 [selected_answer]<issue_comment>username_3: Constness by itself doesn't guarantee you anything. It only takes away rights of specific code to mutate an object through a specific reference. It doesn't take away rights of other code to mutate the same object through other references, right under your feet.
Upvotes: 2 <issue_comment>username_4: >
> So it seems that one can fairly easily circumvent the compiler with pointers.
>
>
>
That is indeed true.
>
> I guess my main question is, how or if I can be 100% sure that const methods won't cause any mutation to the objects they are invoked with?
>
>
>
The language guarantees that only in a local sense.
Your class is, indirectly, the same as the following:
```
struct Foo
{
int* ptr;
Foo() : ptr(new int(0)) {};
void bar() const { *ptr = 10; }
};
```
When you use:
```
Foo f;
f.bar();
```
the member variable of `f` did not change, since the pointer still points to the same location after the call to `f.bar()` as it did before the call. In that sense, `f` did not change. But if you extend the "state" of `f` to include the value of what `f.ptr` points to, then the state of `f` did change. The language does not guarantee against such changes.
It's our job, as designers and developers, to document the "const" semantics of the types we create and provide functions that preserve those semantics.
>
> Or do I have completely false expectations about const?
>
>
>
Perhaps.
Upvotes: 1 <issue_comment>username_5: When a class method is declared as `const`, that means the implicit `this` pointer inside the method is pointing at a `const` object and thus the method cannot alter any of the members of that object (unless they are explicitly declared as `mutable`, which is not the case in this example).
The `B` constructor is not declared as `const`, so `this` is of type `B*`, ie a pointer to a non-const `B` object. So `a1` and `a2` are non-const, and `a1` is declared as a pointer to a non-const `A` object, so the compiler allows `a1` to be pointed at `a2`.
`bar()` is declared as `const`, so `this` is of type `const B*`, ie a pointer to a const `B` object.
When `bar()` calls `a1->foo()`, `a1` is `const` by virtue of `this` pointing at a `const B` object, so `bar()` can't change `a1` to point at something else. But the `A` object that `a1` is pointing at is still deemed to be non-const by virtue of `a1`'s declaration, and `foo()` is not declared as `const`, so the compiler allows the call. However, the compiler can't validate that `a1` is actually pointing at `a2`, a member of `B` that is supposed to be `const` inside of `bar()`, so the code breaks the `const` contract and has **undefined behavior**.
When `bar()` tries to call `a2.foo()` instead, `a2` is `const` by virtue of `this` pointing at a `const B` object, but `foo()` is not declared as `const`, so the compiler fails the call.
`const` is just a safety catch for well-behaving code. It does not stop you from shooting yourself in the foot by using misbehaving code.
Upvotes: 0
|
2018/03/14
| 1,597 | 5,775 |
<issue_start>username_0: Currently I am working on an animation for a website which involves two elements having their position changed over a period of time and usually reset to their initial position. Only one element will be visible at a time and everything ought to run as smoothly as possible.
Before you ask, a CSS-only solution is not possible as it is dynamically generated and must be synchronised. For the sake of this question, I will be using a very simplified version which simply consists of a box moving to the right. I shall be referring only to this latter example unless explicitly stated for the remainder of this question to keep things simple.
Anyway, the movement is handled by the CSS *transition* property being set so that the browser can do the heavy lifting for that. This transition must then be done away with in order to reset the element's position in an instant. The obvious way of doing so would be to do just that then reapply *transition* when it needs to get moving again, which is also right away. However, this isn't working. Not quite. I'll explain.
Take a look at the JavaScript at the end of this question or in the linked JSFiddle and you can see that is what I'm doing, but setTimeout is adding a delay of 25ms in between. The reason for this is (and it's probably best you try this yourself) if there is either no delay (which is what I want) or a very short delay, the element will either intermittently or continually stay in place, which isn't the desired effect. The higher the delay, the more likely it is to work, although in my actual animation this causes a minor jitter because the animation works in two parts and is not designed to have a delay.
This does seem like the sort of thing that could be a browser bug but I've tested this on Chrome, Firefox 52 and the current version of Firefox, all with similar results. I'm not sure where to go from here as I have been unable to find this issue reported anywhere or any solutions/workarounds. It would be much appreciated if someone could find a way to get this reliably working as intended. :)
---
[Here is the JSFiddle page with an example of what I mean.](https://jsfiddle.net/fbmnsss9/)
The markup and code is also pasted here:
```js
var box = document.getElementById("box");
//Reduce this value or set it to 0 (I
//want rid of the timeout altogether)
//and it will only function correctly
//intermittently.
var delay = 25;
setInterval(function() {
box.style.transition = "none";
box.style.left = "1em";
setTimeout(function() {
box.style.transition = "1s linear";
box.style.left = "11em";
}, delay);
}, 1000);
```
```css
#box {
width: 5em;
height: 5em;
background-color: cyan;
position: absolute;
top: 1em;
left: 1em;
}
```<issue_comment>username_1: Force the DOM to recalculate itself before setting a new transition after reset. This can be achieved for example by reading the offset of the box, something like this:
```js
var box = document.getElementById("box");
setInterval(function(){
box.style.transition = "none";
box.style.left = "1em";
let x = box.offsetLeft; // Reading a positioning value forces DOM to recalculate all the positions after changes
box.style.transition = "1s linear";
box.style.left = "11em";
}, 1000);
```
```css
body {
background-color: rgba(0,0,0,0);
}
#box {
width: 5em;
height: 5em;
background-color: cyan;
position: absolute;
top: 1em;
left: 1em;
}
```
[See also a working demo at jsFiddle](https://jsfiddle.net/v9y6rwbc/1/).
Normally, when you set DOM properties, the DOM is not updated until the script has finished; then the DOM is recalculated and rendered. However, if you read a DOM property after changing it, it forces a recalculation immediately.
What happens without the timeout (and property reading) is that the `style.left` value is first changed to 1em, and then immediately to 11em. The transition takes place after the script has finished, and sees only the last set value (11em). But if you read a position value between the changes, the transition has a fresh value to go with.
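As a generic illustration of this read-between-writes pattern (just a sketch, with a hypothetical element `el`):

```
el.style.transition = "none";
el.style.left = "0px";
void el.offsetLeft; // reading a layout property flushes the pending style change
el.style.transition = "left 1s linear";
el.style.left = "100px"; // now transitions from the freshly applied 0px
```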
Upvotes: 3 [selected_answer]<issue_comment>username_2: Instead of making the `transition` behave as an animation, use `animation`, it will do a much better job, most importantly performance-wise and one don't need a timer to watch it.
With the `animation` events one can synchronize the animation any way suited, including fire of a timer to restart or alter it.
Either with some parts being setup with CSS
```js
var box = document.getElementById("box");
box.style.left = "11em"; // start
box.addEventListener("animationend", animation_ended, false);
function animation_ended (e) {
if (e.type == 'animationend') {
this.style.left = "1em";
}
}
```
```css
#box {
width: 5em;
height: 5em;
background-color: cyan;
position: absolute;
top: 1em;
left: 1em;
animation: move_me 1s linear 4;
}
@keyframes move_me {
0% { left: 1em; }
}
```
Or completely script based
```js
var prop = 'left', value1 = '1em', value2 = '11em';
var s = document.createElement('style');
s.type = 'text/css';
s.innerHTML = '@keyframes move_me {0% { ' + prop + ':' + value1 +' }}';
document.getElementsByTagName('head')[0].appendChild(s);
var box = document.getElementById("box");
box.style.animation = 'move_me 1s linear 4';
box.style.left = value2; // start
box.addEventListener("animationend", animation_ended, false);
function animation_ended (e) {
if (e.type == 'animationend') {
this.style.left = value1;
}
}
```
```css
#box {
width: 5em;
height: 5em;
background-color: cyan;
position: absolute;
top: 1em;
left: 1em;
}
```
Upvotes: 2
|
2018/03/14
| 762 | 2,754 |
<issue_start>username_0: I'm not sure how to compare the characters (char < char) and add to the count.
It should print:
Enter a line:
antidisestablishmentarianism (whatever the user wants to input)
Your Answer 15
```
import java.util.Scanner;
public class CountRisingPairs {
public static void main(String[] args) {
Scanner in =new Scanner(System.in);
System.out.println(" Enter a string");
String S=in.next();
int count;
int value;
for (int i=65; i<91; i++) {
count=0;
            for (int j=0; j<S.length(); j++) {
                if (S.charAt(j)==(char)i) count++;
            }
            if (count>0)
                System.out.println((char)i+" -- "+count);
        }
    }
}
```
I can't use a hash map or any other type of loop.<issue_comment>username_1: For comparing chars in the input, you should probably keep a variable with the previous char to compare to. I don't think comparing to the index variable *i* is what you want. Then your if statement would be something like
```
if (value > previous) {
count++;
}
```
Also, when iterating over the input of a Scanner, you should probably do it with a while loop like this:
```
while (in.hasNext()) {
// Your counting here
}
```
You need a way to terminate that while loop - you can do that by checking for '\n' or something else. And of course, the while loop can be rewritten as a for loop if you want to.
Upvotes: 1 [selected_answer]<issue_comment>username_2: ```
String element = "antidisestablishmentarianism";
int count = 0;
for (int j=0; j<element.length()-1; j++){
    char x = element.charAt(j);
    char y = element.charAt(j+1);
    if (x<y){
        count++;
        System.out.println("Pair: "+element.charAt(j)+""+element.charAt(j+1));
    }
}
System.out.println(count+" pairs found");
```
Upvotes: 0 <issue_comment>username_3: The loop should iterate over all characters from the first to the next but last so you can compare adjacent characters. They can be compared just like integer values.
```
String s = "antidisestablishmentarianism";
int count = 0;
for( int i = 0; i < s.length() - 1; ++i ){
if( s.charAt(i) < s.charAt(i+1) ) count++;
}
System.out.println( "count = " + count );
```
Upvotes: 1 <issue_comment>username_4: Based on your code, also closing the scanner using try-with-resources:
```
import java.util.Scanner;
public class CountRisingPairs {
public static void main(String[] args) {
try (Scanner in = new Scanner(System.in)) {
System.out.println(" Enter a string");
String inputString = in.next();
int count = 0;
char previousChar = 100;
for (char currentChar : inputString.toCharArray()) {
if (currentChar > previousChar) {
count++;
}
previousChar = currentChar;
}
System.out.println(count);
}
}
}
```
Upvotes: 0
|
2018/03/14
| 349 | 1,227 |
<issue_start>username_0: I'm loading a JSON file and need the interface fields to have the same names as the JSON keys.
```
export interface People{
name: string
age: number
alive?: boolean
}
```
The JSON comes like this:
```
{
{
"name": "teste1"
"age": 41
"alive?": true
}
}
```
I load the JSON like this:
```
@Injectable()
export class Peoples{
constructor(private http: Http){}
peoples(): Observable<People[]> {
 return this.http.get(`${DEEP_API}/people`)
 .map(response => response.json())
 }
}
```
My problem is that the `People` interface does not recognize the "?" in "alive?".
Any suggestion?<issue_comment>username_1: If the property name contains special characters you need to put it in `''`
```
export interface People{
name: string
age: number
'alive?': boolean
}
```
**Note** `alive?: boolean` is valid syntax, but it means the property named `alive` is optional, not that the property is named `alive?`
To access the property you need to use `person['alive?']`
Upvotes: 3 [selected_answer]<issue_comment>username_2: To add to what @Titan said, you should also be aware that by choosing to use a special character in the property name you will then be required to use **bracket notation** `["alive?"]` to access that properties value.
Upvotes: 1
|
2018/03/14
| 966 | 2,872 |
<issue_start>username_0: Let's say I have a table A:
```
Year product rating
07/02/2018 A good
07/02/2017 B good
07/02/2016 A bad
07/02/2015 C medium
07/02/2016 C bad
```
In a first phase, I wish to obtain the following table:
```
product year score
A 07/02/2018 1
A 07/02/2016 3
B 07/02/2017 1
C 07/02/2016 3
C 07/02/2015 2
```
The second phase:
```
product year score for oldest date
A 07/02/2016 3
B 07/02/2017 1
C 07/02/2015 2
```
What is the shortest way to do this? (ranking, changing scores from strings to numbers, aggregating). Can I do all these steps in one shot?
Thank you<issue_comment>username_1: You first need to map the column to an Integer value.
You can use Map -
```
diz = {k:str(v) for k,v in zip(rating.keys(),rating.values())}
```
check this [replace values of one column in a spark df by dictionary key-values (pyspark)](https://stackoverflow.com/questions/44776283/replace-values-of-one-column-in-a-spark-df-by-dictionary-key-values-pyspark?rq=1)
Then use sorting/aggregation to get the desired output.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Just to make Abhishek solution more concrete, suppose you have dataframe given above.
**First phase**
```
import pyspark.sql.functions as fn
from pyspark.sql.types import *
# replace rating to score
df2 = df.na.replace({'good': '2', 'bad': '0', 'medium': '1'}, value=1, subset='rating')
# cast rating score to integer
df2 = df2.select('date', 'product', fn.col('rating').cast(IntegerType()).alias('score'))
# sorting by product and score
df2.sort(fn.col('product'), fn.desc('score')).show()
```
You can also just use `fn.col('score')` if you want to sort ascending instead.
**Second phase**
Pretty much the same as the first one. However, you have to apply `unix_timestamp` to parse the date and sort by product and parsed date.
```
df2 = df.na.replace({'good': '2', 'bad': '0', 'medium': '1'}, value=1, subset='rating')
df2 = df2.select('date', 'product',
fn.col('rating').cast(IntegerType()).alias('score'),
fn.unix_timestamp('date', format='MM/dd/yyyy').alias('date_dt'))
df2 = df2.sort(fn.col('product'), fn.asc('date_dt'))
```
Then `groupby` product and grab first `date` and `score`
```
df2.groupby(fn.col('product')).agg(fn.first(fn.col('date')), fn.first(fn.col('score'))).show()
>>
+-------+------------------+-------------------+
|product|first(date, false)|first(score, false)|
+-------+------------------+-------------------+
| B| 07/02/2017| 2|
| C| 07/02/2015| 1|
| A| 07/02/2016| 0|
+-------+------------------+-------------------+
```
Upvotes: 0
|
2018/03/14
| 1,722 | 7,004 |
<issue_start>username_0: I am writing a GUI application in Qt. Currently I waste too much time on routine work. It seems something is wrong with my architecture. Please tell me how I can change my approach to improve the code.
What I am doing:
My program can be decomposed as a hierarchy of classes (not inheritance but composition). For example:
```
class D { /* something */ };
class C { /* something */ };
class B { C c1; C c2; D d1; };
class A { D d1; C c1; };
```
So, it is actually a tree hierarchy where the leaf nodes (class C, class D) are "models" in Qt terminology, which hold data. At the top of the hierarchy sits the MainWindow (class A), which holds the first level of "views" (class D, i.e. a subwidget) and a leaf node with data (class C, i.e. a text field).
To pass information down from the main window to the data, I use function calls from the main window (pushbuttons) to the leaf nodes. After that, the data changes and tells its parents about it with the signal-slot mechanism. The parents continue to pass the message up with signals.
I am really tired of establishing all this communication. Now I have up to 5 levels, which is not that many for typical code using composition. Please tell me how I can change my approach. Due to the complexity of these connections, development has become extremely slow, almost stopped.
---
It is hard to give a concrete example because there is a lot of code, but the idea of the problem that is very difficult to solve is the following:
There are two QTreeViews, each showing data differently from its own model inherited from QAbstractItemModel (the tree inside the model is not the tree from the previous discussion; this tree is only a single level of hierarchy). **I want to select objects in one QTreeView and have that change the selection in the second QTreeView.** In total there are 2 QTreeViews, 2 different QAbstractItemModel instances, 2 trees of my own objects (one per QAbstractItemModel), and a **single** data source.
In actual production things need to be more compartmentalized. The main window should not be the container of the "application logic", all it needs to concern itself with is holding together the main widgets.
But that doesn't seem to be your case, judging by the necessity to delegate things "*from mainwindow (pushbuttons) to leaf nodes*" as you put it.
On a grander scale, it is not advisable to mix application logic with UI at all, much less cram it all in the main window. The application logic should be its own layer, designed so that it can work without any GUI whatsoever, and then the GUI is another layer that simply hooks up to the logic core.
The logic core should not be monolith either, it should be made of individual components focusing on their particular task.
Your use case doesn't really require any crazy amount of connections, just some basic handlers for the UI elements, which should target the logic core API rather than GUI elements as you appear to be doing now.
Your clarification unfortunately makes absolutely no sense to me, it is still completely unclear what you exactly you want to do.
Let's assume your situation is something like this:
Tree 1 shows a folder structure.
Tree 2 shows the file content of the folder, selected in tree 1.
Data is an editor for the file, assuming a text file, selected in tree 2.
So, in pseudocode, presuming that `app` is your application core logic object:
Clicking an item in tree 1 says `app.setFolder(tree1.selectedItem())`
Clicking an item in tree 2 says `app.setFile(tree2.selectedItem())`
Clicking the editor "save" button says `app.save(editorUI.dataField.text())`
```
logic layer gui layer
app mainWindow
folder <-----------select----------- tree1
file <-----------select----------- tree2
save(newData) { editor
if (file) file.rewrite(newData) textField
} saveBtn: app.save(textField.text())
```
Upvotes: 1 <issue_comment>username_2: Since there is only a single data source, you could do the following:
1. Create a general model for that data source. The model should represent the data source generally, without consideration of what the views need.
2. Create two proxy viewmodels that adapt the general model to the needs of the views.
3. Couple the selection models of the views that display the viewmodels.
Given the selection models on top of the two proxy models that map to the same source, we can propagate the selection change between them. We leverage the selection mapping provided by the proxy. The `QAbstractProxyModel` has a functional implementation of `mapSelectionxxxx`.
```
void applySel(const QItemSelectionModel *src, const QItemSelection &sel,
const QItemSelection &desel, const QItemSelectionModel *dst) {
// Disallow reentrancy on the selection models
static QHash busySelectionModels;
if (busySelectionModels.contains(src) || busySelectionModels.contains(dst))
return;
busySelectionModels.insert(src);
busySelectionModels.insert(dst);
// The models must be proxies
auto \*srcModel = qobject\_cast(src->model());
auto \*dstModel = qobject\_cast(dst->model());
Q\_ASSERT(srcModel && dstModel);
// The proxies must refer to the same source model
auto \*srcSourceModel = srcModel->sourceModel();
auto \*dstSourceModel = dstModel->sourceModel();
Q\_ASSERT(srcSourceModel && (srcSourceModel == dstSourceModel));
// Convey the selection
auto const srcSel = srcModel->mapSelectionToSource(sel);
auto const srcDesel = srcModel->mapSelectionToSource(desel);
auto const dstSel = dstModel->mapSelectionFromSource(srcSel);
auto const dstDesel = dstModel->mapSelectionFromSource(srcDesel);
// we would re-enter in the select calls
dst->select(dstSel, QItemSelectionModel::Select);
dst->select(dstDesel, QItemSelectionModel::Deselect);
// Allow re-entrancy
busySelectionModels.remove(src);
busySelectionModels.remove(dst);
}
```
The above could be easily adapted for a list of destination item selection models, in case you had more than two views.
We can use this translation to couple the selection models of the views:
```
void coupleSelections(QAbstractItemView *view1, QAbstractItemView *view2) {
auto *sel1 = view1->selectionModel();
auto *sel2 = view2->selectionModel();
Q_ASSERT(sel1 && sel2);
connect(sel1, &QItemSelectionModel::selectionChanged,
[=](const QItemSelection &sel, const QItemSelection &desel){
applySel(sel1, sel, desel, sel2);
});
connect(sel2, &QItemSelectionModel::selectionChanged,
[=](const QItemSelection &sel, const QItemSelection &desel){
applySel(sel2, sel, desel, sel1);
});
}
```
The above is untested and written from memory, but hopefully will work without much ado.
Upvotes: 1 [selected_answer]
|
2018/03/14
| 581 | 2,130 |
<issue_start>username_0: Using Spark 2.2.0 on OS X High Sierra. I'm running a Spark Streaming application to read a local file:
```
val lines = ssc.textFileStream("file:///Users/userName/Documents/Notes/MoreNotes/sampleFile")
lines.print()
```
This gives me
```
org.apache.spark.streaming.dstream.FileInputDStream logWarning - Error finding new files
java.lang.NullPointerException
at scala.collection.mutable.ArrayOps$ofRef$.length$extension(ArrayOps.scala:192)
```
The file exists, and I am able to read it using SparkContext (sc) from spark-shell on the terminal. For some reason going through the Intellij application and Spark Streaming is not working. Any ideas appreciated!<issue_comment>username_1: Quoting the doc comments of `textFileStream`:
>
> Create an input stream that monitors a Hadoop-compatible filesystem
> for new files and reads them as text files (using key as LongWritable, value
> as Text and input format as TextInputFormat). Files must be written to the
> monitored directory by "moving" them from another location within the same
> file system. File names starting with . are ignored.
>
>
> @param directory HDFS directory to monitor for new file
>
>
>
So, the method expects the path to a **directory** in the parameter.
So I believe this should avoid that error:
```
ssc.textFileStream("file:///Users/userName/Documents/Notes/MoreNotes/")
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: Spark streaming will not read old files, so first run the `spark-submit` command and then create the local file in the specified directory. Make sure in the `spark-submit` command, you give only directory name and not the file name. Below is a sample command. Here, I am passing the directory name through the spark command as my first parameter. You can specify this path in your Scala program as well.
```
spark-submit --class com.spark.streaming.streamingexample.HdfsWordCount --jars /home/cloudera/pramod/kafka_2.12-1.0.1/libs/kafka-clients-1.0.1.jar --master local[4] /home/cloudera/pramod/streamingexample-0.0.1-SNAPSHOT.jar /pramod/hdfswordcount.txt
```
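As a quick way to exercise this (paths assumed from the question): create the file elsewhere and move it into the watched directory *after* the job is running, since the stream only picks up files that appear via a move:
```
echo "hello streaming" > /tmp/sample.txt
mv /tmp/sample.txt /Users/userName/Documents/Notes/MoreNotes/
```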
Upvotes: 0
|
2018/03/14
| 899 | 3,165 |
<issue_start>username_0: I'm running a Cordova app in the browser and I want to get some data with the Fetch API from a server running on another port (Express). I allowed CORS on it:
```
app.options('*', (req, res) => {
res.header('Access-Control-Allow-Origin', '*');
res.header('Access-Control-Allow-Credentials', true);
res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
res.header('Access-Control-Allow-Headers', 'Content-Type');
});
```
But when I try to get the data I get this error in the browser console (Safari):
```
Origin http://localhost:8000 is not allowed by Access-Control-Allow-Origin.
```
This is the whole index file:
```
const express = require('express');
const app = express();
const bodyParser = require('body-parser');
const helmet = require('helmet');
const compression = require('compression');
const paths = require('./conf/paths');
const routes = require(paths.controllers.routes);
app.options('*', (req, res) => {
res.header('Access-Control-Allow-Origin', '*');
res.header('Access-Control-Allow-Credentials', true);
res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
res.header('Access-Control-Allow-Headers', 'Content-Type');
});
app.use(helmet());
app.use(compression());
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.get('/test', routes.users.login);
app.listen(3000);
```
My route just sends JSON data back:
```
const db = require('../conf/db');
const hash = require('../helpers/hash');
const paths = require('../conf/paths');
const User = require(paths.models.user).User;
module.exports = {
login: (req, res, next) => {
res.send(JSON.stringify({ result: 'test' }));
}
};
```
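For reference, a sketch of the usual fix: the `app.options(...)` handler above only matches preflight `OPTIONS` requests (and never ends the response), so the actual `GET /test` response carries no CORS headers at all. Setting the headers in ordinary middleware applies them to every response (the `cors` npm package does the same thing for you):
```
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.header('Access-Control-Allow-Headers', 'Content-Type');
  if (req.method === 'OPTIONS') {
    return res.sendStatus(204); // end preflight requests here
  }
  next();
});
```
Note that `Access-Control-Allow-Credentials: true` is not honored together with a wildcard origin, so it is dropped in this sketch.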
|
2018/03/14
| 1,327 | 4,496 |
<issue_start>username_0: I installed nginx with a WAF (using Docker):
```
mkdir -p /usr/src \
&& cd /usr/src/ \
&& git clone --depth 1 -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity \
&& cd ModSecurity \
&& git submodule init \
&& git submodule update \
&& ./build.sh \
&& ./configure \
&& make -j$(getconf _NPROCESSORS_ONLN) \
&& make install
... previous commands to install nginx from source...
&& cd /usr/src \
&& git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git \
&& cd /usr/src/nginx-$NGINX_VERSION \
&& ./configure --with-compat --add-dynamic-module=../ModSecurity-nginx \
&& make modules \
&& cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules \
&& mkdir /etc/nginx/modsec \
&& wget -P /etc/nginx/modsec/ https://raw.githubusercontent.com/SpiderLabs/ModSecurity/v3/master/modsecurity.conf-recommended \
&& mv /etc/nginx/modsec/modsecurity.conf-recommended /etc/nginx/modsec/modsecurity.conf \
&& sed -i 's/SecRuleEngine DetectionOnly/SecRuleEngine On/' /etc/nginx/modsec/modsecurity.conf \
&& sed -i 's/SecRequestBodyInMemoryLimit 131072//' /etc/nginx/modsec/modsecurity.conf \
&& sed -i 's#SecAuditLog /var/log/modsec_audit.log#SecAuditLog /var/log/nginx/modsec_audit.log#' /etc/nginx/modsec/modsecurity.conf \
&& mkdir /opt \
&& cd /opt \
&& git clone -b v3.0/master --single-branch https://github.com/SpiderLabs/owasp-modsecurity-crs.git \
&& cd owasp-modsecurity-crs/ \
&& cp /opt/owasp-modsecurity-crs/crs-setup.conf.example /opt/owasp-modsecurity-crs/crs-setup.conf
```
but it suddenly began to raise this error:
>
> nginx: [emerg] "modsecurity\_rules\_file" directive Rules error. File: /opt/owasp-modsecurity-crs/crs-setup.conf. Line: 96. Column: 43. SecCollectionTimeout is not yet supported.
>
>
>
In [documentation](https://github.com/SpiderLabs/owasp-modsecurity-crs/blob/v3.0/master/crs-setup.conf.example):
==============
```
#
# -- [[ Collection timeout ]] --------------------------------------------------
#
# Set the SecCollectionTimeout directive from the ModSecurity default (1 hour)
# to a lower setting which is appropriate to most sites.
# This increases performance by cleaning out stale collection (block) entries.
#
# This value should be greater than or equal to:
# tx.reput_block_duration (see section "Blocking Based on IP Reputation") and
# tx.dos_block_timeout (see section "Anti-Automation / DoS Protection").
#
# Ref: https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#wiki-SecCollectionTimeout
# Please keep this directive uncommented.
# Default: 600 (10 minutes)
SecCollectionTimeout 600
```
==============
I solved it by adding this line to the command (commenting out the directive):
```
&& sed -i 's/SecCollectionTimeout 600/# SecCollectionTimeout 600/' /opt/owasp-modsecurity-crs/crs-setup.conf
```
**But I do not know what consequences it has, or if it is the correct way to apply it.**
Some example of the one that can guide me?
|
2018/03/14
| 905 | 2,950 |
<issue_start>username_0: I've recently adopted the use of linked lists in Fortran and after some reading decided to use the [gen\_ll.f90 package](http://fortran.com/gen_ll.f90) from the [Fortran Wiki](http://fortranwiki.org/fortran/show/Linked+list) which seemed simple and sufficient to my needs.
It was only later that I realized that I must reverse the order of the terms in the list (i.e. 1->2->3->...->N->NULL needs to be N->...->3->2->1->NULL).
I was trying to write a function which redirects the %P%Next pointers to the previous node instead of the next one. The logic seemed good, but I keep messing it up somehow.
I would ask for your help with coding this small function.
The source code is attached in the [gen\_ll.f90](http://fortran.com/gen_ll.f90) link and the function/subroutine is straight-forward.
This is a sample source code, one of several attempts that I made.
Not sure if this is even correct, as I got to the point of a trial-and-error approach just to make sense of it at the moment, so this might be complete nonsense.
```
SUBROUTINE LI_Flip_Direction(List)
IMPLICIT NONE
TYPE(List_Type),INTENT(INOUT),TARGET :: List
TYPE(Link_Ptr_Type) :: Prev,Curr,Next
Curr%P => List%Head%Next
Next%P => Curr%P%Next
NULLIFY(Prev%P)
! Loop:
DO WHILE (ASSOCIATED(Next%P))
Next%P => Curr%P%Next
Curr%P%Next => Prev%P
Prev%P => Curr%P
Curr%P => Next%P
ENDDO
! Reached the new HEAD:
List%Head%Next => Curr%P
RETURN
END SUBROUTINE LI_Flip_Direction
<issue_comment>username_1: This is the code for a subroutine that reverses the order of an existing list using the linked-list "library" (gen_ll.f90), for your usage:
```
SUBROUTINE LI_Flip_Direction(List)
IMPLICIT NONE
TYPE(List_Type),INTENT(INOUT),TARGET :: List
TYPE(Link_Ptr_Type) :: Prev,Curr,Next
Curr%P => List%Head%Next
NULLIFY(Prev%P)
! Loop:
DO WHILE (ASSOCIATED(Curr%P))
Next%P => Curr%P%Next
Curr%P%Next => Prev%P
Prev%P => Curr%P
Curr%P => Next%P
ENDDO
! Reached the new HEAD:
List%Head%Next => Prev%P
RETURN
END SUBROUTINE LI_Flip_Direction
```
Upvotes: 0 <issue_comment>username_2: I would suggest to make use of the functions which are available in the presented source code. Since you try to revert a linked list which is mono-directional, the best is the retrieve the head and add it to the head of a new linked list. So you could do something like :
```
subroutine LI_Flip_Direction(list)
implicit none
type(List_Type),intent(inout) :: list
type(List_Type) :: tmp
type(Link_Ptr_Type) :: link
call LI_Init_List(tmp)
do while(associated(list%Head%Next))
link = LI_Remove_Head(list)
call LI_Add_To_Head(link,tmp)
end do
if (associated(tmp%Head%Next)) list%Head%Next => tmp%Head%Next
nullify(tmp%Head%Next)
end subroutine LI_Flip_Direction
```
Take into account that everything here depends on a good implementation of the used code.
Upvotes: 1
|
2018/03/14
| 1,377 | 4,652 |
<issue_start>username_0: I have a room model which has a many to one relation with a venue.
A venue can have many rooms.
I am trying to set up my http methods in my rest framework so that way when I add permissions things work well.
So if, say, someone wants to delete a room from a venue, I must make sure that:
1. that person has permissions to that venue
2. that room is attached to that venue
I would like to get a venue model
then get the venue model's room_set and check the room_set to see if a room exists with the primary key of the room I wish to delete.
What I have so far:
```
class GetEditDeleteVenueRoom(APIView):
def get(self, request, *args, **kwargs):
pass
def post(self, request, *args, **kwargs):
print('wait its over here')
def delete(self, request, *args, **kwargs):
venuepk = kwargs.get('venuepk', None)
venue = get_object_or_404(Venue, pk=venuepk)
venuerooms = venue.room_set
print(venuerooms)
return Response({})
```
My hope is I could just iterate venuerooms and check each object in it, but I have a strong feeling it's not going to work because venuerooms is not a collection of Python objects? Perhaps it is. I will be updating this question after I try the for loop, or possibly deleting it if I find everything in working order.
My question is how do I get the room set and check to see if a room with the roompk I am searching for is in it.
So, as I expected, I got an error. The code I attempted:
```
def delete(self, request, *args, **kwargs):
venuepk = kwargs.get('venuepk', None)
venue = get_object_or_404(Venue, pk=venuepk)
venuerooms = venue.room_set
roompk = kwargs.get('roompk')
roomobject = None
for room in venuerooms:
if room.pk == roompk:
roomobject = room
roomobject.delete()
print(venuerooms)
return Response({})
```
The error I got:
```
File "/home/rickus/Documents/softwareProjects/211hospitality/suitsandtables/backend/virtualsuits/suitsandtables/venues/views.py", line 125, in delete
for room in venuerooms:
TypeError: 'RelatedManager' object is not iterable
```
Any ideas of what I could do?
new edit:
so I implemented the answer below and added `all()` to my venuerooms definition
but now I have a new error. This one doesn't make sense as I am clearly overriding the None attribute in the for loop.
```
def delete(self, request, *args, **kwargs):
venuepk = kwargs.get('venuepk', None)
venue = get_object_or_404(Venue, pk=venuepk)
venuerooms = venue.room_set.all()
roompk = kwargs.get('roompk')
roomobject = None
for room in venuerooms:
if room.pk == roompk:
print(room)
roomobject = room
roomobject.delete()
print(venuerooms)
return Response({})
```
error:
```
File "/home/rickus/Documents/softwareProjects/211hospitality/suitsandtables/backend/virtualsuits/suitsandtables/venues/views.py", line 129, in delete
roomobject.delete()
AttributeError: 'NoneType' object has no attribute 'delete'
[14/Mar/2018 20:01:53] "DELETE /api/suitsadmin/venue/1/room/15 HTTP/1.1
```
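For what it's worth, a likely culprit (an assumption, since the URLconf isn't shown): URL kwargs arrive as *strings*, so `room.pk == roompk` compares an `int` to a `str`, never matches, and leaves `roomobject` as `None`. A minimal sketch of a tighter delete, letting the ORM do the scoping:
```
from django.shortcuts import get_object_or_404
from rest_framework import status
from rest_framework.response import Response

def delete(self, request, *args, **kwargs):
    venue = get_object_or_404(Venue, pk=kwargs.get('venuepk'))
    # URL captures are strings unless a path converter like <int:roompk> is used
    roompk = int(kwargs.get('roompk'))
    # Look the room up scoped to this venue; 404s if it is not attached to it
    room = get_object_or_404(venue.room_set, pk=roompk)
    room.delete()
    return Response(status=status.HTTP_204_NO_CONTENT)
```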
|
2018/03/14
| 842 | 2,809 |
<issue_start>username_0: In models.py I have following models:
```
class Project(models.Model):
project_name = models.CharField(max_length=255, unique=True, blank=False)
def __str__(self):
return str(self.project_name)
class Profile(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE)
role = models.CharField(choices=ROLE_CHOICES, max_length=255, default='Agent')
town = models.CharField(max_length=100)
project = models.ManyToManyField(Project)
def __str__(self):
return str('Advanced user informations')
class News(models.Model):
title = models.CharField(max_length=255, blank=False)
content = HTMLField()
author = models.ForeignKey(User, on_delete=models.CASCADE)
project = models.ForeignKey(Project, on_delete=models.CASCADE)
```
In my views.py I have:
```
def news(request):
news_list = News.objects.all().order_by('-id')
paginator = Paginator(news_list, 5)
page = request.GET.get('page')
news = paginator.get_page(page)
return render(request, 'news.html', {'news': news})
```
Now I want to achieve that a User can only see news for a project he participates in.
Something like:
```
News.objects.filter(News with a project that the User is linked to)
```
But I am not sure what could be a valid way to solve this. Maybe someone has a tip?
|
2018/03/14
| 1,275 | 4,140 |
<issue_start>username_0: I am cleaning up a table that contains all sorts of weird naming conventions. One of the names I keep seeing is a string of dates. Some of these names contain numbers which are okay but I would like to remove date formats from the strings.
Edit - Dates are either in mm/YY or mm/YYYY format. The dates are normally from 2017 onwards as I have seen (we want more recent updates).
For example:
```
names <- c('IT Company 09/18', 'Tech Company 9/17', '9/2018 XYZ Company', '50/50 Phone Company')
```
Should be:
```
c('IT Company', 'Tech Company', 'XYZ Company', '50/50 Phone Company')
```
I tried to use this function here to flag strings with "/" and dates but it also extracts numbers that are not dates:
```
names2 <- names[grepl("[[:digit:]]", names) & grepl("/", names)]
```
Output
```
> names2
[1] "IT Company 09/18"
[2] "Tech Company 9/17"
[3] "9/2018 XYZ Company"
[4] "50/50 Phone Company"
```
Is there a specific date expression I can use in place of [[:digit:]] to find strings with dates?
Also, what is the function to remove dates including the slash from a string?<issue_comment>username_1: I believe the following will do what you want. It uses a regex followed by `trimws` to trim the white spaces from the beginning and end of the result.
```
trimws(gsub("[[:digit:]]{1,4}/[[:digit:]]{1,4}", "", names))
#[1] "IT Company" "Tech Company" "XYZ Company" "Phone Company"
```
Upvotes: 1 <issue_comment>username_2: Sounds like a job for the `stringr` packages' `string_remove_all` function.
The trick is getting the regex pattern right. The fact that you don't have a standard date format you're looking out for and that you'd like to retain 50/50 make life tough.
This worked for me:
```
library('stringr')
date.pattern <- ' ?(0?[1-9]|1[0-2])/([0-9]{4}|[0-9]{2}) ?'
names <- c('IT Company 09/18', 'Tech Company 9/17', '9/2018 XYZ Company', '50/50 Phone Company')
str_remove_all(names, date.pattern)
```
That regex is supposed to say
* "A date is one number, or two numbers where the first number is 0 or 1. (Assumes the first number is always a month, which may have a leading zero for single digit months).
* Then it (lazily) allows for years of length 2 or 4. This could be made more precise if you have a sense of which years you expect to encounter. "0001" probably isn't a year in your dataset.
* Finally, it will also remove leading or trailing spaces if they exist, since it looks like dates can come before and after the part you care about and are space-delimited from the part you care about.
Output
```
[1] "IT Company" "Tech Company" "XYZ Company"
[4] "50/50 Phone Company"
```
It might be smart to also run this through `str_extract_all` and inspect the outputs to see if they all have the appearance of dates.
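For instance, a quick check along those lines:
```
# inspect exactly what the pattern would strip, to catch false positives
str_extract_all(names, date.pattern)
```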
Upvotes: 3 [selected_answer]<issue_comment>username_3: Here is a `stringr` solution. This is complicated because it is hard to tell whether something is a date just by looking at it: you need to check your outputs and adjust this code if there are more cases. The first regular expression catches formats that are 1/2 digits, `/`, a `0` or `1`, and one more digit, assuming any dates are post year 2000. The second line gets rid of digit, `/` and then four digits. This nicely excludes `50/50`, but what if it was called `12/50 Phone Company`? That could be interpreted as Dec 1950 in a certain date format. You'll have to make sure you don't accidentally exclude any similar special cases, there isn't a universal way to tell whether something is meant to be a date or not.
I would read up on regular expressions; this is a [good resource](http://r4ds.had.co.nz/strings.html) to start.
```r
library(stringr)
names <- c('IT Company 09/18', 'Tech Company 9/17', '9/2018 XYZ Company', '50/50 Phone Company')
names %>%
str_replace("\\d{1,2}/(0|1)\\d", "") %>%
str_replace("\\d/\\d{4}", "") %>%
str_trim()
#> [1] "IT Company" "Tech Company" "XYZ Company"
#> [4] "50/50 Phone Company"
```
Created on 2018-03-14 by the [reprex package](http://reprex.tidyverse.org) (v0.2.0).
Upvotes: 1
|
2018/03/14
| 1,150 | 3,967 |
<issue_start>username_0: So I'm trying to make my bot's streaming to be with depression but I've tried multiple things and they don't work.
I've tried these methods:
```
client.user.setPresence({ game: { name: 'with depression' }, status: 'online' });
bot.user.setGame('with depression', 'https://www.twitch.tv/monstercat');
```
None of these seem to be working the way they should. Any help is appreciated.<issue_comment>username_1: `.setGame` is discontinued. Use:
```
client.user.setActivity("Game");
```
To set a playing game status.
As an addition, if you were using an earlier version of discord.js, try this:
```
client.user.setGame("Game");
```
In newer versions of discord.js, this is deprecated.
Upvotes: 3 <issue_comment>username_2: ```
client.user.setStatus('dnd', 'Made by KwinkyWolf')
```
And change 'dnd' to whatever status you want it to have. And then the next field 'Made by KwinkyWolf' is where you change the game. Hope this helped :)
List of status':
* online
* idle
* dnd
* invisible
Not sure if they're still the same, or if there's more but hope that helped too :)
Upvotes: 2 <issue_comment>username_3: Use this:
```js
const { ActivityType } = require('discord.js')
client.user.setPresence({
activities: [{
name: 'with depression',
type: ActivityType.Streaming,
url: 'https://twitch.tv/monstercat'
}],
status: 'online'
});
```
These are the supported activity types & status types:
```js
// Accepted activity types
ActivityType.Playing
ActivityType.Listening
ActivityType.Watching
ActivityType.Competing
ActivityType.Streaming // Lets you use url parameter. This can be a YouTube or Twitch link.
ActivityType.Custom // Unsupported in discord.js. May be added at some point.
// Accepted statusses
"online"
"offline"
"idle"
"dnd"
```
Upvotes: 4 <issue_comment>username_4: Simple way to initiate the message on startup:
```
bot.on('ready', () => {
bot.user.setStatus('available')
bot.user.setPresence({
game: {
name: 'with depression',
type: "STREAMING",
url: "https://www.twitch.tv/monstercat"
}
});
});
```
You can also just declare it elsewhere after startup, to change the message as needed:
```
bot.user.setPresence({ game: { name: 'with depression', type: "streaming", url: "https://www.twitch.tv/monstercat"}});
```
Upvotes: 3 <issue_comment>username_5: `setGame` has been discontinued. You must use `client.user.setActivity`.
Don't forget, if you are setting a streaming status, you **MUST** specify a Twitch URL
An example is here:
```js
client.user.setActivity("with depression", {
type: "STREAMING",
url: "https://www.twitch.tv/example-url"
});
```
Upvotes: 1 <issue_comment>username_6: Bumping this all the way from 2018, sorry not sorry. But the newer users questioning how to do this need to know that **game** does not work anymore for this task.
```
bot.user.setStatus('available')
bot.user.setPresence({
game: {
name: 'with depression',
type: "STREAMING",
url: "https://www.twitch.tv/monstercat"
}
});
```
does not work anymore. You will now need to do this:
```
bot.user.setPresence({
status: 'online',
activity: {
name: 'with depression',
type: 'STREAMING',
url: 'https://www.twitch.tv/monstercat'
}
})
```
This is referenced here as "game" is not a valid property of setPresence anymore. Read
the [PresenceData Documentation](https://discord.js.org/#/docs/main/stable/typedef/PresenceData) for more information about this.
Upvotes: 3 <issue_comment>username_7: With the arrival of the *new version* of **discord.js** this code down here should work perfectly fine
```
const { ActivityType } = require("discord.js");
client.on("ready", () => {
console.log(`Logged in as ${client.user.tag}!`);
client.user.setActivity("over you", {
type: ActivityType.Watching,
});
});
```
Upvotes: 2
|
2018/03/14
| 2,381 | 9,532 |
<issue_start>username_0: I have a project with 4 classes: 2 activities, 1 adapter and 1 class for a simple object. The names of the classes are: **MainActivity**, **AddingItemsActivity**, **ItemAdapter**, **SimpleItem**. In the layout corresponding to **MainActivity**, there is a list view named **SimpleListView**. **SimpleListView** should contain **SimpleItem** objects. **ItemAdapter** is made to handle **SimpleListView**. Updating and adding items to **SimpleListView** from **MainActivity** is very easy. What I would like to achieve is updating and adding items, which appear on the **SimpleListView**, from **AddingItemsActivity** (they should appear when the user comes back to **MainActivity**). Could you tell me what I should do to achieve that?
PS: I would like to ask: "how to update **SimpleListView** from **AddingItemsActivity**?" but I have read that it is not a proper question, because **SimpleListView** does not exist in **AddingItemsActivity**.
**MainActivity**
```
public class MainActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Button toAddingActivity = findViewById(R.id.toAddingActivitybutton);
final ListView simpleItemsListView = findViewById(R.id.SimpleListView);
final ItemAdapter mAdapter = new ItemAdapter(this, R.layout.simple_item_adapter);
toAddingActivity.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
Intent intent = new Intent(getBaseContext(), AddingItemsActivity.class);
startActivity(intent);
}
});
}
}
```
**AddingItemsActivity**
```
public class AddingItemsActivity extends AppCompatActivity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_adding_items);
Button addItem = findViewById(R.id.AddItembutton);
// final ListView simpleItemsListView = findViewById(R.id.SimpleListView);
// final ItemAdapter mAdapter = new ItemAdapter(this, R.layout.simple_item_adapter);
addItem.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
// first reflex try, which does not work:
// SimpleItem item = new SimpleItem("String number 1", "String number 2");
// mAdapter.add(item);
// simpleItemsListView.setAdapter(mAdapter);
}
});
}
}
```
**ItemAdapter**
```
public class ItemAdapter extends ArrayAdapter{
public ItemAdapter(Context context, int textViewResourceId) {
super(context, textViewResourceId);
}
public ItemAdapter(Context context, int resource, List items) {
super(context, resource, items);
}
@Override
public View getView(int position, View convertView, ViewGroup parent) {
View v = convertView;
if (v == null) {
LayoutInflater vi;
vi = LayoutInflater.from(getContext());
v = vi.inflate(R.layout.simple\_item\_adapter, null);
}
SimpleItem item = getItem(position);
if (item != null) {
TextView tv1 = v.findViewById(R.id.adapterTextView1);
TextView tv2 = v.findViewById(R.id.adapterTextView2);
tv1.setText(item.getStr1());
tv2.setText(item.getStr2());
}
return v;
}
}
```
**SimpleItem**
```
public class SimpleItem {
private String str1;
private String str2;
public SimpleItem(String s1, String s2)
{
str1 = s1;
str2 = s2;
}
public String getStr1()
{
return str1;
}
public String getStr2()
{
return str2;
}
}
```<issue_comment>username_1: 1. Use `startActivityForResult` to get the result as `SimpleItem` from `AddingItemsActivity`.
```
Intent intent = new Intent(getBaseContext(), AddingItemsActivity.class);
startActivityForResult(intent, 1);
```
2. Create a `SimpleItem` in `AddingItemsActivity`, add values to it and use `setResult` to give simple item instance back to MainActivity
```
// inside on click
SimpleItem item = new SimpleItem("String number 1", "String number 2");
Intent returnIntent = new Intent();
returnIntent.putExtra("result", item);
setResult(Activity.RESULT_OK,returnIntent);
finish();
```
3. In MainActivity
```
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == 1) {
if(resultCode == Activity.RESULT_OK){
SimpleItem result = (SimpleItem)data.getSerializableExtra("result");
// add result to the list, used by adapter
// notify adapter using notifyDataSetChanged
}
}
}//onActivityResult
```
Note: make `SimpleItem` implement `Serializable` (`public class SimpleItem implements Serializable`). Also, it seems you forgot to create a list and pass it to the adapter instances, so simply create one.
Upvotes: 2 <issue_comment>username_2: your adaper is not complete you must override getCount method and set your list.size() to it . then from your MainActivity you mast create a list of simle item and pass it to your adapter .
```
ArrayList<SimpleItem> items = new ArrayList<>();
final ItemAdapter mAdapter = new ItemAdapter(this, R.layout.simple_item_adapter, items);
```
Then you can put your list into the intent and pass it to AddingItemsActivity:
```
Intent intent = new Intent(this, AddingItemsActivity.class);
intent.putExtra("Key", items);
startActivityForResult(intent, 14); // 14 is your request code
```
and in AddingItemsActivity :
```
Bundle bundle = getIntent().getExtras();
ArrayList<SimpleItem> items = (ArrayList<SimpleItem>) bundle.get("Key");
```
Change or add items to the list and return it to MainActivity:
```
Intent returnIntent = new Intent();
returnIntent.putExtra("returnedList", item);
setResult(Activity.RESULT_OK,returnIntent);
finish();
```
and in onActivityResult of MainActivity :
```
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == 14) {
if(resultCode == Activity.RESULT_OK){
items = (ArrayList<SimpleItem>) data.getSerializableExtra("returnedList");
mAdapter .notifyDataSetChanged();
}
}
}
```
And don't forget to make your SimpleItem class implement Serializable.
Upvotes: 1 <issue_comment>username_2: try this :
MainActivity :
```
public class MainActivity extends AppCompatActivity {
private ArrayList<SimpleItem> items = new ArrayList<>();
private Button toAddingActivity;
private ListView simpleItemsListView;
private ItemAdapter mAdapter;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
toAddingActivity = (Button) findViewById(R.id.toAddingActivitybutton);
simpleItemsListView = (ListView) findViewById(R.id.SimpleListView);
mAdapter = new ItemAdapter(this, R.layout.simple_item_adapter, items);
simpleItemsListView.setAdapter(mAdapter);
toAddingActivity.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
Intent intent = new Intent(MainActivity.this, AddingItemsActivity.class);
startActivityForResult(intent, 14);
}
});
mAdapter.notifyDataSetChanged();
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == 14 && resultCode == Activity.RESULT_OK) {
Bundle bundle = data.getExtras();
SimpleItem item=(SimpleItem) bundle.get("reKey");
items.add(item);
mAdapter.notifyDataSetChanged();
}
}
}
```
ItemAdapter :
```
public class ItemAdapter extends ArrayAdapter<SimpleItem> {

    private ArrayList<SimpleItem> items = new ArrayList<>();
    private Activity activity;
    private int layoutResource;

    public ItemAdapter(@NonNull Activity act, int resource, @NonNull ArrayList<SimpleItem> data) {
        super(act, resource, data);
        items = data;
        activity = act;
        layoutResource = resource;
    }
@Override
public int getCount() {
return items.size();
}
@Override
public SimpleItem getItem(int position) {
return items.get(position);
}
@Override
public View getView(int position, @Nullable View convertView, @NonNull ViewGroup parent) {
ViewHolder holder;
View row=convertView;
if(row==null || (row.getTag()==null)){
LayoutInflater inflater=LayoutInflater.from(activity);
row=inflater.inflate(layoutResource,null);
holder=new ViewHolder();
holder.tv1=row.findViewById(R.id.adapterTextView1);
holder.tv2=row.findViewById(R.id.adapterTextView2);
row.setTag(holder);
}else {
holder=(ViewHolder)row.getTag();
}
holder.simpleItem=items.get(position);
holder.tv1.setText(holder.simpleItem.getStr1());
holder.tv2.setText(holder.simpleItem.getStr2());
return row;
}
class ViewHolder{
TextView tv1;
TextView tv2;
SimpleItem simpleItem;
}
}
```
AddingItemsActivity :
```
public class AddingItemsActivity extends AppCompatActivity{
private Button addItem;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_adding_items);
addItem=(Button)findViewById(R.id.AddItembutton);
addItem.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
SimpleItem item = new SimpleItem("String number 1", "String number 2");
Intent returnIntent = new Intent();
returnIntent.putExtra("reKey",item);
setResult(Activity.RESULT_OK,returnIntent);
finish();
}
});
}
}
```
You can change the AddingItemsActivity onClick method to build the item however you need.
Upvotes: 1
|
2018/03/14
| 1,209 | 4,333 |
<issue_start>username_0: Basically I am building my project to random locations as well as to the project folder, and nothing is working, so I was wondering if it was something else. I am getting these errors:
```
Error 1: Error building Player: DirectoryNotFoundException: Directory '/Shaders/Water' not found.
Error 2: Build completed with a result of 'Failed'
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr)
Error 3: UnityEditor.BuildPlayerWindow+BuildMethodException: 2 errors
at UnityEditor.BuildPlayerWindow+DefaultBuildMethods.BuildPlayer (BuildPlayerOptions options) [0x0020e] in C:\buildslave\unity\build\Editor\Mono\BuildPlayerWindowBuildMethods.cs:181
at UnityEditor.BuildPlayerWindow.CallBuildMethods (Boolean askForBuildLocation, BuildOptions defaultBuildOptions) [0x00065] in C:\buildslave\unity\build\Editor\Mono\BuildPlayerWindowBuildMethods.cs:88
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr)
Error 4: DirectoryNotFoundException: Directory '/Shaders/Water' not found.
System.IO.Directory.GetFileSystemEntries (System.String path, System.String searchPattern, FileAttributes mask, FileAttributes attrs) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System.IO/Directory.cs:514)
System.IO.Directory.GetFiles (System.String path, System.String searchPattern) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System.IO/Directory.cs:308)
System.IO.Directory.GetFiles (System.String path) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System.IO/Directory.cs:303)
UltimateWater.Editors.EditorShaderCollectionBuilder.CleanUpUnusedShaders () (at Assets/Ultimate Water System v1.0.0/Scripts/Editor/EditorShaderCollectionBuilder.cs:54)
UltimateWater.Editors.WaterShadersCleanupTask.OnWillSaveAssets (System.String[] paths) (at Assets/Ultimate Water System v1.0.0/Scripts/Editor/EditorShaderCollectionBuilder.cs:140)
System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System.Reflection/MonoMethod.cs:222)
Rethrow as TargetInvocationException: Exception has been thrown by the target of an invocation.
System.Reflection.MonoMethod.Invoke (System.Object obj, BindingFlags invokeAttr, System.Reflection.Binder binder, System.Object[] parameters, System.Globalization.CultureInfo culture) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System.Reflection/MonoMethod.cs:232)
System.Reflection.MethodBase.Invoke (System.Object obj, System.Object[] parameters) (at /Users/builduser/buildslave/mono/build/mcs/class/corlib/System.Reflection/MethodBase.cs:115)
UnityEditor.AssetModificationProcessorInternal.OnWillSaveAssets (System.String[] assets, System.String[]& assetsThatShouldBeSaved, System.String[]& assetsThatShouldBeReverted, Int32 explicitlySaveAsset) (at C:/buildslave/unity/build/Editor/Mono/AssetModificationProcessor.cs:147)
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr)
```
The project isn't building.<issue_comment>username_1: Changing the Build System from Gradle to Internal fixed the problem for me. See the snapshot below.
[](https://i.stack.imgur.com/9Yneb.png)
Upvotes: 0 <issue_comment>username_2: In my case, there was a problem with the Unity Monetization plugin, I reinstalled it and now everything works
Upvotes: 0 <issue_comment>username_3: Finally I found the solution, just follow these steps:
1. open Unity Hub
2. go to Installs
3. find the Unity version you work with
4. click Add Modules
5. and add Windows Build Support (IL2CPP)
6. and build again
Upvotes: 0 <issue_comment>username_4: I had a similar problem, but not exactly this.
You can try to change the API compatibility level under Project Settings > Player.
While searching for the solution, I found that people were switching to .NET 2, but mine was .NET 2 by default, so I switched to .NET 4.x and it builds successfully.
Upvotes: 1 <issue_comment>username_5: In my case, it notified me that the a file path was missing:
>
> tundra: error: Failed to open file "[...]/Library/Il2cppBuildCache/iOS/buildstate/tundra.log.json" for structured logging
>
>
>
I created the path and the file manually, and then it worked.
Upvotes: 0
|
2018/03/14
| 585 | 2,096 |
<issue_start>username_0: I have two checkboxes, form controls, if I select one, the other one is deselected. I cannot use ActiveX controls because there are many other checkboxes from other sheets are form controls. I cannot use option button because I need to get "true" "false" value.
I used this code below:
```
Private Sub CheckBox1_Click()
If CheckBox1.Value = True Then
CheckBox2.Value = False
CheckBox2.Enabled = False
Else
CheckBox2.Enabled = True
End If
End Sub
Private Sub CheckBox2_Click()
If CheckBox2.Value = True Then
CheckBox1.Value = False
CheckBox1.Enabled = False
Else
CheckBox1.Enabled = True
End If
End Sub
```
But it shows
>
> Run-time error '424', Object required
>
>
>
I am very new to VBA and still learning. I have no idea how to fix it, please help me!! Thank you very much!!
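For reference, a sketch of how this is often handled with Form Controls (the checkbox names below are assumptions; substitute the names shown in Excel's Name Box). Form Control checkboxes are not exposed as `CheckBox1` objects the way ActiveX controls are, which is the usual cause of error 424; instead they live in the sheet's `CheckBoxes` collection and use `xlOn`/`xlOff`:
```
' Assign this single macro to both Form Control checkboxes.
Sub ToggleCheckBoxes()
    Dim clicked As Object, other As Object
    ' Application.Caller returns the name of the control that ran the macro
    Set clicked = ActiveSheet.CheckBoxes(Application.Caller)

    ' Hypothetical names - replace with your actual checkbox names
    If clicked.Name = "Check Box 1" Then
        Set other = ActiveSheet.CheckBoxes("Check Box 2")
    Else
        Set other = ActiveSheet.CheckBoxes("Check Box 1")
    End If

    ' Form Controls report xlOn/xlOff rather than True/False
    If clicked.Value = xlOn Then other.Value = xlOff
End Sub
```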
|
2018/03/14
| 1,728 | 3,824 |
<issue_start>username_0: Let us say I divide the x-axis as follows:
```
def linear_delta(x):
return 10*(-2.39916666666673e-06*x+0.014588678333333336)
current = 26.0
my_array = [current]
while current < 50.1:
current+=linear_delta(current)
my_array.append(current)
```
With linear\_delta being a linear function that changes linearly as we move along the x-axis.
I.e.,
the x-axis starts at `26.0`, then next breakpoint is at
```
26.0+d1=26.1446, where d1=linear_delta(26.0) etc.
```
And the general division of the x-axis looks like:
```
|--------|-------|------|-----|----|
s d1 d2 d3 d4 d5 e
```
where, `s`: start position (`26.0` in our example); `e`: end position in our example (`50.21067768889078` in our example).
Now, given a position in the x-axis, let us say `x`. How can I know which position in the x-axis this `x` follows into?
In the case of an evenly spaced axis it would be `math.floor((x-start)/len)`, with `len=e-s`.
Is there a similar simple and fast formula for this uneven case with a linear delta function?<issue_comment>username_1: It is best viewed as one of iterating a linear function. Note that if you have code like `x += g(x)` where `g(x) = k*x+b` then this is equivalent to `x = f(x)` where `m = k+1`. In your case, `m = 1 + 10*-2.39916666666673e-06` which is `0.9999760083333333`
If you are iterating `f(x) = m*x+b` starting with a seed value `x_0`, by taking the partial sum of a geometric series you can work out that after `k` iterations you have the value
```
m^k*x_0 + b*(m^k-1)/(m-1)
```
To find the largest `k` where this expression is <= a given x, first solve for `m^k`:
```
m^k <= (x+b/(m-1))/(x_0 + b/(m-1))
```
Thus
```
k ?? math.log((x+b/(m-1))/(x_0 + b/(m-1)),m)
```
Where `??` is either `<=` or `>=` depending on if `m < 1` or `m>1`. In your case, `m < 1` hence `??` is `>=` and you would need to take the floor.
For example:
```
from math import log, floor
x_0 = 26.0
m = 1+ 10*-2.39916666666673e-06
b = 10*0.014588678333333336
def index(x):
return floor(log((x+b/(m-1))/(x_0 + b/(m-1)),m))
```
Test:
```
>>> index(35)
62
>>> my_array[62:64]
[34.99971883425056, 35.144765915996196]
```
Upvotes: 2 [selected_answer]<issue_comment>username_2: Since StackOverflow doesn't support MathTeX, you get a crude markdown version instead.
Your series is a recurrence relation:
```
x1 = x0 + linear_delta(x0)
x2 = x1 + linear_delta(x1)
x3 = ...
```
And `linear_delta` has the form:
```
linear_delta(x) = m*x + b
```
Steps to get the recurrence relation into simplified form:
```
x1 = x0 + linear_delta(x0)
= x0 + m*x0 + b
= (m+1)*x0 + b
let n=m+1
= n*x0 + b
x2 = n*x1 + b = n*(n*x0 + b) + b
= n²*x0 + n*b + b
x3 = n*x2 + b
= n*(n²*x0 + n*b + b) + b
= n³*x0 + n²*b + n*b + b
xk = n^k*x0 + b*sum(n^h, h=0..k-1)
```
If we can solve the summation for an explicit function of `n` and `k`, then we have an explicit formula for `x`. Invert that formula and we can find the index for a given `x`.
Following the recurrence relation solving instructions from <https://www.wikihow.com/Solve-Recurrence-Relations>
Let's adapt their example:
>
> 2, 5, 14, 41, 122
>
>
> a\_n = 3 \* a\_(n-1) - 1
>
>
>
Using our recurrence relation:
a_n = (m+1) * a_(n-1) + b
```
A(x) = sum(a_k * x^k, k=0..∞)
= a0 + sum(a_k * x^k, k=1..∞)
= a0 + sum(((m+1) * a_(k-1) + b) * x^k, k=1..∞)
= a0 + sum((m+1) * a_(k-1) * x^k, k=1..∞) + sum(b * x^k, k=1..∞)
= a0 + (m+1)*x*sum(a_(k-1) * x^(k-1), k=1..∞) + b*sum(x^k, k=1..∞)
A(x) = a0 + (m+1)*x*A(x) + b*x/(1-x)
(1 - (m+1)*x)*A(x) = a0 + b*x/(1-x)
A(x) = (a0 + b*x/(1-x)) / (1 - (m+1)*x)
A(x) = a0*(1-x) + b*x
-------------------
(x-1)*((m+1)*x - 1)
```
And continue solving per the instructions
Upvotes: 0
|
2018/03/14
| 1,096 | 4,445 |
<issue_start>username_0: I'm building a Node application which will run both simple and more complex (multiple-join) queries. I'm looking for suggestions on how I should manage the mySQL connections.
I have the following elements:
* server.js : Express
* router1.js (fictive name) : Express Router middleware
* router2.js (fictive name) : Express Router middleware
```
//this is router1
router.get('/', function (req, res){
connection.connect(function(Err){...});
connection.query('SELECT* FROM table WHERE id = "blah"', function(err,results,fields){
console.log(results);
});
...
connection.end();
})
```
Should I connect to mysql every time '/router1/' is requested, like in this example, or is it better to leave one connection open from start-up? As:
```
connection.connect();
```
outside of:
```
router.get('/',function(req,res){
...
});
```
<issue_comment>username_1: Connection pooling is how it should be done. Opening a new connection for every request slows down the application and can sooner or later become a bottleneck, as Node does not automatically close connections the way PHP does. A connection pool ensures that a fixed number of connections is always available, and it handles closing unnecessary connections as and when required.
This is how I start my Express app using Sequelize. For Mongoose, it is more or less similar except for the library API.
```
const sequelize = new Sequelize('database', 'username', 'password', {
host: 'localhost',
dialect: 'mysql',
pool: {
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
});
sequelize.authenticate()
.then(
// On successfull connection, open a port
// and listen to requests. This is where the application
// starts listening to requests.
() => {
const server = http.createServer(app);
server.listen(port);
},
)
.catch(err => {
console.error('Unable to connect to the database:', err);
console.error('Cancelling app server launch');
});
```
The app is started only after a database connection has been established. This ensures that the server won't be active without any database connection. Connection pool will keep the connections open by default, and use a connection out of the pool for all queries.
Upvotes: 2 <issue_comment>username_2: I am using mysql2 for this, it is basicly mysql but with promises. If you use mysql you can also do this.
Create a seperate file called connection.js or something.
```
const mysql = require('mysql2');
const connection = mysql.createPool({
host: "localhost",
user: "",
password: "",
database: ""
// here you can set connection limits and so on
});
module.exports = connection;
```
Then it is probably better to create some models and call these from within your controllers, within your `router.get('/', (req, res) => {here});`
A model would look like this:
```
const connection = require('../util/connection');
async function getAll() {
const sql = "SELECT * FROM tableName";
const [rows] = await connection.promise().query(sql);
return rows;
}
exports.getAll = getAll;
```
You can do this with or without promises, it doesn't matter.
Your connection is automatically released back to the pool when the query is finished.
Then you should call getAll from your router or app.
I hope this helped, sorry if not.
Upvotes: 3 <issue_comment>username_3: ```
> npm install mysql
```
mysql is a great module which makes working with MySQL very easy and it provides all the capabilities you might need.
Once you have mysql installed, all you have to do to connect to your database is
```
var mysql = require('mysql')
var conn = mysql.createConnection({
host: 'localhost',
user: 'username',
password: '<PASSWORD>',
database: 'database'
})
conn.connect(function(err) {
if (err) throw err
console.log('connected')
})
```
Now you are ready to begin writing and reading from your database.
Upvotes: -1 <issue_comment>username_4: If you use `createPool`, `mysql` will manage opening and closing connections for you and you will get better performance. It doesn't matter whether you use `mysql`, mysql2, or Sequelize. Use a separate file for `createPool` and export it; you can then use it everywhere. Don't use classes; just do it functionally for better performance in Node.js.
Upvotes: 0
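For completeness, a minimal sketch of that pattern with the plain `mysql` package (credentials are placeholders):
```
// db.js - create the pool once and share it
const mysql = require('mysql');

const pool = mysql.createPool({
  connectionLimit: 10, // tune for your workload
  host: 'localhost',
  user: 'username',
  password: 'password',
  database: 'database'
});

module.exports = pool;

// router1.js - the pool checks a connection out and releases it per query
const pool = require('./db');

router.get('/', (req, res) => {
  pool.query('SELECT * FROM table WHERE id = ?', ['blah'], (err, results) => {
    if (err) return res.status(500).send(err.message);
    res.json(results);
  });
});
```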
|
2018/03/14
| 3,520 | 8,500 |
<issue_start>username_0: With the base64-encoded string `JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN` I am getting different results from emacs than from the clojure code below.
Can anyone explain to me why?
The `elisp` below gives the correct output, ultimately giving me a valid pdf document (when I paste the entire string). I am sure my emacs buffer is set to `utf-8`:
```
(base64-decode-string "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN")
"%PDF-1.1
%âãÏÓ
1 0 obj
<<
```
Here is the same output with the chars in decimal (I think):
```
"%PDF-1.1
%\342\343\317\323
1
```
The `clojure` below gives incorrect output, rendering the pdf document invalid when I give it the entire string:
```
(import 'java.util.Base64 )
(defn decode [to-decode]
(let [
byts (.getBytes to-decode "UTF-8")
decoded (.decode (java.util.Base64/getDecoder) byts)
]
(String. decoded "UTF-8")))
(decode "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN")
"%PDF-1.1
%����
1 0 obj
<<
```
Same output, chars in decimal (I think). I couldn't even copy/paste this, I had to type it in. This is what it looks like when I opened the PDF in `text-mode` for the first three columns:
```
"%PDF-1.1
%\357\277\275\357\277\275\357\277\275\357\277\275
1"
```
---
Edit Taking emacs out of the equation:
--------------------------------------
If I write the encoded string to a file called `encoded.txt` and pipe it through the linux program `base64 --decode`, I get valid output and a good pdf also:
This is clojure:
```
(defn decode [to-decode]
(let [byts (.getBytes to-decode "ASCII")
decoded (.decode (java.util.Base64/getDecoder) byts)
flip-negatives #(if (neg? %) (char (+ 255 %)) (char %))
]
(String. (char-array (map flip-negatives decoded)) )))
(spit "./output/decoded.pdf" (decode "<KEY>b2JqDQo8PCAN"))
(spit "./output/encoded.txt" "<KEY>")
```
Then this at the shell:
```
➜ output git:(master) ✗ cat encoded.txt| base64 --decode > decoded2.pdf
➜ output git:(master) ✗ diff decoded.pdf decoded2.pdf
2c2
< %áâÎÒ
---
> %����
➜ output git:(master) ✗
```
---
update - this seems to work
---------------------------
Alan Thompson's answer below put me on the correct track, but geez what a pain to get there.
Here's the idea of what works:
```
(def iso-latin-1-charset (java.nio.charset.Charset/forName "ISO-8859-1" ))
(as-> some-giant-string-i-hate-at-this-point $
(.getBytes $)
(String. $ iso-latin-1-charset)
(base64/decode $ "ISO-8859-1")
(spit "./output/a-pdf-that-actually-works.pdf" $ :encoding "ISO-8859-1" ))
```<issue_comment>username_1: Returning the results as a string, I get:
```
(b64/decode-str "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN")
=> "%PDF-1.1\r\n%����\r\n1 0 obj\r\n<< \r"
```
and as a vector of ints:
```
(mapv int (b64/decode-str "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN"))
=> [37 80 68 70 45 49 46 49 13 10 37 65533 65533 65533 65533 13 10 49 32 48
32 111 98 106 13 10 60 60 32 13]
```
Since both the beginning and end of the string look OK, I suspect the B64 string might be malformed?
---
Update
======
I went to <http://www.base64decode.org> and got the result
```
"Malformed input... :("
```
[](https://i.stack.imgur.com/MNW38.png)
---
Update #2
=========
The root of the problem is that the source characters are ***not*** UTF-8 encoded. Rather, they are ISO-8859-1 (aka ISO-LATIN-1) encoded. See this code:
```
(defn decode-bytes
"Decodes a byte array from base64, returning a new byte array."
[code-bytes]
(.decode (java.util.Base64/getDecoder) code-bytes))
(def iso-latin-1-charset (java.nio.charset.Charset/forName "ISO-8859-1" )) ; aka ISO-LATIN-1
(let [b64-str "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN"
bytes-default (vec (.getBytes b64-str))
bytes-8859 (vec (.getBytes b64-str iso-latin-1-charset))
src-byte-array (decode-bytes (byte-array bytes-default))
src-bytes (vec src-byte-array)
src-str-8859 (String. src-byte-array iso-latin-1-charset)
]... ))
```
with result:
```
iso-latin-1-charset => <#sun.nio.cs.ISO_8859_1 #object[sun.nio.cs.ISO_8859_1 0x3edbd6e8 "ISO-8859-1"]>
bytes-default => [74 86 66 69 82 105 48 120 76 106 69 78 67 105 88 105 52 56 47 84 68 81 111 120 73 68 65 103 98 50 74 113 68 81 111 56 80 67 65 78]
bytes-8859 => [74 86 66 69 82 105 48 120 76 106 69 78 67 105 88 105 52 56 47 84 68 81 111 120 73 68 65 103 98 50 74 113 68 81 111 56 80 67 65 78]
(= bytes-default bytes-8859) => true
src-bytes => [37 80 68 70 45 49 46 49 13 10 37 -30 -29 -49 -45 13 10 49 32 48 32 111 98 106 13 10 60 60 32 13]
src-str-8859 => "%PDF-1.1\r\n%âãÏÓ\r\n1 0 obj\r\n<< \r"
```
So the `java.lang.String` constructor will work correctly with a `byte[]` input, even when the high bit is set (making them look like "negative" values), as long as you tell the constructor the correct `java.nio.charset.Charset` to use for interpreting the values.
Interesting that the object type is `sun.nio.cs.ISO_8859_1`.
---
Update #3
=========
See the SO question below for a list of libraries that can (usually) autodetect the encoding of a byte stream (e.g. UTF-8, ISO-8859-1, ...)
[What is the most accurate encoding detector?](https://stackoverflow.com/questions/3759356/what-is-the-most-accurate-encoding-detector)
Upvotes: 2 [selected_answer]<issue_comment>username_2: I think you need to verify the actual bytes that are produced in both scenarios. I would save both decoded results in a file and then compare them using for example `xxd` command line tool to get the hex display of the bytes in the files.
I suspect your emacs and your clojure application use different fonts, causing the same non-ASCII bytes to be rendered differently, e.g. the same byte value is rendered as `â` in emacs and as `�` in the clojure output.
I would also check if elisp indeed creates the resulting string using UTF-8. [`base64-decode-string`](http://www.gnu.org/software/emacs/manual/html_node/elisp/Base-64.html) mentions [*unibytes*](https://www.gnu.org/software/emacs/manual/html_node/elisp/String-Conversion.html) and I am not sure it's really UTF-8. *Unibyte* sounds like encoding characters using always one byte per character whereas UTF-8 uses one to four bytes per character.
Upvotes: 0 <issue_comment>username_3: Update
------
@glts made a correct point in his comment to the question. If we go to <http://www.utilities-online.info/base64/> (for example), and we try to decode the original string, we get a third, different result:
```
%PDF-1.1
%⣏Ӎ
1 0 obj
<<
```
However, if we try to encode the data the OP posted, we get a different Base64 string: `JVBERi0xLjEKICXDosOjw4/DkwogMSAwIG9iagogPDwg`, which if we run using the original `decode` implementation as written by the OP we get the same output:
```
(decode "JVBERi0xLjEKICXDosOjw4/DkwogMSAwIG9iagogPDwg")
"%PDF-1.1\n %âãÏÓ\n 1 0 obj\n << "
```
No need to make any conversions. I guess you should check out the encoder.
Original answer
---------------
This problem is due to java's `Byte` being signed. So much fun!
When you convert it to a string as UTF-8, the invalid byte sequences are replaced with the replacement character 65533, which is plain wrong:
```
(map long (decode "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN"))
;; (37 80 68 70 45 49 46 49 13 10 37 65533 65533 65533 65533 13 10 49 32 48 32 111 98 106 13 10 60 60 32 13)
```
Let's see what happens:
```
(defn decode [to-decode]
(let [byts (.getBytes to-decode "UTF-8")
decoded (.decode (java.util.Base64/getDecoder) byts)]
decoded))
(into [] (decode "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN"))
;; [37 80 68 70 45 49 46 49 13 10 37 -30 -29 -49 -45 13 10 49 32 48 32 111 98 106 13 10 60 60 32 13]
```
See the negatives? Let's try to fix that (note the `+ 256`, since a negative byte `b` represents the unsigned value `b + 256`):
```
(into [] (char-array (map #(if (neg? %) (char (+ 256 %)) (char %)) (decode "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN"))))
;; [\% \P \D \F \- \1 \. \1 \return \newline \% \â \ã \Ï \Ó \return \newline \1 \space \0 \space \o \b \j \return \newline \< \< \space \return]
```
And if we turn this into a string, we get what emacs gave us:
```
(String. (char-array (map #(if (neg? %) (char (+ 256 %)) (char %)) (decode "JVBERi0xLjENCiXi48/TDQoxIDAgb2JqDQo8PCAN"))))
;; "%PDF-1.1\r\n%âãÏÓ\r\n1 0 obj\r\n<< \r"
```
Upvotes: 0
|
2018/03/14
| 395 | 1,516 |
<issue_start>username_0: I'm using Magento 2.2, and I'm wondering, how would I remove the header and footer ONLY from the home page? I use various stores, so the home page just acts as a portal to each store.
I explicitly **only** want to remove the header and footer from the home page. They should appear on all other pages.
Thank you.<issue_comment>username_1: I'm sure there are other ways of not rendering the content in the backend code, but you can certainly do it in CSS as well.
```
.cms-home .header {
display: none;
}
.cms-home .page-footer {
display: none;
}
```
If you're looking for a code solution, I believe you would override one of the xml layout files to do this. It has been a while since I have done anything with Magento, so I'm not sure if that's possible, but it should be if I remember correctly.
This looks like it could be a good example to go on for removing it using the layout xml option: [How to remove 'Subscribe' field from Luma footer](https://magento.stackexchange.com/questions/164340/how-to-remove-subscribe-field-from-luma-footer)
Upvotes: 0 <issue_comment>username_2: The best way of doing this is to remove the header and footer in your XML layout file. If you only hide them with CSS, those sections are still rendered and consume resources. You can try something like this:
```
<?xml version="1.0"?>
<!-- reconstructed example (the original snippet was lost from the post):
     app/design/frontend/<Vendor>/<theme>/Magento_Cms/layout/cms_index_index.xml -->
<page xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:noNamespaceSchemaLocation="urn:magento:framework:View/Layout/etc/page_configuration.xsd">
    <body>
        <referenceContainer name="header.container" remove="true"/>
        <referenceContainer name="footer-container" remove="true"/>
    </body>
</page>
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You need to override the `cms_index_index.xml` layout. Then the change will apply only to the home (CMS index) page.
Upvotes: 2
|
2018/03/14
| 548 | 1,646 |
<issue_start>username_0: I keep getting a
>
> ninja: error: '/root/code/CalcLib/libCalcLibAndroidx86.a', needed by
> '/root/code/compcorpsdk/build/intermediates/cmake/prod/release/obj/x86/libJumboFFT.so',
> missing and no known rule to make it
>
>
>
Although I have checked that both directories exist with the correct files, and I believe I have the include-directory paths set up correctly in the CMakeLists.txt.
**CMakeLists.txt**
```
cmake_minimum_required(VERSION 3.4.1)
set(
P1 "/root/code/CalcLib"
CACHE STRING ""
)
message(${ANDROID_ABI})
file(GLOB CPP_FILES "*.cpp")
add_library(
JumboFFT
SHARED
${CPP_FILES}
)
include_directories(src/main/jni)
include_directories(${P1})
target_link_libraries(
JumboFFT
log
android
OpenSLES
${P1}/libCalcLibAndroid${ANDROID_ABI}.a
)
```
**Gradle Assemble output**
[Gradle Assemble output text](https://i.stack.imgur.com/IWZWc.png)<issue_comment>username_1: I believe you should add it first as a library using `add_library` and `set_target_properties` and then link it as such:
```
...
add_library(libCalcLibAndroid STATIC IMPORTED)
set_target_properties(libCalcLibAndroid PROPERTIES
IMPORTED_LINK_INTERFACE_LIBRARIES ""
IMPORTED_LOCATION "${P1}/libCalcLibAndroid${ANDROID_ABI}.a"
)
target_link_libraries(
JumboFFT
log
android
OpenSLES
libCalcLibAndroid
)
```
Upvotes: 0 <issue_comment>username_2: `set_target_properties` in CMake does not like relative paths.
See here:
[cmake:missing and no known rule to make it when I import a prebuilt library](https://stackoverflow.com/questions/41478323/cmakemissing-and-no-known-rule-to-make-it-when-i-import-a-prebuilt-library)
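A minimal sketch combining both answers, resolving the library path to an absolute one first (`P1` is the cache variable from the question; `CALC_LIB_PATH` is just an illustrative name):

```cmake
# Resolve the prebuilt library path to an absolute path before importing it
get_filename_component(CALC_LIB_PATH
    "${P1}/libCalcLibAndroid${ANDROID_ABI}.a" ABSOLUTE)

add_library(libCalcLibAndroid STATIC IMPORTED)
set_target_properties(libCalcLibAndroid PROPERTIES
    IMPORTED_LOCATION "${CALC_LIB_PATH}"
)

target_link_libraries(JumboFFT log android OpenSLES libCalcLibAndroid)
```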
Upvotes: 2
|
2018/03/14
| 496 | 1,781 |
<issue_start>username_0: Can someone please explain why this doesnt work in Python and how can I make it work? I dont wanna do sa=0 or something first because this code is placed inside a function and I dont want sa to start with 0 each time the function executes.
```
def tr(ok):
s = 1
sa = sa + (ok / s)
print (sa)
for i in range(10):
ok = 2
```
The error I get is that sa is a unresolved reference
EDIT: `sa += (ok/s)` doesn't work either<issue_comment>username_1: If you want a function to hand back a result, you should use `return some_var`. Anyway, it's hard to tell what you want from this code.
The full code or a better explanation would help us help you.
Upvotes: 0 <issue_comment>username_2: >
> The error I get is that sa is a unresolved reference
>
>
>
The reason this happens is due to local scope. When executing your tr function, only variables either passed **into** the function, declared as a global variable, or declared **in** the function are seen and used.
```
def tr(ok):
s = 1
sa = sa + (ok / s)
print (sa)
```
In your code, the variables are:
* ok
* s
* sa
The computer can't finish assigning a value to sa because it needs sa to already have a value assigned in order to calculate the value of sa. If I gave you a math problem stating X+1 and asked you what the result of that would be, you'd be unable to because I never told you what X *is* in the first place. So here's what you should do:
Declare what sa is supposed to be inside the function:
```
sa = *something that the computer can set sa equal to*
sa + (ok / s)
```
Pass the value into the function by modifying *both* the function call and the function definition:
```
def tr(ok, sa):
```
Or declare sa as a global variable.
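A minimal sketch of the `global` route; note `sa` is initialized once at module level, so it does not reset on every call:

```python
sa = 0  # created once when the module loads, not on each call

def tr(ok):
    global sa          # use the module-level sa instead of a new local
    s = 1
    sa = sa + (ok / s)
    print(sa)

for i in range(10):
    tr(2)              # prints 2.0, 4.0, 6.0, ... 20.0
```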
Upvotes: 1
|
2018/03/14
| 841 | 2,748 |
<issue_start>username_0: I am trying to use `matplotlib` and `mpld3` to produce some html plots on my `Django` report app.
Basically I have a controller for the plot that is the following:
```
from django.shortcuts import render
import mpld3
from matplotlib.pyplot import figure, title, bar
def cpfLogin(request):
mpl_figure = figure(1)
xvalues = (1,2,3,4,5)
yvalues = (1,2,3,4,5)
width = 0.5 # the width of the bars
title(u'Custom Bar Chart')
bar(xvalues, yvalues, width)
fig_html = mpld3.fig_to_html(mpl_figure)
context = {
'figure': fig_html,
}
return render(request, 'reports/CPFReport.html', context)
```
The code for reports/CPFReport.html is:
```
{% load i18n %}
{% block extrahead %}
.chart_title {
font-weight: bold;
font-size: 14px;
}
{% endblock %}
{% block content %}
{% trans "Custom Bar Chart" %}
{{ figure|safe }}
{% endblock %}
```
The code is executed right and the plot is displayed correctly but after a couple of seconds the app terminates with the following error:
>
> Assertion failed: (NSViewIsCurrentlyBuildingLayerTreeForDisplay() !=
> currentlyBuildingLayerTree), function
> NSViewSetCurrentlyBuildingLayerTreeForDisplay, file
> /BuildRoot/Library/Caches/com.apple.xbs/Sources/AppKit/AppKit-1561.20.106/AppKit.subproj/NSView.m,
> line 14480.
>
>
>
I found out that even if I comment out all the code, this exception is thrown as soon as any of the matplotlib libraries is called.
Does anyone have a workaround or solution for this problem?<issue_comment>username_1: Maybe I found the solution:
just add the following code at the top.
```
import matplotlib
matplotlib.use('Agg')
```
For my case, I use python3, flask and matplotlib.
reference:
<https://gist.github.com/tebeka/5426211>
Upvotes: 0 <issue_comment>username_2: In my case I had to avoid importing :
```
import matplotlib.pyplot as plt
fig,ax = plt.subplots(figsize=(8,9))
l = plt.plot(x,s, 'y-', label="line")
```
and substitute it with:
```
from matplotlib.figure import Figure
fig = Figure()
ax = fig.add_subplot(111)
l = ax.plot(x,s, 'y-', label="line")
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: Adding plt.close() after saving the figure using fig.savefig('../static/images/something.png') helped me.
Upvotes: 0 <issue_comment>username_4: To complete username_1's answer, in this case it's linked to using matplotlib with a webserver. The solution recommended by [matplotlib documentation](https://matplotlib.org/faq/howto_faq.html#howto-webapp) is to use the Agg backend :
```
import matplotlib
matplotlib.use('Agg')
# then import pyplot and mpld3
import mpld3
from matplotlib.pyplot import figure, title, bar
```
Upvotes: 0
|
2018/03/14
| 1,698 | 7,297 |
<issue_start>username_0: Just like the example [here](https://developer.android.com/studio/build/build-variants.html#build-types) I am extending my build types to add staging:
```
android {
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
debug {
applicationIdSuffix '.debug'
}
staging {
initWith release
applicationIdSuffix '.staging'
}
}
}
```
But I also have a dependency:
```
implementation project(':mylibrary')
```
Compilation fails because it doesn't know what to pair `staging` to in `:mylibrary`:
```
* What went wrong:
Could not determine the dependencies of task ':app:compileStagingJavaWithJavac'.
> Could not resolve all task dependencies for configuration ':app:stagingCompileClasspath'.
> Could not resolve project :mylibrary.
Required by:
project :app
> Unable to find a matching configuration of project :mylibrary:
- Configuration 'debugApiElements':
- Required com.android.build.api.attributes.BuildTypeAttr 'staging' and found incompatible value 'debug'.
- Required com.android.build.gradle.internal.dependency.AndroidTypeAttr 'Aar' and found compatible value 'Aar'.
- Found com.android.build.gradle.internal.dependency.VariantAttr 'debug' but wasn't required.
- Required org.gradle.api.attributes.Usage 'java-api' and found compatible value 'java-api'.
- Configuration 'debugRuntimeElements':
- Required com.android.build.api.attributes.BuildTypeAttr 'staging' and found incompatible value 'debug'.
- Required com.android.build.gradle.internal.dependency.AndroidTypeAttr 'Aar' and found compatible value 'Aar'.
- Found com.android.build.gradle.internal.dependency.VariantAttr 'debug' but wasn't required.
- Required org.gradle.api.attributes.Usage 'java-api' and found incompatible value 'java-runtime'.
- Configuration 'releaseApiElements':
- Required com.android.build.api.attributes.BuildTypeAttr 'staging' and found incompatible value 'release'.
- Required com.android.build.gradle.internal.dependency.AndroidTypeAttr 'Aar' and found compatible value 'Aar'.
- Found com.android.build.gradle.internal.dependency.VariantAttr 'release' but wasn't required.
- Required org.gradle.api.attributes.Usage 'java-api' and found compatible value 'java-api'.
- Configuration 'releaseRuntimeElements':
- Required com.android.build.api.attributes.BuildTypeAttr 'staging' and found incompatible value 'release'.
- Required com.android.build.gradle.internal.dependency.AndroidTypeAttr 'Aar' and found compatible value 'Aar'.
- Found com.android.build.gradle.internal.dependency.VariantAttr 'release' but wasn't required.
- Required org.gradle.api.attributes.Usage 'java-api' and found incompatible value 'java-runtime'.
```
That's fair enough, but I cannot go through all of my libraries adding `staging` just to get a different application suffix.
I tried:
```
debugImplementation project(path: ':mylibrary', configuration: 'debug')
releaseImplementation project(path: ':mylibrary', configuration: 'release')
stagingImplementation project(path: ':mylibrary', configuration: 'release')
```
But it fails:
```
* What went wrong:
Could not determine the dependencies of task ':app:compileReleaseJavaWithJavac'.
> Could not resolve all task dependencies for configuration ':app:releaseCompileClasspath'.
> Could not resolve project :mylibrary.
Required by:
project :app
> Project :app declares a dependency from configuration 'releaseImplementation' to configuration 'release' which is not declared in the descriptor for project :mylibrary.
```

I also tried:

```
debugImplementation project(path: ':mylibrary', configuration: 'default')
releaseImplementation project(path: ':mylibrary', configuration: 'default')
stagingImplementation project(path: ':mylibrary', configuration: 'default')
```

This works, but then every build gets the release build of the library. I don't want that; I need the debug build to use the debug version of the library.
I've seen [this q](https://stackoverflow.com/questions/28081846/use-different-build-types-of-library-module-in-android-app-module-in-android-stu), but it's pre "`implementation`" and `publishNonDefault true` had no effect, same error as above.
>
> publishNonDefault is deprecated and has no effect anymore. All variants are now published.
>
>
>
Gradlew version 4.6<issue_comment>username_1: Been here, done that :P
You'll need to specify `matchingFallback` with the Android Gradle Plugin 3.0.0 for the plugin to know which fallback build type of library to use when being compiled with app code in case a certain build type defined in your app is not found in library.
```
buildTypes {
release {
minifyEnabled false
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
debug {
applicationIdSuffix '.debug'
}
staging {
initWith release
applicationIdSuffix '.staging'
matchingFallbacks = ['release']
}
}
```
More info here: [Migrate to Android Plugin for Gradle 3.0.0](https://developer.android.com/studio/build/gradle-plugin-3-0-0-migration.html#resolve_matching_errors).
Upvotes: 7 [selected_answer]<issue_comment>username_2: I was creating a `staging` buildType in my **react native project** too, the build was fine, but for some reason **it was loading an old version of the app**.
If your react native is loading a old version:
==============================================
Try the following changes at `/android/app/build.gradle` file:
1. Change the buildType name from `staging` to `releaseStaging`
---------------------------------------------------------------
I don't know the reason yet. But this is the main change that made my react native app open the correct version of the app.
```
buildTypes {
...
releaseStaging {
initWith release
...
}
}
```
2. Include the `matchingFallbacks = ['release']`, just like [@username_1](https://stackoverflow.com/a/49288390/19110863) mentioned.
---------------------------------------------------------------------------------------------------------------------------------
This fixed some build errors.
```
buildTypes {
...
releaseStaging {
initWith release
matchingFallbacks = ['release']
...
}
}
```
3. If you have hermes enabled, you should include a `releaseStagingImplementation`, just like the code below.
-------------------------------------------------------------------------------------------------------------
After including this, my app stopped to crash while starting.
```
if (enableHermes) {
def hermesPath = "../../node_modules/hermes-engine/android/";
debugImplementation files(hermesPath + "hermes-debug.aar")
releaseImplementation files(hermesPath + "hermes-release.aar")
releaseStagingImplementation files(hermesPath + "hermes-release.aar")
} else {
implementation jscFlavor
}
```
Upvotes: 0
|
2018/03/14
| 7,411 | 28,526 |
<issue_start>username_0: How can you delete a Document with all its collections and nested subcollections? (inside the functions environment)
In the RTDB you can `ref.child('../someNode').setValue(null)` and that completes the desired behavior.
I can think of two ways you could achieve the desired delete behavior, both with tremendously ghastly drawbacks.
1. Create a 'Super' function that will spider every document and delete them in a batch.
This function would be complicated, brittle to changes, and might take a lengthy execution time.
2. Add 'onDelete' triggers for each Document type, and make it delete any direct subcollections. You'll call delete on the root document, and the deletion calls will propagate down the 'tree'. This is sluggish, scales atrociously and is costly due to the colossal load of function executions.
Imagine you would have to delete a 'GROUP' and all its children. It would be deeply chaotic with #1 and pricey with #2 (1 function call per doc):
```
groups > GROUP > projects > PROJECT > files > FILE > assets > ASSET
> urls > URL
> members > MEMBER
> questions > QUESTION > answers > ANSWER > replies > REPLY
> comments > COMMENT
> resources > RESOURCE > submissions > SUBMISSION
> requests > REQUEST
```
Is there a superior/favored/cleaner way to delete a document and all its nested subcollections?
It ought to be possible considering you can do it from the console.<issue_comment>username_1: I don't know how helpful this will be for you, but test it and compare the execution times; I took it from the Firestore docs:
```
/** Delete a collection in batches to avoid out-of-memory errors.
 * Batch size may be tuned based on document size (at most 1MB) and application requirements.
 */
void deleteCollection(CollectionReference collection, int batchSize) {
try {
// retrieve a small batch of documents to avoid out-of-memory errors
ApiFuture<QuerySnapshot> future = collection.limit(batchSize).get();
int deleted = 0;
// future.get() blocks on document retrieval
List<QueryDocumentSnapshot> documents = future.get().getDocuments();
for (QueryDocumentSnapshot document : documents) {
document.getReference().delete();
++deleted;
}
if (deleted >= batchSize) {
// retrieve and delete another batch
deleteCollection(collection, batchSize);
}
} catch (Exception e) {
System.err.println("Error deleting collection : " + e.getMessage());
}
}
```
Upvotes: 2 <issue_comment>username_2: You can call `firebase.firestore().doc("whatever").set({})` and that will delete everything in that document.
The only way `.set` does not erase everything is if you set the `merge` flag to `true`.
See [Firestore Documentation on Add Data](https://firebase.google.com/docs/firestore/manage-data/add-data)
```
var cityRef = db.collection('cities').doc('BJ');
var setWithMerge = cityRef.set({
capital: true
}, { merge: true });
```
Upvotes: -1 <issue_comment>username_3: Unfortunately, your analysis is spot on and indeed this use case does require a lot of ceremony. According to official documentation, there is no support for deep deletes in a single *shot* in `firestore` neither via client libraries nor [rest-api](https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/delete?authuser=0) nor the `cli` tool.
The `cli` is open sourced and its implementation lives here: <https://github.com/firebase/firebase-tools/blob/master/src/firestore/delete.js>. They basically implemented option 1. you described in your question, so you can take some inspiration from there.
Both options 1. and 2. are far from ideal situation and to make your solution 100% reliable you will need to keep a **persistent** queue with deletion tasks, as any error in the long running procedure will leave your system in some ill-defined state.
I would discourage going with raw option 2, as recursive cloud function calls may very easily go wrong - for example, hitting max. limits.
In case the link changed, below the full source of <https://github.com/firebase/firebase-tools/blob/master/src/firestore/delete.js>:
```
"use strict";
var clc = require("cli-color");
var ProgressBar = require("progress");
var api = require("../api");
var firestore = require("../gcp/firestore");
var FirebaseError = require("../error");
var logger = require("../logger");
var utils = require("../utils");
/**
* Construct a new Firestore delete operation.
*
* @constructor
* @param {string} project the Firestore project ID.
* @param {string} path path to a document or collection.
* @param {boolean} options.recursive true if the delete should be recursive.
* @param {boolean} options.shallow true if the delete should be shallow (non-recursive).
* @param {boolean} options.allCollections true if the delete should universally remove all collections and docs.
*/
function FirestoreDelete(project, path, options) {
this.project = project;
this.path = path;
this.recursive = Boolean(options.recursive);
this.shallow = Boolean(options.shallow);
this.allCollections = Boolean(options.allCollections);
// Remove any leading or trailing slashes from the path
if (this.path) {
this.path = this.path.replace(/(^\/+|\/+$)/g, "");
}
this.isDocumentPath = this._isDocumentPath(this.path);
this.isCollectionPath = this._isCollectionPath(this.path);
this.allDescendants = this.recursive;
this.parent = "projects/" + project + "/databases/(default)/documents";
// When --all-collections is passed any other flags or arguments are ignored
if (!options.allCollections) {
this._validateOptions();
}
}
/**
* Validate all options, throwing an exception for any fatal errors.
*/
FirestoreDelete.prototype._validateOptions = function() {
if (this.recursive && this.shallow) {
throw new FirebaseError("Cannot pass recursive and shallow options together.");
}
if (this.isCollectionPath && !this.recursive && !this.shallow) {
throw new FirebaseError("Must pass recursive or shallow option when deleting a collection.");
}
var pieces = this.path.split("/");
if (pieces.length === 0) {
throw new FirebaseError("Path length must be greater than zero.");
}
var hasEmptySegment = pieces.some(function(piece) {
return piece.length === 0;
});
if (hasEmptySegment) {
throw new FirebaseError("Path must not have any empty segments.");
}
};
/**
* Determine if a path points to a document.
*
* @param {string} path a path to a Firestore document or collection.
* @return {boolean} true if the path points to a document, false
* if it points to a collection.
*/
FirestoreDelete.prototype._isDocumentPath = function(path) {
if (!path) {
return false;
}
var pieces = path.split("/");
return pieces.length % 2 === 0;
};
/**
* Determine if a path points to a collection.
*
* @param {string} path a path to a Firestore document or collection.
* @return {boolean} true if the path points to a collection, false
* if it points to a document.
*/
FirestoreDelete.prototype._isCollectionPath = function(path) {
if (!path) {
return false;
}
return !this._isDocumentPath(path);
};
/**
* Construct a StructuredQuery to find descendant documents of a collection.
*
* See:
* https://firebase.google.com/docs/firestore/reference/rest/v1beta1/StructuredQuery
*
* @param {boolean} allDescendants true if subcollections should be included.
* @param {number} batchSize maximum number of documents to target (limit).
* @param {string=} startAfter document name to start after (optional).
* @return {object} a StructuredQuery.
*/
FirestoreDelete.prototype._collectionDescendantsQuery = function(
allDescendants,
batchSize,
startAfter
) {
var nullChar = String.fromCharCode(0);
var startAt = this.parent + "/" + this.path + "/" + nullChar;
var endAt = this.parent + "/" + this.path + nullChar + "/" + nullChar;
var where = {
compositeFilter: {
op: "AND",
filters: [
{
fieldFilter: {
field: {
fieldPath: "__name__",
},
op: "GREATER_THAN_OR_EQUAL",
value: {
referenceValue: startAt,
},
},
},
{
fieldFilter: {
field: {
fieldPath: "__name__",
},
op: "LESS_THAN",
value: {
referenceValue: endAt,
},
},
},
],
},
};
var query = {
structuredQuery: {
where: where,
limit: batchSize,
from: [
{
allDescendants: allDescendants,
},
],
select: {
fields: [{ fieldPath: "__name__" }],
},
orderBy: [{ field: { fieldPath: "__name__" } }],
},
};
if (startAfter) {
query.structuredQuery.startAt = {
values: [{ referenceValue: startAfter }],
before: false,
};
}
return query;
};
/**
* Construct a StructuredQuery to find descendant documents of a document.
* The document itself will not be included
* among the results.
*
* See:
* https://firebase.google.com/docs/firestore/reference/rest/v1beta1/StructuredQuery
*
* @param {boolean} allDescendants true if subcollections should be included.
* @param {number} batchSize maximum number of documents to target (limit).
* @param {string=} startAfter document name to start after (optional).
* @return {object} a StructuredQuery.
*/
FirestoreDelete.prototype._docDescendantsQuery = function(allDescendants, batchSize, startAfter) {
var query = {
structuredQuery: {
limit: batchSize,
from: [
{
allDescendants: allDescendants,
},
],
select: {
fields: [{ fieldPath: "__name__" }],
},
orderBy: [{ field: { fieldPath: "__name__" } }],
},
};
if (startAfter) {
query.structuredQuery.startAt = {
values: [{ referenceValue: startAfter }],
before: false,
};
}
return query;
};
/**
* Query for a batch of 'descendants' of a given path.
*
* For document format see:
* https://firebase.google.com/docs/firestore/reference/rest/v1beta1/Document
*
* @param {boolean} allDescendants true if subcollections should be included,
* @param {number} batchSize the maximum size of the batch.
* @param {string=} startAfter the name of the document to start after (optional).
* @return {Promise} a promise for an array of documents.
*/
FirestoreDelete.prototype._getDescendantBatch = function(allDescendants, batchSize, startAfter) {
  var url;
  var body;
  if (this.isDocumentPath) {
    url = this.parent + "/" + this.path + ":runQuery";
    body = this._docDescendantsQuery(allDescendants, batchSize, startAfter);
  } else {
    url = this.parent + ":runQuery";
    body = this._collectionDescendantsQuery(allDescendants, batchSize, startAfter);
  }

  return api
    .request("POST", "/v1beta1/" + url, {
      auth: true,
      data: body,
      origin: api.firestoreOrigin,
    })
    .then(function(res) {
      // Return the 'document' property for each element in the response,
      // where it exists.
      return res.body
        .filter(function(x) {
          return x.document;
        })
        .map(function(x) {
          return x.document;
        });
    });
};

/**
 * Progress bar shared by the class.
 */
FirestoreDelete.progressBar = new ProgressBar("Deleted :current docs (:rate docs/s)", {
  total: Number.MAX_SAFE_INTEGER,
});

/**
 * Repeatedly query for descendants of a path and delete them in batches
 * until no documents remain.
 *
 * @return {Promise} a promise for the entire operation.
 */
FirestoreDelete.prototype._recursiveBatchDelete = function() {
  var self = this;

  // Tunable deletion parameters
  var readBatchSize = 7500;
  var deleteBatchSize = 250;
  var maxPendingDeletes = 15;
  var maxQueueSize = deleteBatchSize * maxPendingDeletes * 2;

  // All temporary variables for the deletion queue.
  var queue = [];
  var numPendingDeletes = 0;
  var pagesRemaining = true;
  var pageIncoming = false;
  var lastDocName;

  var failures = [];
  var retried = {};

  var queueLoop = function() {
    if (queue.length == 0 && numPendingDeletes == 0 && !pagesRemaining) {
      return true;
    }

    if (failures.length > 0) {
      logger.debug("Found " + failures.length + " failed deletes, failing.");
      return true;
    }

    if (queue.length <= maxQueueSize && pagesRemaining && !pageIncoming) {
      pageIncoming = true;
      self
        ._getDescendantBatch(self.allDescendants, readBatchSize, lastDocName)
        .then(function(docs) {
          pageIncoming = false;
          if (docs.length == 0) {
            pagesRemaining = false;
            return;
          }
          queue = queue.concat(docs);
          lastDocName = docs[docs.length - 1].name;
        })
        .catch(function(e) {
          logger.debug("Failed to fetch page after " + lastDocName, e);
          pageIncoming = false;
        });
    }

    if (numPendingDeletes > maxPendingDeletes) {
      return false;
    }

    if (queue.length == 0) {
      return false;
    }

    var toDelete = [];
    var numToDelete = Math.min(deleteBatchSize, queue.length);
    for (var i = 0; i < numToDelete; i++) {
      toDelete.push(queue.shift());
    }

    numPendingDeletes++;
    firestore
      .deleteDocuments(self.project, toDelete)
      .then(function(numDeleted) {
        FirestoreDelete.progressBar.tick(numDeleted);
        numPendingDeletes--;
      })
      .catch(function(e) {
        // For server errors, retry if the document has not yet been retried.
        if (e.status >= 500 && e.status < 600) {
          logger.debug("Server error deleting doc batch", e);

          // Retry each doc up to one time
          toDelete.forEach(function(doc) {
            if (retried[doc.name]) {
              logger.debug("Failed to delete doc " + doc.name + " multiple times.");
              failures.push(doc.name);
            } else {
              retried[doc.name] = true;
              queue.push(doc);
            }
          });
        } else {
          logger.debug("Fatal error deleting docs ", e);
          failures = failures.concat(toDelete);
        }

        numPendingDeletes--;
      });

    return false;
  };

  return new Promise(function(resolve, reject) {
    var intervalId = setInterval(function() {
      if (queueLoop()) {
        clearInterval(intervalId);
        if (failures.length == 0) {
          resolve();
        } else {
          reject("Failed to delete documents " + failures);
        }
      }
    }, 0);
  });
};

/**
 * Delete everything under a given path. If the path represents
 * a document the document is deleted and then all descendants
 * are deleted.
 *
 * @return {Promise} a promise for the entire operation.
 */
FirestoreDelete.prototype._deletePath = function() {
  var self = this;
  var initialDelete;
  if (this.isDocumentPath) {
    var doc = { name: this.parent + "/" + this.path };
    initialDelete = firestore.deleteDocument(doc).catch(function(err) {
      logger.debug("deletePath:initialDelete:error", err);
      if (self.allDescendants) {
        // On a recursive delete, we are insensitive to
        // failures of the initial delete
        return Promise.resolve();
      }

      // For a shallow delete, this error is fatal.
      return utils.reject("Unable to delete " + clc.cyan(this.path));
    });
  } else {
    initialDelete = Promise.resolve();
  }

  return initialDelete.then(function() {
    return self._recursiveBatchDelete();
  });
};

/**
 * Delete an entire database by finding and deleting each collection.
 *
 * @return {Promise} a promise for all of the operations combined.
 */
FirestoreDelete.prototype.deleteDatabase = function() {
  var self = this;
  return firestore
    .listCollectionIds(this.project)
    .catch(function(err) {
      logger.debug("deleteDatabase:listCollectionIds:error", err);
      return utils.reject("Unable to list collection IDs");
    })
    .then(function(collectionIds) {
      var promises = [];
      logger.info("Deleting the following collections: " + clc.cyan(collectionIds.join(", ")));

      for (var i = 0; i < collectionIds.length; i++) {
        var collectionId = collectionIds[i];
        var deleteOp = new FirestoreDelete(self.project, collectionId, {
          recursive: true,
        });

        promises.push(deleteOp.execute());
      }

      return Promise.all(promises);
    });
};

/**
 * Check if a path has any children. Useful for determining
 * if deleting a path will affect more than one document.
 *
 * @return {Promise} a promise that returns true if the path has
 * children and false otherwise.
 */
FirestoreDelete.prototype.checkHasChildren = function() {
  return this._getDescendantBatch(true, 1).then(function(docs) {
    return docs.length > 0;
  });
};

/**
 * Run the delete operation.
 */
FirestoreDelete.prototype.execute = function() {
  var verifyRecurseSafe;
  if (this.isDocumentPath && !this.recursive && !this.shallow) {
    verifyRecurseSafe = this.checkHasChildren().then(function(multiple) {
      if (multiple) {
        return utils.reject("Document has children, must specify -r or --shallow.", { exit: 1 });
      }
    });
  } else {
    verifyRecurseSafe = Promise.resolve();
  }

  var self = this;
  return verifyRecurseSafe.then(function() {
    return self._deletePath();
  });
};

module.exports = FirestoreDelete;
```
Upvotes: 4 <issue_comment>username_4: As mentioned above, you need to write a good bit of code for this. For each document that is to be deleted you need to check if it has one or more collections. If it does, then you need to queue those up for deletion too. I wrote the code below to do this. It's not tested to be scalable to large data sets, which is fine for me as I'm using it to clean up after small scale integration tests. If you need something more scalable, feel free to take this as a starting point and play around with batching more.
```
class FirebaseDeleter {
constructor(database, collections) {
this._database = database;
this._pendingCollections = [];
}
run() {
return new Promise((resolve, reject) => {
this._callback = resolve;
this._database.getCollections().then(collections => {
this._pendingCollections = collections;
this._processNext();
});
});
}
_processNext() {
const collections = this._pendingCollections;
this._pendingCollections = [];
const promises = collections.map(collection => {
return this.deleteCollection(collection, 10000);
});
Promise.all(promises).then(() => {
if (this._pendingCollections.length == 0) {
this._callback();
} else {
process.nextTick(() => {
this._processNext();
});
}
});
}
deleteCollection(collectionRef, batchSize) {
var query = collectionRef;
return new Promise((resolve, reject) => {
this.deleteQueryBatch(query, batchSize, resolve, reject);
});
}
deleteQueryBatch(query, batchSize, resolve, reject) {
query
.get()
.then(snapshot => {
// When there are no documents left, we are done
if (snapshot.size == 0) {
return 0;
}
// Delete documents in a batch
var batch = this._database.batch();
const collectionPromises = [];
snapshot.docs.forEach(doc => {
collectionPromises.push(
doc.ref.getCollections().then(collections => {
collections.forEach(collection => {
this._pendingCollections.push(collection);
});
})
);
batch.delete(doc.ref);
});
// Wait until we know if all the documents have collections before deleting them.
return Promise.all(collectionPromises).then(() => {
return batch.commit().then(() => {
return snapshot.size;
});
});
})
.then(numDeleted => {
if (numDeleted === 0) {
resolve();
return;
}
// Recurse on the next process tick, to avoid
// exploding the stack.
process.nextTick(() => {
this.deleteQueryBatch(query, batchSize, resolve, reject);
});
})
.catch(reject);
}
}
```
Upvotes: 2 <issue_comment>username_5: According to the Firebase documentation:
<https://firebase.google.com/docs/firestore/solutions/delete-collections>
Deleting a collection with nested subcollections can be done easily and neatly with Node.js on the server side.
```
const client = require('firebase-tools');
await client.firestore
.delete(collectionPath, {
project: process.env.GCLOUD_PROJECT,
recursive: true,
yes: true
});
```
Upvotes: 6 [selected_answer]<issue_comment>username_6: ```
// You can add all the collection hierarchy to object
private collectionsHierarchy = {
groups: [
[
'groups',
'projects',
'files',
'assets',
'urls',
'members'
]
]
};
async deleteDocument(rootDocument: string) {
// if (!rootDocument.startsWith(`groups/${this.groupId()}`)) {
// rootDocument = `groups/${this.groupId()}/${rootDocument}`;
// }
const batchSize: number = 100;
let root = await this.db
.doc(rootDocument)
.get()
.toPromise();
if (!root.exists) {
return;
}
const segments = rootDocument.split('/');
const documentCollection = segments[segments.length - 2];
const allHierarchies = this.collectionsHierarchy[documentCollection];
for (let i = 0; i < allHierarchies.length; i = i + 1) {
const hierarchy = allHierarchies[i];
const collectionIndex = hierarchy.indexOf(documentCollection) + 1;
const nextCollections: [] = hierarchy.slice(collectionIndex);
const stack = [`${root.ref.path}/${nextCollections.shift()}`];
while (stack.length) {
const path = stack.pop();
const collectionRef = this.db.firestore.collection(path);
const query = collectionRef.orderBy('__name__').limit(batchSize);
let deletedIems = await this.deleteQueryBatch(query, batchSize);
const nextCollection = nextCollections.shift();
deletedIems = deletedIems.map(di => `${di}/${nextCollection}`);
stack.push(...deletedIems);
}
}
await root.ref.delete();
}
private async deleteQueryBatch(
query: firebase.firestore.Query,
batchSize: number
) {
let deletedItems: string[] = [];
let snapshot = await query.get();
if (snapshot.size === 0) {
return deletedItems;
}
const batch = this.db.firestore.batch();
snapshot.docs.forEach(doc => {
deletedItems.push(doc.ref.path);
batch.delete(doc.ref);
});
await batch.commit();
if (snapshot.size === 0) {
return deletedItems;
}
const result = await this.deleteQueryBatch(query, batchSize);
return [...deletedItems, ...result];
}
```
Upvotes: 0 <issue_comment>username_7: Solution using Node.js Admin SDK
```js
export const deleteDocument = async (doc: FirebaseFirestore.DocumentReference) => {
const collections = await doc.listCollections()
await Promise.all(collections.map(collection => deleteCollection(collection)))
await doc.delete()
}
export const deleteCollection = async (collection: FirebaseFirestore.CollectionReference) => {
const query = collection.limit(100)
while (true) {
const snap = await query.get()
if (snap.empty) {
return
}
await Promise.all(snap.docs.map(doc => deleteDocument(doc.ref)))
}
}
```
Upvotes: 2 <issue_comment>username_8: You can write a handler which will recursive delete all nested descendants when triggers onDelete Firestore event.
Example of handler:
```
const deleteDocumentWithDescendants = async (documentSnap: FirebaseFirestore.QueryDocumentSnapshot) => {
return documentSnap.ref.listCollections().then((subCollections) => {
subCollections.forEach((subCollection) => {
return subCollection.get().then((snap) => {
snap.forEach((doc) => {
doc.ref.delete();
deleteDocumentWithDescendants(doc);
});
});
});
});
};
// On any document delete
export const onDocumentDelete = async (documentSnap: FirebaseFirestore.QueryDocumentSnapshot) => {
await deleteDocumentWithDescendants(documentSnap);
};
```
Tie it up with firestore event:
```
exports.onDeleteDocument = functions.firestore.document('{collectionId}/{docId}')
.onDelete(onDocumentDelete);
```
Upvotes: 1 <issue_comment>username_9: Another solution using Node.js Admin SDK with Batch.
```js
const traverseDocumentRecursively = async (
docRef: FirebaseFirestore.DocumentReference,
accumulatedRefs: FirebaseFirestore.DocumentReference[],
) => {
const collections = await docRef.listCollections();
if (collections.length > 0) {
for (const collection of collections) {
const snapshot = await collection.get();
for (const doc of snapshot.docs) {
accumulatedRefs.push(doc.ref);
await traverseDocumentRecursively(doc.ref, accumulatedRefs);
}
}
}
};
```
```js
import { chunk } from 'lodash';
const doc = admin.firestore().collection('users').doc('001');
const accumulatedRefs: FirebaseFirestore.DocumentReference[] = [];
await traverseDocumentRecursively(doc, accumulatedRefs);
await Promise.all(
// Each transaction or batch of writes can write to a maximum of 500 documents
chunk(accumulatedRefs, 500).map((chunkedRefs) => {
const batch = admin.firestore().batch();
for (const ref of chunkedRefs) {
batch.delete(ref);
}
return batch.commit();
}),
);
```
Upvotes: 0 <issue_comment>username_10: Not sure if this is helpful for anyone here, but I am frequently facing the error "Fatal error deleting docs " when using **firebase-tools.firestore.delete** method (firebase-tools version 9.22.0).
I am currently handling these deletion failures using the returned error message, in order to avoid rewriting the code cited in username_5's answer. It uses **admin.firestore** to effectively delete the failed docs.
It's a poor solution since it relies on the error message, but at least it doesn't force us to copy the whole FirestoreDelete code to modify just a few lines of it:
```ts
firebase_tools.firestore
.delete(path, {
project: JSON.parse(process.env.FIREBASE_CONFIG!).projectId,
recursive: true,
yes: true,
token: getToken(),
})
.catch((err: Error) => {
if (err.name == "FirebaseError") {
// If recursive delete fails to delete some of the documents,
// parse the failures from the error message and delete it manually
const failedDeletingDocs = err.message.match(
/.*Fatal error deleting docs ([^\.]+)/
);
if (failedDeletingDocs) {
const docs = failedDeletingDocs[1].split(", ");
const docRefs = docs.map((doc) =>
firestore.doc(doc.slice(doc.search(/\(default\)\/documents/) + 19))
);
firestore
.runTransaction(async (t) => {
docRefs.forEach((doc) => t.delete(doc));
return docs;
})
.then((docs) =>
console.log(
"Succesfully deleted docs after failing: " + docs.join(", ")
)
)
.catch((err) => console.error(err));
}
}
});
```
Upvotes: 0 <issue_comment>username_11: For those who don't want or can't use cloud functions, I found a `recursiveDelete` function in the admin sdk:
<https://googleapis.dev/nodejs/firestore/latest/Firestore.html#recursiveDelete>
```js
// Recursively delete a reference and log the references of failures.
const bulkWriter = firestore.bulkWriter();
bulkWriter
.onWriteError((error) => {
if (error.failedAttempts < MAX_RETRY_ATTEMPTS) {
return true;
} else {
console.log('Failed write at document: ', error.documentRef.path);
return false;
}
});
await firestore.recursiveDelete(docRef, bulkWriter);
```
Upvotes: 4 <issue_comment>username_12: There is now a simple way delete a document and all of its subcollections using NodeJS.
This was made available in nodejs-firestore version v4.11.0.
From the docs:
`recursiveDelete()`
Recursively deletes all documents and subcollections at and under the specified level.
```
import * as admin from 'firebase-admin'
const ref = admin.firestore().doc('my_document')
admin.firestore().recursiveDelete(ref)
```
Upvotes: 4 <issue_comment>username_13: If you are looking to delete user data, a solution to consider in 2022 is the [Delete User Data](https://extensions.dev/extensions/firebase/delete-user-data) Firebase Extension.
Once this is active, you can simply delete the user from Firebase Auth to trigger the recursive deletion of the user documents:
```js
import admin from "firebase-admin";
admin.auth().deleteUser(userId);
```
Upvotes: 0
|
2018/03/14
| 340 | 1,236 |
<issue_start>username_0: I am translating a very large CTE Teradata query and got stuck at this following portion that is its own subquery, which is being cross joined into a much large subquery.
How can I translate this query into Bigquery?
```
(select row_number() over (order by tablename) subsequent_month from dbc.tables qualify row_number() over (order by tablename) <= 24)
```
Thoughts guys?<issue_comment>username_1: You need a subquery for the `qualify`:
```
from (select row_number() over (order by tablename) as subsequent_month
from dbc.tables
) t
where subsequent_month < 24;
```
In Teradata, `qualify` is a "where" clause that works on window functions. It is analogous to `having` which works on aggregation functions.
Upvotes: 0 <issue_comment>username_2: Below is for BigQuery Standard SQL
```sql
#standardSQL
SELECT subsequent_month FROM (
SELECT ROW_NUMBER() OVER (ORDER BY tablename) subsequent_month
FROM dbc.tables
) WHERE subsequent_month <= 24
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: This simply returns 24 rows with consecutive numbers from 1 to 24.
So translate it to a similar query against any table in BigQuery, or use an existing *numbers* table.
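For illustration, BigQuery can also generate those 24 numbers directly, with no physical table at all:

```sql
#standardSQL
SELECT subsequent_month
FROM UNNEST(GENERATE_ARRAY(1, 24)) AS subsequent_month
```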
Upvotes: 0
|
2018/03/14
| 442 | 1,672 |
<issue_start>username_0: I have a requirement that states I print out a list of users to the screen. These users are all hyperlinks. When I click on one, I don't know how to determine which one got clicked and my request has nothing in it.
Here is the particular link in the jsp:
```
<a href="/sponsor/manageuser.htm">NAME</a> |
```
How can I, with javascript, which I have little experience with, determine which NAME is clicked. There is a whole list and I have to get this one that was clicked to my controller class. When a name is clicked, it does go to my controller method, but the request has nothing in it. This is a POST method. Please help and thanks in advance. I have to give a demo on this tomorrow.
Using Spring MVC. What I'm trying to do is get this value NAME into the implicit JSP request object so I can grab it when the form is submitted in my controller class.<issue_comment>username_1: In Javascript you can add an event listener and on its handler you can access that event target and get the text content of it (in your case NAME).
So something like this will get you started:
```
<a id="myLink" href="/sponsor/manageuser.htm">LINK NAME</a> |
window.onload = function() {
document.getElementById("myLink").addEventListener("click", (event) => {
alert(event.target.textContent);
});
};
```
plunker link: <https://plnkr.co/edit/dt71QKRNUf0cSA8XKIur?p=preview>
Upvotes: 1 <issue_comment>username_2: You can pass this on an onClick handler to get the target and then you can get any data you want about the target such as the innerText attribute
```
function send(target) {
console.log(target.innerText);
}
<a href="#" onclick="send(this)">NAME1</a> | <a href="#" onclick="send(this)">NAME2</a> | <a href="#" onclick="send(this)">NAME3</a> |
```
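To actually get the clicked name back to the Spring MVC controller on a POST, one option is to copy it into a hidden form field before submitting. A sketch, not from the original answer; `userForm` and `selectedUser` are illustrative names, and the controller would read the value with `request.getParameter("selectedUser")`:

```html
<form id="userForm" method="post" action="/sponsor/manageuser.htm">
    <input type="hidden" name="selectedUser" id="selectedUser"/>
</form>

<a href="#" onclick="return submitUser(this)">NAME1</a> |

<script>
function submitUser(link) {
    // copy the clicked name into the hidden field, then POST the form
    document.getElementById("selectedUser").value = link.innerText;
    document.getElementById("userForm").submit();
    return false; // stop the default navigation
}
</script>
```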
Upvotes: 0
|
2018/03/14
| 1,198 | 4,068 |
<issue_start>username_0: I need to replace 3 heap allocations for 3 instances of std::vector, with
only one contiguous heap allocation and then share it between those 3 vectors. These vector sizes are not going to be changed, so I don't need to worry about allocating new contiguous storage in case of pushing or inserting elements into them. My experimental result shows I get up to 2X speed-up for different sizes when I replace these 3 vectors of size *n* with one vector of size *3n*.
However, I don't know exactly how to achieve the job of making smaller vectors out of the big one, without doing any extra heap allocation.
```
std::array<std::vector<double>, 3>
chunck_vector(size_t size)
{
    std::vector<double>* underlying_vec = new std::vector<double>(3*size, 1.0);
    // how to avoid extra heap allocations in constructor of small vectors
    std::vector<double> vec0(underlying_vec->begin() , underlying_vec->begin() + size);
    std::vector<double> vec1(underlying_vec->begin() + size , underlying_vec->begin() + 2*size);
    std::vector<double> vec2(underlying_vec->begin() + 2*size, underlying_vec->end());
    return {vec0, vec1, vec2};
}
int main(int argc, char const *argv[])
{
    int size = 1000;
    auto&& chunked_vecs = chunck_vector(size);
    // passing each chunk to different functions
    // each chunk should be responsible for managing its resources
    foo0(std::get<0>(chunked_vecs));
    foo1(std::get<1>(chunked_vecs));
    foo2(std::get<2>(chunked_vecs));
    return 0;
}
```
I tried writing my own vector class, which its constructor accepts two iterators specifying begin and end of the portion of the *underlying\_vec* storage belongs to this vector, but it doesn't sound a clean solution when it comes to the job of freeing resources when the *underlying* vector is not accessible any more and solving the memory leakage problem.
Apparently using a customized allocator which is shared between these three vectors and allocates a contiguous memory once and assign it to the corresponding vectors seems a better solution, however since I've never written one, any hint or suggestion to help me start coding will be appreciated.<issue_comment>username_1: This is the kind of thing [gsl::span](https://github.com/isocpp/CppCoreGuidelines/blob/master/docs/gsl-intro.md#span-parameters-how-should-i-choose-between-span-and-traditional-ptr-length-parameters) is designed for.
You can find an implementation [HERE](https://github.com/Microsoft/GSL/blob/master/include/gsl/span). It is proposed for inclusion in the `C++` standard libraries.
You can use it like this:
```
#include <iostream>
#include <vector>
#include <gsl/gsl>

void double_data(gsl::span<int> sp)
{
    for(auto& i: sp)
        i *= 2;
}

void tripple_data(gsl::span<int> sp)
{
    for(auto& i: sp)
        i *= 3;
}

int main()
{
    std::vector<int> v = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};

    auto sp = gsl::make_span(v);   // the whole vector
    auto sp1 = sp.subspan(0, 5);   // first 5 elements
    auto sp2 = sp.subspan(5, 5);   // last five elements

    double_data(sp1);              // process it like you would a container
    tripple_data(sp2);

    for(auto i: v)
        std::cout << i << ' ';
    std::cout << '\n';
}
```
**Output:**
```
2 4 6 8 10 18 21 24 27 30
```
Upvotes: 2 <issue_comment>username_2: You can use `std::shared_ptr`, whose primary purpose is management of shared resources. First of all, create your buffer:
```
std::shared_ptr<double> underlying(new double[3 * size], std::default_delete<double[]>());
```
Here you have to use `default_delete<double[]>` as an explicit deleter, so the correct `operator delete[]` is used for deallocation. Since C++17, `std::shared_ptr<double[]>` understands arrays, so the explicit deleter is no longer needed.
Then define your smaller containers, using the *aliasing constructor*:
```
std::shared_ptr<double> vec0(underlying, underlying.get());
std::shared_ptr<double> vec1(underlying, underlying.get() + size);
std::shared_ptr<double> vec2(underlying, underlying.get() + 2 * size);
```
Here you can use your "vectors" until the last of them goes out of scope, and when that happens, the buffer is deallocated. However, these are not vectors — e.g. they don't store their size, only the pointer to first element.
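If the views need to carry their size, here is a small sketch (not part of the original answer; `Chunk` is a made-up helper) that keeps one `shared_ptr` owner and hands out pointer-plus-length views:

```cpp
#include <cstddef>
#include <memory>

struct Chunk {                        // non-owning view: pointer + length
    double* data;
    std::size_t size;
};

struct SharedChunks {
    std::shared_ptr<double[]> owner;  // C++17 array shared_ptr, deletes with delete[]
    Chunk parts[3];
};

SharedChunks make_chunks(std::size_t n) {
    SharedChunks s;
    s.owner = std::shared_ptr<double[]>(new double[3 * n]);
    for (std::size_t i = 0; i < 3; ++i)
        s.parts[i] = Chunk{ s.owner.get() + i * n, n };
    return s;  // one heap allocation, three sized views into it
}
```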
Upvotes: 1
|
2018/03/14
| 646 | 1,866 |
<issue_start>username_0: I have a react application I created with `create-react-app` and I have added storybook to it. When I run `yarn run storybook` it does not reload when I change files.
My application is laid out like
```
src
->components
--->aComponent.js
->stories
--->index.js
->index.js
```
Here is my package.json
```
{
"name": "my-app",
"version": "0.1.0",
"private": true,
"dependencies": {
"immutable": "^3.8.2",
"react": "^16.2.0",
"react-dom": "^16.2.0",
"react-scripts": "1.1.1"
},
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test": "react-scripts test --env=jsdom",
"eject": "react-scripts eject",
"storybook": "start-storybook -p 9009 -s public",
"build-storybook": "build-storybook -s public"
},
"devDependencies": {
"@storybook/addon-actions": "^3.3.14",
"@storybook/addon-links": "^3.3.14",
"@storybook/addons": "^3.3.14",
"@storybook/react": "^3.3.14",
"babel-core": "^6.26.0",
"flow-bin": "^0.67.1"
}
}
```
I have no custom web pack config. What do I need to do to get hot module reloading working?<issue_comment>username_1: Had the same issue, should pass `module` as a 2nd argument for `storiesOf`
```js
// V won't hot reload without `module` as the 2nd argument
storiesOf('Welcome', module).add('to Storybook', () => <Welcome />);
```
Upvotes: 0 <issue_comment>username_2: I had this problem too,
my environment was:
```
ubuntu 18.04
"react": "^16.10.1",
"react-dom": "^16.10.1",
"react-scripts": "3.1.2",
"@storybook/react": "^5.2.1"
```
I solved this problem with running as the sudo user like this:
```
sudo yarn run storybook
```
Upvotes: 0 <issue_comment>username_3: You must increase your watch count
`echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p`
Upvotes: 2
|
2018/03/14
| 386 | 1,225 |
<issue_start>username_0: I am using the following Javscript to change table data element ID on mouse over so that I can apply different CSS on the mouse action. It is working fine, but only on the first row of my table. Does anyone know how I can amend the script so that it is applied to each individual row in my table?
```
$('#tdcns').mouseover(function(){
$(this).attr('id', 'hovertdcns');
});
$('#tdcns').mouseout(function(){
$(this).attr('id', 'tdcns');
});
```
Any guidance is much appreciated!
|
2018/03/14
| 396 | 1,450 |
<issue_start>username_0: When Webpack attempts to fetch the update JSON file, it fails with the console error message:
```
[HMR] Update failed: Error: Manifest request to https://subdomain.localhost23dae8e1865781c26fcd.hot-update.json timed out.
```
Note the omission of a slash between the TLD and the path…
**Devserver config:**
```
{
public: `subdomain.localhost`,
publicPath: 'https://subdomain.localhost/',
port: 9000,
https: false,
contentBase: path.join(__dirname),
watchContentBase: true,
historyApiFallback: true,
compress: true,
hot: true
};
```
*What configuration is required to ensure update manifest will load from the correct path?*
* No other parts of our dev/prod environments are failing… which leads
me to believe the configuration error exists in a niche.
* <https://subdomain.localhost/webpack-dev-server> links to valid assets
at the correct URL's<issue_comment>username_1: I believe `publicPath` should be just `/` instead of the full path.
Upvotes: 2 <issue_comment>username_2: Surprisingly, the hot module replacement plugin will actually look to the `config.output.publicPath` property *rather than* the `config.devServer.[static].publicPath` value.
`devServer.static.publicPath` (or WDS < 4, `devServer.publicPath`) should be the *same* as `output.publicPath`.
Correcting the output property to use the full path `https://subdomain.localhost/` corrects this problem.
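An illustrative config (option names per webpack-dev-server 4; adjust for older versions as noted above):

```js
// webpack.config.js
module.exports = {
  output: {
    // HMR builds the hot-update.json URL from this value,
    // so the trailing slash matters
    publicPath: 'https://subdomain.localhost/',
  },
  devServer: {
    // keep the dev server in agreement with output.publicPath
    static: { publicPath: 'https://subdomain.localhost/' },
  },
};
```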
Upvotes: 3 [selected_answer]
|
2018/03/14
| 761 | 2,509 |
<issue_start>username_0: I'm having trouble finding the error in my code. I'm working with the class ColHe, which is a collection of objects of type Header, and the main goal is to read a file's header and obtain a list with its elements.
This is my Header class:
```
class Header:
def __init__(self, col):
self._day = 'Day:'
self._date = col[1]
self._time = 'Time:'
self._hours = col[3]
self._company = 'Company:'
self._name = 'NSHF'
def __str__(self):
return str(self._day + self._date + ", " + self._time + self._hours
+ ", " + self._company + self._name)
```
Along with its methods of getters and setters. And this is my `ColHe` class:
```
from collections import UserList
from copy import deepcopy
import constants
from Header import Header
class ColHe(UserList):
def __init__(self, file_name):
super().__init__()
in_file=open(file_name)
for i in range(constants.HEADER):
self.append(Header((in_file.readline().strip())))
in_file.close()
def getData(self):
return deepcopy(self)
def __str__(self):
st = ""
for he in self:
st+=str(he)
return st
```
Where `constants.HEADER` equals 6.
I ran the program with:
```
a=ColHe("file.txt")
print(a.getData())
```
And got this as an output:
```
Day:a, Time::, Company:NSHFDay:6, Time:1, Company:NSHFDay:i, Time:e, Company:NSHFDay:4, Time:5, Company:NSHFDay:o, Time:p, Company:NSHFDay:S, Time:F, Company:NSHF
```
However, the output I'm looking for looks more like this:
```
["Day:", self._date, "Time:", self._time, "Company:", "NSHF"]
```
An example of the content of a file would be:
```
Day:
06:11:2017
Time:
14:55
Company:
NSHF
```<issue_comment>username_1: When you print something in python, you don't print the *name* you have it stored under, you print its *value*. IE:
```
print(self._date) # print whatever is stored in _date
print("self._date") # print the static string 'self._date'
```
There's no *reasonable* way to get to your desired output from your demonstrated input. You're misunderstanding core concepts about what values are, where they are stored, and what happens when you reference them.
Upvotes: 1 <issue_comment>username_2: Use str.format (or f-strings if your code is python 3.6+ only)
```
def __str__(self):
return 'Day:\n{self._day}\nTime:\n{self._time}\nCompany:\n{self._company}'.format(self=self)
```
Upvotes: 0
|
2018/03/14
| 719 | 2,712 |
<issue_start>username_0: I'm using Sublime Text 3.0. When I double click on a line to highlight and copy the contents of a line, it is also copying the first space in the next line as well. I included a screenshot of it. In the screenshot I wanted to copy the second line, but it automatically is taking the first space in the third line.
I can of course, drag my mouse over the text to highlight it, and then copy, but that's not what I want to do. I simply want to select a single line with a double click.
How do I fix this? Is there a setting to disable it from copying that first space in the next line?
[](https://i.stack.imgur.com/X8Wto.png)<issue_comment>username_1: Just move the cursor to the line you want to copy and Ctrl-C (Cmd-C on Mac). Do not select the line, just stay on it. Then paste it anywhere you want to paste, and it will insert a new line before the one your cursor is on, with the previously copied text.
Is it what you want or do I miss something?
Upvotes: 0 <issue_comment>username_2: What you're actually seeing is not it selecting the space at the start of the next line, but the newline at the end of the line that you selected.
That is to say, when you double click on a line to select it, use `Selection > Expand Selection to Line` or even just press `Ctrl+C` to copy the current line when there is no selection (assuming `copy_with_empty_selection` is turned on), the contents of the line includes the character that terminates it as well, since that's arguably part of the line.
There's not a setting that controls this behaviour as far as I'm aware.
Probably the most expedient solution would be to create a file named `copy_line.sublime-macro` in your `User` package (use `Preferences > Browse Packages` if you're not sure where that is) with the following contents:
```json
[
{ "command": "expand_selection", "args": {"to": "line"} },
{ "command": "split_selection_into_lines" },
{ "command": "copy" }
]
```
Then add a key binding to execute it when you press `Ctrl+C` but don't have anything selected:
```json
{
"keys": ["ctrl+c"],
"command": "run_macro_file", "args": {"file": "res://Packages/User/copy_line.sublime-macro"},
"context": [
{ "key": "selection_empty", "operator": "equal", "operand": "true", "match_all": true}
]
},
```
Now when you press the copy key with nothing selected, the whole line is selected (which includes the newline), split into lines (which don't have the newline) and then copied to the clipboard.
This works even if you have multiple cursors, although they all have to have no selection for it to trigger.
Upvotes: 2
|
2018/03/14
| 240 | 973 |
<issue_start>username_0: The link to the page in question is: <https://rescueform.org>
The overflow for the main content area is set to scroll, and is scrolling properly, however the vertical scroll bar is not visible in Google Chrome. The scroll bar does display properly in IE, Firefox, etc.
Also, I'm new to the site and I'm not sure if it's required to post the code in this post. I'm hoping that you will view the source code using your developer tools so that you can see the differences between the browsers. My main CSS file is named styles.css.<issue_comment>username_1: in "indexStyles.css" you have
```
::-webkit-scrollbar {
display: none;
}
```
remove it.
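If you still want scrollbars hidden somewhere specific, scope the rule instead of applying it globally (`.main-content` is just an example selector):

```css
/* hide the scrollbar only inside this element */
.main-content::-webkit-scrollbar {
    display: none;
}
```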
Upvotes: 3 [selected_answer]<issue_comment>username_2: Unfortunately, Google Chrome always displays a (disabled) scroll bar when `overflow: scroll` is enabled, so I have to use another overflow setting like `overflow: auto`. VERY annoying, Chrome. Other browsers do live in this century.
Upvotes: 0
|
2018/03/14
| 846 | 2,015 |
<issue_start>username_0: I am trying to write the current date to a file.
I want to write it to the 2nd row 2nd column (columns are seperated by ';')
**Using Shell**
What I tried so far, Not Working:
```
paste <(echo "$(date)") <(awk -F ";" '{print $2}' file)
```
Is there any smart way to do so?
example 'file':
```
John;Wed Mar 14 19:41:38 CET 2018;18
Sandra;Mon Mar 14 19:41:38 CET 2018;21
David;Sun Mar 14 19:41:38 CET 2018;25
```
example after editing 'file':
```
John;Wed Mar 14 19:41:38 CET 2018;18
Sandra;The Date When Editing The File With Script;21
David;Sun Mar 14 19:41:38 CET 2018;25
```<issue_comment>username_1: To replace the second row second column with the current date:
```
$ awk -F";" -v d="$(date)" 'NR==2{$2=d} 1' OFS=";" file
John;Wed Mar 14 19:41:38 CET 2018;18
Sandra;Wed Mar 14 13:28:59 PDT 2018;21
David;Sun Mar 14 19:41:38 CET 2018;25
```
### How it works
* `-F";"`
This tells awk to use `;` as the field separator.
* `-v d="$(date)"`
This defines an awk variable `d` that contains the current date.
* `NR==2{$2=d}`
On the second row, `NR==2`, this tells awk to replace the second column, `$2`, with that value of variable `d`.
* `1`
This is awk's shorthand for print-the-current-line.
* `OFS=";"`
This tells awk to use `;` as the field separator on output.
### Modifying the file in-place
To modify the file in-place using a modern GNU awk:
```
gawk -i inplace -F";" -v d="$(date)" 'NR==2{$2=d} 1' OFS=";" file
```
To modify the file using macOS or BSD or older GNU awk:
```
awk -F";" -v d="$(date)" 'NR==2{$2=d} 1' OFS=";" file >tmp && mv tmp file
```
### Matters of style
The following three lines are all equivalent. Which one you use is a matter of style:
```
awk -F";" -v d="$(date)" 'NR==2{$2=d} 1' OFS=";" file
awk -F";" -v d="$(date)" -v OFS=";" 'NR==2{$2=d} 1' file
awk -F";" 'NR==2{$2=d} 1' d="$(date)" OFS=";" file
```
Upvotes: 2 <issue_comment>username_2: Can you try this one:
```
sed "2s/^\([^;]*\);[^;]*/\1;$(date)/" file
```

On line 2 only, this captures the first field, matches the old second field, and replaces it with the captured first field plus the output of `date` as the new second field.
Upvotes: 0
|
2018/03/14
| 1,219 | 3,285 |
<issue_start>username_0: I am trying to prepare two input files based on information in a third file. File 1 is for sample1 and File 2 is for sample2. Both these files have lines with tab delimited columns. The first column contains unique identifier and the second column contains information.
File 1
```
>ENT01 xxxxxxxxxxxxxx
>ENT02 xyxyxyxyxyxy
>ENT03 ththththththt
```
..so on. Similarly, File 2 contains
```
>ENG012 ggggggggggggg
>ENG098 ksksksksksks
>ENG234 wewewewewew
```
I have a File 3 that contains two columns each corresponding to the identifier from File 1 and File 2
```
>ENT01 >ENG78
>ENT02 >ENG098
>ENT02 >ENG012
>ENT02 >ENG234
>ENT03 >ENG012
```
and so on. I want to prepare input files for File 1 and File 2 by following the order in file 3. If an entry is repeated in file 3 (ex ENT02) I want to repeat the information for that entry. The expected output is
For File 1:
```
>ENT01 xxxxxxxxxx
>ENT02 xyxyxyxyxyxy
>ENT02 xyxyxyxyxyx
>ENT02 xyxyxyxyxyx
>ENT03 ththththththth
```
And for file 2
```
>ENG78 some info
>ENG098 ksksksksks
>ENG012 gggggggg
>ENG234 wewewewewew
>ENG012 gggggggg
```
All the entries in file 1 and file 2 are unique, but not in file 3. Also, there are some entries in file 3, in either column, that are not present in file 1 or file 2. The current logic I am using is to find an intersection of the identifiers from column 1 in both files 1 & 2 with the respective columns in file 3, store this as a list, and use this list to compare with file 1 and file 2 separately to generate output for both. I am working with the following lines:
`awk 'FNR==NR{a[$1]=$0;next};{print a[$1]}' file1 intersectlist`
`grep -v -x -f idsnotfoundinfile1 file3`
I am not able to get the right output, as I think at some point it is getting sorted and only unique values are printed out. Can someone please help me work this out?
|
2018/03/14
| 914 | 2,503 |
<issue_start>username_0: I have in Javascript my array and ajax call:
```
righe = [];
righe.push({
ragione_sociale: $('#ragione_sociale').val(),
via: $('#via').val(),
cap: $('#cap').val(),
localita: $('#localita').val(),
provincia: $('#provincia').val(),
telefono: $('#telefono').val(),
fax: $('#fax').val(),
settore: $('#settore').val(),
attivita: $('#attivita').val(),
note: $('#note').val()
});
$.ajax({
type: "POST",
url: "/ajaxcall/inserisciAzienda.php",
data: {righe : righe},
success: function(data){
console.log("okkk "+data);
}......
```
and this is my inserisciAzienda.php:
```
<?php
$dati = array($_POST["righe"]);
echo "result: " . $dati[0];
?>
```
but I have the following error:
```
okkk
Notice: Array to string conversion in /Applications/XAMPP/xamppfiles/htdocs/app/badges/ajaxcall/inserisciAzienda.php on line 11
result: Array
```
I'm not able to get the array on the php file.
|
2018/03/14
| 527 | 1,664 |
<issue_start>username_0: Running this:
```
$WMI = Get-WMIObject -Class Win32_DiskDrive
ForEach ($Drive in $WMI){
$Drive.DeviceID + ": " + $Drive.Status
}
```
Returns results like:
```
\\.\PHYSICALDRIVE1: OK
\\.\PHYSICALDRIVE0: OK
\\.\PHYSICALDRIVE2: OK
\\.\PHYSICALDRIVE3: OK
```
Is there an easy way to restrict results to `\\.\PHYSICALDRIVE0`?
Basically, I am trying to return a simple "OK" for Physical Drive 0's health.
I tried the following:
```
$WMI = Get-WMIObject -Class Win32_DiskDrive
ForEach ($Drive in $WMI | Where $Drive.DeviceID -Contains "\\.\PHYSICALDRIVE0"){
$Drive.Status
}
```
But nothing is outputted (at all). How would I restrict the status output to just the Physical Drive 0 from DeviceID?<issue_comment>username_1: I would do it this way:
```
$WMI = Get-WMIObject -Class Win32_DiskDrive
ForEach ($Drive in $WMI) {
if ($Drive.DeviceID -contains "\\.\PHYSICALDRIVE0") {
$Drive.Status
}
}
```
You look thru `$wmi` and check if each value contains this specific value.
If you have a match, it will show the Disk status.
Upvotes: 2 [selected_answer]<issue_comment>username_2: There are many ways, depending on how specific you wish to be:
```
Get-WMIObject -Class Win32_DiskDrive | Where-Object DeviceID -eq '\\.\PHYSICALDRIVE0'
Get-WMIObject -Class Win32_DiskDrive | Where-Object DeviceID -match 'E0$'
Get-WMIObject -Class Win32_DiskDrive | Where-Object Index -eq 0
```
If you wish these to run on older versions of PowerShell, then use the older syntax by placing each "test" into a scriptblock:
```
Get-WMIObject -Class Win32_DiskDrive | Where-Object { $_.DeviceID -eq '\\.\PHYSICALDRIVE0' }
```
Upvotes: 0
|
2018/03/14
| 2,438 | 9,520 |
<issue_start>username_0: How can I remove elements from a `List` property of every object in a collection and return the same collection?
```
class Student {
String name;
List<String> friends;
// TODO constructor, getters and setters
}
Student chris = new Student("chris",
new ArrayList<>(Arrays.asList("sean", "mike", "mary", "mark")));
Student tim = new Student("tim",
new ArrayList<>(Arrays.asList("mike", "john", "steve", "mary")));
List<Student> students = new ArrayList<>(Arrays.asList(chris, tim));
List<String> badKids = new ArrayList<>(Arrays.asList("mike", "sean"));
```
I want to return `students` with each `friends` list stripped of the `badKids`, using streams. Is this possible? The return object would be (using JSON for clarity):
```
[
{ name: "chris", friends: ["mary", "mark"] },
{ name: "tim", friends: ["john", "steve", "mary"]
]
```
I've used streams on collections at a beginner level, but the nesting is what confuses me:
```
List<Student> studentsWithGoodFriends = students.stream()
.map(student -> student.getFriends().flatMap(student -> student.stream())
.filter(friend-> !badKids.contains(friend))
.map(Student student -> new Student...
```
Then I'm lost. I am familiar with returning a list filtered by an object property, but not with filtering a list property.<issue_comment>username_1: You need to adjust your code to at least compile first. Things like writing `class` with a capital C will break your code even if your logic works. I tried not to change much, just enough to get it to work, so you can see where you went wrong.
I also included several examples with different techniques so you can see options involving streams as you requested, but also method references and a more traditional approach. I hope it helps.
```
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;
class Student {
String name;
List< String > friends;
public Student( String name, List< String > friends ) {
this.name = name;
this.friends = friends;
}
public void removeFriendsIf( Predicate< String > test ) { // for example 5
getFriends( ).removeIf( test );
}
public List< String > getFriends( ) { // for example 4 -> students.forEach( e -> e.getFriends().removeIf( badKids::contains ) );
return friends;
}
public void removeFriends( List< String > badFriends ) { // no need for example here
getFriends( ).removeAll( badFriends );
}
}
class Initech {
public static void main( String[] reports ) {
Student chris = new Student( "chris", Arrays.asList( "sean", "mike", "mary", "mark" ) );
Student tim = new Student( "tim", Arrays.asList( "mike", "john", "steve", "mary" ) );
Student other = new Student( "tim", Arrays.asList( "john", "steve", "mary" ) );
List< Student > students = new ArrayList<>( );
students.add( chris );
students.add( tim );
students.add( other );
List< String > badKids = Arrays.asList( "mike", "sean" );
// Example 1 ----
// All students that do not have any of the bad friends
List< Student > studentsWithOnlyGoodFriends = students.stream( )
.filter( e -> e.friends.stream( )
.noneMatch( badKids::contains ) )
.collect( Collectors.toList( ) );
studentsWithOnlyGoodFriends.stream( )
.map( e -> e.friends )
.forEach( System.out::println );
System.out.println( );
// Example 2 ----
// All students but removing the bad apples
List< Student > studentsLostBadFriends = students.stream( )
.peek( e -> e.friends = e.friends.stream( )
.filter( f -> !badKids.contains( f ) )
.collect( Collectors.toList( ) ) )
.collect( Collectors.toList( ) );
studentsLostBadFriends.stream( )
.map( e -> e.friends )
.forEach( System.out::println );
System.out.println( );
//Example 3 ----
// The same as 2, without streams and with ArrayLists
chris = new Student( "chris", new ArrayList<>( Arrays.asList( "sean", "mike", "mary", "mark" ) ) );
tim = new Student( "tim", new ArrayList<>( Arrays.asList( "mike", "john", "steve", "mary" ) ) );
other = new Student( "tim", new ArrayList<>( Arrays.asList( "john", "steve", "mary" ) ) );
students.add( chris );
students.add( tim );
students.add( other );
students.forEach( e -> e.friends.removeIf( badKids::contains ) );
students.stream( )
.map( e -> e.friends )
.forEach( System.out::println );
//Example 4 ----
// The same as 3, without streams and with ArrayLists and the getter methods
chris = new Student( "chris", new ArrayList<>( Arrays.asList( "sean", "mike", "mary", "mark" ) ) );
tim = new Student( "tim", new ArrayList<>( Arrays.asList( "mike", "john", "steve", "mary" ) ) );
other = new Student( "tim", new ArrayList<>( Arrays.asList( "john", "steve", "mary" ) ) );
students.add( chris );
students.add( tim );
students.add( other );
students.forEach( e -> e.getFriends( )
.removeIf( badKids::contains ) );
students.stream( )
.map( e -> e.friends )
.forEach( System.out::println );
//Example 5 ----
// The same as 4, without streams and with ArrayLists and the getter methods
chris = new Student( "chris", new ArrayList<>( Arrays.asList( "sean", "mike", "mary", "mark" ) ) );
tim = new Student( "tim", new ArrayList<>( Arrays.asList( "mike", "john", "steve", "mary" ) ) );
other = new Student( "tim", new ArrayList<>( Arrays.asList( "john", "steve", "mary" ) ) );
students.add( chris );
students.add( tim );
students.add( other );
students.forEach( e -> e.removeFriendsIf( badKids::contains ) );
students.stream( )
.map( e -> e.friends )
.forEach( System.out::println );
}
}
```
Upvotes: 0 <issue_comment>username_2: You don't need streams for this task, especially if you want to mutate the list of friends in-place:
```
students.forEach(s -> s.getFriends().removeAll(badKids));
```
And that's it. This uses the [`Collection.removeAll`](https://docs.oracle.com/javase/9/docs/api/java/util/Collection.html#removeAll-java.util.Collection-) method.
Important: for this to work, the list of friends returned by the `Student.getFriends()` method must be mutable, such as `ArrayList`.
---
Despite the conciseness of the above solution, it breaks encapsulation, because we are mutating the list of friends of every student from outside the `Student` class. To fix this, you'd need to add a method to the `Student` class:
```
void removeBadKids(List<String> badKids) {
friends.removeAll(badKids);
}
```
Thus, the solution would now become:
```
students.forEach(s -> s.removeBadKids(badKids));
```
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can do it, for example, this way:
```
package demo;
import java.util.List;
import java.util.function.Predicate;
import static java.util.Arrays.asList;
import static java.util.stream.Collectors.*;
public class Example {
public static void main(String[] args) {
List<Student> students = asList(getChris(), getTim());
List<String> badFriends = asList("mike", "sean");
List<Student> cleanedStudentList = students.stream()
.map(student -> cleanBadFriendOnStudent(student, badFriends))
.collect(toList());
cleanedStudentList.forEach(System.out::println);
}
private static Student cleanBadFriendOnStudent(Student student, List<String> badKids) {
List<String> cleanedFriendList = student.friends.stream()
.filter(not(badKids::contains))
.collect(toList());
return new Student(student.name, cleanedFriendList);
}
private static <T> Predicate<T> not(Predicate<T> predicate) {
return predicate.negate();
}
private static Student getTim() {
return new Student("tim", asList("mike", "john", "steve", "mary"));
}
private static Student getChris() {
return new Student("chris", asList("sean", "mike", "mary", "mark"));
}
private static class Student {
private final String name;
private final List<String> friends;
Student(String name, List<String> friends) {
this.name = name;
this.friends = friends;
}
@Override
public String toString() {
return "Student{" +
"name='" + name + '\'' +
", friends=" + friends +
'}';
}
}
}
```
Output:
```
Student{name='chris', friends=[mary, mark]}
Student{name='tim', friends=[john, steve, mary]}
```
Upvotes: 0 <issue_comment>username_4: You can do this:
```
List<Student> good = st.stream()
.peek(s -> s.getFriends().removeAll(bad))
.collect(Collectors.toList());
```
But using `peek` this way is an anti-pattern (it is intended for debugging/logging); you can use `map` instead.
```
List<Student> good = st.stream()
.map(s -> {
s.getFriends().removeAll(bad);
return s;
})
.collect(Collectors.toList());
```
Upvotes: 1
|
2018/03/14
| 467 | 1,595 |
<issue_start>username_0: I can't link my stylesheet in Symfony.
My CSS file is in the assets directory.
* yes, I have already downloaded the asset component from Symfony
* yes, I have tried changing directories<issue_comment>username_1: Your assets must be in the `assets/` folder before compiling them. The compiled version should be in the `web/` folder (Symfony 3 without Flex) or in the `public/` folder (Symfony 4 with Flex).
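For example, once the compiled CSS is in place, it can be referenced in Twig like this (the path is illustrative):

```
<link rel="stylesheet" href="{{ asset('css/styles.css') }}">
```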
Upvotes: 0 <issue_comment>username_2: In SF3, you should store your assets in
```
Store your assets in the web/ directory.
```
<https://symfony.com/doc/3.4/best_practices/web-assets.html>
In SF4, you should store your assets in
```
Store your assets in the assets/ directory at the root of your project.
```
and use Webpack Encore to compile, combine and minimize web assets.
<https://symfony.com/doc/current/best_practices/web-assets.html>
This is my configuration (SF3):
`config_dev.yml`
```
framework:
# Assets CDN & dev paths
# change here and in the config_prod.xml
assets:
packages:
avatar:
base_path: upload/members/pics/
app_css:
base_path: css/
app_img:
base_path: img/
....
```
`config_prod.yml - if you want to use CDN`
```
framework:
# Assets CDN & dev paths
# change here and in the config_prod.xml
assets:
packages:
avatar:
base_path: xx.xx.xx.xx/upload/members/pics/ (IP Address of CDN)
app_css:
base_path: xx.xx.xx.xx/css/(IP Address of CDN)
app_img:
base_path: xx.xx.xx.xx/img/ (IP Address of CDN)
....
```
In Twig:
```
<link rel="stylesheet" href="{{ asset('style.css', 'app_css') }}">
<img src="{{ asset('logo.png', 'app_img') }}">
```
Upvotes: 1
|
2018/03/14
| 1,166 | 4,241 |
<issue_start>username_0: I am currently struggling with my implementation of an enum type.
I'd like to have an enum "Day" with the specific types "Monday", "Tuesday", ... "Sunday".
Now I want to have an object "task" on each of the days and be able to have several "Mondays" or several "Sundays" containing different tasks.
```
public enum Day {
    MONDAY(0),
    TUESDAY(1),
    WEDNESDAY(2),
    THURSDAY(3),
    FRIDAY(4),
    SATURDAY(5),
    SUNDAY(6);

    /* Number of day */
    int dayNumber;
    /* task of the day */
    Task task;

    /*
     * Initializes the day with the given number
     */
    private Day(int dayNumber) {
        this.dayNumber = dayNumber;
        this.task = new Task();
    }

    public Task getTask() {
        return task;
    }

    public void setTask(Task task) {
        this.task = task;
    }
}
```
How do I create several Mondays, Tuesdays, and so on, and set different tasks for them? In my implementation I always end up with several `Day`s containing the same tasks. For example, I create
```
Day day1 = Day.MONDAY;
Day day2 = Day.MONDAY;
Day day3 = Day.MONDAY;
day1.setTask(new Task(x));
day2.setTask(new Task(y));
day3.setTask(new Task(z));
```
Now the implementation above in my program results in all three days (day1, day2 and day3) containing the task initialized with "z", and I don't quite understand why. How do I manage to initialize days of the same type ("Day.MONDAY" in this example) with different values?<issue_comment>username_1: You cannot create several instances of an enum literal.
Enum instances are created by the Java runtime and they are **singleton** objects.
This means that there will be only one `MONDAY` object. Period.
You will need to redesign your code by either changing the direction of your composition (make a `Task` know the day it's due), or by using other `Date` or `DateTime` classes.
Upvotes: 0 <issue_comment>username_2: All 3 variables `day1`, `day2`, and `day3` refer to the *same* enum object, so when you call `setTask`, the changes show up through all 3 variables.
If you need different Mondays, then an enum is not the proper solution. Create a `Day` class with different Monday instances.
If you need multiple tasks on the same day, then have a `List<Task>` instead of one `Task` in the enum definition, along with an implementation that adds `Task`s to the list.
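For illustration, a minimal sketch of that first option (the class shape and method names are assumptions):

```
import java.util.ArrayList;
import java.util.List;

// A plain class instead of an enum: each `new Day(...)` call yields an
// independent object, so two "Mondays" no longer share state.
class Day {
    private final String name;
    private final List<Task> tasks = new ArrayList<>();

    Day(String name) { this.name = name; }

    public void addTask(Task task) { tasks.add(task); }

    public List<Task> getTasks() { return tasks; }
}
```

With this, `new Day("MONDAY")` called twice gives two objects with separate task lists.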
Upvotes: 0 <issue_comment>username_3: Building on Jim's comment, I think he has the right of it. What you probably want to do is make an object that can bind `Day` types to `Task`s.
```
class Binding {
Map<Day, List<Task>> bindings = new HashMap<>();
public void add( Day day, Task task ) {
List<Task> tasks = bindings.get( day );
if( tasks == null ) {
tasks = new ArrayList<>();
bindings.put( day, tasks );
}
tasks.add( task );
}
}
```
Code is untested.
Upvotes: 2 <issue_comment>username_4: Coupling the `Task` in the `Day` enum should be very probably avoided.
Having a mutable state is able in an enum but it will make it less readable and less reusable.
You should probably redesign the responsibilities between the `Task` and the `Day`.
You have broadly two possibilities.
**Decoupled Day and Task**
Create a class that composes a `Task` and a `Day` enum :
```
public class DayTask{
private Day day;
private Task task;
....
}
```
You could then write:
```
DayTask oneMondayTask = new DayTask(Day.MONDAY, new Task(...));
DayTask anotherMondayTask = new DayTask(Day.MONDAY, new Task(...));
```
Here the coupling between `Day` and `Task` is located only in the class that composes them.
Advantage : both `Day` and `Task` may be used alone.
Drawback : more code and structure to manipulate.
**Coupled Day to Task**
Here `Task` depends on a `Day`.
Advantage : less code and structure to manipulate.
Drawback : if `Task` doesn't require a `Day` in every case, it gives `Task` more responsibilities/complexity than required.
```
public class Task{
private Day day;
....
}
```
You could then write:
```
Task oneMondayTask = new Task(Day.MONDAY);
Task anotherMondayTask = new Task(Day.MONDAY);
```
Upvotes: 3 [selected_answer]
|
2018/03/14
| 1,814 | 6,244 |
<issue_start>username_0: I'm attempting to write an avro file from python, for the most part following the [official tutorial](https://avro.apache.org/docs/1.8.2/gettingstartedpython.html).
I have what appears to be a valid schema:
```
{"namespace": "example.avro",
"type": "record",
"name": "Stock",
"fields": [
{"name": "ticker_symbol", "type": "string"},
{"name": "sector", "type": "string"},
{"name": "change", "type": "float"},
{"name": "price", "type": "float"}
]
}
```
Here is the relevant code
```
from io import BytesIO

from avro import schema
from avro.datafile import DataFileWriter
from avro.io import DatumWriter

avro_schema = schema.parse(open("stock.avsc", "rb").read())
output = BytesIO()
writer = DataFileWriter(output, DatumWriter(), avro_schema)
for i in range(1000):
writer.append(_generate_fake_data())
writer.flush()
with open('record.avro', 'wb') as f:
f.write(output.getvalue())
```
However, when I try to read the output from this file using the cli avro-tools:
```
avro-tools fragtojson --schema-file stock.avsc ./record.avro --no-pretty
```
I get the following error:
```
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/local/Cellar/avro-tools/1.8.2/libexec/avro-tools-1.8.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" org.apache.avro.AvroRuntimeException: Malformed data. Length is negative: -40
at org.apache.avro.io.BinaryDecoder.doReadBytes(BinaryDecoder.java:336)
at org.apache.avro.io.BinaryDecoder.readString(BinaryDecoder.java:263)
at org.apache.avro.io.ResolvingDecoder.readString(ResolvingDecoder.java:201)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:422)
at org.apache.avro.generic.GenericDatumReader.readString(GenericDatumReader.java:414)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:181)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.readField(GenericDatumReader.java:232)
at org.apache.avro.generic.GenericDatumReader.readRecord(GenericDatumReader.java:222)
at org.apache.avro.generic.GenericDatumReader.readWithoutConversion(GenericDatumReader.java:175)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:153)
at org.apache.avro.generic.GenericDatumReader.read(GenericDatumReader.java:145)
at org.apache.avro.tool.BinaryFragmentToJsonTool.run(BinaryFragmentToJsonTool.java:82)
at org.apache.avro.tool.Main.run(Main.java:87)
at org.apache.avro.tool.Main.main(Main.java:76)
```
I'm pretty sure the relevant error is
```
Malformed data. Length is negative: -40
```
But I can't tell what I'm doing wrong. My suspicion is that I'm writing the avro file incorrectly.
I want to write to a bytes array (instead of directly to a file like in the example) because ultimately I'm going to ship this avro buffer off to AWS Kinesis Firehose using `boto3`.
|
2018/03/14
| 825 | 3,200 |
<issue_start>username_0: I'm a total noob and only started learning JS several days ago. I'm somewhat familiar with HTML and CSS though. I created this code as an exercise. It tells whether the number put in the form is positive, negative or 0. I also wanted it to list all the numbers used. However, I don't want the app to list the result (e.g. "Positive number") if someone mistakenly hits "Check" before typing in another number. My only solution was to add the `document.createElement` part after each condition. But I feel that there should be a simpler way: is it possible to write it just once and then just invoke a kind of shortcut for it in the two remaining conditional statements?
```html
<p>What number is it?</p>
<input type="text" id="number">
<button onclick="calculate()">Check</button>
<div id="div1"></div>
<script>
function calculate() {
var num = document.getElementById("number").value;
if (num > 0) {
var para = document.createElement("p");
var node = document.createTextNode(num);
para.appendChild(node);
var element = document.getElementById("div1");
element.appendChild(para);
number.value = "Positive number";
}
if (num == 0) {
var para = document.createElement("p");
var node = document.createTextNode(num);
para.appendChild(node);
var element = document.getElementById("div1");
element.appendChild(para);
number.value = 0;
}
if (num < 0) {
    var para = document.createElement("p");
    var node = document.createTextNode(num);
    para.appendChild(node);
    var element = document.getElementById("div1");
element.appendChild(para);
number.value = "Negative number";
}
}
</script>
```<issue_comment>username_1: I think you want something like the code below. I removed the duplicated code so it runs only once when the value entered by the user is a number, and replaced the `if`/`else` chain with a conditional expression (see below):
```js
function calculate() {
var num = document.getElementById("number").value;
if(!isNaN(num)){
number.value = (num > 0) ? "Positive number"
:(num < 0) ? "Negative number"
: "0";
var para = document.createElement("p");
var node = document.createTextNode(num);
para.appendChild(node);
var element = document.getElementById("div1");
element.appendChild(para);
}
}
```
```html
<p>What number is it?</p>
<input type="text" id="number">
<button onclick="calculate()">Check</button>
<div id="div1"></div>
```
Upvotes: 1 <issue_comment>username_2: You should never rely on Javascript's implicit type conversion rules. If you want to compare a variable to a number, make sure that this variable is a valid number in the first place.
Here's a corrected copy of your code (although I agree with the commenter that displaying the result in the input box is quite unconventional).
```html
<p>What number is it?</p>
<input type="text" id="number">
<button onclick="calculate()">Check</button>
<div id="div1"></div>
<script>
function calculate() {
var num = Number(document.getElementById("number").value);
if (Number.isNaN(num)) {
// not a number, do nothing
return;
}
var message = "0";
if (num < 0)
message = "Negative number";
else if (num > 0)
message = "Positive number";
number.value = message;
var para = document.createElement("p");
para.appendChild(document.createTextNode(num));
document.getElementById("div1").appendChild(para);
}
</script>
```
This way, the flow is more logical and there's no need to repeat any code.
Upvotes: 1 [selected_answer]
|
2018/03/14
| 414 | 1,654 |
<issue_start>username_0: I just have a fairly simple question about using MPI in a C++ program. Indeed, let's take this example:
```
#include <iostream>

using namespace std;

#include <mpi.h>

void multiply(double* x, double* y, int tai, double dot){
    for(int i = 0; i < tai; i++){
        dot += x[i] * y[i];
    }
}
```
Then, I call my function "multiply" in my program to multiply two vectors but unfortunately it returns the initial value of "dot" (which is basically 0).
I just would like to know if there is anything special about calling a function when we are doing parallel programming.
PS: I know the whole program is working as it gives me good results when I directly multiply my 2 vectors in the "main" function<issue_comment>username_1: Aside from the provided code having issues, the answer to your question is no: each processing element simply goes through the function as it's called. But maybe you don't want each processing element to do the whole dot product, in which case you'd be better off using a scatterv and gatherv to split the work; or maybe each processing element has its own unique vectors that get reduced or added later, and you're fine. Be sure to use MPI types.
Upvotes: 0 <issue_comment>username_2: The initial value of `dot` is not changing because it is passed by value. A copy of `dot` is made when passed to `multiply`, and that is the version that is modified. If you want `multiply` to modify the `dot` variable, and keep the changes outside the `multiply` function, pass `dot` by reference.
```
void multiply(double* x, double* y, int tai, double& dot);
```
The fact that you are using MPI has no effect on this behavior.
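For illustration, a minimal sketch of the corrected function (the loop body mirrors the dot-product logic implied by the question):

```
// dot is now a reference, so the caller's variable is updated in place
void multiply(const double* x, const double* y, int tai, double& dot) {
    dot = 0.0;
    for (int i = 0; i < tai; ++i) {
        dot += x[i] * y[i];  // accumulate the dot product
    }
}
```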
Upvotes: 3 [selected_answer]
|
2018/03/14
| 458 | 2,012 |
<issue_start>username_0: I'm trying to set up a simple project with Spring Security to enable username/password login.
After setting a breakpoint in the `UsernamePasswordAuthenticationFilter`, I noticed that `getAuthenticationManager()` has 0 providers:
```
this.getAuthenticationManager()
```
However I did add this in the security-context.xml
```
<authentication-manager>
    <authentication-provider ... />
</authentication-manager>
```
It looks like the authenticationManager does get rightly autowired but for some reason the authentication-provider is not injected.
Did I forget to enable something somewhere?<issue_comment>username_1: By default SpringSecurity uses `org.springframework.security.authentication.ProviderManager` which initially should have at least one configured provider unless parent is set. Otherwise you will get `IllegalArgumentException` at initialization phase. Therefore you definitely should be able to find an authentication provider either in the authentication manager returned from the filter or in one of its parents (as long as there is no harmful code that removes providers intentionally of course).
Upvotes: 3 [selected_answer]<issue_comment>username_2: We were facing this issue when upgrading from Spring security 3 to version 4.
We had an `AuthenticationManager` defined as follows:
```
<authentication-manager alias="authenticationManager">
    ...
</authentication-manager>
```
As it turned out, when using only an `alias` attribute, the defined `AuthenticationManager` was not used by Spring.
We needed to define an `id` attribute to make it work.
```
<authentication-manager id="authenticationManager" alias="authenticationManager">
    ...
</authentication-manager>
```
When no id is specified Spring sets the id to "org.springframework.security.authenticationManager" during the beans parsing in the [`org.springframework.security.config.authentication.AuthenticationManagerBeanDefinitionParser.parse`](https://github.com/spring-projects/spring-security/blob/d31fff11b3bb91e484228f29af469a9a0661dd32/config/src/main/java/org/springframework/security/config/authentication/AuthenticationManagerBeanDefinitionParser.java) overriding the globally registered `AuthenticationManager`. This somehow seems to mess up the specified `providers`.
Upvotes: 1
|
2018/03/14
| 567 | 2,436 |
<issue_start>username_0: I'm trying to get the notification count from the Notifications child. There's another child which has a random key (which I don't know), and within that there is a `userKey` value.
I'm wanting to use `orderByChild("userKey")` and get all of the notifications, but not sure how to do this. I've tried looking for a solution, but I can't find one.
[](https://i.stack.imgur.com/FHGxe.png)
I've tried this but it's not working:
```
db.child("Notifications").orderByChild("userKey").equalTo(intent.getStringExtra("key")).addChildEventListener(new ChildEventListener() {
@Override
public void onChildAdded(DataSnapshot dataSnapshot, String s) {
indicator.setText(String.valueOf(dataSnapshot.getChildrenCount()));
}
@Override
public void onChildChanged(DataSnapshot dataSnapshot, String s) {
}
@Override
public void onChildRemoved(DataSnapshot dataSnapshot) {
}
@Override
public void onChildMoved(DataSnapshot dataSnapshot, String s) {
}
```
I appreciate any help!<issue_comment>username_1: Your solution will work whenever a new notification is added to the node. If you want to read the notifications for the first time you should use a SingleValueEventListener
```
db.child("Notifications").orderByChild("userKey").equalTo(intent.getStringExtra("key")).addListenerForSingleValueEvent(new ValueEventListener() {
@Override
public void onDataChange(DataSnapshot dataSnapshot) {
indicator.setText(String.valueOf(dataSnapshot.getChildrenCount()));
}
@Override
public void onCancelled(DatabaseError databaseError) {
}
});
```
Upvotes: 1 <issue_comment>username_2: * One way is to change the structure of your 'Notifications' Node Structure from
Notifications > Rand_Key > (Data + User_key)
to
Notifications > User_key > Rand_key > (Data)
(a query sketch for this restructured layout follows after this list)
* In case you need to keep updating the notification count, you can write a Node.js Handler using Firebase Cloud Functions or your own node server, where you can listen to the 'Notifications' onChild added, and update the data wherever it needs to be. If you want to experiment, try cloud functions and remember it is still in beta.
<https://firebase.google.com/docs/functions/database-events>
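A minimal sketch of reading the count under the restructured layout from the first point (this reuses the names from the question and is an assumption, not tested code):

```
db.child("Notifications").child(intent.getStringExtra("key"))
        .addListenerForSingleValueEvent(new ValueEventListener() {
            @Override
            public void onDataChange(DataSnapshot dataSnapshot) {
                // every child under Notifications/<userKey> is one notification
                indicator.setText(String.valueOf(dataSnapshot.getChildrenCount()));
            }

            @Override
            public void onCancelled(DatabaseError databaseError) {
            }
        });
```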
Upvotes: 0
|
2018/03/14
| 4,387 | 13,894 |
<issue_start>username_0: I've been working with the TensorFlow Object Detection API. In my case, I'm attempting to detect vehicles in still images using the KITTI-trained model (faster_rcnn_resnet101_kitti_2018_01_28) from the model zoo, and I am using code modified from the object_detection_tutorial Jupyter notebook included in the GitHub repository.
I have included my modified code below but am finding the same results with the original notebook from github.
When running on a Jupyter notebook server on an Amazon AWS `g3.4xlarge` (GPU) instance with the Deep Learning AMI, it takes just shy of 4 seconds to process a single image. The time for the inference function is 1.3-1.5 seconds (see code below), which seems ABNORMALLY high compared to the reported inference time for the model (20 ms). While I don't expect to hit the reported mark, my times seem out of line and are impractical for my needs. I'm looking at processing 1 million+ images at a time and can't afford 46 days of processing time. Given that the model is used on video frame captures, I would think it should be possible to cut the time per image to under 1 second, at least.
My questions are:
1) What explanations/solutions exist to reduce inference time?
2) Is 1.5 seconds to convert an image to a numpy array (prior to processing) out of line?
3) If this is the best performance I can expect, how much of a time reduction could I hope to gain from reworking the model to batch-process images?
Thanks for any help!
Code from python notebook:
```
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import json
import collections
import os.path
import datetime
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
# This is needed to display the images.
get_ipython().magic('matplotlib inline')
#Setup variables
PATH_TO_TEST_IMAGES_DIR = 'test_images'
MODEL_NAME = 'faster_rcnn_resnet101_kitti_2018_01_28'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'kitti_label_map.pbtxt')
NUM_CLASSES = 2
from utils import label_map_util
from utils import visualization_utils as vis_util
def get_scores(
boxes,
classes,
scores,
category_index,
min_score_thresh=.5
):
import collections
# Create a display string (and color) for every box location, group any boxes
# that correspond to the same location.
box_to_display_str_map = collections.defaultdict(list)
for i in range(boxes.shape[0]):
if scores is None or scores[i] > min_score_thresh:
box = tuple(boxes[i].tolist())
if scores is None:
box_to_color_map[box] = groundtruth_box_visualization_color
else:
display_str = ''
if classes[i] in category_index.keys():
class_name = category_index[classes[i]]['name']
else:
class_name = 'N/A'
display_str = str(class_name)
if not display_str:
display_str = '{}%'.format(int(100*scores[i]))
else:
display_str = '{}: {}%'.format(display_str, int(100*scores[i]))
box_to_display_str_map[i].append(display_str)
return box_to_display_str_map
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: np.expand_dims(image, 0)})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
#get list of paths
exten='.jpg'
TEST_IMAGE_PATHS=[]
for dirpath, dirnames, files in os.walk(PATH_TO_TEST_IMAGES_DIR):
for name in files:
if name.lower().endswith(exten):
#print(os.path.join(dirpath,name))
TEST_IMAGE_PATHS.append(os.path.join(dirpath,name))
print((len(TEST_IMAGE_PATHS), 'Images To Process'))
#load model graph for inference
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
#setup class labeling parameters
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
#placeholder for timings
myTimings=[]
myX = 1
myResults = collections.defaultdict(list)
for image_path in TEST_IMAGE_PATHS:
if os.path.exists(image_path):
print(myX,"--------------------------------------",datetime.datetime.time(datetime.datetime.now()))
print(myX,"Image:", image_path)
myTimings.append((myX,"Image", image_path))
print(myX,"Open:",datetime.datetime.time(datetime.datetime.now()))
myTimings.append((myX,"Open",datetime.datetime.time(datetime.datetime.now()).__str__()))
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
print(myX,"Numpy:",datetime.datetime.time(datetime.datetime.now()))
myTimings.append((myX,"Numpy",datetime.datetime.time(datetime.datetime.now()).__str__()))
image_np = load_image_into_numpy_array(image)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
print(myX,"Expand:",datetime.datetime.time(datetime.datetime.now()))
myTimings.append((myX,"Expand",datetime.datetime.time(datetime.datetime.now()).__str__()))
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
print(myX,"Detect:",datetime.datetime.time(datetime.datetime.now()))
myTimings.append((myX,"Detect",datetime.datetime.time(datetime.datetime.now()).__str__()))
output_dict = run_inference_for_single_image(image_np, detection_graph)
# Visualization of the results of a detection.
print(myX,"Export:",datetime.datetime.time(datetime.datetime.now()))
myTimings.append((myX,"Export",datetime.datetime.time(datetime.datetime.now()).__str__()))
op=get_scores(
output_dict['detection_boxes'],
output_dict['detection_classes'],
output_dict['detection_scores'],
category_index,
min_score_thresh=.2)
myResults[image_path].append(op)
print(myX,"Done:", datetime.datetime.time(datetime.datetime.now()))
myTimings.append((myX,"Done", datetime.datetime.time(datetime.datetime.now()).__str__()))
myX= myX + 1
#save results
with open((OUTPUTS_BASENAME+'_Results.json'), 'w') as fout:
json.dump(myResults, fout)
with open((OUTPUTS_BASENAME+'_Timings.json'), 'w') as fout:
json.dump(myTimings, fout)
```
Example Of Timings:
```
[1, "Image", "test_images/DE4T_11Jan2018/MFDC4612.JPG"]
[1, "Open", "19:20:08.029423"]
[1, "Numpy", "19:20:08.052679"]
[1, "Expand", "19:20:09.977166"]
[1, "Detect", "19:20:09.977250"]
[1, "Export", "19:23:13.902443"]
[1, "Done", "19:23:13.903012"]
[2, "Image", "test_images/DE4T_11Jan2018/MFDC4616.JPG"]
[2, "Open", "19:23:13.903885"]
[2, "Numpy", "19:23:13.906320"]
[2, "Expand", "19:23:15.756308"]
[2, "Detect", "19:23:15.756597"]
[2, "Export", "19:23:17.153233"]
[2, "Done", "19:23:17.153699"]
[3, "Image", "test_images/DE4T_11Jan2018/MFDC4681.JPG"]
[3, "Open", "19:23:17.154510"]
[3, "Numpy", "19:23:17.156576"]
[3, "Expand", "19:23:19.012935"]
[3, "Detect", "19:23:19.013013"]
[3, "Export", "19:23:20.323839"]
[3, "Done", "19:23:20.324307"]
[4, "Image", "test_images/DE4T_11Jan2018/MFDC4697.JPG"]
[4, "Open", "19:23:20.324791"]
[4, "Numpy", "19:23:20.327136"]
[4, "Expand", "19:23:22.175578"]
[4, "Detect", "19:23:22.175658"]
[4, "Export", "19:23:23.472040"]
[4, "Done", "19:23:23.472297"]
```<issue_comment>username_1: 1) What you can do is load the video directly instead of images, then change `run_inference_for_single_image()` so that the session is created once and the images/video are fed into it (re-creating the graph is very slow). Furthermore, you can edit the pipeline config file to reduce the number of proposals, which will directly speed up inference. Note you have to re-export the graph afterwards (<https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md>). Batching also helps (though I am sorry, I forgot by how much), and finally, you can employ multiprocessing to offload CPU-specific operations (drawing bounding boxes, loading data) to utilize the GPU better.
2) Is 1.5 seconds to convert an image to a numpy array (prior to processing) out of line? <- Yes, that is insanely slow and there is plenty of room for improvement.
3) While I don't know the exact GPU at AWS (K80?), you should be able to get over 10 fps on a GeForce 1080 Ti with all fixes, which is in line with the 79 ms time they reported (where did you get 20 ms for faster_rcnn_resnet_101?).
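A minimal sketch of the session-reuse idea, assuming `tensor_dict` has already been built once from `detection_graph` as in the question's code:

```
with detection_graph.as_default():
    with tf.Session() as sess:
        image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
        for image_path in TEST_IMAGE_PATHS:
            image_np = load_image_into_numpy_array(Image.open(image_path))
            # one sess.run per image; the graph/session setup cost is paid only once
            output_dict = sess.run(
                tensor_dict,
                feed_dict={image_tensor: np.expand_dims(image_np, 0)})
```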
Upvotes: 1 <issue_comment>username_2: You could also try [OpenVINO](https://docs.openvino.ai/latest/openvino_docs_install_guides_overview.html) for better inference performance. It optimizes inference time by, e.g., graph pruning and fusing some operations. OpenVINO is optimized for Intel hardware but it should work with any CPU (even in the cloud).
[Here](https://docs.openvino.ai/latest/openvino_docs_performance_benchmarks_openvino.html#faster-rcnn-resnet50-coco-tf-600x1024) are some performance benchmarks for the Faster RCNN Resnet model and various CPUs.
It's rather straightforward to convert the Tensorflow model to OpenVINO unless you have fancy custom layers. The full tutorial on how to do it can be found [here](https://github.com/openvinotoolkit/openvino_notebooks/tree/main/notebooks/101-tensorflow-to-openvino). Some snippets below.
**Install OpenVINO**
The easiest way to do it is using PIP. Alternatively, you can use [this tool](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/download.html) to find the best way in your case.
```
pip install openvino-dev[tensorflow2]
```
**Use Model Optimizer to convert SavedModel model**
The Model Optimizer is a command-line tool that comes with the OpenVINO Development Package. It converts the TensorFlow model to IR, which is the default format for OpenVINO. You can also try FP16 precision, which should give you better performance without a significant accuracy drop (just change `data_type`). Run in the command line:
```
mo --saved_model_dir "model" --input_shape "[1, 3, 224, 224]" --data_type FP32 --output_dir "model_ir"
```
**Run the inference**
The converted model can be loaded by the runtime and compiled for a specific device, e.g. CPU or GPU (integrated into your CPU, like Intel HD Graphics). If you don't know what the best choice for you is, just use AUTO.
```
from openvino.runtime import Core

# Load the network
ie = Core()
model_ir = ie.read_model(model="model_ir/model.xml")
compiled_model_ir = ie.compile_model(model=model_ir, device_name="CPU")
# Get output layer
output_layer_ir = compiled_model_ir.output(0)
# Run inference on the input image
result = compiled_model_ir([input_image])[output_layer_ir]
```
There is even OpenVINO Model Server, which is very similar to TensorFlow Serving.
Disclaimer: I work on OpenVINO.
Upvotes: 0
|
2018/03/14
| 608 | 1,615 |
<issue_start>username_0: I'm trying to set up an iframe and want to show only a certain part of a page with no scrolling, but I'm having trouble getting the width, height, and margins right.
Webpage I want to show a certain part of:
<https://stellarchain.io/address/GB3O3S72Y2D25EK2BSA7VBJB5BXIAIKJZF4WIMEIQSB53TEIOIADT76L>
Here is a snippet of what I want: [Just the balance and 2,984,000,000](https://i.stack.imgur.com/Qevrj.jpg)
This is the code I found and tried using:
```
<iframe src="https://stellarchain.io/address/GB3O3S72Y2D25EK2BSA7VBJB5BXIAIKJZF4WIMEIQSB53TEIOIADT76L" width="200" height="75" scrolling="no"></iframe>
```
Any Help will be appreciated! Thank you.<issue_comment>username_1: As a way to solve your problem, you can use this code:
```css
#outerdiv
{
width:200px;
height:75px;
overflow:hidden;
position:relative;
}
#innerIframe
{
position:absolute;
top:-370px;
left:-1050px;
width:1300px;
height:1450px;
}
```
```html
<div id="outerdiv">
  <iframe id="innerIframe" src="https://stellarchain.io/address/GB3O3S72Y2D25EK2BSA7VBJB5BXIAIKJZF4WIMEIQSB53TEIOIADT76L" scrolling="no"></iframe>
</div>
```
Upvotes: 2 <issue_comment>username_2: **An alternate way is with jQuery**
(if the same-origin policy is not an issue):
```
$('#contentDIV').load('https://stellarchain.io/address/GB3O3S72Y2D25EK2BSA7VBJB5BXIAIKJZF4WIMEIQSB53TEIOIADT76L #targetDiv');
```
*replace #targetDiv from URL with the div area you want to load in your page.*
Upvotes: 0 <issue_comment>username_3: After a long search for this I finally found a good tutorial with a place to test! here is the link:
<https://www.3schools.in/2022/01/how-to-show-specific-part-of-website-in-iframe.html>
```
#div-id{
border : 2px solid blue;
width : 100%;
height : 200px;
position : relative;
overflow : hidden;
}
#iframe-id{
position : absolute;
top : -50px;
left : -2px;
width : 100%;
height : 200vh;
}
```
Upvotes: 0
|
2018/03/14
| 1,435 | 5,637 |
<issue_start>username_0: I'm new to Protractor and am trying to write test cases for a very simple scenario: Going to a login page and submitting the form, redirecting to the dashboard page, and then clicking a search button on the dashboard. The logging in part is working fine:
pages/login.js:
```
'use strict';
var LoginPage = function () {
this.email = element(by.css('#email'));
this.password = element(by.css('#password'));
this.submit = element(by.css('[type="submit"]'));
expect(this.email.isPresent()).toBe(true, 'Email field not found');
expect(this.password.isPresent()).toBe(true, 'Password field not found');
expect(this.submit.isPresent()).toBe(true, 'Submit button not found');
};
module.exports = new LoginPage();
```
test-spec.js:
```
it('should log in', function () {
console.log('Logging into Application.');
browser.get('https://example.com/auth/login');
var loginPage = require('./pages/login');
loginPage.email.sendKeys('<EMAIL>');
loginPage.password.sendKeys('<PASSWORD>');
loginPage.submit.click();
console.log('Waiting to be redirected to the dashboard page...');
return browser.driver.wait(function () {
return browser.driver.getCurrentUrl().then(function (url) {
return /dashboard/.test(url);
});
}, 30000);
});
```
However, when I then attempt to find the search button and click on it...
pages/dashboard.js:
```
'use strict';
var DashboardPage = function () {
var elementFromSecondPage = $('a[href*="/property/search"]');
console.log('trying to wait for element to be visible: ' + elementFromSecondPage);
browser.wait(protractor.until.elementIsVisible(elementFromSecondPage), 60000, 'Error: Element did not display within 1 minute');
console.log('done waiting');
console.log('trying to find this.search element');
this.search = element(by.css('a[href*="/property/search"]'));
console.log('this.search: ' + this.search);
expect(this.search.isPresent()).toBe(true, 'Search button not found');
};
module.exports = new DashboardPage();
```
test-spec.js:
```
it('should go to search', function () {
console.log('Going to the Search page.');
var dashboardPage = require('./pages/dashboard');
dashboardPage.search.click();
});
```
When trying to find the search button, it errors:
```
Logging into Upstream.
Waiting to be redirected to the dashboard page...
Going to the Search page.
trying to wait for element to be visible: [object Object]
done waiting
trying to find this.search element
this.search: [object Object]

Failures: 1) angularjs upstream add listing should go to search Message: Failed: Timed out waiting for asynchronous Angular tasks to finish after 11 seconds. This may be because the current page is not an Angular application. Please see the FAQ for more details: https://github.com/angular/protractor/blob/master/docs/timeouts.md#waiting-for-angular While waiting for element with locator - Locator: By(css selector, a[href*="/property/search"])
```
Here is the html for the element that I'm searching for:
```
<a href="/property/search">...</a>
```
I've tried a number of different solutions to try and wait for the dashboard page and its DOM to load, but I still get this error no matter what. It's confusing because if you look at the output above, when I do a jQuery find it successfully loads the element, so it's clearly there.<issue_comment>username_1: All `browser.wait()` does is schedule a command to wait for a condition. You indicated that your jQuery selector `$('a[href*="/property/search"]');` is working to get the element. **A possible workaround** is to create your own function that has its own timeout and uses that selector to find the element, returning true once the element is found or false on timeout.
```
function waitForElem(){
//JQuery timeout logic here
if(elementFound){ return true}
else{//after some time interval
return false;
}
}
```
Then in your Protractor code still use `browser.wait()`
```
browser.wait(waitForElem, 60000, 'Error: Element did not display within 1 minute');
```
*I apologize for the pseudo code, but I really don't know JQuery and it appears you do so I figured you are informed enough to implement that logic*
Upvotes: 0 <issue_comment>username_2: You can use this library:
<https://www.protractortest.org/#/api?view=ProtractorExpectedConditions>
There are several options for waiting for an element to be visible, to be clickable, or to contain some text; check what suits you best. I usually wait for an element to be visible like this:
```
var EC = protractor.ExpectedConditions;
browser.wait(EC.visibilityOf(element(by.css('a[href*="/property/search"]'))), 30000, "Element is not visible");
```
Upvotes: 0 <issue_comment>username_3: The issue ended up being what @Gunderson suggested. Angular was making continuous (polling) networking calls, so Protractor never saw it as "done" loading. Hence, every find element call timed out no matter how long I waited.
As a hack, I used this command to disable the waiting:
```
browser.waitForAngularEnabled(false);
```
This worked; however, then what's the point of using Protractor over another testing framework if its special waiting for Angular apps goes unused? We then changed the web application under test to disable the polling when running tests against it (via a querystring parameter on the login page), and everything started working beautifully.
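For reference, a hypothetical middle ground is to scope the hack to just the polling page instead of the whole suite:

```
// Turn Angular synchronization off only around the page that polls
// continuously, then turn it back on for the rest of the suite.
browser.waitForAngularEnabled(false);
element(by.css('a[href*="/property/search"]')).click();
browser.waitForAngularEnabled(true);
```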
Upvotes: 1
|
2018/03/14
| 582 | 1,808 |
<issue_start>username_0: According to the R-Studio blog, R-Studio 1.1+ now supports [ligatures](https://blog.rstudio.com/2017/09/13/rstudio-v1.1---the-little-things/).
I installed R-Studio version 1.1.442 on Windows.
According to the [R-Studio blog](https://blog.rstudio.com/2017/09/13/rstudio-v1.1---the-little-things/), and the Fira Code project [instructions](https://github.com/tonsky/FiraCode/wiki/RStudio-instructions), I should be able to select Fira as follows:
>
> * Go to Tools > Global Options > Appearance
> * Select "Fira Code" as Editor Font, and check "Use Ligatures".
> * Hit "OK" and enjoy
>
>
>

I do not have the ability to select 'Fira Code.'

I downloaded the most recent [Fira Code release](https://github.com/tonsky/FiraCode/releases), and noticed a TTF folder in the zip.

Where do I install the Fira Code font so that I can use Fira Code on R-Studio (Windows)?<issue_comment>username_1: I hope you still find this helpful. Try installing the `.otf` files instead.
[](https://i.stack.imgur.com/Uxa0T.png)
Try restarting Rstudio, or if that does not work, try restarting your computer. Fira Code should now appear as an option in Editor Font.
[](https://i.stack.imgur.com/g6JsH.png)
Upvotes: 3 [selected_answer]<issue_comment>username_2: I just dealt with a similar problem, and restarting RStudio did the trick for me. I'm posting this in case anybody else runs into something similar.
Upvotes: 1
|
2018/03/14
| 749 | 3,404 |
<issue_start>username_0: I'm currently learning the fundamentals of Algorithms and Data Structures and I am slightly confused about the concepts and the differences between arrays, linked-lists and stacks.
Please correct me if I am wrong: Is stack more like an abstract concept, and arrays and linked-lists are data structures? (Hence we can use either arrays or linked-lists to implement the concept of stack)
**Update - 032221**
Thank you everyone for helping me out with this question! Back when I asked it, I had a hard time understanding the overall concept of primitive data types and the fundamental data structure (in this case an array) that is offered by each language.
For example, linked lists or queues can be created and implemented using an array, but then I thought such linked lists and queues should still be called arrays (because the foundational data structure used behind the scenes is technically an array). My thought process was that there would be primitive data structures for linked lists or queues that do not use an array. Thus I did not properly understand that linked lists, stacks, and similar data structures are just different patterns and ways that data is organized and accessed.
I hope this can help anyone who is having a hard time understanding data structure concepts like I did!<issue_comment>username_1: **Array**
A book is an array. You can go to any page by index and quickly go forwards or backwards by any increment you like.
**Linked List**
A scavenger hunt is a linked list. You can only go from one item to the next, because each item contains the information where to find the next item.
**Stack**
A pile of letters on your desk is a stack. You can only see the letter lying on top. Removing the top letter reveals the next letter.
Upvotes: 3 <issue_comment>username_2: From a programming-theory perspective, all three are **Abstract Data Types**. The fundamental differences between them are their features:
* Array - offers random access to any element in constant time, but removing or adding an element is done in linear time
* Linked List - offers random access in linear time (that means sequential access), but adding or removing an element is done in constant time
* Stack - is a little bit different: it offers access only to a single element, the top of the stack. The same is valid for removing and adding an element: you can remove only the element that is on the top of the stack and, vice versa, add a new element only on the top of the stack.
You are correct that the stack can be **implemented** using an array or a linked list, but that is already an implementation point of view; from the theory perspective they are different Abstract Data Types.
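For example, a minimal illustrative sketch (plain Java, not tied to any library) of the same stack ADT backed by the two data structures:

```
// The same push/pop interface; only the underlying structure differs.
class ArrayStack {
    private int[] data = new int[16];
    private int top = 0;                 // index of the next free slot

    void push(int x) {
        if (top == data.length)          // grow when full (occasional linear-time copy)
            data = java.util.Arrays.copyOf(data, data.length * 2);
        data[top++] = x;
    }

    int pop() { return data[--top]; }    // constant time
}

class LinkedStack {
    private static class Node {
        int value; Node next;
        Node(int v, Node n) { value = v; next = n; }
    }
    private Node top;                    // only the top node is directly reachable

    void push(int x) { top = new Node(x, top); }               // constant time
    int pop() { int v = top.value; top = top.next; return v; } // constant time
}
```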
In fact if you dig deep enough on the assembly language level you have only 2 options to locate an arbitrary element:
1. Indexing - Calculate the element location in memory
2. Indirect addressing - Using a pointer stored in memory to locate the element.
Upvotes: 3 [selected_answer]<issue_comment>username_3: **ARRAY** - a container that holds a fixed number of items, where those items should be of the same type.
**STACK** - a list of elements in which an element may be added or deleted only at one end, called the top of the stack.
**LINKEDLIST** - a linear collection of data elements, called nodes, each pointing to the next node by means of a pointer.
Upvotes: 0
|
2018/03/14
| 679 | 2,017 |
<issue_start>username_0: I have the following xml. I need to extract the IP address, protocol and port into a CSV file with the corresponding column names.
```
```
I'm able to grep the IP address or the port using grep or sed, like this: `grep -Eo "([0-9]{1,3}[\.]){3}[0-9]{1,3}"`. But I need them as columns in a CSV file:
IPAddress, Protocol, Port. What is the best way to achieve this?<issue_comment>username_1: Don't use regex to parse HTML/XML, but a real parser (using [xpath](/questions/tagged/xpath "show questions tagged 'xpath'")):
### Corrected wrong input xml file :
```
```
### Code :
```
xmlstarlet sel -t -v '//source/@address | //port/@protocol | //port/@port' file |
perl -pe '$. % 3 != 0 && s/\n/,/g;END{print "\n"}'
```
### Output :
```
10.XXX.XX.XX,tcp,22
10.XXX.XX.XXX,udp,1025
```
### theory :
According to compiling theory, HTML can't be parsed using regexes, which are based on a [finite state machine](http://en.wikipedia.org/wiki/Finite-state_machine). Due to the hierarchical construction of HTML, you need to use a [pushdown automaton](http://en.wikipedia.org/wiki/Pushdown_automaton) and manipulate an [LALR](http://en.wikipedia.org/wiki/LR_parser) grammar using a tool like [YACC](http://en.wikipedia.org/wiki/Yacc).
### realLife©®™ everyday tool in a [shell](/questions/tagged/shell "show questions tagged 'shell'") :
You can use one of the following :
[xmllint](http://xmlsoft.org/xmllint.html)
[xmlstarlet](http://xmlstar.sourceforge.net/docs.php)
[saxon-lint](https://github.com/sputnick-dev/saxon-lint) (my own project)
---
Check: [Using regular expressions with HTML tags](https://stackoverflow.com/questions/1732348/regex-match-open-tags-except-xhtml-self-contained-tags)
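If a scripting language is acceptable, here is a minimal sketch with Python's standard library (assuming the attribute names `address`, `protocol` and `port` used in the XPath above, and that each record pairs one `source` element with one `port` element):

```
import csv
import xml.etree.ElementTree as ET

tree = ET.parse("file")  # same input file as above
with open("out.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["IPAddress", "Protocol", "Port"])
    # walk source/port elements in document order and pair them up
    for src, port in zip(tree.iter("source"), tree.iter("port")):
        writer.writerow([src.get("address"), port.get("protocol"), port.get("port")])
```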
Upvotes: 2 <issue_comment>username_2: Lacking XML tools, here is a fragile `awk` solution:
```
$ awk -v RS='' '
{for(i=1;i<=NF;i++)
if($i~/^(address|protocol|port)/)
{split($i,a,"\""); printf "%s", a[2] (++c%3?FS:ORS)}}' file
10.XXX.XX.XX tcp 22
10.XXX.XX.XXX udp 1025
```
Upvotes: 0
|
2018/03/14
| 1,387 | 6,160 |
<issue_start>username_0: I am writing a topic-modeling program using Apache Tika to extract the text contents from other file types. It runs perfectly in Eclipse, but when I export it to a JAR file and run it from the Windows 10 command prompt, this error message appears when it reaches the line `parser.parse(stream, handler, metadata, parseContext);`:
"java.lang.SecurityException: Prohibited package name: java.sql"
I didn't think my Java code was the root of the problem, since it runs perfectly inside the Eclipse IDE. Does anyone know why this only happens when I run it from the command line? What is different about the JVM inside and outside of the Eclipse IDE? Thank you.
```
package Views;
import java.io.*;
import org.apache.commons.io.FileUtils;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;
public class TestTika {
public static void main(String[] args) throws IOException {
String inputFolderName = "data";
String outputFolderName = "data_text";
System.out.println("Extracting text data from '" + inputFolderName + "' to '" + outputFolderName + "'");
FileUtils.deleteDirectory(new File(outputFolderName)); // Delete the old file in this directory
System.out.println("Delete all of the old files in directory'" + outputFolderName + "' successfully \n");
if (new File(outputFolderName).mkdir()) {
System.out.println("Created folder '"+ outputFolderName );
}
File inputFolder = new File(inputFolderName);
File[] listOfFiles = inputFolder.listFiles();
String fileName;
for (File file : listOfFiles) {
if (file.isFile()) {
fileName = file.getName();
System.out.println("\n" + fileName);
BodyContentHandler handler = new BodyContentHandler();
AutoDetectParser parser = new AutoDetectParser();
Metadata metadata = new Metadata();
ParseContext parseContext = new ParseContext();
FileInputStream stream = new FileInputStream(new File(inputFolderName + "/" + fileName));
try {
//////////////////////////////////// Error: Prohibited package name: java.sql ////////////////////////////////
//////////////////////////////////// /////////////////////////////////////////////////////////////////////////
parser.parse(stream, handler, metadata, parseContext);
} catch (Exception e) {
System.out.println("Warning: Error when processing file:" + fileName
+ " . This file will be igrored! \n" + e.getMessage() + "\n" + e.toString());
e.printStackTrace();
continue;
} finally {
stream.close();
}
String s = handler.toString();
Writer writer = null;
try {
writer = new BufferedWriter(new OutputStreamWriter(
new FileOutputStream(outputFolderName + "/" + fileName + ".txt"), "utf-8"));
writer.write(s);
} catch (IOException ex) {
// report
System.out.println("Warning: Error when saving file:" + fileName
+ ".txt . This file had been ignore! \n" + ex.getMessage());
continue;
} finally {
try {
writer.close();
} catch (Exception ex) {
/* ignore */}
}
}
}
System.out.println("Extracting text data from document files has been completed!");
return;
}
}
java.lang.SecurityException: Prohibited package name: java.sql
at java.base/java.lang.ClassLoader.preDefineClass(Unknown Source)
at java.base/java.lang.ClassLoader.defineClass(Unknown Source)
at java.base/java.security.SecureClassLoader.defineClass(Unknown Source)
at java.base/java.net.URLClassLoader.defineClass(Unknown Source)
at java.base/java.net.URLClassLoader.access$100(Unknown Source)
at java.base/java.net.URLClassLoader$1.run(Unknown Source)
at java.base/java.net.URLClassLoader$1.run(Unknown Source)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at java.base/java.net.URLClassLoader.findClass(Unknown Source)
at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
at java.base/java.lang.ClassLoader.loadClass(Unknown Source)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:113)
at Views.TestTika.main(TestTika.java:43)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at org.eclipse.jdt.internal.jarinjarloader.JarRsrcLoader.main(JarRsrcLoader.java:58)
```<issue_comment>username_1: The 'prohibited package name' exception is thrown when a class in a package starting with 'java.' is loaded from somewhere other than the JDK's own runtime (rt.jar). Either you created such a class yourself, or you have a .jar file containing such a class on your classpath.
If it's the former, put the class in another package. If it's the latter, try to find the .jar file containing this class (e.g. print out the classpath found in the system property `java.class.path`).
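For example, a quick snippet to print every classpath entry at runtime:

```
public class PrintClasspath {
    public static void main(String[] args) {
        // Each entry is a directory or jar; inspect these for a rogue java.sql class.
        for (String entry : System.getProperty("java.class.path")
                                  .split(java.io.File.pathSeparator)) {
            System.out.println(entry);
        }
    }
}
```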
Upvotes: 3 <issue_comment>username_2: Ran into this issue when creating a Runnable JAR via Eclipse (Mars).
In the Jar generator I was setting the library handling option to
>
> Package required libraries into generated JAR
>
>
>
but switching to
>
> Extract required libaries
>
>
>
solved the issue
Upvotes: 2
|
2018/03/14
| 679 | 2,703 |
<issue_start>username_0: The only way I know of being able to use my directive is by exporting it in the module.
```
@NgModule({
imports: [
CommonModule
],
declarations: [BreadcrumbsComponent, IsRightDirective],
exports: [BreadcrumbsComponent, IsRightDirective]
})
export class BreadcrumbsModule { }
```
My BreadcrumbsModule is imported by my AppModule
```
@NgModule({
declarations: [
AppComponent
],
imports: [
BrowserModule,
BreadcrumbsModule
],
providers: [],
bootstrap: [AppComponent]
})
export class AppModule { }
```
Now when I use my breadcrumbs component, whose selector I named `bulma-breadcrumbs`, and add the attribute `is-right`, it works as expected. However, if I add the attribute to another tag, like an `h1`, the directive also affects it.
I'm trying to get the directive to only apply on the `BreadcrumbsComponent`.<issue_comment>username_1: You can make the directive effective only if the element `tagName` is `bulma-breadcrumbs`:
```
export class IsRightDirective {
constructor(private elementRef: ElementRef) {
let element = elementRef.nativeElement as HTMLElement;
if (element.tagName.toLowerCase() === "bulma-breadcrumbs") {
element.style.border = "solid 4px red";
...
}
}
}
```
See **[this stackblitz](https://stackblitz.com/edit/template-driven-form-2-aa1few)** for a working example of the code.
Upvotes: 2 <issue_comment>username_2: Before Angular 2 RC5, a hierarchy of directives/components was transparent, because components had `directives` property that defined components/directives that affected only this component and its children.
After the introduction of `NgModule`, this feature remained intact but became less evident. As explained in [this answer](https://stackoverflow.com/a/48845024/3731501), it's possible with proper hierarchy of modules.
Most times module `declarations` and `exports` are same, this allows to use module directives, components and pipes globally within the application.
If a unit isn't exported from a module, it's available only locally, to other units within same module.
This
```
@NgModule({
imports: [
CommonModule
],
declarations: [BreadcrumbsComponent, IsRightDirective],
exports: [BreadcrumbsComponent]
})
export class BreadcrumbsModule { }
```
will prevent `is-right` attribute directive from being compiled anywhere but this module declarations (i.e. `BreadcrumbsComponent`).
Alternatively, the directive `selector` can be restricted to `bulma-breadcrumbs[is-right]`. The result will be the same, but this won't prevent the directive from being used in other modules that have their own local `bulma-breadcrumbs` component.
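A sketch of that restricted selector (assuming the directive body is otherwise unchanged):

```
import { Directive } from '@angular/core';

@Directive({
  // Matches the is-right attribute only on the breadcrumbs element itself.
  selector: 'bulma-breadcrumbs[is-right]'
})
export class IsRightDirective {
  // ... existing directive logic ...
}
```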
Upvotes: 3 [selected_answer]
|
2018/03/14
| 660 | 2,609 |
<issue_start>username_0: I want to know if it's possible to save the output of this code into a dictionary (maybe it's also the wrong data type). I'm not experienced in coding yet, so I can't think of a way it could work.
I want to create a dictionary that has the lines of the txt file in it alongside the number of the corresponding line. In the end, I want to create code where the user has the option to search for a word in the lines through an input; the output should return the corresponding line. Has anyone a suggestion? Thanks in advance! Cheers!
```
filepath = 'myfile.txt'
with open(filepath) as fp:
line = fp.readline()
cnt = 1
while line:
print("Line {}: {}".format(cnt, line.strip()))
line = fp.readline()
cnt += 1
```
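For reference, a minimal sketch of the dictionary-based approach described in the question (line numbers as keys; assumes `myfile.txt` exists):

```
filepath = 'myfile.txt'

# Build {line_number: line_text} from the file.
with open(filepath) as fp:
    lines = {cnt: line.strip() for cnt, line in enumerate(fp, start=1)}

# Ask the user for a word and print every line containing it.
word = input("Search for: ")
for cnt, text in lines.items():
    if word in text:
        print("Line {}: {}".format(cnt, text))
```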
|
2018/03/14
| 822 | 1,855 |
<issue_start>username_0: In R, I have a vector containing characters:
```
v <- c("X412A-Y423A", "X400A-Y405B", "X499A-Y448B", "X455A-Y213A")
```
I want to create a vector based on this one, depending on the last character in each string (A or B), the new vector will have a different value (red or blue) such as:
```
vnew <- c("red","blue","blue","red")
```
Any help would be appreciated.<issue_comment>username_1: With base `R` you can do:
```
v <- c("X412A-Y423A", "X400A-Y405B", "X499A-Y448B", "X455A-Y213A")
n <- nchar(v)
ifelse(substr(v, n, n)=="A", "red", "blue")
```
or you can use regular expressions (also base `R`):
```
ifelse(grepl("A$", v), "red", "blue")
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: This will work:
```
> v <- c("X412A-Y423A", "X400A-Y405B", "X499A-Y448B", "X455A-Y213A")
> x <- substr(v,nchar(v),nchar(v))
> vnew <- ifelse(x == "A", "red", ifelse(x == "B", "blue", ""))
> vnew
[1] "red" "blue" "blue" "red"
```
Upvotes: 0 <issue_comment>username_3: An elegant solution can be achieved using `case_when` from the `dplyr` package and `sub` from base R. This solution is more suited to a `data.frame`-like scenario.
```
#data
v <- c("X412A-Y423A", "X400A-Y405B", "X499A-Y448B", "X455A-Y213A")
library(dplyr)
data.frame(v, stringsAsFactors = FALSE) %>%
mutate(value = sub('.*(?=.$)', '', v, perl=T)) %>%
mutate(color = case_when(
.$value == 'A' ~ "Red",
.$value == 'B' ~ "Blue",
TRUE ~ "Green"
))
# v value color
# 1 X412A-Y423A A Red
# 2 X400A-Y405B B Blue
# 3 X499A-Y448B B Blue
# 4 X455A-Y213A A Red
```
For just creating the vector of colors, the solution could be:
```
last_char = sub('.*(?=.$)', '', v, perl=T)
case_when(
last_char == 'A' ~ "Red",
last_char == 'B' ~ "Blue",
TRUE ~ "Green"
)
[1] "Red" "Blue" "Blue" "Red
```
Upvotes: 1
|
2018/03/14
| 1,561 | 6,342 |
<issue_start>username_0: Currently getting to grips with JS and React. I want to map the value from the selected option to `this.state.transcode_profile`. I have tried the same method as I am using on the `input` tags; unfortunately, this is not working.
Here's the code, what am I doing wrong?
```
import React, { Component } from "react";
const ProfileList = ({profiles}) => (
  <select>
    <option>-----</option>
    {profiles.map(profile => <option>{profile.name}</option>)}
  </select>
);
const url = 'http://localhost:8000/api/tasks/';
class Submit_job extends Component {
constructor(){
super();
this.state = {
"profiles": [],
"material_id": null,
"transcode_profile": null,
"start_date": null,
"end_date": null,
};
};
componentDidMount(){
fetch("http://localhost:8000/api/profiles/")
.then(response => response.json())
.then(response => this.setState({ profiles: response}))
}
onChange = (e) => {
// Because we named the inputs to match their corresponding values in state, it's
// super easy to update the state
const state = this.state;
state[e.target.name] = e.target.value;
this.setState(state);
};
handleChange(e){
this.setState({selectValue:e.target.value});
};
postData = (e) => {
e.preventDefault();
// Default options are marked with *
return fetch(url, {
body: JSON.stringify({status: 'submitted',
video_data: {material_id: this.state.material_id},
profile_data: {name: this.state.transcode_profile },
start: this.state.start_date,
end: this.state.end_date,
user: 'Foobar'}), // must match 'Content-Type' header
cache: 'no-cache', // *default, no-cache, reload, force-cache, only-if-cached
credentials: 'same-origin', // include, same-origin, *omit
headers: {
'user-agent': 'Mozilla/4.0 MDN Example',
'content-type': 'application/json'
},
method: 'POST', // *GET, POST, PUT, DELETE, etc.
mode: 'cors', // no-cors, cors, *same-origin
redirect: 'follow', // *manual, follow, error
referrer: 'no-referrer', // *client, no-referrer
})
.then(response => response.json()) // parses response to JSON
};
render() {
return (
Submit Job
----------
Material ID:
Transcode Profile:
Start Date:
End Date:
Submit
);
}
}
export default Submit_job;
```
**Edit:**
This is how I got it working.
```
import React, { Component } from "react";
const ProfileList = ({onChange, profiles, value}) => (
  <select name="transcode_profile" onChange={onChange} value={value}>
    <option>-----</option>
    {profiles.map(profile => <option>{profile.name}</option>)}
  </select>
);
const url = 'http://localhost:8000/api/tasks/';
class Submit_job extends Component {
constructor(){
super();
this.state = {
"profiles": [],
"material_id": null,
"transcode_profile": null,
"start_date": null,
"end_date": null,
};
};
componentDidMount(){
fetch("http://localhost:8000/api/profiles/")
.then(response => response.json())
.then(response => this.setState({ profiles: response}))
}
onChange = (e) => {
// Because we named the inputs to match their corresponding values in state, it's
// super easy to update the state
const state = this.state;
state[e.target.name] = e.target.value;
this.setState(state);
};
postData = (e) => {
e.preventDefault();
// Default options are marked with *
return fetch(url, {
body: JSON.stringify({status: 'submitted',
video_data: {material_id: this.state.material_id},
profile_data: {name: this.state.transcode_profile },
start: this.state.start_date,
end: this.state.end_date,
user: 'Lewis'}), // must match 'Content-Type' header
cache: 'no-cache', // *default, no-cache, reload, force-cache, only-if-cached
credentials: 'same-origin', // include, same-origin, *omit
headers: {
'user-agent': 'Mozilla/4.0 MDN Example',
'content-type': 'application/json'
},
method: 'POST', // *GET, POST, PUT, DELETE, etc.
mode: 'cors', // no-cors, cors, *same-origin
redirect: 'follow', // *manual, follow, error
referrer: 'no-referrer', // *client, no-referrer
})
.then(response => response.json()) // parses response to JSON
};
render() {
return (
Submit Job
----------
Material ID:
Transcode Profile:
Start Date:
End Date:
Submit
);
}
}
export default Submit_job;
```<issue_comment>username_1: The select should be passed the value as well, so it knows which option is selected:
```
const ProfileList = ({ onChange, profiles, value }) => (
  <select onChange={onChange} value={value}>
    <option>-----</option>
    {profiles.map(profile => <option>{profile.name}</option>)}
  </select>
);
```
Then when rendering ProfileList, we should pass the selectValue state, as well as the handleChange callback.
```
<ProfileList
  value={this.state.selectValue}
  onChange={this.handleChange}
  profiles={this.state.profiles}
/>
```
You should also set the default state in the constructor as well for selectValue
```
constructor(){
super();
this.state = {
"profiles": [],
"material_id": null,
"transcode_profile": null,
"start_date": null,
"end_date": null,
"selectValue": "-----"
};
}
```
If you haven't already read the React docs on forms, I would recommend them: <https://reactjs.org/docs/forms.html#the-select-tag>
Upvotes: 3 [selected_answer]<issue_comment>username_2: ```
handleChange = (e, {value}) => {
this.setState({selectValue:value});
};
```
this can also be written as
```
handleChange = (e, data) => {
this.setState({selectValue:data.value});
};
```
For some elements, Semantic-UI-React requires you to accept the event first and also a second argument which contains the data you need. It is a well-known limitation.
Upvotes: 1
|
2018/03/14
| 370 | 1,177 |
<issue_start>username_0: Trying to document some Kotlin code with a pretty routine structure:
1. A numbered list.
* With a bulleted sublist.
2. Where the numbers continue correctly at the top level of the list.
What I take to be the [official Dokka page](https://kotlinlang.org/docs/reference/kotlin-doc.html) doesn't even have the word "list" on the page. Have Googled hither and yon without finding any info on how to do this. Help!<issue_comment>username_1: Dokka uses markdown, which has many forks and flavors and is not standardized at all, but this works:
```
/**
* list:
* - item1
* - item2
*
* numbered:
* 1. one
* 2. two
*/
```
[](https://i.stack.imgur.com/yhSUO.png)
Upvotes: 1 <issue_comment>username_2: As far as I can see, nesting with additional indentation of four spaces works:
```
/**
* 1. A numbered list.
* * With a bulleted sublist.
* 1. Where the numbers continue correctly at the top level of the list.
*/
val foo: Int = 0
```
The result is:
[](https://i.stack.imgur.com/MKhtg.png)
Upvotes: 3
|
2018/03/14
| 1,911 | 6,915 |
<issue_start>username_0: I have two different config maps, **test-configmap** and **common-config**. I tried to mount them at the same location, but one config map overwrote the other. Then I read about `subPath`, and it did not work either.
deploy.yaml
```
apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: testing
spec:
replicas: 1
template:
metadata:
name: testing
labels:
app: testing
spec:
containers:
- name: testing-container
image: testing
imagePullPolicy: IfNotPresent
ports:
- containerPort: __PORT__
volumeMounts:
- name: commonconfig-volume
mountPath: /usr/src/app/config/test.config
subPath: test.config
volumes:
- name: commonconfig-volume
configMap:
name: test-configmap
- name: commonconfig-volume
configMap:
name: common-config
```
Error :
`The Deployment "testing" is invalid: spec.template.spec.volumes[1].name: Duplicate value: "commonconfig-volume"`
I am not sure whether merging two config maps is achievable or not, and if yes, how I should do it.<issue_comment>username_1: You cannot mount two ConfigMaps at the same location.
But mentioning `subPath` and `key` for every item in each configmap will let you get items from both configmaps in the same location. You'll have to write the mount point for each file manually:
```
apiVersion: v1
kind: Pod
metadata:
name: config-single-file-volume-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "cat /etc/special-key" ]
volumeMounts:
- name: config-volume-1
mountPath: /etc/special-key1
subPath: path/to/special-key1
- name: config-volume-2
mountPath: /etc/special-key2
subPath: path/to/special-key2
volumes:
- name: config-volume-1
configMap:
name: test-configmap1
items:
- key: data-1
path: path/to/special-key1
- name: config-volume-2
configMap:
name: test-configmap2
items:
- key: data-2
path: path/to/special-key2
restartPolicy: Never
```
Another way is to mount them under the same directory but with different subPaths, so that you don't have to specify items by hand. However, here the keys from each configmap will be put into two different directories:
```
apiVersion: v1
kind: Pod
metadata:
name: config-single-file-volume-pod
spec:
containers:
- name: test-container
image: gcr.io/google_containers/busybox
command: [ "/bin/sh", "-c", "cat /etc/special-key" ]
volumeMounts:
- name: config-volume-1
mountPath: /etc/special-keys
subPath: cm1
- name: config-volume-2
mountPath: /etc/special-keys
subPath: cm2
volumes:
- name: config-volume-1
configMap:
name: test-configmap1
- name: config-volume-2
configMap:
name: test-configmap2
restartPolicy: Never
```
`cm1` and `cm2` will be two directories containing files derived from keys in `test-configmap1` and `test-configmap2` respectively.
Upvotes: 4 [selected_answer]<issue_comment>username_2: One way would be to mount them at different points but in the same `emptyDir` volume, mounting that same volume into an [init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) and including a short script in the init container to merge the two files using whatever tools you install at the start of the script. Scripts can easily be included in the pod manifest using the technique in [this](https://stackoverflow.com/questions/49245628/how-to-execute-an-argument-in-kubernetes/49245794#49245794) answer.
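A rough, hypothetical sketch of that approach (all names here are illustrative; the app container would also mount the `merged` volume):

```
spec:
  volumes:
  - name: cm1
    configMap: { name: test-configmap }
  - name: cm2
    configMap: { name: common-config }
  - name: merged          # shared scratch space
    emptyDir: {}
  initContainers:
  - name: merge-config
    image: busybox
    # concatenate the two rendered files into one merged config
    command: ["sh", "-c", "cat /cm1/* /cm2/* > /merged/test.config"]
    volumeMounts:
    - { name: cm1, mountPath: /cm1 }
    - { name: cm2, mountPath: /cm2 }
    - { name: merged, mountPath: /merged }
```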
Upvotes: 0 <issue_comment>username_3: You have to use special [projected](https://kubernetes.io/docs/concepts/storage/volumes/#projected) volumes to achieve that. For example, your deployment:
```
apiVersion: apps/v1
kind: Deployment
metadata:
name: testing
spec:
replicas: 1
selector:
matchLabels:
app: testing
template:
metadata:
name: testing
labels:
app: testing
spec:
containers:
- name: testing-container
image: testing
imagePullPolicy: IfNotPresent
ports:
- containerPort: __PORT__
volumeMounts:
- name: commonconfig-volume
mountPath: /usr/src/app/config
volumes:
- name: commonconfig-volume
projected:
sources:
- configMap:
name: test-configmap
- configMap:
name: common-config
```
You can use a `secret` the same way as a `configMap`.
Upvotes: 5 <issue_comment>username_4: Another example of how this can be done for mounting multiple configmaps. This is for an nginx Docker image where you want to both replace the main /etc/nginx/nginx.conf and the files in /etc/nginx/conf.d. This also deletes the default.conf file in conf.d.
```
containers:
- name: nginx-proxy
image: nginx:1.16-alpine
imagePullPolicy: Always
ports:
- containerPort: 443
- containerPort: 80
volumeMounts:
- name: nginx-main-conf-file
mountPath: /etc/nginx/nginx.conf
subPath: nginx.conf
- name: nginx-site-conf-file
mountPath: /etc/nginx/conf.d
volumes:
- name: nginx-main-conf-file
configMap:
name: nginx-main-conf
- name: nginx-site-conf-file
configMap:
name: nginx-site-conf
```
And one **very important** point: if you have any commented-out lines (`# something`) in your YAML file, then this will not work. It's a bug. Tested in kubectl v1.14.
Upvotes: 2 <issue_comment>username_5: **NOTE:** This approach is good if you have a small amount of data in the configmaps.
Let's say you have two configmaps
**test-configmap**
```
apiVersion: v1
kind: ConfigMap
metadata:
name: test-configmap
data:
test-1.conf: |
test-property-1=test-value-1
```
**common-configmap**
```
apiVersion: v1
kind: ConfigMap
metadata:
name: common-configmap
data:
common-1.conf: |
common-property-1=common-value-1
```
Instead of having different configmaps, you can put the same data in a single configmap, like below.
```
apiVersion: v1
kind: ConfigMap
metadata:
name: single-configmap
data:
  common-1.conf: |
    common-property-1=common-value-1
  test-1.conf: |
    test-property-1=test-value-1
```
Now create a volume from the above configmap and mount it into the container with a mountPath, like below.
**volume from configmap**
```
volumes:
- configMap:
defaultMode: 420
  name: single-configmap
name: my-single-config
```
**volumeMount**
```
volumeMounts:
- mountPath: /path/to/config/folder/
name: my-single-config
```
Now you can see two files at the `/path/to/config/folder/` location inside the container. Tell your application which one to use.
Upvotes: 2
|
2018/03/14
| 2,042 | 7,480 |
<issue_start>username_0: I am trying to format a cell to have multiple font colors using a function in Google Apps Script. I am unable to find any documentation on it. Also, using `getFontColor()` doesn't return anything useful.
Is there any way to programmatically reproduce this functionality
[](https://i.stack.imgur.com/0PEav.png)
that is available to users via the Google Sheets web UI?<issue_comment>username_1: The [Sheets API](https://developers.google.com/sheets/api/reference/rest/) is a bit daunting to start using, but allows very fine-grained control over your spreadsheets. You'll have to [enable it](https://developers.google.com/apps-script/guides/services/advanced), as it is an "Advanced Service". I strongly recommend reviewing the [Sample Codelab](https://developers.google.com/sheets/api/samples/formatting).
With the Sheets API, the [`TextFormatRun`](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets#textformatrun) property can be manipulated on a cell-by-cell basis. Note:
>
> Runs of rich text applied to subsections of the cell. Runs are only valid on user entered strings, not formulas, bools, or numbers. Runs start at specific indexes in the text and continue until the next run. Properties of a run will continue unless explicitly changed in a subsequent run (and properties of the first run will continue the properties of the cell unless explicitly changed).
>
>
> When writing, the new runs will overwrite any prior runs. When writing a new userEnteredValue, previous runs will be erased.
>
>
>
This example uses it to adjust the green value of text, increasing from 0 to 100% over the length of a string in the active cell. Adjust to suit your needs.
```
function textFormatter() {
// Get the current cell's text.
var wb = SpreadsheetApp.getActive(), sheet = wb.getActiveSheet();
var cell = sheet.getActiveCell(), value = cell.getValue();
var len = value.toString().length;
if(len == 0) return;
// Change the color every 2 characters.
var newCellData = Sheets.newCellData();
newCellData.textFormatRuns = [];
var step = 1 / len;
for(var c = 0; c < len; c += 2) {
var newFmt = Sheets.newTextFormatRun();
newFmt.startIndex = c;
newFmt.format = Sheets.newTextFormat();
newFmt.format.foregroundColor = Sheets.newColor();
newFmt.format.foregroundColor.green = (c + 2) * step;
newCellData.textFormatRuns.push(newFmt);
}
// Create the request object.
var batchUpdateRQ = Sheets.newBatchUpdateSpreadsheetRequest();
batchUpdateRQ.requests = [];
batchUpdateRQ.requests.push(
{
"updateCells": {
"rows": [ { "values": newCellData } ],
"fields": "textFormatRuns",
"start": {
"sheetId": sheet.getSheetId(),
"rowIndex": cell.getRow() - 1,
"columnIndex": cell.getColumn() - 1
}
}
}
);
Sheets.Spreadsheets.batchUpdate(batchUpdateRQ, wb.getId());
}
```
---
Edit: Depending on how the value of the cells to be formatted are set, including the value of the cell in the same request may be necessary as well. [See this example on the issue tracker](https://issuetracker.google.com/issues/36764247#comment18)
Upvotes: 4 [selected_answer]<issue_comment>username_2: As of July 2018, Apps Script supports changing individual text colors and other font-related styles. Two methods were added to `SpreadsheetApp`: `newTextStyle()` and `newRichTextValue()`. The following Apps Script changes such font styles in A1. For best effect, use a lengthy string (30 characters or more).
```
function rainbow(){
var rng = SpreadsheetApp.getActiveSheet().getRange("A1");
var val = rng.getValue().toString();
var len = val.length; // length of string in A1
var rich = SpreadsheetApp.newRichTextValue(); //new RichText
rich.setText(val); //Set Text value in A1 to RichText as base
  var colors = ["red", "orange", "green", "blue"]; // any CSS color names work
  for (var i = 0; i < len; i++) {
    var style = SpreadsheetApp.newTextStyle()
        .setForegroundColor(colors[i % colors.length]).build();
    rich.setTextStyle(i, i + 1, style); // style one character at a time
  }
  rng.setRichTextValue(rich.build());
}
```
---
~~Documentation is not published yet. Methods are subject to change~~
### References:
* [RichTextValue](https://developers.google.com/apps-script/reference/spreadsheet/rich-text-value)
* [TextStyle](https://developers.google.com/apps-script/reference/spreadsheet/text-style)
Upvotes: 4 <issue_comment>username_3: ```
requests = [
    {
        "updateCells": {
            "range": {
                "sheetId": sheetId,
                "startRowIndex": startRowIndex,
                "endRowIndex": endRowIndex,
                "startColumnIndex": startColumnIndex,
                "endColumnIndex": endColumnIndex
            },
            "rows": [{
                "values": [{
                    "textFormatRuns": [
                        {
                            "format": {
                                # color components are fractions in [0, 1]
                                "foregroundColor": {
                                    "red": 0.0,
                                    "green": 1.0,
                                    "blue": 0.12
                                }
                            },
                            "startIndex": 0
                        }
                    ]
                }]
            }],
            "fields": "textFormatRuns(format)"
        }
    }
]
try:
    result = service.spreadsheets().batchUpdate(spreadsheetId=internamiento_id,
                                                body={'requests': requests}).execute()
    print('{0} cells updated'.format(result.get('totalUpdatedCells')))
except Exception as e:
    print(e)
    json_salida["error"] = "An error occurred"
    return json_salida, 400
```
Upvotes: 0 <issue_comment>username_4: The function will generate text and then go through all of the cells, highlighting the chosen words. So you can just run it on a blank sheet to see how it works. It also handles multiple colors.
```
function highlightword() {
const red = SpreadsheetApp.newTextStyle().setForegroundColor('red').build();
const org = SpreadsheetApp.newTextStyle().setForegroundColor('orange').build();
const blu = SpreadsheetApp.newTextStyle().setForegroundColor('blue').build();
const cA = [red,org,blu];//colors array
const sA = ['Mimi id sweet litter wiener dog', 'username_4 died and we both miss him', 'Vony died to and I really miss her.', 'Someday fairly soon I will probably die.'];
const wordA = ['sweet', 'dog', 'died', 'fairly', 'little', 'and','Mimi','username_4'];
const ss = SpreadsheetApp.getActive();
const sh = ss.getSheetByName('Sheet0');
const rg = sh.getRange(1, 1, 10, 5);
let vs = rg.getValues();
rg.clearContent();
const dA = vs.map((r, i) => {
let row = [...Array(rg.getWidth()).keys()].map(i => sA[Math.floor(Math.random() * sA.length)]);
return row.slice();
});
rg.setValues(dA);
//End of sample text generation
SpreadsheetApp.flush();
dA.forEach((r, i) => {
r.forEach((c, j) => {
let idxObj = { pA: [] };
wordA.forEach(w => {
let idx = c.indexOf(w);
if (~idx) {
idxObj[w] = idx;
idxObj.pA.push(w);
}
});
if (idxObj.pA.length > 0) {
let cell = sh.getRange(i + 1, j + 1);
let val = SpreadsheetApp.newRichTextValue().setText(c);
idxObj.pA.forEach((p,k) => {
val.setTextStyle(idxObj[p], idxObj[p] + p.length, cA[k % cA.length]);
});
cell.setRichTextValue(val.build());
}
});
});
}
```
Upvotes: 1
|
2018/03/14
| 1,000 | 4,152 |
<issue_start>username_0: So I'm making a java program that will ask the user "how are you" and if the criteria in the if and if else statements isn't met then it should loop but i'm not sure how to implement the loop to make it so when the user doesn't input anything that is in both of the if statements it should repeat the code
Here is my code
```
import java.util.Scanner;
public class simplechatbot {
private static Scanner scanner;
public static void main(String args[]) {
scanner = new Scanner(System.in);
System.out.print("What is your name");
String greeting = scanner.nextLine();
System.out.print("Hi" + " " + greeting + ", ");
System.out.print("How are you");
String howareyou = scanner.nextLine();
if (howareyou.equalsIgnoreCase("good")) {
System.out.print("Thats good to hear!");
} else if (howareyou.equalsIgnoreCase("not good")) {
System.out.print("Unfortunate");
} else {
/*
this is where I want to put the loop that goes back to asking the user
"how are you" until they say something which matches the criteria
*/
}
}
}
```<issue_comment>username_1: You could use a while loop which you only break from when your desired conditions are met, repeatedly prompting the user to type a desired value:
```
while(true) {
    if (howareyou.equalsIgnoreCase("good")) {
        System.out.print("Thats good to hear!");
        break;
    } else if (howareyou.equalsIgnoreCase("not good")) {
        System.out.print("Unfortunate");
        break;
    }
System.out.print("Please enter a valid response:");
howareyou = scanner.nextLine();
}
```
Upvotes: 0 <issue_comment>username_2: You can wrap everything in a `while` loop and then use the `break` statement to break out of the loop.
```
import java.util.Scanner;
public class simplechatbot{
private static Scanner scanner;
public static void main(String args[]) {
scanner = new Scanner(System.in);
System.out.print("What is your name");
String greeting = scanner.nextLine();
System.out.print("Hi" + " " + greeting + ", ");
while(true){
System.out.print("How are you");
String howareyou = scanner.nextLine();
            if (howareyou.equalsIgnoreCase("good")) {
                System.out.print("Thats good to hear!");
                break; // jumps to "here"
            } else if (howareyou.equalsIgnoreCase("not good")) {
                System.out.print("Unfortunate");
                break; // jumps to "here"
            }
}
// <- "here"; continue with the rest of your program at this line
}
}
```
Upvotes: 0 <issue_comment>username_3: A `do while` loop should work nicely for this use case. Something like this:
```
String howareyou = null;
do {
System.out.print("How are you");
howareyou = scanner.nextLine();
} while (!howareyou.equalsIgnoreCase("good") && !howareyou.equalsIgnoreCase("not good"));
if (howareyou.equalsIgnoreCase("good")) {
    System.out.print("Thats good to hear!");
} else {
System.out.print("Unfortunate");
}
```
Upvotes: 0 <issue_comment>username_4: Use a while true loop forever unless one of the break conditions is met:
```
import java.util.Scanner;
public class SimpleChatBot {
public static void main(String args[]) {
try (Scanner scanner = new Scanner(System.in)) {
while (true) {
System.out.print("What is your name");
String greeting = scanner.nextLine();
System.out.print("Hi" + " " + greeting + ", ");
System.out.print("How are you");
String howareyou = scanner.nextLine();
if (howareyou.equalsIgnoreCase("good")) {
System.out.print("Thats good to hear!");
break;
} else if (howareyou.equalsIgnoreCase("not good")) {
System.out.print("Unfortunate");
break;
}
}
}
}
}
```
Upvotes: 2 [selected_answer]
|
2018/03/14
| 850 | 3,192 |
<issue_start>username_0: I am trying to find the presence of a string pattern like '/\*' in a string using Python.
I tried a code snippet like the following :
```
strk = "*\"
if strk in num
print ("here it is")
```
But at the first line I got an error stating:
`SyntaxError: EOL while scanning string literal`
How to make the interpreter distinguish between "\" used as a string element and "\" as the newline character ?
I, being inspired by the following link, tried using
```
strk = "*(\)"
```
But this didn't work .
<http://python-reference.readthedocs.io/en/latest/docs/str/escapes.html><issue_comment>username_1: Actually, you have to escape the backslash with a second backslash:
```
strk = "*\\"
if strk in num:
print ("here it is")
```
Upvotes: 2 <issue_comment>username_2: There are two errors in your code.
The first is that when you write backslashes (`\`) into code, you should use two backslashes, like `\\`. A single backslash is used in many languages for introducing special characters, e.g. `\n` stands for a newline and `\t` stands for a tab.
The second error is in your if statement, where you are missing a colon at the end. I would also recommend you write your code like:
```
if "*\\" in word:
print(word)
```
where `word` could be any string.
Upvotes: 1 <issue_comment>username_3: >
> How to make the interpreter distinguish between "\" used as a string element and "\" as the newline character ?
>
>
>
OK, first, `\` is not a newline character. `\n` is a newline character. And `\t` is a tab character. And so on. You've already found the whole list of them in some third-party docs, so I won't list them here.
So, how does the interpreter\* distinguish between `\` meaning an actual backslash, vs. `\` meaning the first character of an escape sequence?
The rule is pretty simple:\*\* It looks at the next character, and if it's a `n` or `t` or `x` or anything else on that list you found, then the `\` is the first character of an escape sequence; otherwise, it's just a plain backslash. (There's a separate rule for raw strings, but let's not worry about that here.\*\*\*)
If you want to make sure you have a plain backslash: notice that `\\` is an escape sequence for `\`. So, just escape the backslash to make sure it's a backslash instead of an escape for something else:
```
strk = "*\\"
```
Or, to put it in terms that don't make your head hurt: Double all your backslashes in source-code string literals.
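A quick sanity check you can paste into the REPL (a minimal sketch):

```
print(len("*\\"))        # 2 -> an asterisk and one real backslash
print("\\" == chr(92))   # True -> "\\" is a single backslash character
print("*\\" in "a*\\b")  # True -> the two-character pattern is found
```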
---
\* Actually, by the time it gets to the interpreter, it's just a string constant object somewhere; it's the compiler that handles escape sequences. But that doesn't matter here.
\*\* But it's not the same rule used by C. For example, in Python, `\Q` means a backslash followed by a Q, but in C, it's just the Q. Most other languages have either copied C, or just made it an error to use a backslash that isn't part of a documented escape sequence.
\*\*\* For most cases of backslash-related confusion, raw strings make everything a lot easier. But for the specific case of a backslash at the very end of the string, they instead make things slightly more confusing.
Upvotes: 3 [selected_answer]
|
2018/03/14
| 175 | 742 |
<issue_start>username_0: I'm working on two computers. Sometimes I commit changes and switch to other branches, and therefore forget these local commits. Is there a way in IntelliJ to view all unpushed commits, so I can check changes in different branches?<issue_comment>username_1: You can go to the **Version Control** pane, select the **Log** tab, and look for any occurrences where branch `abc` is ahead of `origin/abc`.
Upvotes: 2 <issue_comment>username_2: There is a way to see all unpushed commits of the current branch: just open the Push dialog and it will list all commits that are not yet pushed.
However, the only way to find ALL unpushed commits is to check the references in the Log tab, as suggested earlier.
Upvotes: 2
|
2018/03/14
| 919 | 3,301 |
<issue_start>username_0: I am currently working with an application using an Azure-hosted SQL Server instance. The application data doesn't take up a ton of physical space; however, there are a lot of records. There are times when I need to delete a large amount of records, for example let's say 5 million records. As you might guess, this takes a lot of time and resources. The issue is that I don't need a lot of resources for anything else. In order to not peg the DTUs at 100% for 30 minutes or longer, I need to provision many more resources than I need under normal use. I don't care how long the delete takes, within reason. From what I have researched, I cannot find a good way to limit the usage. It would be nice if I could somehow only allow 50% usage for the operation or something like that. Maybe I am missing something that could make the delete more efficient, but I don't think so. It's a pretty simple table with an index on the column I am using to do the delete. It seems like the main component that gets maxed out is Data IO. If anyone has any good ideas on how I can manage this, it would be appreciated.
[](https://i.stack.imgur.com/brnUU.png)<issue_comment>username_1: 1) Delete by chunks in a loop. Check this: [How to delete large data of table in SQL without log?](https://stackoverflow.com/questions/24213299/how-to-delete-large-data-of-table-in-sql-without-log/28324562#28324562)
2) Use partitions and truncate by partitions. Check this: <https://stackify.com/how-to-partition-tables-in-azure-sql/>
Upvotes: 4 [selected_answer]<issue_comment>username_2: Delete involves locating the data, reading it from disk, and logging those operations.
**Locating data/Minimizing IO:**
To ensure IO is minimized, you will need to add the right index.
Sometimes some operators involved in a delete may run in parallel; to avoid this, you can add a maxdop hint to ensure nothing in this query runs in parallel:
```
delete from table where somecol=someval
option(maxdop 1)
```
**Minimizing log operation:**
Every DML operation is logged; one huge delete uses a lot of log IO (which is one of the DTU metrics of an Azure database). You will have to delete in batches, so that each batch is one small transaction:
```
while 1=1
begin
delete top(1000) from table where id=someval
if @@rowcount =0
break;
end
go
```
You can also partition your tables to make deletes faster. Truncate can now be used with partitions, starting with SQL Server 2016:
```
TRUNCATE TABLE tablename
WITH (PARTITIONS (1,2,3))
GO
```
The syntax also allows you to specify a range:
```
[ WITH ( PARTITIONS ( { partition_number_expression | range }
    [ , ...n ] ) ) ]
```
Partitioning helps the most only if you want to delete all or nothing of a partition. If you are doing these types of deletes often, you may need to design your table to support truncation.
**Further reading and References:**
<https://www.sqlshack.com/sql-server-2016-enhancements-truncate-table-table-partitioning/>
Upvotes: 3 <issue_comment>username_3: A loop with a delay may work. This one waits one second between batches:
```
select 1 -- seed @@ROWCOUNT so the loop body runs at least once
WHILE (@@ROWCOUNT > 0)
BEGIN
DELETE TOP (10000) LargeTable
WHERE readTime < dateadd(MONTH,-7,GETDATE())
WAITFOR DELAY '00:00:01'
END
```
Upvotes: 1
|
2018/03/14
| 271 | 982 |
<issue_start>username_0: If I do:
```
<script>readInData();</script>
<script src="external.js"></script>
```
readInData() doesn't get called and I get an uncaught reference error. (The external.js functions aren't contained within a self-invoking function.) But if I put readInData in a firebase call (below), then it executes.
```
<script>
  db.collection("users").get().then((querySnapshot) => {
    querySnapshot.forEach((doc) => {
      readInData();
    });
  });
</script>
<script src="external.js"></script>
```
Is that a scope issue? Thanks<issue_comment>username_1: Looks like changing the order of the scripts will solve your issue. If you are lazy, you can put a big enough `setTimeout(readInData, 10000)` and check whether it's connected with timing.
It does not look like a scope issue.
Upvotes: 0 <issue_comment>username_2: Load the external.js file first so the invocation in your script tag can access it.
```
<script src="external.js"></script>
<script>
  db.collection("users").get().then((querySnapshot) => {
    querySnapshot.forEach((doc) => {
      readInData();
    });
  });
</script>
```
Upvotes: 2 [selected_answer]
|
2018/03/14
| 4,144 | 11,241 |
<issue_start>username_0: I think the Firebase Admin SDK is missing a very important function (or maybe its documentation).
TL; DR : How can you refresh custom token with the Admin SDK?
The documentation (<https://firebase.google.com/docs/auth/admin/manage-sessions>) says:
>
> Firebase Authentication sessions are long lived. Every time a user
> signs in, the user credentials are sent to the Firebase Authentication
> backend and exchanged for a Firebase ID token (a JWT) and refresh
> token. Firebase ID tokens are short lived and last for an hour; the
> refresh token can be used to retrieve new ID tokens.
>
>
>
OK. But how? There is no mention of how to replace the refresh token with a new custom token. There is lots of documentation regarding how you can revoke a refresh token, etc.
There is, however, a REST API function that says
(<https://firebase.google.com/docs/reference/rest/auth/#section-refresh-token>)
>
> Exchange a refresh token for an ID token You can refresh a Firebase ID
> token by issuing an HTTP POST request to the
> securetoken.googleapis.com endpoint.
>
>
>
However, the access\_token (JWT) you get from this API call is not accepted either. And the formats of the JWTs are not even similar. Below are two samples of tokens retrieved (decoded):
i. with the admin.auth().createCustomToken(uid) method of Admin SDK
```
{
"uid": "9N5veUXXXXX7eHOLB4ilwFexQs42",
"iat": 1521047461,
"exp": 1521051061,
"aud": "https://identitytoolkit.googleapis.com/google.identity.identitytoolkit.v1.IdentityToolkit",
"iss": "<EMAIL>",
"sub": "<EMAIL>"
}
```
ii. with the <https://securetoken.googleapis.com/v1/token?key=[API_KEY]> call
```
{
"iss": "https://securetoken.google.com/XXX",
"aud": "XXX",
"auth_time": 1521047461,
"user_id": "9N5veUXXXXX7eHOLB4ilwFexQs42",
"sub": "9N5veUXXXXX7eHOLB4ilwFexQs42",
"iat": 1521051719,
"exp": 1521055319,
"email": "<EMAIL>",
"email_verified": false,
"firebase": {
"identities": {
"email": [
"<EMAIL>"
]
},
"sign_in_provider": "password"
}
}
```
There are plenty of questions raised about this topic; maybe someone from the Firebase team can answer it once and for all. See the links below.
Thanks for your time!!
* [How to use the Firebase refreshToken to reauthenticate?](https://stackoverflow.com/questions/38233687/how-to-use-the-firebase-refreshtoken-to-reauthenticate)
* [Firebase - Losing Auth Session After An Hour](https://stackoverflow.com/questions/47501971/firebase-losing-auth-session-after-an-hour)
* [How to handle custom firebase token expiry in Firebase 3.x.x](https://stackoverflow.com/questions/38350843/how-to-handle-custom-firebase-token-expiry-in-firebase-3-x-x)
* [Firebase authentication with custom token](https://stackoverflow.com/questions/42238246/firebase-authentication-with-custom-token)
* [Handling one hour token expiration in Firebase generated from iOS used for node auth](https://stackoverflow.com/questions/40910536/handling-one-hour-token-expiration-in-firebase-generated-from-ios-used-for-node)<issue_comment>username_1: You need to exchange a custom token for an ID token and a refresh token; this is mentioned [here](https://firebase.google.com/docs/reference/rest/auth/#section-verify-custom-token). The call should include the custom token and the property "returnSecureToken" set to true. If this property is not added or is false, you will only get the ID token.
After doing that, you can use the Refresh token to get a new ID Token once it expires. See the [documentation](https://firebase.google.com/docs/reference/rest/auth/#section-refresh-token).
Both the custom token and the ID token are short-lived (1 hour), but their purposes are different; that is why the formats are different. You use the ID token to make authenticated calls, whereas the custom token is only used to start the session and get an ID token and refresh token.
Keep in mind that if you are using an SDK, this whole work is being handled by the SDK.
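For illustration, here is a sketch of the two calls with curl (endpoints per the REST docs linked above; `API_KEY` is your project's web API key, and the placeholder token values are assumptions):

```
# 1. Exchange the custom token for an ID token and a refresh token.
curl -X POST \
  'https://www.googleapis.com/identitytoolkit/v3/relyingparty/verifyCustomToken?key=API_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"token":"CUSTOM_TOKEN","returnSecureToken":true}'

# 2. Later, trade the refresh token for a fresh ID token.
curl -X POST \
  'https://securetoken.googleapis.com/v1/token?key=API_KEY' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=refresh_token&refresh_token=REFRESH_TOKEN'
```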
Upvotes: 3 <issue_comment>username_2: You do not refresh already existing custom tokens, but rather create new ones and exchange them for access or refresh tokens. Here is how I did it in a working project I am currently using:
**GENERATING A CUSTOM TOKEN FROM FIREBASE CLOUD FUNCTIONS**
-----------------------------------------------------------
Assuming you have your firebase project and a [Cloud Functions for Firebase](https://www.youtube.com/watch?v=DYfP-UIKxH0&t=245s) all set up.
This is how the Cloud Functions [index.ts](https://gist.github.com/username_2/25a86fd8f1940e5031df284735dde2ba) file would look like:
```
import * as functions from 'firebase-functions';
import * as admin from "firebase-admin";
// Start writing Firebase Functions
// https://firebase.google.com/docs/functions/typescript
export const getCustomToken = functions.https.onRequest((request, response) => {
if (admin.apps.length < 1) { //Checks if app already initialized
admin.initializeApp();
}
const uid = "USER_UUID"; //e.g. GVvCdXAC1FeC4WYTefcHD8So3q41
admin.auth().createCustomToken(uid)
.then(function(customToken) {
  console.log(customToken);
response.send(customToken);
})
.catch(function(error) {
console.log("Error creating custom token:", error);
});
});
```
The http GET request would look like:
```
https://us-central1-.cloudfunctions.net/getCustomToken
```
The response would look like:
```
<KEY>
```
Most probably you will have to enable IAM (Identity and Access Management), if not already enabled, and set up the service account credential. [Check the Troubleshooting](https://firebase.google.com/docs/auth/admin/create-custom-tokens#troubleshooting).
**EXCHANGING CUSTOM TOKEN FOR REFRESH & ACCESS TOKENS**
-------------------------------------------------------
The http POST request would look like:
```
https://www.googleapis.com/identitytoolkit/v3/relyingparty/verifyCustomToken?key=
```
with a body like:
```
{"token":"<KEY>","returnSecureToken":true}
```
The response would look like:
```
{
"kind": "identitytoolkit#VerifyCustomTokenResponse",
"idToken": "<KEY>",
"refreshToken": "<KEY>",
"expiresIn": "3600",
"isNewUser": false
}
```
Good luck,
Upvotes: 3 <issue_comment>username_3: How can you refresh custom token with the Admin SDK?
Answer: If you are using the Android SDK for Firebase, you should NEVER have to do this. If you are asking this question and are using the iOS or Android SDKs, you probably have a setup issue. I was using FirebaseAuth's signInWithCustomToken and was running into the same issue.
[An overview of Firebase tokens](https://medium.com/@jwngr/demystifying-firebase-auth-tokens-e0c533ed330c)
[The SDK has 1 hour to USE THE CUSTOM TOKEN](https://github.com/firebase/quickstart-android/issues/31). If you read the whole conversation and ignore the complaining, it lays out the issue; see samtstern's comment around July 6th.
Once FirebaseAuth's signInWithCustomToken function is called, the SDK will take care of keeping the tokens up to date IF YOU ARE SETUP correctly. [For more info](https://medium.com/@iammert/android-firebase-auth-error-fetching-token-solved-d13df3090c8c)
The SHA1 cert from your ANDROID app has to be in your Firebase Admin console. Once you add the SHA1 cert you will need to download the google-services.json file and add it to your app.
I took over a firebase account for an app still in development and ran into this problem.
Upvotes: 1 <issue_comment>username_4: You didn't mention programming language, but in Java, something like this works for me:
```
// Use the user ID with the Firebase Admin SDK to generate a custom token
// (createCustomToken throws FirebaseAuthException, so handle or declare it)
FirebaseAuth firebaseAuth = FirebaseAuth.getInstance();
String customToken = firebaseAuth.createCustomToken(userId);

// Now use the Firebase REST API to exchange the custom token for an ID and refresh token
// (GSON is assumed to be a shared com.google.gson.Gson instance)
JsonObject json = new JsonObject();
json.addProperty("token", customToken);
json.addProperty("returnSecureToken", true);
String postDataStr = GSON.toJson(json);

Map<String, String> headers = new HashMap<>();
headers.put("Content-Type", "application/json");

// Make a request to the Firebase client-side API to exchange the credentials
// for an ID token, refresh token, etc.
URI uri = new URI("https://identitytoolkit.googleapis.com/v1/accounts:signInWithCustomToken?key={API_KEY_HERE}");
. . .
```
Make the POST request, etc. Note that the response should be parsed as JSON; the ID token is then in `.idToken` and the refresh token is in `.refreshToken`.
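The later refresh step is a plain REST call as well, so any language works; a hedged JavaScript sketch against the documented Secure Token endpoint (`API_KEY` and `refreshToken` are placeholders):

```
// Exchange a refresh token for a fresh ID token
fetch('https://securetoken.googleapis.com/v1/token?key=API_KEY', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: 'grant_type=refresh_token&refresh_token=' + encodeURIComponent(refreshToken)
})
  .then(function (res) { return res.json(); })
  .then(function (data) {
    // Note: this endpoint returns snake_case fields (id_token, refresh_token)
    console.log(data.id_token, data.refresh_token);
  });
```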
Upvotes: 1
|
2018/03/14
| 556 | 1,917 |
<issue_start>username_0: I am having trouble uploading an image to Firebase Storage. I need an HTML input element with type "file", and the image selected through that input needs to be uploaded to Firebase Storage for my project. Is there also a way to store the location of that image in the Realtime Database so it can be accessed later? How would I do this?
Thank you in advance<issue_comment>username_1: To link it to the database, try this:
```
var ref = firebase.database().ref("Uploads");
var storage = firebase.storage();
var pathReference = storage.ref('images/stars.jpg');
pathReference.getDownloadURL().then(function(url) {
  ref.push().set({
    imgurl: url
  });
});
```
more info here:
<https://firebase.google.com/docs/storage/web/download-files>
<https://firebase.google.com/docs/storage/web/upload-files>
After adding the file to Storage, you will be able to get the `url` using `getDownloadURL()` and then add it to the database (see the sketch after the tree below):
```
Uploads
pushid
imgurl: url
```
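A minimal end-to-end sketch of the upload itself, assuming the namespaced (v8-style) web SDK and a hypothetical `<input type="file" id="photo">` element:

```
var file = document.getElementById('photo').files[0];
var storageRef = firebase.storage().ref('images/' + file.name);

storageRef.put(file)
  .then(function (snapshot) {
    // Once the upload finishes, ask Storage for a download URL
    return snapshot.ref.getDownloadURL();
  })
  .then(function (url) {
    // Store the URL in the Realtime Database for later access
    firebase.database().ref('Uploads').push({ imgurl: url });
  });
```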
Upvotes: 4 [selected_answer]<issue_comment>username_2: Check this example application, which uses **Cloud Storage** to save the image file and **Firestore** to save the image path, name, and file size.
[](https://i.stack.imgur.com/iK1zG.gif)
**Upload Image**
```
// The main task
this.task = this.storage.upload(path, file, { customMetadata });
```
**Save Image Info**
```
addImagetoDB(image: MyData) {
  // Create an ID for the new document
  const id = this.database.createId();
  // Set a document with that ID in the collection
  this.imageCollection.doc(id).set(image).then(resp => {
    console.log(resp);
  }).catch(error => {
    console.log("error " + error);
  });
}
```
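A hypothetical call site wiring the upload task to the database write (the `MyData` field names and the `file` variable are assumptions based on the snippets above; `AngularFireUploadTask` is promise-like, so `.then` works):

```
// Once the upload task settles, fetch the download URL and persist metadata
this.task.then(async snapshot => {
  const url = await snapshot.ref.getDownloadURL();
  this.addImagetoDB({ name: file.name, filepath: url, size: file.size });
});
```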
**Source tutorial [Link](https://www.freakyjolly.com/ionic-4-image-upload-with-progress-in-firestore-and-firestorage-tutorial-by-application/)**
Upvotes: 3
|
2018/03/14
| 262 | 1,073 |
<issue_start>username_0: This is super hard to Google for because Google strips out the ".". I'm new to Java and am setting up a Java app on Heroku. There is a file `system.properties` that lets us specify the Java version we want to use.
The contents are like:
```
java.runtime.version=1.8
```
I'm just wondering, who created the "standard" for having a system.properties file? Is that only used by Heroku? Or does that standard come from somewhere else in the Java world?<issue_comment>username_1: It's just a property file named `system`. It has no special meaning unless something reads that file and does something based on the contents (like in this case Heroku).
So it's a standard in the sense that property files contain key-value pairs like that, but `system.properties` is not a "Java wide" standard.
Upvotes: 2 <issue_comment>username_2: `system.properties` is part of the Heroku configuration. Among other things, it allows you to specify the Java version to use (i.e., which java binary will run your code); since that choice is made before your program starts, it cannot be part of Java itself.
Upvotes: 2
|
2018/03/14
| 328 | 1,355 |
<issue_start>username_0: I want to toggle between two images when the user hovers on a div. Is there a way to do it?
Can I use a boolean flag that is updated every time I hover the div and then use this flag to toggle images? Can someone let me know if that's the right way to do it?
Here's the code:
```js
$(document).ready(function(){
$('.my-element').on('mouseenter', function () {
$('img#element_1').attr({
'src': 'img/../element 1.png'
});
});
});
```
The first time I hover the mouse on the div it should toggle my image. The next time the image toggles is when my mouse hovers the div again, instead of when the mouse leaves after the first toggle.
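A minimal sketch of the boolean-flag approach asked about above (the two image file names are hypothetical):

```js
$(document).ready(function () {
  var showAlt = false; // flips on every hover, so the image alternates
  $('.my-element').on('mouseenter', function () {
    showAlt = !showAlt;
    $('img#element_1').attr('src', showAlt ? 'img/element1.png' : 'img/element2.png');
  });
});
```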
|
2018/03/14
| 1,099 | 3,434 |
<issue_start>username_0: I have a properties file in which one of the properties looks like this:
```
tls.default.cipherSuites = TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,\
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,\
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,\
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,\
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,\
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,\
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,\
TLS_DHE_RSA_WITH_AES_128_CBC_SHA,\
TLS_RSA_WITH_AES_128_GCM_SHA256,\
TLS_RSA_WITH_AES_128_CBC_SHA256,\
TLS_RSA_WITH_AES_128_CBC_SHA,\
TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256,\
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,\
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,\
TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256,\
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,\
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,\
TLS_EMPTY_RENEGOTIATION_INFO_SCSV
```
I want to display only the property names, not their values. I tried the command below
```
Get-Content -Path "C:\Test\run_pa.properties" |
Where-Object { ! $_.StartsWith("#") } |
? { $_.trim() -ne "" } | ForEach-Object { $_ -replace '=.*' }
```
but I'm getting the output below
```
tls.default.protocols
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,\
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,\
TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,\
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,\
TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,\
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,\
TLS_DHE_RSA_WITH_AES_128_CBC_SHA,\
TLS_RSA_WITH_AES_128_GCM_SHA256,\
TLS_RSA_WITH_AES_128_CBC_SHA256,\
TLS_RSA_WITH_AES_128_CBC_SHA,\
TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256,\
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256,\
TLS_ECDH_RSA_WITH_AES_128_CBC_SHA,\
TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256,\
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256,\
TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA,\
TLS_EMPTY_RENEGOTIATION_INFO_SCSV
```
How can I skip the property values in this scenario and get only the property names?<issue_comment>username_1: Use the `-Raw` option on `Get-Content` to always get a single `string` (there are version differences between v4 and v5) instead of an array of lines. Then split it at the `=` sign and trim the whitespace with `.Trim()`:
```
$content = Get-Content -Path $path -Raw
$propertyName = $content.Split('=')[0].Trim()
```
Upvotes: -1 <issue_comment>username_2: The following filters the lines down to those that do not start with `#` and contain a `=`, and then removes everything starting with optional whitespace before the `=` from them, which leaves just the property names:
```
(Get-Content "C:\Test\run_pa.properties") -notmatch '^#' -match '=' -replace '\s*=.*'
```
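Since the wrapped continuation lines never contain an `=`, the `-match '='` filter drops them; for the sample file above, the output should be just:

```
tls.default.cipherSuites
```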
Upvotes: 1 [selected_answer]
|