Q:
Perpendiculars on a line segment
Two points A and B are given. Find the set of feet of the perpendiculars dropped from the point A onto all possible straight lines passing through the point B.
A:
Let $O$ be the midpoint of $AB$ and let $AO=R$. If $l$ is any line passing through $B$ and $C$ is the foot of the perpendicular from $A$ to $l$, then the triangle $ABC$ is right-angled at $C$, and hence $CO=AO=BO$ (the median to the hypotenuse of a right triangle equals half the hypotenuse).
Thus $C$ is on the circle with center at $O$ and radius $R$.
This shows that the locus is a subset of this circle.
Moreover, if $C'$ is any point on this circle except $A$ and $B$, then since $AB$ is a diameter we have $\angle AC'B=90^\circ$, which shows that $C'$ is a point on the locus (take $l$ to be the line $BC'$).
If $C'=A$ then this is the case $l=AB$ and if $C'=B$ then this is the case $l \perp AB$.
This shows that the locus is the circle with diameter $AB$.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Javascript split creating an error for the first record
Check this Jfiddle
http://jsfiddle.net/TyJy4/9/
Here there is an array of users. If the username is right, you are allowed to go to the password field. The code works fine except when I type the first record (user) in the username field for the first time.
Conditions when the user record works
When I type user and a space, then tab (here it throws an error saying it does not exist in the db), then backspace, then tab, it goes to the password field.
When I type user2 (or any other user except the first one) and then type user.
A:
You need to place the code to remove the readonly before you return true. See if making that change solves the problem for you (it appeared to for me):
for (var j=0; j<names.length; j++){
if(names[j].localeCompare(x)==0){
z.removeAttribute("readonly", 0);
return true;
}
}
The above can be shortened to:
if(names.indexOf(x) != -1){
z.removeAttribute("readonly", 0);
return true;
}
This uses the built-in indexOf(), which is implemented natively by the JavaScript engine and should be at least as fast as the explicit for loop.
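Putting both points together, a condensed sketch of the whole check (the element ids and the names array are assumptions, since the fiddle's markup is not reproduced here):
var names = ["user", "user2", "user3"];   // assumed list of valid usernames
function checkUsername() {
    var x = document.getElementById("username").value.trim();
    var z = document.getElementById("password");
    if (names.indexOf(x) !== -1) {
        // unlock the password field *before* returning
        z.removeAttribute("readonly");
        return true;
    }
    alert("User does not exist in db");
    return false;
}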
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Android intent to other Activity
I want to navigate from the current activity to another Activity, but when I click the button nothing happens. I want to move from MainActivity to FeedActivity. Please tell me what is wrong in my code.
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:paddingBottom="@dimen/activity_vertical_margin"
android:paddingLeft="@dimen/activity_horizontal_margin"
android:paddingRight="@dimen/activity_horizontal_margin"
android:paddingTop="@dimen/activity_vertical_margin"
tools:context="com.sample.test.MainActivity" >
<TextView
android:id="@+id/textView1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/hello_world" />
<Button
android:id="@+id/button1"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignLeft="@+id/textView1"
android:layout_below="@+id/textView1"
android:layout_marginLeft="53dp"
android:layout_marginTop="92dp"
android:text="Button" />
<Button
android:id="@+id/button2"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_alignBottom="@+id/button1"
android:layout_marginLeft="20dp"
android:layout_toRightOf="@+id/button1"
android:text="@string/button2" />
</RelativeLayout>
MainActivity.java
public class MainActivity extends Activity {
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Button btn = (Button) findViewById(R.id.button1);
Button btn2 = (Button) findViewById(R.id.button2);
btn.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View arg0) {
Toast.makeText(getApplicationContext(),
"Welcome to android by Vivek", Toast.LENGTH_LONG)
.show();
}
});
btn2.setOnClickListener(new OnClickListener() {
public void onClick(View view) {
Intent intent = new Intent(view.getContext(), FeedActivity.class);
startActivity(intent);
}
});
}
}
A:
When you are creating a new instance of Intent, you need to give it two parameters:
a Context (typically the current Activity)
the Class of the Activity you want to open
Also, you need to either import the OnClickListener type or write your listener as new View.OnClickListener.
You can do that by adding this to import:
import android.view.View;
import android.view.View.OnClickListener;
Change your btn2.onClickListener to :
public void onClick(View view) {
Intent intent = new Intent(MainActivity.this, FeedActivity.class);
startActivity(intent);
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
to be able to use mysql-connector-python inside QGIS python script
I am creating a Python script in the QGIS Python console to connect to a MySQL DB and manipulate data (retrieve data from the DB, generate a shapefile and show it as a layer).
I tried to install mysql connector from OSGeo4w shell:
pip install mysql-connector-python but it fails with error:
C:>pip install mysql-connector-python
Collecting mysql-connector-python
C:\OSGEO4~1\apps\Python27\lib\site-packages\pip_vendor\requests\packages\urllib
3\util\ssl_.py:318: SNIMissingWarning: An HTTPS request has been made, but the S
NI (Subject Name Indication) extension to TLS is not available on this platform.
This may cause the server to present an incorrect TLS certificate, which can ca
use validation failures. You can upgrade to a newer version of Python to solve t
his. For more information, see https://urllib3.readthedocs.io/en/latest/security
.html#snimissingwarning.
SNIMissingWarning
C:\OSGEO4~1\apps\Python27\lib\site-packages\pip_vendor\requests\packages\urllib
3\util\ssl_.py:122: InsecurePlatformWarning: A true SSLContext object is not ava
ilable. This prevents urllib3 from configuring SSL appropriately and may cause c
ertain SSL connections to fail. You can upgrade to a newer version of Python to
solve this. For more information, see https://urllib3.readthedocs.io/en/latest/s
ecurity.html#insecureplatformwarning.
InsecurePlatformWarning
Could not find a version that satisfies the requirement mysql-connector-python
(from versions: )
No matching distribution found for mysql-connector-python
What is the best way to solve this?
A:
This error occurred, probably due to my proxy limitations.
But my problem was solved another way:
pip install wheel (using the OSGeo4W command shell).
Download the .whl files from http://www.lfd.uci.edu/~gohlke/pythonlibs/
MySQL_python-1.2.5-cp27-none-win_amd64.whl
mysqlclient-1.3.10-cp27-cp27m-win_amd64.whl
and install them
pip install MySQL_python-1.2.5-cp27-none-win_amd64.whl
pip install mysqlclient-1.3.10-cp27-cp27m-win_amd64.whl
see https://www.a2hosting.com/kb/developer-corner/mysql/connecting-to-mysql-using-python for the python code example to deal with db connections.
My test example is:
def doQuery( conn ) :
cur = conn.cursor()
cur.execute( "SELECT name, vendor FROM t" )
for bname,bvendor in cur.fetchall() :
print bname,bvendor
import MySQLdb
# the placeholders below stand for real connection details
myConnection = MySQLdb.connect(host="<host_ip>", user=db_user, passwd=db_pass, db=db_schema_or_db_instance)
doQuery( myConnection )
myConnection.close()
|
{
"pile_set_name": "StackExchange"
}
|
Q:
GLSL Linking fails without useful information
Similar titled question here: GLSL:shader linking fail (but no log) but in my case, both vertex and fragment shaders are very simple and in/out variables match as listed below.
[EDIT] Code for loading the shaders are listed further down.
Since VS2010 does not support the range-based for loop, some parts of the code are #ifdef'd.
But anyway... I have tried the code with MinGW 32bit environment and it links OK.
Shader linking succeeds and runs fine when built with VS2010, but fails with NetBeans + MinGW-w64 and gives this log message:
Link info
---------
No shader objects attached.
Could it be something related to the MinGW-w64 OpenGL libraries?
Here is my Vertex shader, and
#version 330
in vec4 vPosition;
in vec4 vColor;
out vec4 color;
void main()
{
color = vColor;
gl_Position = vPosition;
}
here is my Fragment shader.
#version 330
in vec4 color;
out vec4 fColor;
void main()
{
fColor = color;
}
LoadShader.h:
typedef struct {
GLenum type;
const char* filename;
GLuint shader;
} ShaderInfo;
main.cpp:
vector<ShaderInfo> shaders;
ShaderInfo vert = {GL_VERTEX_SHADER, "SimpleVertexShader.vert"};
ShaderInfo frag = {GL_FRAGMENT_SHADER, "SimpleFragmentShader.frag"};
shaders.push_back(vert);
shaders.push_back(frag);
program = LoadShaders(shaders);
LoadShader.cpp - LoadShaders()
GLuint LoadShaders(vector<ShaderInfo> shaders)
{
if (shaders.empty()) return 0;
#if !defined(_MSC_VER) || 1600 < _MSC_VER
for (auto entry : shaders)
entry.shader = CreateShader(entry.type, entry.filename);
#else
for (vector<ShaderInfo>::iterator entry = shaders.begin(); entry != shaders.end(); ++entry)
entry->shader = CreateShader(entry->type, entry->filename);
#endif
// Create the program
return CreateProgram(shaders);
}
LoadShader.cpp - CreateShader()
GLuint CreateShader(GLenum shaderType, const char* shader_file_path)
{
// Create the shader
GLuint shaderID = glCreateShader(shaderType);
if (!shaderID)
return 0;
// Read the shader code from the file
std::string shaderCode;
std::ifstream shaderStream(shader_file_path, std::ios::in);
if(shaderStream.is_open())
{
std::string Line = "";
while(getline(shaderStream, Line))
shaderCode += "\n" + Line;
shaderStream.close();
}
// Compile the shader
printf("Compiling shader : %s\n", shader_file_path);
char const* sourcePointer = shaderCode.c_str();
glShaderSource(shaderID, 1, &sourcePointer , NULL);
glCompileShader(shaderID);
// Check the shader
GLint compiled;
glGetShaderiv(shaderID, GL_COMPILE_STATUS, &compiled);
if (!compiled) {
GLsizei len;
glGetShaderiv(shaderID, GL_INFO_LOG_LENGTH, &len);
GLchar* log = new GLchar[len+1];
glGetShaderInfoLog(shaderID, len, &len, log);
std::cerr << "Shader compilation failed: " << log << std::endl;
delete [] log;
return 0;
}
return shaderID;
}
LoadShader.cpp - CreateProgram()
GLuint CreateProgram(vector<ShaderInfo> shaders)
{
// Create and link the program
fprintf(stdout, "Linking program\n");
GLuint programID = glCreateProgram();
if (!programID)
return 0;
// attach shaders and link the program
#if !defined(_MSC_VER) || 1600 < _MSC_VER
for (auto iter : shaders)
glAttachShader(programID, iter.shader);
#else
for (vector<ShaderInfo>::iterator iter = shaders.begin(); iter != shaders.end(); ++iter)
glAttachShader(programID, iter->shader);
#endif
glLinkProgram(programID);
// Check the program
GLint linked;
glGetProgramiv(programID, GL_LINK_STATUS, &linked);
if (!linked) {
GLsizei len;
glGetProgramiv(programID, GL_INFO_LOG_LENGTH, &len);
GLchar* log = new GLchar[len+1];
glGetProgramInfoLog(programID, len, &len, log);
std::cerr << "Shader linking failed: " << log << std::endl;
delete [] log;
#if !defined(_MSC_VER) || 1600 < _MSC_VER
for (auto iter : shaders) {
glDeleteShader(iter.shader);
iter.shader = 0;
}
#else
for (vector<ShaderInfo>::iterator iter = shaders.begin(); iter != shaders.end(); ++iter) {
glDeleteShader(iter->shader);
iter->shader = 0;
}
#endif
return 0;
}
return programID;
}
A:
I think you have an error in LoadShaders. The C++ 11 version of the loop:
for (auto entry : shaders)
entry.shader = CreateShader(entry.type, entry.filename);
takes a copy of each entry in shaders, so the elements in the vector will not be updated. Try instead:
for (auto &entry : shaders)
entry.shader = CreateShader(entry.type, entry.filename);
By the way, if you want to support older compilers, you might as well just keep the old iterator-based version of the loop, which already updates the elements through the iterator.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to define angular1.3+ controller in typescript?
In the new version of Angular you define a controller like this:
var app = angular.module('myApp', []);
app.controller('myCtrl', function($scope) {
$scope.firstName = "John";
$scope.lastName = "Doe";
});
How can I write this code in TypeScript?
A:
Controller will be something like:
class myCtrl
{
constructor( $scope )
{
$scope.firstName = "John";
$scope.lastName = "Doe";
}
}
And with module it will be something like
module myApp{
export class myCtrl{
static $inject = ["$scope"];
constructor( $scope: any)
{
$scope.firstName = "John";
$scope.lastName = "Doe";
}
}
}
$inject specifies the parameters that Angular will inject into the class constructor, like $scope in our example; you can inject services etc. in the same way (see the sketch below).
Now you can use it as
angular.module('myApp', []).controller('myCtrl',myApp.myCtrl);
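As noted above, services are injected the same way as $scope. A small sketch, assuming the built-in $http service and a hypothetical /api/user endpoint:
module myApp {
    export class myCtrl {
        // "$http" is included only to illustrate injecting a service;
        // the original example injects $scope alone.
        static $inject = ["$scope", "$http"];
        constructor($scope: any, $http: any) {
            $scope.firstName = "John";
            $scope.lastName = "Doe";
            $http.get("/api/user").then(function (response: any) {
                $scope.user = response.data;
            });
        }
    }
}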
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Query WHERE response is empty enter null
I need to formulate a query and I am stuck. I need help with a WHERE x = x that, when there is no match, returns NULL (or just proceeds).
Example.
SELECT
a.value1, a.value2,
b.vlaue1, b.value2,
c.value1
FROM
columnX a,
columnY b,
columnZ c
WHERE
a.value1 = b.value3
and b.value2 = c.value4
and c.value1 = a.value5
or c.value1 is null
I need the last condition on c.value1 to either match, or to return a NULL value when there is no match. Right now the query seems to choke and loop.
A:
Use join syntax, left join for C:
SELECT
a.value1, a.value2,
b.vlaue1, b.value2,
c.value1
FROM columnX a
INNER JOIN columnY b
on a.value1 = b.value3
LEFT JOIN columnZ c
on b.value2 = c.value4
and c.value1 = a.value5
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Paged scrolling with dynamic content
I have an imageView on which I would like to show some content. However, I want the user to be able to scroll between different pieces of content on top of the image. This content is a label whose text is taken from an array. How can I create something like this where you are also able to scroll backwards, for instance from the first object to the last object in the array?
viewDidLoad
let firstFrame : CGRect = CGRectMake(0, 0, self.view.frame.width, self.view.frame.height-100-64)
var bigFrame : CGRect = firstFrame
bigFrame.size.width *= 2.0
scrollView = UIScrollView(frame: firstFrame)
self.cameraView.addSubview(scrollView!)
let firstView : UIView = UIView(frame: firstFrame)
firstView.backgroundColor = UIColor.redColor()
scrollView!.addSubview(firstView)
let secondView : UIView = UIView(frame: firstFrame)
scrollView!.addSubview(secondView)
scrollView!.pagingEnabled = true
scrollView!.contentSize = bigFrame.size
scrollView?.hidden = true
Further explanation
So basically i for instance have a array like
var textArray = ["#COYS", "#SPURS", "#DELLEALLI"]
I then have a UIImageView in my view and I basically want to be able to scroll different texts from the array on top of it. However, I also want it so that you can scroll backwards, and in this case it will go from textArray[0] to textArray[2].
A:
One option would be to use a UIPageViewController. Take a look at my example:
https://www.dropbox.com/sh/mezxx53rd3bvw7g/AAB6PuUIGZwccgxOsjLPfvcfa?dl=0
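In case the linked example disappears, here is a minimal sketch of that approach in modern Swift syntax (unlike the older Swift code in the question; the layout is deliberately crude): each page is a view controller showing one entry of textArray, and the data source wraps around so scrolling backwards from the first item reaches the last one.
import UIKit
class TextPageViewController: UIPageViewController, UIPageViewControllerDataSource {
    let textArray = ["#COYS", "#SPURS", "#DELLEALLI"]
    override func viewDidLoad() {
        super.viewDidLoad()
        dataSource = self
        setViewControllers([page(at: 0)], direction: .forward, animated: false, completion: nil)
    }
    // Builds a simple page whose label text comes from textArray.
    private func page(at index: Int) -> UIViewController {
        let vc = UIViewController()
        let label = UILabel(frame: vc.view.bounds)
        label.textAlignment = .center
        label.text = textArray[index]
        vc.view.addSubview(label)
        vc.view.tag = index           // remember which entry this page shows
        return vc
    }
    func pageViewController(_ pageViewController: UIPageViewController,
                            viewControllerBefore viewController: UIViewController) -> UIViewController? {
        let previous = (viewController.view.tag - 1 + textArray.count) % textArray.count
        return page(at: previous)     // wraps from the first item back to the last
    }
    func pageViewController(_ pageViewController: UIPageViewController,
                            viewControllerAfter viewController: UIViewController) -> UIViewController? {
        let next = (viewController.view.tag + 1) % textArray.count
        return page(at: next)
    }
}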
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Identify the plane defined by $|z-2i| = 2|z+3|$
I tried:
$$|z-2i| = 2|z+3| \Leftrightarrow \\
|x+yi-2i|=2|x+yi+3|\Leftrightarrow \\
\sqrt{x^2+(y-2)^2}=\sqrt{4((x+3)^2+y^2)} \Leftrightarrow \\
\sqrt{x^2+y^2-4y+4} = \sqrt{4x^2+24x+36+4y^2} \Leftrightarrow \\
x^2+y^2-4y+4 = 4x^2+24x+36+4y^2 \Leftrightarrow \\
y^2-4y-4y^2=4x^2+24x+36+x^2 \Leftrightarrow \\
-3y^2-4y=5x^2+24x+26 \Leftrightarrow \\
???$$
What do I do next?
A:
I assume this means the locus and not a plane.
$$ \begin{align*}
|z-2i|&= 2|z+3|\\
|x+iy-2i|&= 2|x+iy+3| \\
|x+i(y-2)|&= 2|(x+3)+i(y)| \\
\sqrt{x^2+(y-2)^2}&=2\sqrt{(x+3)^2+y^2}\\
x^2+(y-2)^2&=4\left(x^2+6x+9+y^2\right)\\
x^2+y^2-4y+4&=4x^2+24x+36+4y^2 \\
3x^2+24x+32+3y^2+4y&=0\\
x^2+8x+\frac{32}{3}+y^2+\frac{4}{3}y&=0 \\
x^2+8x+y^2+\frac{4}{3}y &= -\frac{32}{3}\\
x^2+8x+16 +y^2+\frac{4}{3}y +\frac{4}{9}&= -\frac{32}{3}+16+\frac{4}{9} \\
(x+4)^2+\left(y+\frac{2}{3}\right)^2&=\frac{52}{9}
\end{align*}$$
Which means our locus is a circle with centre $\left(-4,\frac{-2}{3}\right)$ and radius $\frac{2\sqrt{13}}{3}$.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why is List not thread-safe?
From the following site:
http://crfdesign.net/programming/top-10-differences-between-java-and-c
Unfortunately, List<> is not thread-safe (C#’s ArrayList and Java’s
Vector are thread-safe). C# also has a Hashtable; the generic version is:
What makes List<T> not thread-safe? Is it an implementation problem on the .NET framework engineers' part, or are generics simply not thread-safe?
A:
You really need to qualify what kind of thread safety Java's Vector gives you. Java's Vector is safe to use from multiple threads because it synchronizes its methods, so its internal state will not be corrupted.
However, Vector's usefulness from multiple threads is limited without additional synchronization. For example, consider the simple act of reading an element from a vector:
Vector vector = getVector();
if ( vector.size() > 0 ) {
Object first = vector.get(0);
}
This code will not corrupt the state of the vector, but it is also not correct. There is nothing stopping another thread from mutating the vector in between the if statement and the get() call. This code can and will eventually fail because of a race condition.
This type of synchronization is only useful in a handful of scenarios, and it is certainly not cheap. You pay a noticeable price for synchronization even if you don't use multiple threads.
.NET chose not to pay this price by default for a scenario of only limited usefulness. Instead it implemented List without any internal locking, and authors are responsible for adding any synchronization. It's closer to C++'s model of "pay only for what you use".
I recently wrote a couple of articles on the dangers of using collections with only internal synchronization such as Java's vector.
Why are thread safe collections so hard?
A more usable API for a mutable thread safe collection
Reference Vector thread safety: http://www.ibm.com/developerworks/java/library/j-jtp09263.html
A:
Why would it be thread-safe? Not every class is. In fact, by default, classes are not thread-safe.
Being thread-safe would mean that any operation modifying the list would need to be interlocked against simultaneous access. This would be necessary even for those lists that will only ever be used by a single thread. That would be very inefficient.
A:
It is simply a design decision to implement the types without thread safety. The collections provide the SyncRoot property of the ICollection interface, and some collections provide a static Synchronized() method, for explicitly synchronizing the data types.
Use SyncRoot to lock an object in multithreaded environments.
lock (collection.SyncRoot)
{
DoSomething(collection);
}
Use the Synchronized() method (for example ArrayList.Synchronized(collection)) to obtain a thread-safe wrapper for the collection.
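A minimal C# sketch of that wrapper (note that Synchronized is a static factory method, and that compound operations such as check-then-read still need their own lock, as the first answer points out):
using System.Collections;
class Example
{
    static void Main()
    {
        // The wrapper's individual members take a lock internally;
        // List<T> offers no such wrapper.
        ArrayList safeList = ArrayList.Synchronized(new ArrayList());
        safeList.Add(42);
        lock (safeList.SyncRoot)       // compound operation still needs a lock
        {
            if (safeList.Count > 0)
            {
                object first = safeList[0];
            }
        }
    }
}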
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Calling a pre-defined Access query using ODBC in ADO.NET
I am using C# and OdbcConnection to connect to an Access database. Inside the database there is a pre-defined query that I want to run (like a stored proc in SQL Server). This used to be dead easy with the old COM-based ADO, but it doesn't seem to work in ADO.NET:
OdbcConnection conn = AccessConnect.Connect();
var cmd = conn.CreateCommand();
cmd.CommandText = @"MyAccessQuery;";
cmd.CommandType = CommandType.StoredProcedure;
var da = new OdbcDataAdapter(cmd);
var ds = new DataSet();
da.Fill(ds);
Is there a way round it or am I going to have to duplicate my Access query in C# code?
A:
The Access ODBC (and OLEDB) interfaces expose saved queries in Access as either Views or Stored Procedures. How they are exposed determines the way they can be used by an external application.
Saved SELECT queries in Access that do not use PARAMETERS are exposed as Views, so they can be used like a table, e.g.
string sql = "SELECT * FROM mySavedSelectQuery WHERE id <= 3";
using (var cmd = new OdbcCommand(sql, con))
{
cmd.CommandType = System.Data.CommandType.Text;
using (var da = new OdbcDataAdapter(cmd))
{
var dt = new System.Data.DataTable();
da.Fill(dt);
Console.WriteLine("DataTable contains {0} row(s)", dt.Rows.Count);
}
}
Other types of saved queries in Access are exposed as Stored Procedures, so they need to be called using the ODBC {CALL ...} syntax, like so:
string sql = "{CALL mySavedParameterQuery (?)}";
using (var cmd = new OdbcCommand(sql, con))
{
cmd.CommandType = System.Data.CommandType.StoredProcedure;
// set parameter values (if any) in the order that they appear
// in the PARAMETERS list of the saved query
cmd.Parameters.Add("?", OdbcType.Int).Value = 3;
using (var da = new OdbcDataAdapter(cmd))
{
var dt = new System.Data.DataTable();
da.Fill(dt);
Console.WriteLine("DataTable contains {0} row(s)", dt.Rows.Count);
}
}
Note that in both of the above cases, as with any query against an Access database from an external application (C#, VB.NET, Java, etc.), there are some restrictions. For example, some Access built-in functions may not be available, and saved Access queries that call user-defined functions (written in VBA) will not work from a direct ODBC or OLEDB connection.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Android: How can I use a Cursor to get both image and video thumbnails from a specific path?
How can I use a Cursor to get image and video thumbnails from a specific path, in the same cursor?
I need to get the image and video thumbnails in the /DCIM/100ANDRO folder.
But so far I can only get images and videos separately, and only across all the data on the SD card.
private ArrayList<ImageItem> getData() {
final ArrayList<ImageItem> imageItems = new ArrayList<>();
ContentResolver cr = mContext.getContentResolver();
String[] projection = {MediaStore.Images.Thumbnails.DATA, MediaStore.Video.Thumbnails.DATA};
Cursor cursor = cr.query(MediaStore.Images.Thumbnails.EXTERNAL_CONTENT_URI,projection,null,null,null);
for( int i = 0 ; i < cursor.getCount(); i++)
{
cursor.moveToPosition(i);
String filePath = cursor.getString(cursor.getColumnIndex(MediaStore.Images.Thumbnails.DATA));
Log.i("info","filePath:"+filePath);
File file = new File(filePath);
Bitmap myBitmap = BitmapFactory.decodeFile(file.getAbsolutePath());
imageItems.add( new ImageItem(myBitmap, "Image#" + i) );
}
cursor = cr.query(MediaStore.Video.Thumbnails.EXTERNAL_CONTENT_URI,projection,null,null,null);
for( int i = 0 ; i < cursor.getCount(); i++)
{
cursor.moveToPosition(i);
String filePath = cursor.getString(cursor.getColumnIndex(MediaStore.Video.Thumbnails.DATA));
Log.i("info","filePath:"+filePath);
File file = new File(filePath);
Bitmap myBitmap = BitmapFactory.decodeFile(file.getAbsolutePath());
imageItems.add( new ImageItem(myBitmap, "Image#" + i) );
}
cursor.close();
return imageItems;
}
Is it possible to point a cursor at a specific folder to get the thumbnails, and to get both video and image thumbnails at once?
Thank you very much.
A:
I found the answer.
We should invert the approach:
first find the real path, then get the ID,
and use the ID to get the image and video thumbnails (a sketch of the path-to-ID lookup is shown after the snippets below).
The articles on querying images and videos with a cursor can be used as a reference.
Getting images thumbnails using below code:
bitmap = MediaStore.Images.Thumbnails.getThumbnail(context
.getApplicationContext().getContentResolver(), item.getImgId(),
MediaStore.Images.Thumbnails.MICRO_KIND, null);
Getting video thumbnails using below code:
bitmap = MediaStore.Video.Thumbnails.getThumbnail(context
.getApplicationContext().getContentResolver(), item.getImgId(),
MediaStore.Images.Thumbnails.MICRO_KIND, null);
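And here is a sketch of the path-to-ID step described above, run from inside an Activity (the folder is the /DCIM/100ANDRO path from the question; the video case is analogous with MediaStore.Video.Media and MediaStore.Video.Thumbnails):
String folder = Environment.getExternalStorageDirectory() + "/DCIM/100ANDRO/%";
String[] projection = { MediaStore.Images.Media._ID };
Cursor c = getContentResolver().query(
        MediaStore.Images.Media.EXTERNAL_CONTENT_URI,
        projection,
        MediaStore.Images.Media.DATA + " LIKE ?",
        new String[]{ folder },
        null);
if (c != null) {
    while (c.moveToNext()) {
        long imageId = c.getLong(c.getColumnIndexOrThrow(MediaStore.Images.Media._ID));
        Bitmap thumb = MediaStore.Images.Thumbnails.getThumbnail(
                getContentResolver(), imageId,
                MediaStore.Images.Thumbnails.MICRO_KIND, null);
        // use thumb, e.g. add it to the list shown in the question
    }
    c.close();
}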
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Pattern quote creates excessive amount of \Q and \E
While replacing the string x with any pattern from the ArrayList index, the resulting message produces an excessive amount of regex quotes, \Q and \E.
Is there some kind of way to break the loop once it has read through the message, or to implement some kind of countdown to prevent a spam of regex quotes?
Code:
List<String> index = new ArrayList<String>()
index.add("This");
index.add("test");
String x = "This is a random test phrase";
for (String s : index)
{
x = Pattern.quote(x);
String new = x.replaceAll("(?i)"+s, "*"); //edit: forgot type
}
System.out.println(new);
Output running:
[16:43:24] [Async Chat Thread - #0/INFO]:\Q\Q\Q\Q\Q\QThis is a random * phrase\E\\E\Q\\E\\E\Q\Q\\E\\E\Q\\E\\E\Q\Q\Q\\E\\E\Q\\E\\E\Q\Q\\E\\E\Q\\E\\E\Q\Q\Q\Q\\E\\E\Q\\E\\E\Q\Q\\E\\E\Q\\E\\E\Q\Q\Q\\E\\E\Q\\E\\E\Q\Q\\E\\E\Q\\E\\E\Q\Q\Q\Q\Q\E\\E\Q\\E\\E\Q\Q\\E\\E\Q\\E\\E\Q\Q\Q\\E\\E\Q\\E\\E\Q\Q\\E\\E\Q\\E\\E\Q\Q\Q\Q\E\\E\Q\\E\\E\Q\Q\\E\\E\Q\\E\\E\Q\Q\Q\E\\E\Q\\E\\E\Q\Q\E\\E\Q\E
A:
Is this what you want? I am not really sure what output you expect.
List<String> index = new ArrayList<String>();
index.add("This");
index.add("test");
String x = "This is a random test phrase";
for (String s : index)
{
x = x.replaceAll(s, "*");
}
System.out.println(x);
produces * is a random * phrase
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SSRS Subreport runs multiple times, I only want it running once
I have a report with a drillthrough subreport that runs multiple times when the row has more than one relationship to a many-to-many item that has nothing to do with the subreport.
Main report query
SELECT DISTINCT
cat.CategoryName AS 'Category Name', sub.SubCategoryName AS 'SubCategory Name', cur.Status, cur.PastConsiderationFlag, cur.Model, cur.Version, cur.Vendor, cur.AvailableDate AS 'Available Date', cur.EndOfProduction AS 'End of Production',
cur.EndOfSupport AS 'End of Support', dep.DepartmentName AS 'Department Name', emp.FirstName + ' ' + emp.LastName AS 'Tech Owner', emp2.FirstName + ' ' + emp2.LastName AS 'Tech Contact',
cur.NumOfDevices AS '# of Devices', cur.UpgradeDuration AS 'Upgrade Duration', cur.FiscalConsideration AS 'Fiscal Consideration', cur.Description, cur.SupportingComments, cur.CurrencyId, STUFF
((SELECT ', ' + pl.PlatformName AS Expr1
FROM Platform AS pl LEFT OUTER JOIN
Currency_Platform AS cp ON cur.CurrencyId = cp.CurrencyId
WHERE (pl.PlatformId = cp.PlatformId) FOR XML PATH('')), 1, 1, '') AS 'Platforms', ISNULL(STUFF
((SELECT ', ' + cu2.Model AS Expr1
FROM Currency AS cu2 RIGHT OUTER JOIN
Currency_Dependency AS cd ON cur.CurrencyId = cd.CurrencyId
WHERE (cu2.CurrencyId = cd.DependencyId) FOR XML PATH('')), 1, 1, ''), 'N/A') AS 'Dependencies', ISNULL(STUFF
((SELECT ', ' + cu2.Model AS Expr1
FROM Currency AS cu2 RIGHT OUTER JOIN
Currency_Affected AS ca ON cur.CurrencyId = ca.CurrencyId
WHERE (cu2.CurrencyId = ca.AffectedId) FOR XML PATH('')), 1, 1, ''), 'N/A') AS 'Affected Apps', Currency_Platform.PlatformId
FROM Currency AS cur INNER JOIN
SubCategory AS sub ON cur.SubCategoryId = sub.SubCategoryId INNER JOIN
Category AS cat ON sub.CategoryId = cat.CategoryId LEFT OUTER JOIN
Employee AS emp ON cur.OwnerId = emp.EmployeeId LEFT OUTER JOIN
Employee AS emp2 ON cur.ContactId = emp2.EmployeeId LEFT OUTER JOIN
Department AS dep ON cur.PortfolioOwnerId = dep.DepartmentId LEFT OUTER JOIN
Currency_Platform ON cur.CurrencyId = Currency_Platform.CurrencyId
Even though it's a DISTINCT select, the subreport runs as many times as the number of Platforms the row belongs to. I'll include the query for the subreport here.
;with cte as (
-- anchor elements: where curr.Status = 1 and not a dependent
select
CurrencyId
, Model
, Version
, ParentId = null
, ParentModel = convert(varchar(128),'')
, Root = curr.Model
, [Level] = convert(int,0)
, [ParentPath] = convert(varchar(512),Model + Version)
from dbo.Currency as curr
where curr.Status = 1
/* anchor's do not depend on any other currency */
and not exists (
select 1
from dbo.Currency_Dependency i
where curr.CurrencyId = i.DependencyId
)
-- recursion begins here
union all
select
CurrencyId = c.CurrencyId
, Model = c.Model
, Version = c.Version
, ParentId = p.CurrencyId
, ParentModel = convert(varchar(128),p.Model + p.Version)
, Root = p.Root
, [Level] = p.[Level] + 1
, [ParentPath] = convert(varchar(512),p.[ParentPath] + ' > ' + c.Model + ' ' + c.Version)
from dbo.Currency as c
inner join dbo.Currency_Dependency as dep
on c.CurrencyId = dep.DependencyId
inner join cte as p
on dep.CurrencyId = p.CurrencyId
)
select CurrencyId, ParentPath, Model + ' ' + Version AS 'Model' from cte
WHERE CurrencyId = @CurrencyId
When I run the subreport individually, everything is fine. When I open the subreport through the main report, passing the CurrencyId as a parameter, it runs as many times as the number of platforms the row belongs to.
Is there a way I can correct this, either by improving the queries or, as I would prefer, by forcing the subreport to run only once no matter what?
Thanks so much for having a look.
A:
You can use SQL Server Profiler to check the following things:
How many times, and with what parameters, the subreport query has run
How many rows your first query returned
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Two arrays with one foreach
Hi, I have two different arrays coming from a dynamic form. Is it possible to iterate over both of these arrays with a single loop?
A:
array_map lets you run over several arrays at once; I also use it for forms.
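A minimal sketch of that approach (the field names are made up for illustration):
<?php
// array_map walks both arrays in parallel: $a and $b are the values at the
// same position of $_POST['arr1'] and $_POST['arr2'].
$pairs = array_map(
    function ($a, $b) {
        return ['first' => $a, 'second' => $b];
    },
    $_POST['arr1'],
    $_POST['arr2']
);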
A:
If these are two arrays of the same length, you can traverse them like this:
<?php
// $_POST['arr1'], $_POST['arr2'] are the received arrays
foreach ($_POST['arr1'] as $key => $arr1_value)
{
$arr2_value = $_POST['arr2'][$key];
}
You can also add a check that the key exists in the second array, to avoid notices.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Symfony2 > Use Security Context in Public Route (possible alternative)
I have a function UserController::saveUserAction(). This function is used throughout my system whenever a User entity needs to be created or modified. This can happen from three places:
1) Admin Panel → User Administration (System Admin can create/modify users).
2) Members Dashboard → My Details (User can edit his own details).
3) Registration (Non-users can sign up and a user is created for them).
Now there are certain fields that may ONLY be set by an administrator, for instance the USER_ROLE. (I do not want someone who is registering to 'hack' the system and sign themselves up as an administrator.) Normally if ($this->get('security.context')->isGranted('ROLE_ADMIN')) works fine to determine whether the user is an administrator, but since the route to saveUser() is public (in order to facilitate public registrations), I am getting the error:
The security context contains no authentication token. One possible reason may be that there is no firewall configured for this URL.
Is there a way to use the security context on a public route, or is there some kind of alternative other than manually checking what roles the logged-in user (if any) has? That is quite cumbersome, as can be seen at Symfony2 > Easier way to determine access.
A:
I have found a solution: I placed the route under my firewall and added an exception in security.yml so that the specific route is authenticated anonymously. :)
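A sketch of what that can look like in security.yml (the firewall name and the route path are placeholders):
# app/config/security.yml
security:
    firewalls:
        main:
            pattern: ^/
            anonymous: ~        # lets unauthenticated requests through the firewall
            form_login: ~
    access_control:
        # the public registration route is reachable anonymously, so the
        # security context still holds an (anonymous) token for it
        - { path: ^/register, roles: IS_AUTHENTICATED_ANONYMOUSLY }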
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Could not log "render_template.action_view" event after rails upgrade to 4.2.8
I am in the process of upgrading a Rails app that mostly serves JSON. The last version I was able to upgrade to without problems is 4.1. Once I upgraded to 4.2, the request specs produce strange errors in the test log:
Could not log "render_template.action_view" event. NoMethodError: undefined method `render_views?' for #<Class:0x007fe544a2b170>
Somewhere I read that this is due to rails trying to render a view that isn't present. Before the jump to rails 4, we set headers['CONTENT_TYPE'] = 'application/json' and everything was fine. I read that this isn't working anymore with rails 4. I already tried adding format: :json, as suggested here: Set Rspec default GET request format to JSON, which didn't help.
Any help on how to get the specs running again would be greatly appreciated.
A:
As it turns out, this error occurs if an include is missing from the RSpec configuration block. Adding
RSpec.configure do |config|
  config.include RSpec::Rails::ViewRendering
end
fixes that issue.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Best approach to use PdfStamper in for loop
I have the iText code below, which reads files and adds them into a master PDF, i.e. it adds a PDF page into the existing PDF at an absolute position. The absolute position and the page number in the master PDF are decided dynamically; sometimes it could be page 1 at 100,100 (x,y), sometimes page 2 at 250,250 (x,y). I am looping through PDF objects, where each object represents a PDF file; I then apply business logic to convert each PDF object into a PDF file, and that is srcPdf. Now I need to add this srcPdf at an absolute position in the master PDF (which is pdfStamper here):
for(ListOfPdfObject pdfObj: ListOfPdfObjects) {
// code to create srcPdf so there will be new srcPdf for each iteration. srcPdf is flattened pdf of acro form field pdf.
PdfReader reader2 = new PdfReader(srcPdf.getAbsolutePath());
PdfImportedPage page = pdfStamper.getImportedPage(reader2, 1);
pdfStamper.insertPage(1, reader2.getPageSize(1));
pdfStamper.getUnderContent(1).addTemplate(page, 100, 100);
pdfStamper.close(); // problem is here
reader2.close();
}
Here pdfStamper is created outside for-loop like below:
PdfReader pdfReader = new PdfReader(new FileInputStream(tempPdf));
PdfStamper pdfStamper = new PdfStamper(pdfReader, new FileOutputStream(destPdf));
The problem is that if I close pdfStamper after the for loop it throws a RandomAccessSource not opened exception, and if I close it inside the loop I have to create it again for the next iteration. Could you please point me in the right direction?
A:
As explained in my answer to Extract pdf page and insert into existing pdf, using PdfStamper is only one way to meet your requirement. PdfStamper is probably your best choice if you need to manipulate a single PDF document and it's possible to add a single page from another PDF as my previous answer demonstrates.
However, you now indicate that you have to concatenate multiple PDF files. In that case, using PdfStamper isn't the best choice. You should consider switching to PdfCopy:
Suppose that you have the following files.
String[] paths = new String[]{
"resources/to_be_inserted_1.pdf",
"resources/to_be_inserted_2.pdf",
"resources/to_be_inserted_3.pdf"
};
If you need to insert the first page (and only the first page) of each of these documents at the start of an existing PDF with path "resources/main_document.pdf", then you could do something like this:
Document document = new Document();
PdfCopy copy = new PdfCopy(document, new FileOutputStream(dest));
document.open();
PdfReader reader;
for (String path : paths) {
reader = new PdfReader(path);
copy.addPage(copy.getImportedPage(reader, 1));
reader.close();
}
reader = new PdfReader("resources/main_document.pdf");
copy.addDocument(reader);
reader.close();
document.close();
As you can see, the addPage() method adds a single page, whereas the addDocument() method adds all the pages of a document.
Update
It seems that you don't want to insert new pages, but that you want to superimpose pages: you want to add pages on top of or under existing content.
In that case, you indeed need PdfStamper, but you're making two crucial errors.
You close the stamper inside the loop. Once the stamper is closed, it is closed: you can't add any more content to it. You need to move stamper.close() outside the loop.
You close the reader inside the loop, but stamper hasn't released the reader yet. You should free the reader first.
This is shown in the SuperImpose example:
public static final String SRC = "resources/pdfs/primes.pdf";
public static final String[] EXTRA =
{"resources/pdfs/hello.pdf", "resources/pdfs/base_url.pdf", "resources/pdfs/state.pdf"};
public static final String DEST = "results/stamper/primes_superimpose.pdf";
PdfReader reader = new PdfReader(SRC);
PdfStamper stamper = new PdfStamper(reader, new FileOutputStream(DEST));
PdfContentByte canvas = stamper.getUnderContent(1);
PdfReader r;
PdfImportedPage page;
for (String path : EXTRA) {
r = new PdfReader(path);
page = stamper.getImportedPage(r, 1);
canvas.addTemplate(page, 0, 0);
stamper.getWriter().freeReader(r);
r.close();
}
stamper.close();
In this case, I always add the imported pages to page 1 of the main document. If you want to add the imported pages to different pages, you need to create the canvas object inside the loop.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
C# declare variable into if statement
I want to do something like this using C#:
if (i == 0)
{
button a = new button();
}
else
{
TextBlock a = new TextBlock();
}
mainpage.children.add(a);
But i get an error that
Error 1 The name 'a' does not exist in the current context
Any ideas?
Thank you in advance!
A:
You need a common base class that both Button and TextBlock derive from, and it needs to be declared outside of the if statement if it's to be accessed after the if is complete. Control maybe?
Control a;
if (i == 0)
{
a = new Button();
}
else
{
a = new TextBlock();
}
mainpage.children.add(a);
Not knowing what specific control toolkit you're using (WPF maybe?) I can't advise further. But I'd look at the signature for Add to get a clue - what's the parameter declared as?
A:
Try declaring a outside of the scope of the if/else. Like this:
Control a;
if (i == 0)
{
a = new Button();
}
else
{
a = new TextBlock();
}
mainpage.children.add(a);
A:
You need to declare your variable in parent scope and give it a common base class. The common base class for System.Windows.Controls.TextBlock and System.Windows.Controls.Button can be for example System.Windows.UIElement or System.Windows.FrameworkElement. So your code can look like this:
UIElement a;
if (i == 0)
{
a = new Button();
}
else
{
a = new TextBlock();
}
mainpage.children.add(a);
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Bounded probability implies convergence in probability
Let $(X_n)$ be a sequence of random variables and $(a_n),(b_n)$ be two sequences of non-negative real numbers such that $a_n\downarrow 0$ and $b_n\downarrow 0$ when $n\to\infty$.
If for any $t>0$,
$$
P(|X_n|\geq a_n+t)\leq b_n,
$$
can we conclude that $X_n\overset{P}{\to}0$ as $n\to\infty$? From the hypothesis I know that for any $t>0$
$$
\lim_{n\to\infty}P(|X_n|\geq a_n+t)=0.
$$
But I do not see why the fact that $a_n\downarrow 0$ implies that
$$
\lim_{n\to\infty}P(|X_n|\geq t)=0.
$$
I was trying to use Slutsky's theorem, but I don't know anything about the convergence of $|X_n|-a_n$. Another observation is that for any $\epsilon>0$, $a_n\leq \epsilon$ for $n$ sufficiently large, so
$$
P(|X_n|\geq a_n+t)\leq P(|X_n|\geq \epsilon+t),
$$
but that doesn't help either.
Any suggestions?
A:
Fix $s > 0$. We want to show that $\mathbb{P}(|X_n| \geq s) \to 0$ so fix $\varepsilon > 0$ and we aim to show that for large enough $n$, $\mathbb{P}(|X_n| \geq s) < \varepsilon$.
By the assumption applied with $t = \frac{s}{2}$, $$\mathbb{P}(|X_n| \geq a_n + \frac{s}{2}) \to 0$$
Combining this fact with the assumption that $a_n \to 0$, there is an $N$ such that $n \geq N$ implies that $a_n < \frac{s}{2}$ and $\mathbb{P}(|X_n| \geq a_n + \frac{s}{2}) < \varepsilon$. This means that if $n \geq N$, $|X_n| \geq s$ implies that $|X_n| \geq a_n + \frac{s}{2}$. Therefore, for $n \geq N$,
$$\mathbb{P}(|X_n| \geq s) \leq \mathbb{P}(|X_n| \geq a_n + \frac{s}{2}) < \varepsilon$$
which gives the desired convergence.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jQuery jplist: click events on rows not firing when initialized, sorted, selected
I have a Google map with a click trigger that fires when a div with class row is clicked. This is completely ignored when the jplist plugin is used on the page; if I remove the code for this plugin, the trigger works perfectly.
jsfiddle http://jsfiddle.net/LVThH/
$(this).click(function(){
google.maps.event.trigger( otherMarkers ,'click');
});
I really want to get to the bottom of what the issue is here and whether it is a conflict.
Please help :D
A:
Seems that no one is noticing you ;) so I'll give you something to get along with. I tried it out, and it seems that jplist might be unbinding your click events from those .row class divs (and everything under them) when it is initialized, sorted, and so on. It also provides a redraw_callback event, but that fired only once when the page had loaded (a shame, because it could otherwise be used to attach your own .row click events after initialization); to demonstrate, something like redraw_callback: setClickEvents(); where setClickEvents() would be a method containing code similar to yours that triggers the marker click.
But since it didnt work! - I see at least few possibilities:
1.) (GOOD way) Try another plugin, since I already hate jplist for doing those unbindings, or...
2.) (HACKER way) Add JavaScript click handlers to the div without using jQuery, and write a bit of JavaScript at the top of the page to trigger clicks for markers that are put inside an array when the map is initialized.
Try out this fiddle (tested to work with Firefox and Chrome). Note that I would still choose either the good way or consulting the plugin maker about how redraw_callback works, whether it is broken, etc.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
zend studio + xampp server file transfer
I have been using Aptana and Dreamweaver for a long time, but now I want to use Zend Studio because of their latest release, which says it can help with debugging while coding in JavaScript/PHP.
The thing is, I keep my project in one location and the testing copy in a different location, just for safety, so that nothing weird happens that empties the code for no reason. Anyway, in the other two applications I can easily make a remote connection and transfer the file using the arrow button or with CTRL+ALT+U; it uploads, and I can just refresh the browser to check it. In Zend I don't see any remote connections. I did change the server connection, but I'm not sure how I can easily transfer files like I do in the other IDEs. Can anyone help me with creating a remote connection and setting up the shortcuts, so I can continue to do what I used to do?
A:
I found the File Synchronization plugin for Eclipse, and that worked out pretty well too. It's just that it auto-uploads rather than waiting for me to press CTRL+ALT+U, which I like better than auto-upload. Oh well, something is better than nothing.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is this sanitization unsafe? Is it vulnerable to SQL Injection?
Function RemoveSuspeitos(ByVal strTXT)
Dim txtAux As String
txtAux = strTXT
txtAux = Replace(txtAux, chr(34), "")
txtAux = Replace(txtAux, "'", "")
RemoveSuspeitos = txtAux
End Function
DB: MSSQL
1) Forget syntax errors in the above code, I am not expert in VB.
2) Lets say I always use single or double quotes, even for int values (e.g.: '" + $int_id + "').
Is this sanitization unsafe?
If yes, why? Please show me a real exploit scenario.
A:
Here is my try.
The problem with vulnerabilities is that they are not that direct as most users think.
In reality, production code grows slightly different from the artificial, totally controlled example.
So, here not technological, but methodological vulnerabilities are coming into the scene. I made a vast effort researching this matter last year, and here are my conclusions
Let's start from a metaphor:
One have to always use a seat belt. No exceptions. Yes, you can always say that you drive so safely that no crash ever be possible. But, unfortunately, statistics is against you. There are other people involved in the traffic. There are sudden obstacles. There are unforeseen breakages. That's why you should always wear a seat belt. Exactly the same thing with protecting your code.
Here let me post an excerpt from my research:
Why manual formatting is bad?
Because it's manual. Manual == error prone. It depends on the programmer's skill, temper, mood, number of beers last night and so on. As a matter of fact, manual formatting is the very and the only reason for the most injection cases in the world. Why?
Manual formatting can be incomplete.
Let's take Bobby Tables' case. It's a perfect example of incomplete formatting: the string we added to the query was quoted but not escaped! We just learned above that quoting and escaping should always be applied together (along with setting the proper encoding for the escaping function). But in a typical PHP application which does SQL string formatting separately (partly in the query and partly somewhere else), it is very likely that some part of the formatting gets simply overlooked.
Manual formatting can be applied to the wrong literal.
Not a big deal as long as we are using complete formatting (it will cause an immediate error which can be fixed at the development phase), but combined with incomplete formatting it's a real disaster. There are hundreds of answers on the great site of Stack Overflow suggesting to escape identifiers the same way as strings, which is totally useless and leads straight to injection.
Manual formatting is an essentially non-obligatory measure.
First of all, there is the obvious lack-of-attention case, where proper formatting can simply be forgotten. But there is a really weird case too: many PHP users often intentionally refuse to apply any formatting, because to this day they still separate data into "clean" and "unclean", "user input" and "non-user input", etc., meaning "safe" data doesn't require formatting. Which is plain nonsense; remember Sarah O'Hara. From the formatting point of view, it is the destination that matters. A developer has to mind the type of SQL literal, not the data source. Is this string going into the query? Then it has to be formatted, no matter whether it came from user input or just mysteriously appeared in the middle of code execution.
Manual formatting can be separated from the actual query execution by considerable distance.
Most underestimated and overlooked issue. Yet most essential of them all, as it alone can spoil all the other rules, if not followed.
Almost every PHP user is tempted to do all the "sanitization" in one place, far away from the actual query execution, and this false approach is a source of innumerable faults:
first of all, having no query at hand, one cannot tell what kind of SQL literal this certain piece of data is going to represent, and thus violates both formatting rules (1) and (2) at once.
having more than one place for sanitization, we're calling for disaster, as one developer may think it was done by another, or had already been done somewhere else, etc.
having more than one place for sanitization, we're introducing another danger of double-sanitizing data (say, one developer formats it at the entry point and another before query execution)
premature formatting most likely will spoil the source variable, making it unusable anywhere else.
As you can see, the first two items can be considered inapplicable if, as you say, you always put your values in quotes. But here come the last two. "Always" is too presuming a word. We are humans and we all make mistakes. You are not the only one working on the project. And even if you personally do everything right, other developers may not share your discipline. Say, some of them may share the widespread delusion that only user input has to be "sanitized", and thus expose the application to the danger of second-order injection.
This is why there ought to be a mechanism that guarantees 100% safety if strictly followed, no matter whether the developer understands it or not.
And using placeholders for EVERY dynamic literal in the query IS such a mechanism.
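A minimal sketch of that mechanism as a PHP PDO prepared statement (the DSN, table and column names are made up, and the pdo_sqlsrv driver is assumed since the question mentions MSSQL):
<?php
// Every dynamic value travels as a bound parameter, never as part of the SQL string.
$pdo  = new PDO('sqlsrv:Server=localhost;Database=app', $db_user, $db_pass);
$stmt = $pdo->prepare('SELECT id, name FROM users WHERE name = :name AND age > :age');
$stmt->execute([':name' => $_GET['name'], ':age' => (int) $_GET['age']]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);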
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How do I make a JS function that runs onload not run on the first page load?
I have a html page which has a function that would work once the page loads.
<body onload="startNow()">
This startNow function has a part that refreshes the page if a certain condition is not met, which causes the function to run again.
function startNow() {
var warning_check = document.getElementsByClassName("warning");
if (warning_check.length == 0) {
checkAll(); //another function call
} else {
window.location.reload();
}
}
Is there a way I can make this startNow function not run the first time the page loads?
A:
Easily done with localStorage:
function startNow(){
var warning_check = document.getElementsByClassName("warning");
if(warning_check.length == 0) {
checkAll(); //another function call
} else {
if (!!localStorage.returningUser){
window.location.reload();
}
}
localStorage.returningUser = 'true';
}
EDIT
The above function still runs for first-time users, but it won't reload once the code reaches the reload point. If you want the code to stop running entirely on the first visit, then:
function startNow(){
if (!localStorage.returningUser){
localStorage.returningUser = 'true';
return false;
}
var warning_check = document.getElementsByClassName("warning");
if(warning_check.length == 0) {
checkAll(); //another function call
} else {
window.location.reload();
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
When tableDnD is disabled it leaves residual css and I can't clear the css
tableDnD (Table Drag and Drop) is a jQuery plugin I'm using to re-order rows in my HTML table. It's a pretty cool library, but the fact that you can click, drag, and drop rows disables the ability to select a row's text by clicking: the plugin thinks you might be starting a drag and drop, or doing a 0-distance drag.
So you can view my example here, and see what I'm talking about -- http://jsfiddle.net/du31ufts/5/
It starts off as disabled, so enable the row re-ordering and see the library. Then, try to select just one row in the middle. The only way the blue highlighting indicating selection shows up is if you click outside the bounds of the table, and thus you would only be able to select starting from the bottom or top, and not be able to select one row at a time. I need to select these rows to be able to copy-paste these rows into excel.
I've tried looking into the library itself and detaching $('__dragtable_disable_text_selection__'), I've tried jquerys removeAttr for unselectable, I've tried
$('.dragtable-sortable').attr('-moz-user-select', 'none');
Nothing is re-enabling my ability to be able to click and select single rows. I'd like to do this without modifying tableDnD functions.
Which CSS properties could be affecting my ability to select table rows?
A:
I asked about this on GitHub and got an answer: it did require modification of the library, specifically replacing the if block starting at line 207 of the fiddle in this question with this:
if (!$(this).hasClass("nodrag")) {
if (e.target.tagName == "TD") {
$.tableDnD.initialiseDrag(this, table, this, e, config);
return false;
}
}
Modified fiddle. Thank you, tschqr.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Match with multiple criteria without loop in R
I have a data frame displaying a set of conditions, for example:
B = data.frame(col1 = 1:10, col2 = 11:20 )
e.g., the first row says that when col1 = 1, col2 = 11.
I also have another data frame with the numbers that should met these conditions, for example:
A = data.frame(col1 = c(1:11,1:11), col2 = c(11:21,11:21), col3 = 101:122)
I would like to return the sum of the values in col3 of data frame A for all rows that meet the conditions in B. For example, using the first row of B this value is:
sum(A$col3[which(A$col1 == B$col1[1] & A$col2 == B$col2[1])])
#[1] 213
that is, the sum of the entries in col3 in the 1st and 12th rows of A. I need to find a vector with all these sums, one for each row of B. I know how to do this with a loop; however, in my data the data frames A and B are very large and have many conditions, so I was wondering whether there is a way to do the same thing without the loop. Thank you.
A:
Solution in base R
# Sum identical rows
A.summed <- aggregate(col3 ~ col1 + col2, data = A, sum);
# Select col1 col2 combinations that are also present in B
A.summed.sub <- subset(A.summed, paste(col1, col2) %in% paste(B$col1, B$col2));
# col1 col2 col3
#1 1 11 213
#2 2 12 215
#3 3 13 217
#4 4 14 219
#5 5 15 221
#6 6 16 223
#7 7 17 225
#8 8 18 227
#9 9 19 229
#10 10 20 231
Or the same as a one-liner
A.summed.sub <- subset(aggregate(col3 ~ col1 + col2, data = A, sum), paste(col1, col2) %in% paste(B$col1, B$col2));
# Add summed col3 to dataframe B by matching col1 col2 combinations
B$col3 <- A.summed[match(paste(B$col1, B$col2), paste(A.summed$col1, A.summed$col2)), "col3"];
B;
# col1 col2 col3
#1 1 11 213
#2 2 12 215
#3 3 13 217
#4 4 14 219
#5 5 15 221
#6 6 16 223
#7 7 17 225
#8 8 18 227
#9 9 19 229
#10 10 20 231
A:
A solution using dplyr. A2 is the final output. The idea is grouping the value in col1 and col2 and calculate the sum for col3. semi_join is to filter the data frame by matching values based on col1 and col2 in B.
library(dplyr)
A2 <- A %>%
group_by(col1, col2) %>%
summarise(col3 = sum(col3)) %>%
semi_join(B, by = c("col1", "col2")) %>%
ungroup()
A2
# # A tibble: 10 x 3
# col1 col2 col3
# <int> <int> <int>
# 1 1 11 213
# 2 2 12 215
# 3 3 13 217
# 4 4 14 219
# 5 5 15 221
# 6 6 16 223
# 7 7 17 225
# 8 8 18 227
# 9 9 19 229
# 10 10 20 231
A:
We can do a join using data.table
library(data.table)
setDT(A)[B, .(col3 = sum(col3)), on = .(col1, col2), by = .EACHI]
# col1 col2 col3
# 1: 1 11 213
# 2: 2 12 215
# 3: 3 13 217
# 4: 4 14 219
# 5: 5 15 221
# 6: 6 16 223
# 7: 7 17 225
# 8: 8 18 227
# 9: 9 19 229
#10: 10 20 231
|
{
"pile_set_name": "StackExchange"
}
|
Q:
GUIMiner won't start mining on Slush's pool
I started mining on Slush's pool successfully for about 48 hours, but since then I haven't been able to connect to it again. It logs the following in the console, no matter whether I'm using a valid worker and password or not:
2013-05-26 12:55:19: Listener for "Default" started
2013-05-26 12:55:20: Listener for "Default": api2.bitcoin.cz:8332 26/05/2013 12:55:20, checking for stratum...
2013-05-26 12:55:20: Listener for "Default": api2.bitcoin.cz:8332 26/05/2013 12:55:20, started OpenCL miner on platform 0, device 0 (Turks)
2013-05-26 12:55:20: Listener for "Default": api2.bitcoin.cz:8332 26/05/2013 12:55:20, no response to getwork, using as stratum
2013-05-26 12:55:20: Listener for "Default": api2.bitcoin.cz:8332 26/05/2013 12:55:20, No JSON object could be decoded
2013-05-26 12:55:30: Listener for "Default": api2.bitcoin.cz:8332 26/05/2013 12:55:30, Failed to subscribe
2013-05-26 12:55:32: Listener for "Default": api2.bitcoin.cz:8332 26/05/2013 12:55:32, IO errors - 1, tolerance 2
2013-05-26 12:55:33: Listener for "Default" shutting down
Of course, I've checked my connection, and I've started GUIMiner with administrator privileges. I'm using Windows 7.
A:
The problem is that in GUIMiner, the URL for Slush's pool is:
api2.bitcoin.cz:8332
That address has been deprecated, and it falls back to the GetWork protocol. From Slush's Pool news:
10.03.2013
Default mining URL for Stratum is stratum.bitcoin.cz:3333. If you're still using api.bitcoin.cz, please fix your URL to prevent fallback to deprecated Getwork protocol.
In the GUIMiner interface, instead of choosing "Slush's Pool" from the dropdown menu, choose "other" and then enter
http://api.bitcoin.cz:8332
or
stratum.bitcoin.cz:3333
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Links in Laravel
I'm just starting out in Laravel; I've been using CodeIgniter for years. A very quick question, as the Laravel documentation does not seem to address this: do all links in your Blade file have to have a defined route in the routes file? In CodeIgniter it was generally /controller/function in your links, but it seems to me that in Laravel all links have to be defined in the routes file...
A:
No, they do not have to be defined.
There's nothing prohibiting you from using <a href='/whatever/you/want'>. That said, it's generally better to use defined routes and reference them by name; that way, if you ever change the actual URI structure, route('name') will automatically resolve to the new structure.
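A small sketch of that named-route pattern (the route name, URI and controller are made up, and a Laravel version that supports the ->name() route method is assumed):
// routes file
Route::get('/users/{id}', 'UserController@show')->name('users.show');
// in a Blade template
<a href="{{ route('users.show', ['id' => $user->id]) }}">Profile</a>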
|
{
"pile_set_name": "StackExchange"
}
|
Q:
boost mpl count for simple example
I am trying to learn Boost.MPL and tried a very simple example: counting the number of times a type appears in an mpl::map. Could somebody explain why the output of this program is 0?
typedef map<
pair<int, unsigned>
, pair<char, unsigned char>
, pair<long_<5>, char[17]>
, pair<int[42], bool>
> m;
std::cout << mpl::count <
m,
mpl::key_type
<
m,
pair<int, unsigned>
>::type
>::type::value << std::endl;
A:
According to what is written in the code you'd like to count the occurrences of type
key_type<
m,
pair<int, unsigned>
>::type
in your map. In the end this is an int because in the description of mpl::key_type you'll find:
key_type<m,x>::type Identical to x::first;
Well, so let's see what are the actual contents of your map.
I could just write the type of the map, but I'd like to show you how to check a type the quick and lazy way. :P
So, we just make the compiler fail to see whats the type of the map.
I did it by adding this line somewhere:
typename m::blaa BB;
The compilation of course fails (because blaa is not an element of the mpl::map type) with following error message:
error: 'blaa' in 'm {aka struct boost::mpl::map<boost::mpl::pair<int, unsigned int>, boost::mpl::pair<char, unsigned char>, boost::mpl::pair<mpl_::long_<5l>, char [17]>, boost::mpl::pair<int [42], bool> >}' does not name a type
Ok, what we can read is that the map contains a list of pairs (e.g., boost::mpl::pair<int, unsigned int>),
but no int. And in your mpl::count call you are looking for int.
Just try to replace your std::cout lines with
the following lines and you'll see that the result will be as expected.
std::cout <<
boost::mpl::count< m,
boost::mpl::pair<int, unsigned>
>::type::value
<< std::endl;
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how do I clean up after a StreamWriter during an exception?
I'm trying to clean-up after an exception, and I'm not sure how to handle a StreamWriter.
Dim sw As StreamWriter
Try
''// stuff happens
somethingBad1() ''//Sometimes throws an exception
sw = New StreamWriter(File.Open("c:\tmp.txt", FileMode.Create))
''// stuff happens
somethingBad2() ''//Also sometimes throws an exception
sw.Write("Hello World")
sw.Flush() ''//Flush buffer
sw.Close() ''//Close Stream
Catch ex As Exception
sw = Nothing
Finally
sw = Nothing
end try
If somethingBad1 throws an exception, I don't need to do anything to sw; however, if somethingBad2 throws, sw has already been created and I need to close it. But how do I know whether sw has been created or not?
A:
''//stuff happens but you don't care because you didn't instantiate
''// StreamWriter yet
somethingBad1() ''//Sometimes throws an exception
Using sw As New StreamWriter("test.dat")
''// stuff happens
somethingBad2() ''//Also sometimes throws an exception
''//as you are in a using statement the sw.Dispose method would be called
''//which would free the file handle properly
sw.Write("Hello World")
End Using
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Recursive XSD definition
I'm trying to write a recursive XSD definition. I looked at similar questions but I'm still unable to find what I'm doing wrong.
I'm defining a header which is an item that has a text, an URL and a list of children items.
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="header">
<xs:sequence>
<xs:element name="item">
<xs:complexType>
<xs:sequence>
<xs:element type="xs:string" name="text"/>
<xs:element type="xs:string" name="url"/>
<xs:sequence>
<xs:element ref="item" />
</xs:sequence>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:sequence>
</xs:element>
My IDE gives an error in the nested item saying that "item" is undefined.
A:
You've defined the item element locally, but it has to be defined globally in order to be referenced:
<?xml version="1.0" encoding="utf-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:element name="item">
<xs:complexType>
<xs:sequence>
<xs:element type="xs:string" name="text"/>
<xs:element type="xs:string" name="url"/>
<xs:sequence>
<xs:element ref="item" />
</xs:sequence>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="header">
<xs:complexType>
<xs:sequence>
<xs:element ref="item"/>
</xs:sequence>
</xs:complexType>
</xs:element>
</xs:schema>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
calculate stats based on dynamic window using dplyr
I am trying to use dplyr in R to calculate rolling stats (mean, sd, etc.) over a dynamic, date-based window for specific models. For instance, within groupings of items, I would like to calculate the rolling mean over all data from the 10 days prior. The dates in the data are not sequential and not complete, so I can't use a fixed window.
One way to do this is to use rollapply, referencing the window width as shown below. However, I'm having trouble calculating the dynamic width. I'd prefer a method that omits the intermediate step of calculating the window and simply calculates based on the date_lookback. Here's a toy example.
I've used for loops to do this, but they are very slow.
library(dplyr)
library(zoo)
date_lookback <- 10 #days to look back for rolling calcs
df <- data.frame(label = c(rep("a",5),rep("b",5)),
date = as.Date(c("2017-01-02","2017-01-20",
"2017-01-21","2017-01-30","2017-01-31","2017-01-05",
"2017-01-08","2017-01-09","2017-01-10","2017-01-11")),
data = c(790,493,718,483,825,186,599,408,108,666),stringsAsFactors = FALSE) %>%
mutate(.,
cut_date = date - date_lookback, #calcs based on sample since this date
dyn_win = c(1,1,2,3,3,1,2,3,4,5), ##!! need to calculate this vector??
roll_mean = rollapply(data, align = "right", width = dyn_win, mean),
roll_sd = rollapply(data, align = "right", width = dyn_win, sd))
These are the roll_mean and roll_sd results I'm looking for:
> df
label date data cut_date dyn_win roll_mean roll_sd
1 a 2017-01-02 790 2016-12-23 1 790.0000 NA
2 a 2017-01-20 493 2017-01-10 1 493.0000 NA
3 a 2017-01-21 718 2017-01-11 2 605.5000 159.0990
4 a 2017-01-30 483 2017-01-20 3 564.6667 132.8847
5 a 2017-01-31 825 2017-01-21 3 675.3333 174.9467
6 b 2017-01-05 186 2016-12-26 1 186.0000 NA
7 b 2017-01-08 599 2016-12-29 2 392.5000 292.0351
8 b 2017-01-09 408 2016-12-30 3 397.6667 206.6938
9 b 2017-01-10 108 2016-12-31 4 325.2500 222.3921
10 b 2017-01-11 666 2017-01-01 5 393.4000 245.5928
Thanks in advance.
A:
You could try explicitly referencing your dataset inside the dplyr call:
date_lookback <- 10 #days to look back for rolling calcs
df <- data.frame(label = c(rep("a",5),rep("b",5)),
date = as.Date(c("2017-01-02","2017-01-20",
"2017-01-21","2017-01-30","2017-01-31","2017-01-05",
"2017-01-08","2017-01-09","2017-01-10","2017-01-11")),
data = c(790,493,718,483,825,186,599,408,108,666),stringsAsFactors = FALSE)
df %>%
group_by(date,label) %>%
mutate(.,
roll_mean = mean(ifelse(df$date >= date-date_lookback & df$date <= date & df$label == label,
df$data,NA),na.rm=TRUE),
roll_sd = sd(ifelse(df$date >= date-date_lookback & df$date <= date & df$label == label,
df$data,NA),na.rm=TRUE))
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Showing data in organized manner from database
I have following piece of code
<?php
$connect=mysql_connect("localhost","root","");
mysql_select_db("dbms_teacher",$connect);
$result=mysql_query("SELECT * FROM staffs");
while ($row=mysql_fetch_array($result))
{
echo "<tr>";
echo "<td>" . $row['fname'] . "</td>";
echo "<td>" . $row['lname'] . "</td>";
echo "<td>" .$row['post']. "</td>";
echo "<td>". $row['status']. "</td>";
// echo "<td>". "edit/delete". "</td>";
}
mysql_close($connect);
?>
This is the portion of code that brings up data from dbms_teacher. Is there a way I can get the data in tabular form with the fields Name, Post, Status, and a delete option (which I can manage later)? I just need a good way to display it in an organized format!
A:
Since you're only using select columns, you can hard-code it. It also helps if you get only those columns in your query:
$result=mysql_query("SELECT fname,lname,post,status FROM staffs");
echo "<table>";
echo "<tr><td>First Name</td><td>Last Name</td><td>Post</td><td>Status</td><td>Edit/Delete</td></tr>";
while ($row=mysql_fetch_array($result))
{
echo "<tr>";
echo "<td>" . $row['fname'] . "</td>";
echo "<td>" . $row['lname'] . "</td>";
echo "<td>" .$row['post']. "</td>";
echo "<td>". $row['status']. "</td>";
echo "<td></td>";
// echo "<td>". "edit/delete". "</td>";
echo "</tr>";
}
mysql_close($connect);
echo "</table>";
?>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
jQuery modals and deep linking
I currently have a gallery which opens a modal pop-up when you click on a thumbnail. What I would like to do is generate a unique link specifically for the modal (i.e. www.mywebite.com/#link1), which loads its content via AJAX. If somebody were to take this unique modal link, send it to someone, and that person pasted it into their browser, ideally I would like the modal window to load and display its content automatically, without the user having to click on the appropriate thumbnail.
Is this even possible?? I know this is not the easiest of tasks and any help with this would be greatly appreciated.
To get an idea of what I'm working on go to:
http://www.ddbremedy.co.uk/siteupdate/work
You will see an iMac screen with the thumbnails on it.
Many thanks in advance.
UPDATE!!!!!
Ok this is where I am currently at. I have decided to scrap using jquery address and am deep linking using 'window.location.hash' instead.
Code is something like this:
var base_url = "http://www.ddbremedy.co.uk/siteupdate/";
$('#work_gallery li a').on('click', function(event) {
event.preventDefault();
postLink = $(this).attr('href');
window.location.hash = postLink.replace(base_url, "");
/* I have a bunch of code that animates the modal window
in which I don't need to include as there is quite alot of it.
Content loads via ajax. Then when I close the modal I add this
code to remove the hash and revert the link back to its original state. */
if ("pushState" in history) {
history.pushState("", document.title, window.location.pathname);
} else {
window.location.hash = "";
}
});
The above code works fine and displays the link exactly as I want when I load and close external content with AJAX. Now what I need to figure out is how I can automatically load the AJAX content if somebody takes that link and pastes it into the address bar. The content is loaded based on the link's href and a click event, so how would I trick the browser into thinking that the correct link was clicked and load the correct content, purely based on its link?
A:
Managed to get it working. This is how I did it:
First I run a function called 'checkUrl' which checks the URL to see if it contains a hash.
checkUrl = function() {
if (window.location.hash) {
}
};
checkUrl();
Then within the if statement I store the hash path into a variable and split it on the hash. I then store the string after the hash into a variable.
var pathname = window.location.hash,
rez = pathname.split('#'),
linkUrl = rez[1];
I then pass that variable as a selector for the link that has that particular href and trigger a click event on the corresponding link, which then animates and loads in the correct modal.
$("a[href='http://www.ddbremedy.co.uk/siteupdate/" + linkUrl + "']").trigger('click');
Hopefully this will help someone in the future.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
VS2010 Web Publish command line version of File System deploy
Folks,
In a nutshell, I want to replicate this dialog:
It's a Visual Studio 2010 ASP.Net MVC project. If I execute this command, I get all the files I want, including the transformed web.configs in the "C:\ToDeploy" directory.
I want to replicate this on the command line so I can use it for a QA environment build.
I've seen various articles on how to do this on the command line for Remote Deploys, but I just want to do it for File System deploys.
I know I could replicate this functionality using nAnt tasks or rake scripts, but I want to do it using this mechanism so I'm not repeating myself.
I've investigated this some more, and I've found these links, but none of them solve it cleanly:
VS 2008 version, but no Web.Config transforms
Creates package, but doesn't deploy it..do I need to use MSDeploy on this package?
Deploys package after creating it above...does the UI really do this 2 step tango?
Thanks in advance!
A:
Ok, finally figured this out.
The command line you need is:
msbuild path/to/your/webdirectory/YourWeb.csproj /p:Configuration=Debug;DeployOnBuild=True;PackageAsSingleFile=False
You can change where the project outputs to by adding a property of outdir=c:\wherever\ in the /p: section.
This will create the output at:
path/to/your/webdirectory/obj/Debug/Package/PackageTmp/
You can then copy those files from the above directory using whatever method you'd like.
I've got this all working as a ruby rake task using Albacore. I am trying to get it all done so I can actually put it as a contribution to the project. But if anyone wants the code before that, let me know.
Another wrinkle I found was that it was putting Tokenized Parameters into the Web.config. If you don't need that feature, make sure you add:
/p:AutoParameterizationWebConfigConnectionStrings=false
A:
I thought I'd post another solution that I found; I've updated it to include a log file.
This is similar to Publish a Web Application from the Command Line, but cleaned up and with a log file added. Also check out the original source: http://www.digitallycreated.net/Blog/59/locally-publishing-a-vs2010-asp.net-web-application-using-msbuild
Create an MSBuild_publish_site.bat (name it whatever) in the root of your web application project
set msBuildDir=%WINDIR%\Microsoft.NET\Framework\v4.0.30319
set destPath=C:\Publish\MyWebBasedApp\
:: clear existing publish folder
RD /S /Q "%destPath%"
call %msBuildDir%\msbuild.exe MyWebBasedApp.csproj "/p:Configuration=Debug;PublishDestination=%destPath%;AutoParameterizationWebConfigConnectionStrings=False" /t:PublishToFileSystem /l:FileLogger,Microsoft.Build.Engine;logfile=Manual_MSBuild_Publish_LOG.log
set msBuildDir=
set destPath=
Update your Web Application project file MyWebBasedApp.csproj by adding the following xml under the <Import Project= tag
<Target Name="PublishToFileSystem" DependsOnTargets="PipelinePreDeployCopyAllFilesToOneFolder">
<Error Condition="'$(PublishDestination)'==''" Text="The PublishDestination property must be set to the intended publishing destination." />
<MakeDir Condition="!Exists($(PublishDestination))" Directories="$(PublishDestination)" />
<ItemGroup>
<PublishFiles Include="$(_PackageTempDir)\**\*.*" />
</ItemGroup>
<Copy SourceFiles="@(PublishFiles)" DestinationFiles="@(PublishFiles->'$(PublishDestination)\%(RecursiveDir)%(Filename)%(Extension)')" SkipUnchangedFiles="True" />
</Target>
this works better for me than other solutions.
Check out the following for more info:
1) http://www.digitallycreated.net/Blog/59/locally-publishing-a-vs2010-asp.net-web-application-using-msbuild
2) Publish a Web Application from the Command Line
3) Build Visual Studio project through the command line
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MongoDB: how to find a value from json data?
I am working with MongoDB and would like to find a value in some JSON data.
My json data looks like this:
{
"_id": ObjectId("5306d69f80b1027ad2653dad"),
"Tests": [{
"_id": "52fda07f1e905ec468d93c29",
"names": "Art Test",
"script": "regression/purchase-back-forwd"
}],
"browser": ["GC"],
"creationTime": ISODate("2014-02-21T04:31:27.883Z"),
}
From this I have to find out Tests.names.
I used this but it is not working.
Here Jobs is the table
db.Jobs.find({Tests.names:"Art Test"})
A:
Use quotes for Tests.names:
db.Jobs.find({'Tests.names': "Art Test"})
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Double Typing on my MacBook pro
I've been having this issue for the past few months, where my Mac will echo type whole/partial words while I'm typing. This isn't the same issue as double typing keys, but rather whole phrases will appear again in the middle of me typing. It looks something like this:
"Hello ello world!"
Sometimes it will execute a command to close a window twice.
This goes away when I'm in "Safe Mode". Does anyone have any clue where this is coming from?
I don't think this is a keyboard issue, honestly. Well, maybe a buffering issue? I contacted Apple Support and they didn't have much to say. I reinstalled the operating system, and it still comes up. Though, it did seem to largely go away for a few days.
Additional Details:
Running Catalina 10.15.1 (Happened on previous versions of Catalina too)
MacBook Pro 2017 13inch
A:
There appear to be many variations on this question, from what I have found.
However, the problem addressed in this question is the one that I was experiencing.
So I'd like to try to post here a comprehensive answer that addresses the best possible solutions that have come up so far, starting with all the most common, simplest and least likely to work suggestions up to the ones that seem to address this problem specifically.
Note: If you've done your research, you'll note already an enormous amount of overlap with multiple pages linking back and forth to one another, in regards to this issue. My attempt here is simply to aggregate as many of the key factors involved into one place.
Background
So for a long time now, MacBook users have reported a 'double typing' issue, especially with the most recent MacBook Pros to hit the market. There have been many reports of this problem occurring on 2015 through 2019 edition MacBook Pros, from the 13" to the 15" Touch Bar, and everything in between.
There appear to be multiple similar problems, that may or may not have more than one possible solution. The most common reported issues I have seen are:
single keys such as b or n being pressed and outputting doubles like bb or nn in their place. Space bar is also commonly reported to be double typing.
some people may also be irked by the default setting of a double space autocorrecting to a period ..
instances of multiple repeated strings or sequences of keystrokes, at seemingly random intervals, for example whole words, or even multiple words appearing twice, and also modifier commands such as: ⌘+w or ctrl+tab.
In my case the problem was the latter. I'm sure other variations of this problem exist, these are the ones I've encountered most frequently in my search for a solution.
Possible Solutions
This is a summary of most of the answers that I have come across so far, hopefully one of them will work for anyone who is experiencing this problem.
System preferences -> keyboard = Slide the bar left to turn 'key repeat' to off. This is one of the most common solutions as it is by far the simplest and quickest to try. It's also the least likely, from what I've gathered.
cleaning your keyboard, as per these instructions; there's even an app to help you do this
turning off the period autocorrect option = System preferences -> Keyboard -> Text: Uncheck "Add period with double space"
This problem has become so substantial that Apple has released an extended service program specifically for this problem, whereby they are replacing keyboards free of charge for anyone whose device falls within their approved list, which as of Jan 2020 includes all Mac products using the butterfly keyboard.
Like many others however, it seems to be increasingly clear that for most people (though not all) this is not a hardware issue. Many users, including myself, have testified that this occurs not just on the Mac keyboard, but external keyboards also, wired, wireless and bluetooth alike.
Most significantly this problem appears to have started for many, only after installing Catalina.
Users have reported that the problem disappears when:
Running their Mac in 'safe mode'
In other user accounts on their machine (this is true in my case)
So a number of other suggestions have also popped up. For those who believe that their problem is software related, the following solutions may work for you.
Install an app called Unshaky
Reset your NVRAM as per a comment from this other question
option+⌘+P+R
Update to the latest version of macOS Catalina (version 10.15.3 as of the time of writing) -- System Preferences -> Update Software
Create a new user account
Re-Install MacOS in the Background
Roll back to the previous MacOS, Mojave
And the worst case scenario - A full wipe of your system and re-install a fresh version of your MacOS
For anyone who has Wacom tablet software installed
this Reddit post appears to have isolated a fairly specific problem, but it appears to have fixed the problem for a lot of people, and makes intuitive sense to me, for anyone who has Wacom Driver software in their system.
Install the most recent Catalina compatible Wacom software from their website --(This is what solved the problem for me)
Uninstall any Wacom drivers you may have installed on your system. Official instructions can be found here
disconnect tablet from USB
Navigate to: Finder -> Applications -> Wacom Tablet -> Wacom Tablet Utility
Click the 'Uninstall' button
Restart your machine
Find and delete all related folders as per the linked instructions
Acknowledgements
Finally, I'd like to acknowledge the multiple posts and pages that I have drawn on to find this solution. I can't take credit for any of the solutions posted here, I found them all in my efforts to fix my own frustrating problem. I hope that anyone experiencing this problem finds a solution in this answer.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Testing onclick events on SVG with Cypress
I am testing a d3 application with Cypress. As part of the tests, I'd like to make sure a specific function is called when a circle within an SVG element is clicked. The function is called when I click manually, but the test that I wrote fails, so I assume I made a mistake somewhere in the test.
Here's the test code I have right now:
import * as app from "../../app";
describe("Scatter plot", () => {
before(() => {
cy.visit("http://localhost:1234");
});
it("Triggers the displayMovieInfo on click", () => {
const displayMovieInfo = cy.spy(app, "displayMovieInfo");
cy.get("#scatterPlot")
.get("circle")
.eq(0)
.click({ force: true });
expect(displayMovieInfo).to.be.called;
});
});
The output I get from Cypress:
expected displayMovieInfo to have been called at least once, but it
was never called
Any help will be appreciated!
Update:
I believe the click might not have worked before because the circle didn't exist when Cypress attempted to click it. By adding "await cy.wait(1000);" before the click action, the function is called (I can see the results and a message logged from inside it). Sadly, the test is still failing.
Update 2:
I changed the test to use the window object (see below), but the assertion still fails (the test itself succeeds, which is also not a good thing).
cy.window()
.then(window => {
displayMovieInfoSpy = cy.spy(window, "displayMovieInfo");
cy.get("#scatterPlot")
.get("circle")
.eq(2)
.click({ force: true })
.as("clicking");
expect(displayMovieInfoSpy).to.be.called;
});
Update 3:
It seems that the combination of d3 and parcel.js causes the test to fail. When using d3 alone or parcel.js alone, the test works just fine.
Also, the expect statement should be in the then block after the click action.
A:
It seems you're importing the app variable in the test directly. This object is different from the one in your browser. You should make a global variable (or method) so you can get your app variable directly from the browser:
cy.window().its('app').then((app) => {
// your test with app var
})
Also, you might want to use a then() callback to ensure the assertion runs after the click. But this may not be necessary.
.click({ force: true }).then(() => {
expect(displayMovieInfo).to.be.called;
});
A:
I think you mostly have a couple of conceptual problems about how Cypress works.
The first is that a click can only target one element, and the second is how you use aliases.
The code below works like a charm; I hope it helps you a little with the concepts of alias, should, click and spy.
d3test.spec.js
describe("Scatter plot", () => {
before(() => {
cy.visit("http://localhost/d3test");
});
it("Triggers the displayMovieInfo on click", () => {
cy.window()
.then(window => {
let displayMovieInfoSpy = cy.spy(window, "displayMovieInfo");
cy.get("#scatterPlot").get("circle").as('circles')
cy.get('@circles').should('have.length', 1)
cy.get('@circles').click({ force: true })
.then(() => {
expect(displayMovieInfoSpy).to.be.called;
})
});
});
});
index.html
<svg id="scatterPlot">
<circle cx="50%" cy="50%" r="100" fill="blue" onclick="displayMovieInfo()"></circle>
</svg>
<script src="https://d3js.org/d3.v5.min.js"></script>
<script>
window.displayMovieInfo = function(){
console.log(1);
}
</script>
If you run into more trouble, I recommend trying things one by one, using cy.log() and the debugger console.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
MySQL CURSOR loop adds ONE extra looping inside procedure
My table has 1 row, but my procedure loops twice:
DROP PROCEDURE IF EXISTS SP_Fetch_Returned_With_Serials;
CREATE PROCEDURE `SP_Fetch_Returned_With_Serials`(OUT __Factor_Id bigint unsigned,OUT __Payment_Id bigint unsigned,IN __Payment_Amount decimal(16,3))
BEGIN
DECLARE Num BIGINT UNSIGNED DEFAULT 0;
DECLARE __Product_Id bigint UNSIGNED DEFAULT NULL;
DECLARE __Serials varchar(255) DEFAULT NULL;
DECLARE done INT DEFAULT 0;
DECLARE _NEW_Factor_Id bigint UNSIGNED DEFAULT 0;
DECLARE cur CURSOR FOR
SELECT product_id, serials FROM tmp_table_returned_product_serial;
DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
# SET @Test = 0;
# SELECT count(*) into @Test from tmp_table_returned_product_serial;
# SIGNAL SQLSTATE '45000'
# SET MESSAGE_TEXT = @Test;
SET @Document_Id = 0;
SET @Product_Id = 0;
SET @Payment_Amount = 0;
START TRANSACTION ;
SET @Warehouse_Id = 0;
SELECT id INTO @Warehouse_Id FROM Tb_Warehouses WHERE warehouse_branch_id = @Branch_Id LIMIT 1;
SET @i = 0;
OPEN cur;
read_loop:
LOOP
IF done
THEN
LEAVE read_loop;
END IF;
FETCH cur INTO __Product_Id,__Serials;
SET @i = @i + 1 ;
CALL SP_Separate_Numeric_Values(`__Serials`);
SET @Is_Returnable = 0;
SET @Level_Id = 0;
SET @Free_Day = 0;
SET @Penalty_Percent = 0;
SET @Factor_Type_Id = 0;
SET @Currency_Id = 0;
SET @Customer_Id = 0;
SET @Factor_Id = 0;
SET @Value = 0;
SET @Real_Fee = 0;
SET @Date_At = CURRENT_DATE();
SET @Diff_Days = 0;
SET @New_Factor_Id = 0;
SET @New_Detail_Factor_Id = 0;
SET @Receipt_Remit_Type_Id = 0;
SET @Return_Penalty_Percent = 0;
SET @Return_Penalty_Price = 0;
SET @Document_Number = 0;
SELECT id INTO @Factor_Type_Id FROM Tb_Factor_Types WHERE name = 'back_sale_factor';
SELECT COUNT(NV.Number) INTO @Value FROM Numeric_Values NV;
SELECT TP.is_returnable,
TU.level_id,
currency_id,
customer_id,
VF.id,
VF.product_id,
real_fee,
date_at
INTO @Is_Returnable,@Level_Id,@Currency_Id,@Customer_Id,@Factor_Id,@Product_Id,@Real_Fee,@Date_At
FROM Vw_Factor_Master_Details VF
INNER JOIN Tb_Factor_Detail_Serials TFDS ON TFDS.detail_id = VF.detail_id
INNER JOIN Tb_Products TP ON VF.product_id = TP.id
INNER JOIN Tb_Users TU ON TU.user_id = VF.customer_id
INNER JOIN Numeric_Values NV ON NV.Number = TFDS.serial_id
LIMIT 1;
IF (@Is_Returnable)
THEN
SET @Product_Title = '';
SET @Error_Msg = '';
SELECT IFNULL(TPT.title, TP.title_en) AS title
INTO @Product_Title
FROM Tb_Products TP
LEFT JOIN (SELECT title, product_id FROM Tb_Product_Translations WHERE locale = @Locale) TPT
ON TP.id = TPT.product_id
WHERE product_id = @Product_Id;
SELECT message INTO @Error_Msg FROM Tb_Errors WHERE error_code = 1000028 AND locale = @Locale;
SET @Error_Msg = REPLACE(@Error_Msg, ':product', @Product_Title);
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = @Error_Msg;
END IF;
SELECT DATEDIFF(CURRENT_DATE(), @Date_At) INTO @Diff_Days;
SELECT MAX(`number`) INTO Num FROM Tb_Factors WHERE year_id = @Fiscal_Year AND type = 'sale_factor';
SET Num = ifnull(Num, 0) + 1;
SELECT col
INTO @Free_Day
FROM (SELECT JSON_EXTRACT(JSON_EXTRACT(TS.value, CONCAT('$.', @Level_Id)), '$.free_day') AS col
FROM Tb_Settings TS
WHERE `key` = 'returned_product') TS
WHERE col IS NOT NULL
LIMIT 1;
SELECT col
INTO @Penalty_Percent
FROM (SELECT JSON_EXTRACT(JSON_EXTRACT(TS.value, CONCAT('$.', @Level_Id)), '$.penalty_percent') AS col
FROM Tb_Settings TS
WHERE `key` = 'returned_product') TS
WHERE col IS NOT NULL
LIMIT 1;
IF (@Diff_Days > @Free_Day)
THEN
SET @Payment_Amount = @Payment_Amount + ((@Real_Fee - (@Real_Fee * @Penalty_Percent / 100)) * @Value);
SET @Return_Penalty_Percent = @Penalty_Percent;
SET @Return_Penalty_Price = @Real_Fee * @Penalty_Percent / 100;
ELSE
SET @Payment_Amount = @Payment_Amount + @Real_Fee;
END IF;
IF (_NEW_Factor_Id = 0)
THEN
INSERT INTO Tb_Factors (type, sale_place, product_type, company_id, branch_id, cash_desk_id, type_id,
year_id,
currency_id, finaler_id, signature_id, customer_id, final_at, signature_at,
reference_factor_id, creator_id, number)
VALUES ('sale_factor', 'branch', 'product', @Company, @Branch_Id, NULL, @Factor_Type_Id,@Fiscal_Year,
@Currency_Id, @Auth_User, @Auth_User, @Customer_Id, CURRENT_TIMESTAMP(), CURRENT_TIMESTAMP(),
@Factor_Id, @Auth_User, Num);
SET _NEW_Factor_Id = LAST_INSERT_ID();
END IF;
INSERT INTO Tb_Factor_Details (product_id, value, real_fee, fee, factor_id, creator_id, return_penalty_percent,
return_penalty_price)
VALUES (@Product_Id, @Value, @Real_Fee, @Real_Fee, _NEW_Factor_Id, @Auth_User, @Penalty_Percent,
@Return_Penalty_Price);
SET @New_Detail_Factor_Id = LAST_INSERT_ID();
INSERT INTO Tb_Factor_Detail_Serials (serial_id, detail_id)
SELECT NV.Number, @New_Detail_Factor_Id
FROM Numeric_Values NV;
SELECT id INTO @Receipt_Remit_Type_Id FROM Tb_Receipt_Remit_Types WHERE name = 'return_of_sales';
IF (@Document_Id = 0)
THEN
SELECT document_number
FROM Tb_Documents
WHERE fiscal_year_id = @Fiscal_Year
ORDER BY document_number DESC
LIMIT 1
INTO @Document_Number;
SET @Document_Number = @Document_Number + 1;
INSERT INTO Tb_Documents (type, factor_id, warehouse_id, receipt_remit_type_id, fiscal_year_id,
document_number,
creator_id, company_id)
VALUES ('receipt', _NEW_Factor_Id, @Warehouse_Id, @Receipt_Remit_Type_Id, @Fiscal_Year, @Document_Number,
@Auth_User, @Company);
SET @Document_Id = LAST_INSERT_ID();
END IF;
INSERT INTO Tb_Document_Details (product_id, value, confirmed_at, confirmed_by, creator_id, document_id)
VALUES (@Product_Id, @Value, CURRENT_TIMESTAMP(), @Auth_User, @Auth_User, @Document_Id);
SET @Document_Detail_Id = LAST_INSERT_ID();
INSERT INTO Tb_Detail_Serial_Numbers (serial_number_id, detail_id, origin_cost_center_id, confirmed_at,
confirmed_by)
SELECT NV.Number, @Document_Detail_Id, NULL, CURRENT_TIMESTAMP(), @Auth_User
FROM Numeric_Values NV;
END LOOP;
CLOSE cur;
SET @Type_Id = 0;
SELECT id INTO @Type_Id FROM Tb_Payment_Types WHERE name = 'cash' LIMIT 1;
INSERT INTO Tb_Receive_Payment (model_type, model_id, factor_id, type_id, receive_amount, payment_amount, year_id,
creator_id)
VALUES ('cashdesk', NULL, _NEW_Factor_Id, @Type_Id, 0, @Payment_Amount, @Fiscal_Year, @Auth_User);
SET @Payment_Id = LAST_INSERT_ID();
COMMIT ;
SIGNAL SQLSTATE '45000'
SET MESSAGE_TEXT = @i;
SET __Factor_Id = _NEW_Factor_Id;
SET __Payment_Id = @Payment_Id;
SET __Payment_Amount = @Payment_Amount;
END
A:
Move FETCH cur INTO __Product_Id,__Serials; to before the IF done check. The CONTINUE HANDLER FOR NOT FOUND only sets done = 1 when a fetch finds no more rows, and because the fetch currently happens after the check, the loop body runs one extra time with the previous row's values before the flag is ever tested.
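A minimal sketch of the corrected loop skeleton (everything else in the body stays as in the question):
read_loop:
LOOP
    FETCH cur INTO __Product_Id,__Serials;  # fetch first
    IF done                                 # then test the NOT FOUND flag
    THEN
        LEAVE read_loop;
    END IF;
    # ... unchanged processing of __Product_Id / __Serials ...
END LOOP;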
|
{
"pile_set_name": "StackExchange"
}
|
Q:
how to convert a date string to tm_wday in tm structure
I have a date string, say in the format "2010-03-01", and I want to get the "tm_wday" equivalent of it, e.g. Monday, Tuesday, ...
Could someone give me a hint on how to achieve this in C?
A:
Check the strptime() function:
char *strptime(const char *s, const char *format, struct tm *tm);
The strptime() function is the converse function to strftime(3) and converts the
character string pointed to by s to values which are stored in the tm structure
pointed to by tm, using the format specified by format.
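A minimal sketch (assuming the fixed "%Y-%m-%d" format; mktime() is called afterwards to normalize the structure, since strptime() is not required to derive tm_wday from the date fields it parses):
#define _XOPEN_SOURCE 700   /* needed for strptime() on glibc */
#include <stdio.h>
#include <time.h>

int main(void)
{
    const char *date = "2010-03-01";
    struct tm tm = {0};

    if (strptime(date, "%Y-%m-%d", &tm) == NULL) {
        fprintf(stderr, "could not parse date\n");
        return 1;
    }

    tm.tm_isdst = -1;   /* let mktime() work out daylight saving */
    mktime(&tm);        /* normalizes the struct and fills in tm_wday */

    char name[16];
    strftime(name, sizeof name, "%A", &tm);  /* e.g. "Monday" */
    printf("%s -> tm_wday = %d (%s)\n", date, tm.tm_wday, name);
    return 0;
}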
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Copy sources files into target directory with SBT
I recently decided to use SBT to build an existing project.
In this project I have some .glsl files within the scala packages which I need to copy during the compilation phase.
The project is structured like this :
- myapp.opengl
- Shader.scala
- myapp.opengl.shaders
- vertex_shader.glsl
- fragment_shader.glsl
Is this file structure correct for SBT, or do I need to put the .glsl files into another directory? And do you know a clean way to copy these files into the target folder?
I would prefer not to put these files into the resources directory since they are (non-compiled) source files.
Thanks
A:
I would not recommend putting those files into src/main/scala as they do not belong there. If you want to keep them separate from your resource files, you can put them in a custom path, e.g. src/main/glsl and add the following lines to your project definition to have them copied into output directory:
val shaderSourcePath = "src"/"main"/"glsl"
// use shaderSourcePath as root path, so directory structure is
// correctly preserved (relative to the source path)
def shaderSources = (shaderSourcePath ##) ** "*.glsl"
override def mainResources = super.mainResources +++ shaderSources
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is there a way to set the DB as Single User Mode in C#?
I have a WPF app that updates my database from an in-code entity model. There may be instances where it updates the DB while there are users connected, and I would like to put it in single-user mode to avoid errors.
I would also like to avoid using sql. I am aware that I can run sql using:
DataContext.ExecuteCommand("ALTER DATABASE...")
I would rather use a command in a C# library to do this but I dont know where to look.
Is there a way to set the DB to SUM without using SQL?
A:
So I used the SMO server objects like this:
Server server = GetServer();
if (server != null)
{
Database db = server.Databases[Settings.Instance.GetSetting("Database", "MyDB")];
if (db != null)
{
server.KillAllProcesses(db.Name);
db.DatabaseOptions.UserAccess = DatabaseUserAccess.Single;
db.Alter(TerminationClause.RollbackTransactionsImmediately);
The problem was that this opened a different connection than my datacontext thus kicking myself off and not letting me access the DB. In the end I had to revert to using SQL.
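For reference, a sketch of that SQL fallback issued over the same DataContext the question mentions (the database name is a placeholder):
// Put the database in single-user mode, rolling back other sessions immediately
DataContext.ExecuteCommand("ALTER DATABASE [MyDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE");
// ... run the schema update ...
DataContext.ExecuteCommand("ALTER DATABASE [MyDB] SET MULTI_USER");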
|
{
"pile_set_name": "StackExchange"
}
|
Q:
compare/sort prices and push text prices to the bottom of results - PHP
I'm using WordPress. Some of the products' prices are not numerical; they are like P.O.R. (price on request). I have a filter which sorts prices "low to high" and "high to low". Below you can see my sorting code:
LOW TO HIGH
$args = array(
'post_type' => 'products',
'meta_key' => 'product_price',
'orderby' => 'meta_value_num',
'order' => 'ASC',
'posts_per_page'=> -1
);
$post_list = get_posts($args);
HIGH TO LOW
$args = array(
'post_type' => 'products',
'meta_key' => 'product_price',
'orderby' => 'meta_value_num',
'order' => 'DESC',
'posts_per_page'=> -1
);
$post_list = get_posts($args);
The above code works as it was supposed to. When a string is compared to an integer, it is treated as 0, which is why in the first case (low to high) products with P.O.R. are on top and in the second case (high to low) they are at the bottom. But I want P.O.R. items to always be at the bottom. I've researched many things but didn't find a solution to this. Any ideas, please?
A:
An easy solution would be to exclude the POR's from the low-to-high query, write an additional query that just fetches the POR products, and then merge it with the low-to-high query. To exclude the POR's, you can add a meta_compare:
$nonpor = array(
'post_type' => 'products',
'meta_key' => 'product_price',
'meta_value_num' => '0',
'meta_compare' => '>',
'orderby' => 'meta_value_num',
'order' => 'ASC',
'posts_per_page'=> -1
);
This will get you a list of all the posts that have prices, without the POR's. Next we query the POR's themselves:
$por = array(
'post_type' => 'products',
'meta_key' => 'product_price',
'meta_value_num' => '0',
'meta_compare' => '=',
'posts_per_page'=> -1
);
Now we merge the results:
$regular = new WP_Query( $nonpor );
$pors = new WP_Query( $por );
//create a new, blank query object
$combined = new WP_Query();
// put the combined data into the new query
$combined->posts = array_merge( $regular->posts, $pors->posts );
//set the post count if you need it
$combined->post_count = count( $combined->posts );
//finally get the post list
$post_list = $combined->posts;
Replace this with your LOW TO HIGH, and you should be good.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Does a neutral dimercury molecule exist?
Is a neutral $\ce{Hg2}$ molecule possible, as a gas under extremely low (partial) pressures? What is the enthalpy of formation or similar?
It is my impression that mercury vapor is usually monatomic. I think it is well known that the dimeric ion $\ce{Hg2^{2+}}$ exists under some conditions.
Somewhat related threads: Is it possible to have a diatomic molecule of sodium in gaseous state? and Which elements can be diatomic?
A:
There has been recent research on the mercury dimer in its ground state. Bond energies, transition energies, the band spectrum and other spectroscopic parameters have been calculated. Here are the abstracts of two papers on the mercury dimer:
The potential energy curve of the ground electronic state of the $\ce{Hg}$ dimer has been calculated using the CCSD(T) procedure and relativistic effective core potentials. The calculated binding energy ($\pu{0.047 eV}$) and equilibrium separation ($\pu{3.72 Å}$) are in excellent agreement with experiment. A variety of properties, including the second virial coefficient, rotational and vibrational spectroscopic constants, and vibrational energy levels, have been calculated using this interatomic potential and agreement with experiment is good overall.(Source)
The mercury dimer is among the most weakly bound metal dimers and has been extensively studied. The ground state $\ce{O^+_g}$ dissociation energy has been considered to lie between $\pu{0.055 eV}$ ($\pu{440 cm−1}$) and $\pu{0.091 eV}$ ($\pu{730 cm−1}$). We report here a spectroscopic study of $\ce{Hg2}$ in a supersonic jet. The first optical transition, $\ce{1_u←O^+_g}$, was characterized by its fluorescence excitation spectrum and the binding energy of the ground state has been measured precisely through the threshold of collision induced dissociation of $\ce{Hg2 1_u}$ to $\ce{Hg(^1S_0)+Hg(^3P_0)}$.(Source)
A:
Yes, $\ce{Hg2}$ has a bond length of $\pu{0.334nm}$ and a dissociation energy of $\pu{7.5 kJ/mol}$.
See Mercury Handbook: Chemistry, Applications and Environmental Impact at page 10.
and
Mass spectrometric equilibrium study of the molecule $\ce{Hg2}$ J. Chem. Phys. 1982, 77(3), 1425-1427 (https://doi.org/10.1063/1.443968).
A:
The nature of mercury associates has been studied. Quick googling gave a paper with the words "$\ce{Hg_x}$ cluster transition from VdW to metallic behavior between 20 and 70 atoms", suggesting that the $\ce{Hg_2}$ associate, if it exists, has only VdW bonding (i.e. a physical bond rather than a chemical one).
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Problem with the exemple of veins-lte omnet++ in a runtime
I am using Ubuntu 16.04, sumo 0.21, omnet 4.6 and veins-lte 1.2.
When I run the heterogenous exemple in veins, I have the following error:
<!> Error in module (LteMacUe) scenario.node[1].nic.mac (id=104) at event #2381, t=0.609: H-ARQ TX: fb is not for the pdu in this unit, maybe the addressed one was dropped.
how to fix it?
thank.
A:
Veins LTE is a rather old project.
From what I understand, Veins LTE 1.3 has a bug that keeps simulations from executing in Tkenv or Qtenv: https://github.com/floxyz/veins-lte/issues/2
I would recommend trying out the much newer Veins with SimuLTE instead. You can download Instant Veins with SimuLTE, a virtual machine that already has Veins (currently: Veins 5 alpha 1) and SimuLTE installed and ready to run from the Veins website.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
2017 Moderator Election Q&A - Question Collection
Aviation is scheduled for an election next week, September 18th. In connection with that, we will be holding a Q&A with the candidates. This will be an opportunity for members of the community to pose questions to the candidates on the topic of moderation. Participation is completely voluntary.
The purpose of this thread was to collect questions for the questionnaire. The questionnaire is now live, and you may find it here.
Here's how it'll work:
Until the nomination phase, (so, until Monday, September 18th at 20:00:00Z UTC, or 4:00 pm EDT on the same day, give or take time to arrive for closure), this question will be open to collect potential questions from the users of the site. Post answers to this question containing any questions you would like to ask the candidates. Please only post one question per answer.
We, the Community Team, will be providing a small selection of generic questions. The first two will be guaranteed to be included, the latter ones are if the community doesn't supply enough questions. This will be done in a single post, unlike the prior instruction.
If your question contains a link, please use the syntax of [text](link), as that will make it easier for transcribing for the finished questionnaire.
This is a perfect opportunity to voice questions that are specific to your community and issues that you are running into at currently.
At the start of the nomination phase, the Community Team will select up to 8 of the top voted questions submitted by the community provided in this thread, to use in addition to the aforementioned 2 guaranteed questions.
Once questions have been selected, a new question will be opened to host the actual questionnaire for the candidates, typically containing 10 questions in total.
This is not the only option that users have for gathering information on candidates. As a community, you are still free to, for example, hold a live chat session with your candidates to ask further questions, or perhaps clarifications from what is provided in the Q&A.
If you have any questions or feedback about this process, feel free to post as a comment here.
A:
How often, on average, would you be able to attend to moderator duties?
A:
There are "mod-only" flags, and all the others. Will you let the community have their say on the latter, or will you handle all the flags you possibly can?
For example, there are "Very Low Quality" or "Not an Answer" flags that would go in the "Low quality" review queue. Will you let the queue review process complete before intervening, or will you handle the flag before that?
I am specifically speaking of those situations where a mod is not required, since the community review process can handle them.
A:
The site can be quite harsh on new users who don't ask "perfect" first questions. (Can we stop downvoting posts from new users?)
How do you think you can help new users prosper at Aviation.SE?
Conversely, some users can't be helped and will continue to post low quality questions, vaguely to do with aviation. What will you do in this case?
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Should answers link to their sources?
A comment discussion recently started on this answer about whether answers should link to their sources. In case the comments get deleted, here is a summary:
Bob: Links to citations please.
Joe: Nobody else does, so I don't need to. Plus, any links to the rules will quickly be outdated.
We cite rules often on Board and Card Games. Rarely do we mention the source1, and even more rarely do we link to that source. Some would say that this is acceptable because we all know that it came from the comprehensive rules.
Remember, a large portion of the people who see our content are not registered users. They do not know how we do things here. Please make sure that your answer considers Magic: the Gathering answers and the unusually common references to the comprehensive rules.
1. Speaking from experience with mostly Magic: the Gathering questions, but there is something to be said here for all games that have rules.
A:
Yes...
...in general, when possible, when it adds something to the answer, when something significant is taken from a source.
Keep in mind, the really important thing is citing the source, making it clear where the information came from. Linking to it is an additional helpful thing to do in many cases. For example:
a link to a forum thread containing a response from the game designer is pretty much required. It's unique, and might be hard to find otherwise.
a link to the Arkham Horror FAQ is good. It's not obvious that it exists if all you have is the game. Similarly, if you're citing the errata from the Dunwich Horror rules, it's good link to them. Readers might only have the base game.
a link to the Arkham Horror rules is nice, but optional. I'd generally do it, because it's easy, and it might save someone the time of getting the rules out of the box to see. But if it's missing, it's not a huge deal; the OP does have a copy of the rules with the game, and they have Google to find them online if they want.
If you only used the rules incidentally, and most of the answer is about something else, don't worry about it.
If you can't find the rules online at all, don't worry about it.
As for Magic, I don't think it's really worth it to make any kind of policy. It's similar to the "nice, but optional" case in the list above. True, the rules are only available online, but that generally means that people already know how to find them, especially the kind of people actually interested in reading the comprehensive rules.
It's incredibly rare that someone asks for a source; most people are happy to either read what's quoted in the answer or use Google to find more. If someone really can't figure it out, they can ask a question. (That has happened, once.) So the potential upside is pretty low, not enough to warrant spending our time editing links into answers or commenting asking everyone to include them.
It's also a case of diminishing returns. If even 1/10 of the answers to Magic questions have a link to the rules, the odds are pretty low that anyone hasn't seen one yet. The more links there are, the less important it is that we bother adding them to more answers.
So, sure, link if you like. It certainly doesn't hurt. But I definitely don't want to see anyone getting badgered to include links.
A:
I think that, in general, people should link to sources when quoting game rules if possible. However, I also think that Magic: the Gathering is a special case, though it is not necessarily unique.
I think that answers that quote the Comprehensive Rules should not be required to link to those rules, as long as they say that they are quoting them and give the exact rule number that they are quoting. It's interesting that you mention MLA citations, because Comprehensive Rules references can be very similar. The name "Comprehensive Rules" is like an author/title combination in an MLA citation, and the specific section reference (like "302.6") is like the page number reference in an MLA inline citation. Together, those are sufficient for anyone to find the quoted text themselves and verify it. And that, I think, is what's important: that anyone reading the answer can easily verify the quotation.
While I say that answer authors should not be required to give a link to the Comprehensive Rules, I also think that such a link is never detrimental to an answer, and that it would always be appropriate to edit such a link into the answer.
I think that how much that applies to any other game depends on how discoverable that game's rules are online. The MTG comprehensive rules are easy to find with the obvious Google search. If a game's rules are difficult to find, then it would be more valuable to provide a link, so that verifying a quotation does not require an inordinate amount of work. And if the game rules are not available online at all, then obviously a reference to a paper manual is all an answer can be reasonably expected to provide.
A:
Links are not necessary, but answers should generally mention the name of their sources, so that less knowledgeable users can search for it. As some rules are only available in paper form, simply mentioning the name of the document it came from should be acceptable. I'm not saying that we need MLA or APA style citations on every post, but there should be some indication of where the quote came from.
Magic is no exception. If a comprehensive rule is quoted, then the author should state that it came from the Comprehensive Rules. For example:
Rule 46.1b from the Comprehensive Rules tells us that:
Foo fah fie [...]
Finally, if a user specifically asks for a link for a citation, that probably means that they can't find it. We should always be willing to help these users.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to validate false on only numbers via regex?
I need to validate a domain registration as it cannot be made of numbers only and the validation need to fit in the way my algorithm works, that is to return full domain name(without the .com, .net, etc extensions) if it is correct.
I've tryed a few expressions with no success:
^[^0-9]+$ # that one wont let he type a number
^\w[^0-9]+\w$ # that wont work too
Can someone help me?
A:
So, just to be clear, you want it to return False on strings that contain ONLY numbers? Try this:
EDIT 4 Based on literal interpretation of the question, this is all you need:
\D
That will match True for any string containing a character that's not a number.
However, based on the fact that you want to match URLs, you probably want something more like this:
^\w*[a-zA-Z_]\w*$
That will match any string containing alphanumerics and _s, as long as it contains at least one letter or _.
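For example, in PHP (an assumption on my part, since the question doesn't name a language) a check against that second pattern could look like:
$pattern = '/^\w*[a-zA-Z_]\w*$/';
var_dump((bool) preg_match($pattern, 'example123')); // bool(true)  - contains a letter
var_dump((bool) preg_match($pattern, '123456'));     // bool(false) - digits only, rejected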
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Unexpected JSON output on HTTP request
I am currently trying to get a simple HTTP JSON response from a website.
When I look at the page in Google Chrome, I see:
{"columns":[{"name":"xx","dataType":"varchar","size":255,"nullable":true},{"name":"xx","dataType":"varchar","size":255,"nullable":true},{"name":"xx","dataType":"decimal","size":17,"nullable":true},{"name":"xx","dataType":"varchar","size":255,"nullable":true},{"name":"xx","dataType":"varchar","size":4,"nullable":true},{"name":"xx","dataType":"varchar","size":2,"nullable":true},{"name":"xx","dataType":"varchar","size":20,"nullable":true}],"rows":[["xxxxx","xxxx/xxxx","xxxx","Yacouba","5xx","xx","xxxxx"]]}
However when I use the following php code:
<?php
$json_url = "xxxx"; // url is something else but privacy reasons etc
// Initializing curl
$ch = curl_init();
// Configuring curl options
$options = array(
CURLOPT_URL => $json_url,
CURLOPT_POST => FALSE
);
// Setting curl options
curl_setopt_array($ch, $options);
// Getting results
$result = curl_exec($ch); // Getting jSON result string
curl_close($ch);
json_decode($result);
var_dump($result);
and then the output is as follows:
‹ì½`I–%&/mÊ{JõJ×àt¡€`$Ø@ìÁˆÍæ’ìiG#)«*ÊeVe]f@Ìí¼÷Þ{ï½÷Þ{ï½÷º;N'÷ßÿ?\fdlöÎJÚÉž!€ªÈ?~|?"~ñGÓª\/–ÍG¾÷‹?Zf‹ü£Gͪ¦)òú÷/³I^~4úh–µÙ›ë¾ºÌêé<«éæø}°wÿþè£åº¤¦%ýÙÖëü—Œz€–Y¶ø8—U=ËëbyñûÏòIÑæëú÷¯Vù²i³l9Ïòi±ÈJx÷Á&ü´E°iëŒÞÕyóû¯¦¿ÿ²n6CÛ¿=¬²½ ØÞmFXçeÖ9¡v°´ï>ª«+pÈ÷>ÚýôÓ{Ôúm±l&y}‘/ïþ>Ù´ZO2úpoow´»C¿¹ÏîïìJ?žŸÐ?;;;û÷÷?Ýûèûßÿ%ÿOÿÿzbool(true)
What could the problem be?
A:
I fixed it! :D With:
CURLOPT_ENCODING => ''
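In context, a sketch of the corrected options array: an empty CURLOPT_ENCODING makes curl advertise every encoding it supports and transparently decompress the gzipped body. CURLOPT_RETURNTRANSFER is also added here, because without it curl_exec() echoes the body and returns true, which is what the garbage followed by bool(true) in the dump suggests.
$options = array(
    CURLOPT_URL            => $json_url,
    CURLOPT_POST           => FALSE,
    CURLOPT_RETURNTRANSFER => TRUE,  // return the body as a string
    CURLOPT_ENCODING       => ''     // accept and decode gzip/deflate automatically
);
curl_setopt_array($ch, $options);
$result = curl_exec($ch);
curl_close($ch);
$data = json_decode($result, true); // now decodes to an array as expected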
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why am I in DotNet dependency hell and what can I do to get out?
I have a DotNet 4.6.1 application with MVC and WebAPI. The MVC side has GlobalConfiguration.Configuration and the WebAPI has a dependency on Assembly1.
Apparently GlobalConfiguration.Configuration in System.Web.Http has a dependency on "Newtonsoft.Json, Version=6.0.0.0", and Assembly1 has a dependency on "Newtonsoft.Json", Version=7.0.1.
I placed those quotations precisely because these are precise dependencies:
When I try to run a ping against my WebApi I get:
"Could not load file or assembly 'Newtonsoft.Json, Version=6.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed' or one of its dependencies"
My web.config was added to by NuGet and created by Microsoft so until now I had not touched it and it was built for me. The structure of the web.config is:
<!--Personal Comment: See how configuration has no namespace-->
<configuration>
<runtime>
<!--Personal Comment: See how assemblyBinding DOES have namespace-->
<assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
<!--Left out all of the other dependentAssemblies for brevity-->
<dependentAssembly>
<!--Personal Comment: Take note the upper/lowercase of attributes-->
<assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral">
<bindingRedirect oldVersion="0.0.0.0-7.0.0.0" newVersion="7.0.0.0" />
</assemblyIdentity>
</dependentAssembly>
</assemblyBinding>
</runtime>
</configuration>
The odd thing is, if I go to the .csproj in VSCode and change the reference to Include="Newtonsoft.Json" and remove Version=7.0.0.0, etc... then nothing gets fixed. But if I change Version=7.0.0.0 to Version=6.0.0.0 and direct the hintpath to 7.0.1 then my solution works!
This seems like an awful way to live and to program, and I don't like it unless I have to deal with it. According to every article I read online, every question, every answer, they all say "use bindingRedirect", and they use it in the way I tried. My assumption is that bindingRedirect is not working for my code and I need to know why, or could I somehow reference Newtonsoft.Json twice in my references and tell the compiler to use the 7.0.0.0 code if it's not a third-party project?
A:
I experienced a similar issue a while ago. Try this:
Remove the Json dependency in any projects that use it.
Close the solution.
Delete the SUO file. It has the same name of your solution with a .SUO extension.
Relaunch the solution.
Clean the solution.
Add the desired Json dependency to the projects that need it.
Rebuild the solution.
Run to verify.
The reason is that Visual Studio appears to "cache" the references in the SUO file.
If it still misbehaves, set the compiler to verbose and rebuild. It should tell you the exact source of the mismatch in the output.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
.NET CookieException "Cookie format error" when thread culture is not English - only in certain environments
I am trying to do some cookie management in an ASP.NET app that calls other web services. I am encountering an error that I can reproduce only in certain environments. My questions are:
Is the difference between production and development enough to cause this issue?
How could I figure out what is different between production/development?
What can I do to work-around this issue?
Following are the details I use to reproduce the problem. The error that I see is:
Unhandled Exception: System.Net.CookieException:
An error occurred when parsing the Cookie header for Uri 'http://example.com/'.
---> System.Net.CookieException: Cookie format error.
at System.Net.CookieContainer.CookieCutter(Uri uri, String headerName,
String setCookieHeader, Boolean isThrow)
--- End of inner exception stack trace ---
at System.Net.CookieContainer.CookieCutter(Uri uri, String headerName,
String setCookieHeader, Boolean isThrow)
at System.Net.CookieContainer.SetCookies(Uri uri, String cookieHeader)
at Program.Main() in c:\Sample\Program.cs:line 21
I have created a console app that reproduces the problem (relevant parts below). Full code at https://compilr.com/adutton/cookiecutterexample/main.cs
string[] cultures = new[] { "en-US", "es-MX" };
const string cookieHeader = ".ASPXAUTH=SECURITYINFO; domain=.example.com; "
+ "expires=Mon, 06-Mar-2023 18:36:33 GMT; path=/; HttpOnly";
foreach (string culture in cultures)
{
Console.WriteLine("CookieCutting with culture: " + culture);
Thread.CurrentThread.CurrentCulture = new CultureInfo(culture);
Thread.CurrentThread.CurrentUICulture = new CultureInfo(culture);
CookieContainer ctr = new CookieContainer();
// The following line throws an exception
ctr.SetCookies(new Uri("http://example.com/"), cookieHeader);
}
This code works on my development machine (Windows 7, x64, .NET 4.5.50709) but not in production (Windows Server 2008 R2 Enterprise, x64, .NET 4.0.30319) where the code throws an exception for the es-MX culture.
If I remove the date from the cookie header, the exception goes away, which leads me to believe that it is a localization issue with the cookie parser. Perhaps this was fixed between .NET 4.0 and 4.5?
A:
As @nunzabar pointed out, the problem is the way the current culture formats the day-of-week with a comma. In this case, installing the .NET 4.5 framework made the problem disappear. I did not decompile the code to see the differences between .NET 4.0 and 4.5, but it was fixed when we installed the new version of the framework.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to use a view as a search result page with facets?
I've created a pretty nice view with faceted filters following this guy's tutorial. It's basically a cheap way of doing getting the faceted feature without having to set up Solr.
The problem I have is that I would actually like to pre-query the index with some search terms from the homepage with a search form...basically:
search for a term and get redirected to the view with the facets.
use the facets to further filter my nodes.
How would something like this be possible?
Currently I have configured (in the Search API configurations) one server that uses database search as service class, and one Index hooked to that server, which indexes different fields (Content type and three different taxonomy terms) which are exposed as search facets in the view.
Number 2 is basically covered: I have a view page where all the nodes are listed and can be filtered through the facets... what I want is for the users to get to the front page, enter their search query (could have autocomplete or not, I'll figure later) hit "submit" and find all their search results in a page similar to the one I have with the facets... so they can further narrow their search. Kind of like when you look for something in linkedin.
A:
For #1 You could try using Rules to catch the term and do a redirect. You could also use hook_form_FORM_ID_alter to catch it
Edit:
You can use "Drupal is initializing" for the event, and add a condition that examines the args coming in. For instance, search/some%20search%20query would be:
arg(0) = search
arg(1) = some search query
So here are the steps:
Event -> Drupal is initializing
Condition -> Execute Custom PHP
$arg0 = arg(0);
$arg1 = arg(1);
if ($arg0 == 'search' && $arg1 == 'some search query') {
drupal_goto('search/my-predetermined-facet');
}
That should do it.
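If you prefer the hook_form_FORM_ID_alter route mentioned above, a rough Drupal 7 sketch could look like this (the module name, the view path and the exposed-filter key are assumptions to adapt to your site):
<?php
/**
 * Implements hook_form_FORM_ID_alter() for the core search block form.
 */
function mymodule_form_search_block_form_alter(&$form, &$form_state, $form_id) {
  // Append a submit handler so we can redirect to the faceted view page.
  $form['#submit'][] = 'mymodule_search_redirect_submit';
}

/**
 * Submit handler: send the keywords to the faceted view.
 */
function mymodule_search_redirect_submit($form, &$form_state) {
  $keywords = $form_state['values']['search_block_form'];
  drupal_goto('my-faceted-view', array('query' => array('search_api_views_fulltext' => $keywords)));
}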
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Background images failing to be linked in middleman. Why?
I'm using Middleman to develop a static website and for some reason the background image refuses to load. I've done this before many times and I have no idea why it's not working.
Here I set the background image:
.background{
width: 100%;
}
#topbackground{
background-image: image-url("mountains.jpg");
height: 1000px;
border: 1px solid;
}
Here is the fairly simple html:
<div class="background" id="topbackground">
</div>
But no background image loads, as you can see here:
I have no conflicting stylesheets. The only other stylesheet this page is linked to is normalize.css and I've already tried neutralizing that, but it wasn't the issue.
The image is in the right directory, I have refreshed the server; is there any reason why the image would fail to load?
update: I've tried linking the image via an image tag using <%= image_tag "mountains.jpg" %> and it works just fine. It is just image-url in the scss file that is failing.
A:
Found the issue. My css file didn't have the .scss extension required for the use of the image-url helper.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
SQL Server Integrated Security from an Azure Web Site
When running an ASP.Net website on IIS I can specify the Active Directory (AD) username that the website runs in the context of in the App Pool settings. I can then create a connection string with Integrated Security = true to access my database. It's then possible to secure DB resources based on that AD user.
Is this possible in Windows Azure when connecting a Web Site to a VM hosting an SQL database?
Firstly, it does not seem possible to specify the virtual network of the Web Site, so I am not sure how to specify the connection string. I'm hoping I don't need to expose the SQL Server's port (1433) to the outside world, so that only the website can access it.
Secondly, I can't see how to specify the user context of the Website so that this can be passed to the SQL Server. I am aware that Azure has an Active Directory but I don't see any options in the Azure Portal to run a Web Site as a specific user.
A:
To answer the first part of your question, you can connect your Azure Web App to a virtual network only in the preview portal. If you go to the very bottom of the main properties "blade" for the web app, there's a "Networking" section that will allow you to select the vnet.
The second part - I don't believe there's a way to do that, due to the lack of control over the application pool settings for Azure Web Apps.
Each app in Web Apps runs as a random unique low-privileged worker process identity called the "application pool identity", described further here: http://www.iis.net/learn/manage/configuring-security/application-pool-identities.
(from: http://azure.microsoft.com/en-us/documentation/articles/web-sites-available-operating-system-functionality/)
SQL authentication will work in this scenario.
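For reference, that means a connection string along these lines in web.config (the server name, database and credentials below are placeholders, not values from the question):
<connectionStrings>
  <!-- SQL authentication against the SQL Server VM reached over the vnet -->
  <add name="DefaultConnection"
       connectionString="Server=tcp:my-sql-vm,1433;Database=MyAppDb;User ID=webapp_user;Password=...;Encrypt=True;TrustServerCertificate=False;"
       providerName="System.Data.SqlClient" />
</connectionStrings>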
And if you have the website hosted outside of Azure, it's definitely possible - we have a TFS build server that's hosted locally using Windows authentication to publish database project builds to a SQL Server VM in Azure over a vnet.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How To Get Article's Users and Comments With Eloquent
There are three database tables: users, articles, and a joining table article_users_comments, which holds the comment, the id of the user who commented, and the id of the commented article.
I can achieve this with a pure SQL join, but I want to do it with Eloquent. I thought it would be quite easy, but I am kind of confused right now.
I have been trying different things, but it still doesn't work.
// User
class User extends Authenticatable implements MustVerifyEmail,CanResetPassword{
public function comments()
{
return $this->hasMany('App\ArticleComments');
}
}
// Article
class Article extends Model{
public function getArticles(){
$articles = Article::paginate(3);
return $articles;
}
public function getSingleArticle($title){
$article = Article::where('title','=',$title)->get();
return $article;
}
public function articleComments()
{
return $this->hasMany('App\ArticleComments');
}
}
// ArticleComments
class ArticleComments extends Model{
protected $table = 'article_users_comments';
public $timestamps = false;
public function article()
{
return $this->belongsTo('App\Article');
}
public function user()
{
$this->belongsTo('App\User');
}
}
// ArticleController (showing only the show method), which passes the data to the view,
// instantiating the Article model
class ArticleController extends Controller{
/**
* Display the specified resource.
*
* @param int $id
* @return \Illuminate\Http\Response
*/
public function show($title)
{
$removeDashesFromUrl = str_replace('-',' ',$title);
$am = new Article();
$data = $am->getSingleArticle($removeDashesFromUrl);
return view('article',['article'=>$data]);
}
}
I want to get the comments and the users (who have commented on the article) for a certain article.
A:
You should set the foreign key in your articleComments and article relations:
Eloquent determines the default foreign key name by examining the name of the relationship method and suffixing the method name with _id. However, you may pass a custom key name as the second argument to the belongsTo method:
Article Model
public function articleComments()
{
return $this->hasMany('App\ArticleComments','commented_article_id');
}
ArticleComments Model
public function article()
{
return $this->belongsTo('App\Article','commented_article_id');
}
You can get the comments from a article using the relation:
$article = Article::find($id);
$article->articleComments; // This will return all comments for the given article
You could use a foreach loop and access each attribute from each comment:
foreach($article->articleComments as $comment)
{
echo $comment->id;
echo $comment->user->id;
echo $comment->user->username;
.
.
.
}
You can access the user and any of their attributes just by calling the relation on the comment, like I did above.
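As a side note (this is a sketch, not part of the original answer), you can also eager-load the comments together with their users so the view does not run one extra query per comment:
// Eager-load comments and each comment's user; $removeDashesFromUrl is the
// title prepared in the controller's show() method
$article = Article::with('articleComments.user')
    ->where('title', $removeDashesFromUrl)
    ->first();

foreach ($article->articleComments as $comment) {
    echo $comment->user->username;
}
This assumes the user() relation in ArticleComments actually returns the belongsTo() call; as posted it is missing the return statement, so accessing $comment->user will not work until that is fixed.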
For more info: click here.
Note: I strongly recommend changing your model name to Comment; we don't use model names in the plural, always in the singular.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Netflix error code F7355-1204 and F7363-1260
Netflix quit and displayed error codes F7355-1204 and F7363-1260. It will go to the point where the movie is downloading but stops and displays the error codes.
My OS is Ubuntu 12.04 on a 32-bit machine. Netflix was working fine with the Pipelight approach. I reinstalled Firefox 52.0.2 and fixed libavcodec, but have yet to reinstall Pipelight.
What are these error codes, and how do I fix them?
A:
First step: you will have to install the codecs.
Copy and paste in the terminal -
sudo apt install ubuntu-restricted-extras
ubuntu-restricted-extras is a meta-package that installs:
Support for MP3 and unencrypted DVD playback
Microsoft TrueType core fonts
Flash plugin
codecs for common audio and video files
Second step: if an extension for your browser is causing the problem, you need to uninstall that extension.
A:
Clear your Firefox (version 52 or higher only) caches by going to Preferences, Advanced, Network tab, and click the Clear Now buttons, shown below...
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Only ask for filename if one has not been input
I am using C# and a winform and am saving data to a .xlsx on a button click event. I have a unique situation that I am not sure how to code for....
If the form is still displayed and the user clicks the button, I want it to prompt for a file name and save location. BUT if the form has not been closed and the user clicks the button a second time, I want the .xlsx to be saved in the same location with the same filename, overwriting with no prompt.
This is the syntax I use to prompt for the save name and location, but how do I check whether a filename/save location has already been entered, and skip the prompt if it has?
private void btnOne_Click(object sender, EventArgs e)
{
SaveFileDialog save = new SaveFileDialog();
save.InitialDirectory = @"C:\";
save.RestoreDirectory = true;
save.Title = "Select save location file name";
save.DefaultExt = "xlsx";
if (save.ShowDialog() == DialogResult.OK)
{
try
{
var file = new FileInfo(save.FileName);
using (var package = new ExcelPackage(file))
{
package.Save();
}
}
catch { MessageBox.Show("An error has occurred"); }
}
}
A:
So, whether the data has a set filename is a part of the state of the class. Inside the class where you have btnOne_Click, just define a string with the filename, defaulted to null:
string filepath = null;
Then, in your btnOne_Click, you want to check for the filepath. If it's not there, open the saveAs dialog. After that, if filepath is set, just save. It will be restructured like this:
private void btnOne_Click(object sender, EventArgs e)
{
if (filepath == null)
{
SaveFileDialog save = new SaveFileDialog();
save.InitialDirectory = @"C:\";
save.RestoreDirectory = true;
save.Title = "Select save location file name";
save.DefaultExt = "xlsx";
if (save.ShowDialog() == DialogResult.OK) {
filepath = save.FileName;
}
}
if (filepath != null)
{
try
{
var file = new FileInfo(filepath);
using (var package = new ExcelPackage(file))
{
package.Save();
}
}
catch { MessageBox.Show("An error has occurred"); }
}
}
This logical structure gives you standard behavior for when a user presses a save button. If they cancel the saveAs dialog, then the save is aborted and the filename state is not changed.
A:
Declare this globally:
public string Filename;
Then change your subroutine like this:
private void btnOne_Click(object sender, EventArgs e)
{
if (string.IsNullOrWhiteSpace(Filename))
{
SaveFileDialog save = new SaveFileDialog();
save.InitialDirectory = @"C:\";
save.RestoreDirectory = true;
save.Title = "Select save location file name";
save.DefaultExt = "xlsx";
if (save.ShowDialog() == DialogResult.OK)
{
try
{
Filename = save.FileName;
var file = new FileInfo(save.FileName);
using (var package = new ExcelPackage(file))
{
package.Save();
}
}
catch { MessageBox.Show("An error has occurred"); }
}
}
else
{
var file = new FileInfo(Filename);
using (var package = new ExcelPackage(file))
{
package.Save();
}
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Netlogo - item expected input to be a string but got zero instead
This is a follow-up question related to a previous post (link). I have data related to 16 laptop consumers' review ratings, which are either satisfied (16 people) or dissatisfied (6 people). They are defined as turtles, and they are distinguishable by asking whether the boolean variable satisfied? or dissatisfied? is true.
The dataset is read as follows:
extensions [csv matrix array nw]
globals
[
rowcounter
csv
ii
Sc-headings Bat-headings Pr-headings income-headings average-headings;
Sc-set
Bat-set
Pr-set
prodcount ;num of producer agents
]
turtles-own [
turtle-Sc-list
turtle-Bat-list
turtle-Pr-list
turtle-income-list
turtle-average-list
review-set
satisfied?
dissatisfied?
LapUtl-set
ScPWU
BatPWU
PrPWU
]
to setup
clear-all
file-close-all
set rowcounter 1
proddata
readdataset
reset-ticks
end
breed [ producers producer ]
to go
Reviewrating
end
to intlz
set Sc-set []
set Bat-set []
set Pr-set []
end
Reading the dataset:
to readdataset
file-close-all ; close all open files
file-open "turtle_details.csv"
let headings csv:from-row file-read-line ;header is read
; Splitting headings of the csv file into 5 categories representing screen
; size data, battery charge data, Price data, income data, max age of an owner
set Sc-headings sublist headings 2 7
set Bat-headings sublist headings 7 12
set Pr-headings sublist headings 12 17
set income-headings sublist headings 17 18
set average-headings sublist headings 18 length headings
while [ not file-at-end? ] [
let data csv:from-row file-read-line
create-turtles 1 [
set shape "person"
set size 2.5
ifelse rowcounter < 11
[
set color 125
set satisfied? true
set dissatisfied? false ;
]
;else
[
set color 65
set satisfied? false
set dissatisfied? true ;
]
setxy random-xcor random-ycor
; hide-turtle
set turtle-Sc-list sublist data 2 7
set turtle-Bat-list sublist data 7 12
set turtle-Pr-list sublist data 12 17
set turtle-income-list sublist data 17 18
set turtle-averageage-list sublist data 18 length data
]
set rowcounter rowcounter + 1
]
file-close-all
end
There are 3 producers who have some attribute levels for the screen, battery, price.
Sc Bat Pr
24 10 18000
18 6 22000
30 8 26000
to proddata
file-close-all ; close all open files
if not file-exists? "Prodinitattr.csv" [
user-message "No file 'Prodinitattr.csv' exists!"
stop
]
file-open "Prodinitattr.csv" ; open the file with the producers' initial attributes
let headings csv:from-row file-read-line
while [ not file-at-end? ] [
let data csv:from-row file-read-line
create-producers 1 [
hide-turtle
set producer? true ; this agent is a producer
set satisfied? false ; this agent is not a referrer ; REFERRERS
set dissatisfied? false ; this agent is not a pbuyer
set prodcount prodcount + 1
; set shape "house"
setxy random-xcor random-ycor
]
set Sc-set lput item 0 data Sc-set
set Bat-set lput item 1 data Bat-set
set Pr-set lput item 2 data Pr-set
]
file-close-all
end
The thing that should be extracted from the data set is the consumers' evaluations (reviews). Each consumer has a review-set which is at first an empty list, []. Then it will keep the values which correspond to the three review values, one for each of the three producers.
to reviewrating
ask turtles [
set review-set []
]
ask turtles [
set ii 0
while [ii < 3 ][
set ScPWU turtle-Sc-rating item ii Sc-set
set BatPWU turtle-Bat-rating item ii Bat-set
set PrPWU turtle-Pr-rating item ii Pr-set
set LapUtl-set lput (ScPWU + BatPWU + PrPWU) LapUtl-set
set ii ii + 1
] ; while
];ask
end
to-report turtle-Sc-rating [Sc]
let pos position Sc Sc-headings
if is-number? position Sc Sc-headings
[
let turt-Sc-rate-value item pos turtle-Sc-list
report turt-Sc-rate-value
]
end
to-report turtle-Bat-rating [Bat]
let pos position Bat Bat-headings
if is-number? position Bat Bat-headings
[
let turt-Bat-rate-value item pos turtle-Bat-list
report turt-Bat-rate-value
]
;***************
end
to-report turtle-Pr-rating [Pr]
let pos position Pr Pr-headings
if is-number? position Pr Pr-headings
[
let turt-Pr-rate-value item pos turtle-Pr-list
report turt-Pr-rate-value
]
end
The problem is that I cannot see the consumers' LapUtl vector because of the error. I had reported another error previously here, but I changed where the "go" procedure was written, and now the error is flagging this line:
let turt-Sc-rate-value **item** pos turtle-Sc-list
How can I resolve this?
Thank you,
A:
I suspect you are not correctly reporting the error. I suspect the error is ERROR: ITEM expected this input to be a string or list, but got a number instead. Here is an example of a way to produce this error: item 0 0. If I am right, then you are running the code let turt-Sc-rate-value item pos turtle-Sc-list while turtle-Sc-list has a value of 0. In order to confirm this, replace this code with
ifelse (is-list? turtle-Sc-list)
[let turt-Sc-rate-value item pos turtle-Sc-list]
[error (word "turtle-Sc-list is not a list.")]
Now run your code. If it raises the error "turtle-Sc-list is not a list.", then you are ready to search for how you failed to initialize this variable correctly.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can the Proxy pattern accommodate template functions?
I have a class with a template function:
Foo.h:
class Foo {
public:
int some_function();
bool some_other_function(int a, const Bar& b) const;
template<typename T>
int some_template_function(const T& arg);
};
template<typename T>
int Foo::some_template_function(const T& arg){
/*...generic implementation...*/
}
Now I've come to a point where I want to be able to access Foo via a proxy class, as in the Proxy design pattern.
Intuitively, I'd like to refactor as follows (the following code is incorrect, but it expresses my "idealized" API):
FooInterface.h:
class FooInterface {
public:
virtual int some_function()=0;
virtual bool some_other_function(int a, const Bar& b) const=0;
template<typename T>
virtual int some_template_function(const T& arg)=0;
};
FooImpl.h:
#include "FooInterface.h"
/** Implementation of the original Foo class **/
class FooImpl : public FooInterface {
public:
int some_function();
bool some_other_function(int a, const Bar& b) const;
template<typename T>
int some_template_function(const T& arg);
};
template<typename T>
int FooImpl::some_template_function(const T& arg){
/*...generic implementation...*/
}
FooProxy.h:
#include "FooInterface.h"
class FooProxy : public FooInterface{
protected:
FooInterface* m_ptrImpl; // initialized somewhere with a FooImpl*; unimportant in the context of this question
public:
int some_function()
{ return m_ptrImpl->some_function(); }
bool some_other_function(int a, const Bar& b) const
{ return m_ptrImpl->some_other_function(a,b); }
template<typename T>
int some_template_function(const T& arg)
{ return m_ptrImpl->some_template_function(arg); }
};
But this code fails miserably.
First and foremost, FooInterface can't compile, since class template member functions can't be virtual.
What's more, even if I played around with the definition of some_template_function, even if I go as far as relocating it into a concrete class or some other jury-rigging, it's still going to wreak havoc with the whole point of having a proxy class in the first place, because template code needs to be defined in the header and included. That would force FooProxy.h to include FooImpl.h, and FooImpl.h needs all the implementation details and file-includes necessary to implement some_template_function. So if I'm using the Proxy pattern in order to obscure implementation details, distance myself from a concrete implementation, and avoid unnecessary file-includes, then I'm out of luck.
Is there a way to apply the Proxy pattern, or some variation thereof, to a class with template functions? Or is is this impossible in C++?
Context: At the moment, I'm trying to provide proxy access to a group of classes which have a preexisting, built-in logging mechanism. The only API I have for this log uses variadic templates, so it's impossible to predict the parameter combinations it'll be used with. I'd like the separation between the implementation and the client using the proxy to be as clean as possible, and I need to minimize dependencies from the client to the implementation, but I do need them to write to the same log.
However, I am interested in this issue beyond my immediate problem. It's puzzling to me that templates poke such a hole into a major design pattern, and that I haven't found this issue addressed anywhere.
A:
A wrapper/proxy for a class which has a templated interface will always require that the definition of the template class be visible in a header file to the code that calls the wrapper. This is because the code generated for the templated interface depends on the types of the arguments it is called with.
If you're stuck with the existing templated implementation FooImpl, then as @mars writes in the comments, your only option is:
template <class Implementation>
class FooProxy
{
Implementation * m_ptrImpl;
//...
};
If you can change the existing implementation, the ideal solution would be to refactor the templated methods and split them into two layers; one layer that depends on the argument types, and a second layer that does not. The code in the existing methods that depends on the argument types should be identical in all implementations, so this layer can be moved into a method of the abstract interface class. The remaining code that does not depend on the argument types can be left in a non-templated method of the implementation class, meaning the implementation details can be hidden in the .cpp file.
Here's an example, based on the scenario of a log that supports writing arbitrary types:
LogInterface.h
class LogInterface {
public:
template<typename T>
void write(const T& arg)
{
// converts from 'T' to array of characters.
// calls non-template 'write' as many times as necessary.
}
virtual void write(const char* p, std::size_t n)=0;
};
LogImpl.h
#include "LogInterface.h"
/** Implementation of the original Log class **/
class LogImpl : public LogInterface {
public:
void write(const char* p, std::size_t n);
};
LogProxy.h
#include "LogInterface.h"
class LogProxy : public LogInterface{
protected:
LogInterface* m_ptrImpl; // initialized somewhere with a LogImpl*
public:
void write(const char* p, std::size_t n)
{ m_ptrImpl->write(p, n); }
};
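For completeness, here is one way the templated layer could be filled in, as a minimal sketch that assumes the log only needs types streamable to std::ostream (that assumption is mine, not part of the answer):
#include <sstream>
#include <string>

// Possible body for LogInterface::write<T>, replacing the comment-only sketch above
template<typename T>
void write(const T& arg)
{
    std::ostringstream oss;
    oss << arg;                       // format 'arg' as text
    const std::string s = oss.str();
    write(s.data(), s.size());        // forward to the non-template virtual overload
}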
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Attaching (rendering) text to a Drawable in Android
I've done some research online and found this class here:
https://stackoverflow.com/questions/3972445/how-to-put-text-in-a-drawable
public class TextDrawable extends Drawable {
private final String text;
private final Paint paint;
public TextDrawable(String text) {
this.text = text;
this.paint = new Paint();
paint.setColor(Color.WHITE);
paint.setTextSize(24f);
paint.setAntiAlias(true);
paint.setFakeBoldText(true);
paint.setShadowLayer(6f, 0, 0, Color.BLACK);
paint.setStyle(Paint.Style.FILL);
paint.setTextAlign(Paint.Align.LEFT);
}
@Override
public void draw(@NonNull Canvas canvas) {
canvas.drawText(text, 0, 0, paint);
}
@Override
public void setAlpha(@IntRange(from = 0, to = 255) int alpha) {
paint.setAlpha(alpha);
}
@Override
public void setColorFilter(@Nullable ColorFilter colorFilter) {
paint.setColorFilter(colorFilter);
}
@Override
public int getOpacity() {
return PixelFormat.TRANSLUCENT;
}
}
The drawable is a <layer-list>:
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item android:drawable="@color/backgroundColor" />
<item android:top="10dp" android:left="10dp" android:bottom="10dp" android:right="10dp">
<shape android:shape="rectangle">
<solid android:color="#f00"/>
</shape>
</item>
<item android:top="0dp" android:left="0dp" android:bottom="0dp" android:right="0dp" >
<bitmap android:src="@drawable/imagen1" />
</item>
</layer-list>
I've tried the following:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
TextDrawable textDrawable = new TextDrawable(" ** Bienvenido ** ");
LayerDrawable ld = (LayerDrawable) getResources().getDrawable(R.drawable.layerdraw, null);
ld.mutate();
ld.addLayer(textDrawable);
getWindow().setBackgroundDrawable(ld);
}
But the text does not appear. If there is a better method, it is welcome.
NOTE that I am displaying this drawable as the background temporarily, but it could also be inside an ImageView.
Edit
I've investigated a bit more and discovered that it is probably not possible to do this directly with the layer-list; however, I've seen that it IS possible by converting that layer-list into a bitmap and then overlaying the text on it. How do I do this? How do I convert this drawable to a bitmap? In any case that would count as a solution to this question, since I only want the text to end up as part of the background image.
A:
This can be done without problems with the method you are using via canvas.drawText(). The ImageView where the text will be "rendered" must not have wrap_content properties; you could assign match_parent:
<ImageView
android:id="@+id/imageView"
android:layout_width="match_parent"
android:layout_height="match_parent" />
Also take into account where you are going to draw the text; by default, the example you show would be drawn in the top-left corner, coordinates 0,0 in Android:
canvas.drawText(text, 0, 0, paint);
These two things may be the reasons why the text is not shown.
I'm sharing my TextDrawable class; to instantiate it you need the context, the text to display, the X position and the Y position. You can configure the text size inside the class via the FONT_SIZE variable.
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.ColorFilter;
import android.graphics.Paint;
import android.graphics.PixelFormat;
import android.graphics.drawable.Drawable;
public class TextDrawable extends Drawable {
private final String text;
private final Paint paint;
private float positionx = 0;
private float positiony = 0;
private Context ctx;
private static final float FONT_SIZE = 24.0f;
public TextDrawable(Context ctx, String text, int positionx, int positiony) {
this.ctx = ctx;
this.text = text;
this.positionx = positionx;
this.positiony = positiony;
this.paint = new Paint();
paint.setColor(Color.WHITE);
paint.setTextSize(getPxfromDP(FONT_SIZE));
paint.setAntiAlias(true);
paint.setFakeBoldText(true);
paint.setShadowLayer(8f, 0, 0, Color.BLACK);
paint.setStyle(Paint.Style.FILL);
paint.setTextAlign(Paint.Align.CENTER);//align center
}
// Convert dp to pixels
private float getPxfromDP(float dpValue) {
float density = ctx.getResources().getDisplayMetrics().density;
return dpValue * density + 0.5f;
}
@Override
public void draw(Canvas canvas) {
canvas.drawText(text, getPxfromDP(positionx), getPxfromDP(positiony), paint);
}
@Override
public void setAlpha(int alpha) {
paint.setAlpha(alpha);
}
@Override
public void setColorFilter(ColorFilter cf) {
paint.setColorFilter(cf);
}
@Override
public int getOpacity() {
return PixelFormat.TRANSLUCENT;
}
}
As a usage example:
TextDrawable textDrawable = new TextDrawable(getApplicationContext(), "Elenasys was here!", 200, 200);
myImageView.setImageDrawable(textDrawable);
to obtain this result:
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to construct a bijection from [a, b] to (0,1)?
The question is how to construct a bijection between [2, 4] and (0, 1), but is there a general formula to do so?
A:
I recommend first solving these two simpler problems:
Find a bijection between $[0,1]$ and $(0,1]$.
Find a bijection between $[0,1]$ and $(0,1)$.
Problem (1) already contains the essential new difficulty of this set of problems. Problem (2) is a chance to double-down on the new idea from the solution to problem (1), and should lead pretty nicely to a solution to your original problem.
One way to go from problem (2) to your exact problem is to find a bijection between $[2,4]$ and $[0,1]$ (which should be easy, with a continuous function) and then compose the two bijections together.
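For reference, here is one standard construction for problem (1) (just a sketch, and certainly not the only one), which absorbs the troublesome endpoint along a countable sequence:
$$f(x) = \begin{cases} \tfrac{1}{2} & \text{if } x = 0,\\ \tfrac{1}{2^{\,n+1}} & \text{if } x = \tfrac{1}{2^{\,n}} \text{ for some integer } n \ge 1,\\ x & \text{otherwise.} \end{cases}$$
Every point of the form $1/2^n$ is shifted one step down the sequence, $0$ takes the vacated spot $1/2$, and everything else stays fixed, giving a bijection $[0,1] \to (0,1]$; repeating the trick on a second sequence handles the remaining endpoint for problem (2).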
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Connect Azure Function to Office 365 Flow
I have an application that uses Azure Functions Preview. In the Azure Function, an API is created that delivers a small object with a value. I want to use this in an Office 365 Flow to send an email when the value is larger than X.
A:
This is very easy. Your Azure Function has a URL that can be accessed and that will return the object on a GET verb.
To use it in Flow, you create a new Flow application of the type "HTTP". In the HTTP action you add the URL of your Azure Function.
Then you add a condition. In the condition action you define the rule that needs to be true. You cannot have complex rules here, so you need a rule using "contains" to check whether your object contains a certain value. It might be better not to return an object but a single value, to have a better-fitting rule.
Then you can add a mail action that will create the email that is sent when the rule evaluates to true.
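Following that advice about returning a single value, a bare-bones HTTP-triggered function (a C# run.csx sketch in the v1/preview template style; the value is obviously a placeholder) could look like this:
using System.Net;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    // Return a single number instead of an object so the Flow condition stays simple
    int value = 42; // placeholder for whatever the function actually computes
    return req.CreateResponse(HttpStatusCode.OK, value);
}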
Hope that this helps.
Sander
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why does the inner circle fill the outer circle?
I'm trying to make a small dark circle within a large light circle using xml in Android. Both circles have a color, a size and a stroke. Why does the small dark circle fill the 50 dp instead of being 28dp?
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<!--big light circle-->
<item>
<shape android:shape="oval">
<solid android:color="#EEEEEE" />
<size
android:width="50dp"
android:height="50dp" />
<stroke
android:width="2dp"
android:color="#404040" />
</shape>
</item>
<!--small dark circle-->
<item>
<shape android:shape="oval">
<solid android:color="#AEAEAE" />
<size
android:width="28dp"
android:height="28dp" />
<stroke
android:width="1dp"
android:color="#464646" />
</shape>
</item>
</layer-list>
The usage of the drawable:
<RadioButton
android:id="@+id/radioButtonPositive"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:background="@android:color/transparent"
android:button="@android:color/transparent"
android:drawableTop="@drawable/ic_choice_background"
android:gravity="center"
android:onClick="onRadioButtonClicked"
android:layout_marginRight="110dp"/>
The XML generates the result in the first screenshot, but I want it to generate the second
(the outer circles are the same size, this screenshot doesn't show it correctly)
A:
Change to the below. The <size> inside a <shape> is only an intrinsic size and each layer-list item is stretched to fill the drawable's bounds, so inset the inner item with android:top/left/right/bottom instead:
<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<!--big light circle-->
<item>
<shape android:shape="oval">
<solid android:color="#EEEEEE" />
<size
android:width="50dp"
android:height="50dp" />
<stroke
android:width="2dp"
android:color="#404040" />
</shape>
</item>
<!--small dark circle-->
<item android:bottom="11dp"
android:left="11dp"
android:right="11dp"
android:top="11dp">
<shape android:shape="oval">
<solid android:color="#AEAEAE" />
<stroke
android:width="1dp"
android:color="#464646" />
</shape>
</item>
</layer-list>
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Declare optional last element in array?
I want to declare a TypeScript array type in which the last element is optional. Is there any elegant way to achieve this?
const test = (a: [string, string, any | undefined]) => console.log(a)
test(['foo', 'bar'])
In the test(...) function call, the TypeScript compiler reports this error:
Argument of type '[string, string]' is not assignable to parameter of type '[string, string, any]'.
Property '2' is missing in type '[string, string]' but required in type '[string, string, any]'.(2345)
A:
const test = (a: [string, string, any?]) => console.log(a)
test(['foo', 'bar'])
Hope this helps.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to manage production and development credentials using Facebook iOS SDK
Symptoms
My app is using Facebook to let users log in. It works fine while debugging from Xcode and testing through ad hoc deployment. In order to have external testers we submitted our app for review, but it seems that Facebook is complaining during the OAuth process: "App Not Setup: The developers of this app have not setup this app properly for Facebook Login".
Hypothesis
The FacebookDisplayName and FacebookAppId present in the Custom iOS target properties in the Info section of my target in Xcode match the development version of the Facebook app. Somehow, the SDK must detect during review that the app is no longer in development, and an error occurs.
Question
How do I define in that target some Custom iOS target properties with different release and debug values?
A:
You can add a user defined setting in Target's settings with different values for each scheme (Debug, Release, Ad-Hoc, AppStore etc) and use the user defined variable in info.plist file (or as you call it Custom iOS target properties).
Here is an example of how i did it for an app's bundle identifier. I wanted separate bundle identifier and server URLs for the Debug and AdHoc versions so both could be installed and tested on same device:
Add required data as a user defined settings in Target Settings.
Set different values for the variable for each scheme. (Make sure to check that the target settings reflect the changes, and not just the project settings.):
EDIT: Detailed image for adding user defined settings:
In above image, BUNDLE_ID_PREFIX and BUNDLE_ID_SUFFIX have different values for each scheme.
Use this variable instead of default values in info.plist:
By default you will use the Debug configuration when running the app from Xcode. If you use the Release configuration when archiving the app for upload, it will automatically pick up the correct value from the target settings.
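Applied to the Facebook keys from the question, that means referencing user-defined settings in Info.plist roughly like this (FACEBOOK_APP_ID and FACEBOOK_DISPLAY_NAME are placeholder setting names you would define with different values per configuration):
<key>FacebookAppID</key>
<string>$(FACEBOOK_APP_ID)</string>
<key>FacebookDisplayName</key>
<string>$(FACEBOOK_DISPLAY_NAME)</string>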
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Confused about proofs by contradiction, the Law of the Excluded Middle and existence of consistent axiomatic systems.
I apologize if my question is too dumb. I'm not particularly educated in this area of Mathematics.
Proof by contradiction consists of assuming a statement $P$ is false and then reaching a contradiction, thus allowing us to conclude that $P$ must be true. Such a line of reasoning seems to be using the Law of the Excluded Middle, that is, that $P \lor \neg P$ is a tautology.
Wouldn't assuming said law lead to some problems? As an example, it has been proven that if ZFC is consistent, then both ZFC$+$CH and ZFC$+\neg$CH are also consistent. Thus, by LEM, there are only two possible options:
1) CH is true, but unprovable within ZFC.
2) $\neg$CH is true, but unprovable within ZFC.
Suppose for a second that the first option was correct. Since $\neg$CH is consistent with ZFC, the axiomatic system ZFC$+ \neg$CH contains no contradictions. However, CH being true does imply that ZFC$+ \neg$CH has a contradiction. The second option being true leads to the same result.
What am I missing?
I would truly appreciate any help/thoughts.
A:
Your confusion is in conflating the truth of a set of axioms with their consistency. I'll assume ZFC is consistent throughout this explanation (that's not known, but it's assumed in the undecidability result you stated).
Let $\diamond p$ denote "$p$ is consistent" and $\square p$ denote "$p$ is provable" viz. modal logic (I'm tweaking its concepts slightly for the present context). Also, let $c,\,z$ respectively denote the CH and ZFC. From the law of the excluded middle $c\lor\neg c$ we deduce $z\to((z\land c)\lor(z\land\neg c))$, and the undecidability of $c$ in $z$ means that $(\diamond(z\land c))\land(\diamond(z\land\neg c))$. But these results are not inconsistent. In particular, $z\land c$ does not imply $\square(z\land c)$, and hence does not contradict $\diamond(z\land\neg c)$.
In particular, a general instance of the law of the excluded middle, $p\lor\neg p$, doesn't imply $(\square p)\lor(\square(\neg p))$. Similarly, the law of non-contradiction $\neg(p\land\neg p)$ doesn't imply $\neg((\diamond p)\land(\diamond(\neg p)))$.
Just to relate all this to something you said earlier:
Proof by contradiction consists of assuming a statement $P$ is false,
and then reach a contradiction thus allowing us to conclude that $P$
must be true.
An intuitionistic logician, who rejects the law of the excluded middle, would instead say you assume some statement is true, reach a contradiction, and thus conclude the statement was false. In other words, $(q\to\bot)\to\neg q$. The case $q:=\neg p$ gives $(\neg p\to\bot)\to\neg\neg p$, which if $p=\neg\neg p$ simplifies to $(\neg p\to\bot)\to p$ as you intended. This simplification follows from the law of the excluded middle, but fails in intuitionistic logic. One way to understand this is that intuitionistic logic tracks provability rather than truth (this isn't necessarily how to read it, but it gives the right logical structure). In other words, $p\lor\neg p$ fails in intuitionistic logic because $(\square p)\lor(\square(\neg p))$ fails in classical logic.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can you publish a Java Application to the Google Play app store?
I know you can publish an Android application to Google Play because it supports the Android OS, but can you publish a Java application to Google Play? I'm pretty sure you can't, but I want to make sure. And if so, how can you make it support an Android device (for example, have a game that uses arrow keys on a computer for user interaction and have the same effect on a phone where there are no arrow keys)?
A:
Long answer: You can only upload APK files. An APK file is the file format used for installing software on the Android operating system.
Short answer: NO.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Catching update errors on MySQLdb
I have a function that updates a MySQL table from a CSV file. The MySQL table contains the client account number -- this is what I use to compare with the CSV file. At some point, some of the queries will fail because the account number being compared from the CSV file has not been added yet.
How do I get the records from the CSV file that failed during the update process? I wanted to store these records in a separate file and then re-read the file at a later time until all records have been successfully updated.
Below is the function that updates the DB.
def updateDatabase(records, options):
"""Update database"""
import re # Regular expression library
import MySQLdb
# establish DB connection
try:
db = MySQLdb.connect(host="localhost", user="root", passwd="", db="demo")
except MySQLdb.Error, e:
print "Error %d: %s" % (e.args[0], e.args[1])
sys.exit (1)
# create cursor
cursor = db.cursor()
# tell MySQLdb to turn off auto-commit
db.autocommit(False)
# inform the user that this could take a while
if len(records) > 499:
print 'This process can take a while.'
print 'Updating the database now...'
# this is the actual loop
maxrecords = len(records)
for record in records:
account_no, ag_1to15, ag_16to30, ag_31to60, ag_61to90, ag_91to120, beyond_120, total, status, credit_limit = record
if re.match('1000', account_no):
query = """UPDATE sys_accountscf SET cf_581 = %s, cf_583 = %s, cf_574 = %s, cf_575 = %s, cf_576 = %s, cf_577 = %s, cf_579 = %s, cf_585 = '%s', cf_558 = %s WHERE cf_538 = %s"""
else:
query = """UPDATE sys_accountscf SET cf_580 = %s, cf_582 = %s, cf_568 = %s, cf_569 = %s, cf_571 = %s, cf_572 = %s, cf_578 = %s, cf_584 = '%s', cf_555 = %s WHERE cf_535 = %s"""
cursor.execute(query % (ag_1to15, ag_16to30, ag_31to60, ag_61to90, ag_91to120, beyond_120, total, status, credit_limit, account_no))
# commit all changes and close database connection
try:
db.commit()
except:
db.rollback()
cursor.close()
db.close()
A:
An update query returns the number of rows affected.
Checking cursor.rowcount after each execute will give that number. If it is not 1, then that update failed.
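Applied to the loop in the question, a minimal sketch (the retry handling is my own suggestion, not tested against your schema) might look like this:
failed_records = []
for record in records:
    # build and execute the UPDATE for this record exactly as in the loop above
    cursor.execute(query)
    if cursor.rowcount != 1:
        # the WHERE clause matched no row: that account number is not in the
        # table yet, so keep the record for a later retry pass
        failed_records.append(record)

# afterwards, write failed_records to a CSV and re-run the update against that
# file later, repeating until it comes back empty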
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Android wireless debugging on multiple devices
By following the Google ADB documentation I am able to connect to a device wirelessly, but I want to connect more than one device.
For adb wireless to work I have to restart adb in tcpip mode.
Once I did that I can connect to a device, but to connect to another device I would have to restart adb in usb mode again, which will disconnect the first device.
A simple solution would be to connect both devices with USB and then restart adb in tcpip, but I don't have an extra USB cable right now.
So is there any other way to connect more than one device, other than the method I mentioned above?
A:
adb tcpip <port> doesn't restart the adb daemon on your PC but the one on the Android device, and binds it to listen on that specific port. You can always connect multiple devices in wireless mode.
Follow these steps:
Plug first device and restart adbd in tcpip mode using adb tcpip <port>.
Connect to first device using adb connect <host>:<port>.
Unplug first device, plug second device and restart adbd in tcpip mode using adb -s <serial> tcpip <port>.
Connect to second device using adb connect <host>:<port>.
You may now unplug the second device. If you execute adb devices you will see both devices connected in wireless mode. Whenever you intend to do any operation on one of the devices, supply its identifier using -s.
Example:
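For illustration (the IP addresses, port numbers and APK name are made up), the sequence could look like this:
adb tcpip 5555
adb connect 192.168.1.10:5555          # first device is now wireless

adb -s <serial-of-second-device> tcpip 5556
adb connect 192.168.1.11:5556          # second device is now wireless

adb devices                            # lists both host:port entries
adb -s 192.168.1.10:5555 install app.apk   # target one device with -s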
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Anonymous functions using Anonymous functions
I'm getting the error "Parse error: parse error" when using this function:
$rows_tick = array_filter(
$rows,
function ($element) use ($date_tick) {
return ($element['ModTime'] <= $date_tick->format("His000"));
}
);
Am I missing something?
When using a WAMP server with PHP 5.3.x it runs normally, but not when using XAMPP with PHP 5.2.9.
Is it wrong to use anonymous functions in PHP versions below 5.3?
A:
Anonymous functions were introduced in PHP 5.3.
You should write a normal named function where PHP < 5.3.
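On PHP 5.2 the same filter can be written with a named callback instead, for example (a sketch; passing $date_tick through a global is just the simplest pre-5.3 workaround):
function is_before_tick($element) {
    global $date_tick;
    return $element['ModTime'] <= $date_tick->format("His000");
}

$rows_tick = array_filter($rows, 'is_before_tick');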
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is it possible to deselect in a QTreeView by clicking off an item?
I'd like to be able to deselect items in my QTreeView by clicking in a part of the QTreeView with no items in it, but I can't seem to find any way of doing this. I'd intercept a click that's not on an item, but the QTreeView doesn't have a clicked signal, so I can't work out how to do this.
A:
Based on @Eric's solution, and as it only deselects if the clicked item was selected, here is what I came up with.
This solution also works when you click the blank area of the QTreeView
#ifndef DESELECTABLETREEVIEW_H
#define DESELECTABLETREEVIEW_H
#include "QTreeView"
#include "QMouseEvent"
#include "QDebug"
class DeselectableTreeView : public QTreeView
{
public:
DeselectableTreeView(QWidget *parent) : QTreeView(parent) {}
virtual ~DeselectableTreeView() {}
private:
virtual void mousePressEvent(QMouseEvent *event)
{
QModelIndex item = indexAt(event->pos());
bool selected = selectionModel()->isSelected(indexAt(event->pos()));
QTreeView::mousePressEvent(event);
if ((item.row() == -1 && item.column() == -1) || selected)
{
clearSelection();
const QModelIndex index;
selectionModel()->setCurrentIndex(index, QItemSelectionModel::Select);
}
}
};
#endif // DESELECTABLETREEVIEW_H
Yassir
A:
This is actually quite simple (in PyQt):
class DeselectableTreeView(QtGui.QTreeView):
def mousePressEvent(self, event):
self.clearSelection()
QtGui.QTreeView.mousePressEvent(self, event)
Qt uses mousePressEvent to emit clicked. If you clear the selection before sending on the event, then if an item is clicked it will be selected, otherwise nothing will be selected. Many thanks to Patrice for helping me out with this one :)
A:
clearSelection does not work in my case. I'm using tree views with single-selection mode. Here is what I've coded:
class DeselectableTreeView : public QTreeView
{
public:
DeselectableTreeView(QWidget *parent) : QTreeView(parent) {}
virtual ~DeselectableTreeView() {}
private:
virtual void mousePressEvent(QMouseEvent *event)
{
QModelIndex item = indexAt(event->pos());
bool selected = selectionModel()->isSelected(item);
QTreeView::mousePressEvent(event);
if (selected)
selectionModel()->select(item, QItemSelectionModel::Deselect);
}
};
This works really fine.
Eric
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to get nodes that are related to more than one node, which in turn are related to more than one node, in Neo4j using Cypher
I am a newbie in Neo4j and no doubt I am loving it.
Now my query is: I have a database in which there are users who have visited one or more URLs, and these URLs contain one or more tags.
Now what I want is to retrieve tags for a certain user who has visited more than one URL.
The relation is somewhat like this:
(:User)-[:VISITED]->(:URL)-[:CONTAINS]->(:Tag)
Now I want to retrieve the users who have visited more than one URL, and all the tags contained in all those URLs. So basically I want all the tags that a user has visited, where the visited URLs number more than one.
A:
Using Cypher 2.X, this should do the job (note that user must be carried through the WITH so it can still be returned):
MATCH (user:User)
MATCH user-[:VISITED]->(url:URL)
WITH user, count(url) AS countUrl
WHERE countUrl > 1
MATCH user-[:VISITED]->(url:URL)-[:CONTAINS]->(tag:Tag)
RETURN user.id, collect(tag) AS tags //you can show whatever you want here
A:
You can still optimize the query provided by Mik378.
In fact, in Cypher you can reproduce the Java equivalent of getDegree with the size(pattern) clause:
MATCH (n:User)-[:VISITED]->(url)<-[:TAGS]-(tag:Tag)
WHERE size((n)-[:VISITED]->()) > 1
RETURN n.email, collect(distinct tag.name) as tags
which would result in the following query plan :
+------------------+---------------+------+--------+------------------------------------------+----------------------------------------------------+
| Operator | EstimatedRows | Rows | DbHits | Identifiers | Other |
+------------------+---------------+------+--------+------------------------------------------+----------------------------------------------------+
| EagerAggregation | 3 | 5 | 90 | n.email, tags | n.email |
| Projection | 7 | 24 | 48 | anon[15], anon[37], n, n.email, tag, url | n.email; tag |
| Filter(0) | 7 | 24 | 24 | anon[15], anon[37], n, tag, url | tag:Tag |
| Expand(All)(0) | 7 | 24 | 34 | anon[15], anon[37], n, tag, url | (url)<-[:TAGS]-(tag) |
| Filter(1) | 3 | 10 | 10 | anon[15], n, url | url:Url |
| Expand(All)(1) | 3 | 10 | 15 | anon[15], n, url | (n)-[:VISITED]->(url) |
| Filter(2) | 2 | 5 | 10 | n | GetDegree(n,Some(VISITED),OUTGOING) > { AUTOINT0} |
| NodeByLabelScan | 5 | 5 | 6 | n | :User |
+------------------+---------------+------+--------+------------------------------------------+----------------------------------------------------+
Total database accesses: 237
The query was with my test db, so for your current implementation, it should be :
MATCH (n:User)-[:VISITED]->(url)-[:CONTAINS]->(tag:Tag)
WHERE size((n)-[:VISITED]->()) > 1
RETURN n.email, collect(distinct tag.name) as tags
|
{
"pile_set_name": "StackExchange"
}
|
Q:
A concrete example about string w and string x used in the proof of Rice's Theorem
So, in lectures about Rice's Theorem, reduction is usually used to prove the theorem. The reduction usually consists of a construction of $M'$, using a TM $M$ given in the form $\langle M,w \rangle$ to be simulated first, and an input $x$ to be simulated if $M$ accepts. $M'$ accepts if $x$ is accepted.
I really want a concrete input about $\langle M,w \rangle$ and $x$. For example:
$L = \{ \langle M\rangle \mid L(M) = \{\text{ stackoverflow }\}\}$, that is, $L$ contains all Turing machines whose language is exactly the single string "stackoverflow". $L$ is undecidable.
What kind of $\langle M,w \rangle$ to be simulated?
Suppose we have input x = "stackoverflow" or x = "this is stackoverflow" or any x with "stackoverflow" in it.
What if we first simulate a TM $M$ selected from among all possible TMs, and this TM accepts only the single character $a$ as its language? So, we simulate this $\langle M,w \rangle$ with $w = a$, and surely it will be accepted. And then input $x$ is also accepted according to the definition of $L$.
So, we conclude that $\langle M,w \rangle$ whose language is a single $a$ is reducible to $L$, which accepts all TMs that have "stackoverflow"?
Edit: I've just looked up a brief definition of reduction. A reduction is a transformation from an unknown but easier problem to a harder problem but already known. If the harder problem is solvable, so is the easier one. Otherwise, it's not.
Given that definition, I think the correct TM $M$ with its description $\langle M,w \rangle$ in my example should be a TM such that it accepts regular languages. This is the harder problem. If this is solvable, then my trivial $L$ with one string is solvable. But apparently, it's not according to the proof. We can effectively say we reduced from the one-string language problem to the regular-language problem and try to solve it. Previously, I thought of it the other way around: $\langle M,w \rangle$ is reduced to the one-string problem.
Is my thinking correct?
A:
A reduction is a transformation from an unknown but easier problem to a harder problem but already known. If the harder problem is solvable, so is the easier one. Otherwise, it's not.
Your characterisation of reduction is a bit misleading. You don't start with an "easy" problem and reduce it to a "harder" problem (how can you know it's an easy problem if it's unknown? Often you're attempting the reduction to find out whether your problem is "easy").
A reduction is a computable transformation from one problem to another. It proves that the source problem is no harder (in some sense) than the target problem.
Sometimes we do this because we already know the source problem is impossible to solve; this proves that the target problem is impossible too. Since the source problem is "no harder" than the target problem, and we already know the source is undecidable, the target problem must be undecidable (or the source would in fact be harder). More intuitively, if we can reduce the source to the target and the source is undecidable, then the target must be undecidable or we could use the reduction and the solution to the target in order to solve the source (which we already know can't be done).
Other times we find a reduction because we know the target problem is decidable. This shows that the source problem is decidable too.
So you can use a reduction argument either to a problem that's already known to be decidable, or from a problem that's already known to be undecidable, depending on what you're trying to prove.
So, Rice's Theorem. I don't quite understand the questions you're asking, since I haven't seen the particular statement of the proof that's giving you trouble. Instead, here's my quickie explanation of how to prove Rice's Theorem, which I'm pretty sure is similar.
Suppose we have an arbitrary language $L$, consisting of (encodings of) exactly those TMs with some non-trivial semantic property $P$.[1] To prove that there is no algorithm for deciding language $L$, I will reduce the Halting Problem to the decision problem for $L$.
So the Halting Problem (or the particular variant of it I will use here) is: given $\langle M, x\rangle$, an encoding of a Turing machine $M$ and an input $x$, does $M$ halt on $x$?
My reduction must transform the input to the Halting Problem $\langle M, x \rangle$ into the input for the hypothetical decider for $L$, which I will call $ML$. $L$ is a language of Turing machines, so $ML$'s input looks like $\langle M \rangle$.
So my reduction will take $\langle M, x \rangle$ and compute $\langle M' \rangle$. $M'$ is a machine that takes an input $w$ and functions as follows:
Simulate $M$ on $x$, ignoring the result
Simulate $MP$ on $w$; accept if it accepts and reject if it rejects
Here $MP$ is a machine that whose encoding is in $L$ i.e. it has the property $P$. Since we're considering a non-trivial property $P$, such an $MP$ is guaranteed to exist.
Note that $x$ is part of the machine $M'$ the reduction has produced, while $w$ is the input to that machine whenever it is run. $x$ and $w$ are completely unrelated. We are assuming we've been given a particular $x$ as part of the input to the Halting Problem, but since we haven't proposed running $M'$, there is no specific $w$ at the moment.
Now, if $M$ halts on input $x$, then $M'$ always gets to stage 2 and thus accepts/rejects exactly the same strings as $MP$. Since the property $P$ is semantic, it doesn't depend on the particular TM, only on the language it accepts, so in this case $M'$ also has property $P$. If $M$ doesn't halt on input $x$, then $M'$ never gets to an accept state regardless of input, so its language is the empty language.
So now we don't want to run $M'$ on anything, since it might not even halt, but we could run our hypothetical $ML$ on it to check whether it's in $L$. This will almost tell us whether $M$ halts on $x$; the only thing we're missing is that the empty language might happen to have property $P$, in which case $ML$ will always accept $M'$ whether or not $M$ halts on $x$. But we can easily amend $M'$: if $P$ is true of the empty language (which our reduction could check using $ML$ if $ML$ could actually exist), then we use $MN$ instead of $MP$ - some machine that doesn't have property $P$ (which is also guaranteed to exist by the non-triviality of $P$) - and then our $M'$ has property $P$ if-and-only-if $M$ doesn't halt on $x$.
So now we've shown that a hypothetical decider for $L$, the language of machines with property $P$, could be used to decide the Halting Problem. Since we already know the Halting Problem can't be decided, $ML$ can't exist. Since the only thing I assumed about $P$ was that it was a semantic and non-trivial property, the proof holds for any semantic and non-trivial property.
I cheated a little in my reduction, since I had to use $ML$ to work out whether the empty language has property $P$, which means the reduction is only computable if $ML$ exists. This breaks my earlier definition of a reduction as "a computable transformation from one problem to another", but the proof by contradiction is still perfectly valid; proving a language undecidable by reduction from an undecidable language is really just a special case of a proof by contradiction anyway.
[1] A property of a TM is semantic if all TMs that accept the same language share the property, i.e. the property doesn't depend on the particular implementation of the TM but only on the language it accepts. A property is non-trivial if there is at least one TM that has the property and at least one TM that doesn't have the property.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
FOSUserBundle - Symfony2 login with different encoded password
I have to do a migration of users from an old database where the passwords are stored in MD5, while the new application uses SHA512.
My goal is that old users must change their password on first login in the new application, but this implies being able to load these users from the database with the password in MD5.
How can this be done using Symfony 2.3.3 + FOSUserBundle?
A:
You could create a custom password encoder.
For example:
security:
encoders:
FOS\UserBundle\Model\UserInterface: { id: my_password_encoder }
Then register that service:
services:
my_password_encoder:
class: MyProject\DefaultBundle\Security\PasswordEncoder
and create the service with the following:
use Symfony\Component\Security\Core\Encoder\BasePasswordEncoder;
class PasswordEncoder extends BasePasswordEncoder
{
public function encodePassword($raw, $salt)
{
return md5($raw);
}
public function isPasswordValid($encoded, $raw, $salt)
{
return $this->comparePasswords($encoded, $this->encodePassword($raw, $salt));
}
}
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Description of assembly hello world
I am interested in learning assembly (x86) language and made my first hello-world program. I used the debug utility in Windows XP and I wonder if someone could explain what every line does.
1) jmp 115
2) db 'Hello world!$'
3) -a 115
4) mov ah, 09
5) mov dx, 102
6) int 21
7) int 20
I have attached a screenshot of every step to make an executable file in assembly.
A:
I'd recommend that you read Intel's Software Developer's Manual (especially volume 2), and/or some x86 assembly tutorial (like The Art of Assembly).
Breakdown of the code:
1) jmp 115
Jumps to the mov ah,09 instruction, so that the CPU doesn't try to execute the 'Hello world' string as if it was code (the CPU can't tell the difference between code and data).
2) db 'Hello world!$'
Declares a string. The dollar-sign is used as a string terminator by some DOS interrupt functions.
3) -a 115
Tells debug to assemble subsequent code starting at address 115.
4) mov ah, 09
Puts the value 9 in register ah.
5) mov dx, 102
Puts the address of the 'Hello world' string in register dx
6) int 21
Performs interrupt 21h / function 9 (write string). The function number is expected in register ah and the string offset in register dx, which was taken care of by the previous two instructions.
7) int 20
Performs interrupt 20h (terminate program)
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Determining average from database using VB6
I'm trying to work out an average grade from a database file using the 'Do Loop' method; I'm completely clueless about how to even start it. Does anyone have any tips, or is anyone able to help me?
Regards, Jack.
A:
Rather than looping in vb, calculate the average in SQL.
E.g.,
select StudentID, avg(Grade) as AverageGrade
from StudentGrade
group by StudentID
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Can't use gitpython in AWS lambda
I've been trying to use the gitpython package in AWS Lambda. I've used the python2.7 environment. I bundled up gitpython using this, along with my Python code, into a zip file and uploaded it.
import json
import git
def lambda_function(event, context):
repo="https://github.com/abc/xyz.git"
git.Git().clone(repo)
It says
Cmd('git') not found due to: OSError('[Errno 2] No such file or directory')
cmdline: git clone https://github.com/abc/xyz.git: GitCommandNotFound
Traceback (most recent call last):
File "/var/task/lambda_function.py", line 13, in lambda_function
git.Git().clone("https://github.com/abc/xyz.git")
File "/var/task/git/cmd.py", line 425, in <lambda>
return lambda *args, **kwargs: self._call_process(name, *args, **kwargs)
File "/var/task/git/cmd.py", line 877, in _call_process
return self.execute(call, **exec_kwargs)
File "/var/task/git/cmd.py", line 602, in execute
raise GitCommandNotFound(command, err)
GitCommandNotFound: Cmd('git') not found due to: OSError('[Errno 2] No such file or directory')
cmdline: git clone https://github.com/abc/xyz.git
I think this error is caused because the Lambda machine doesn't have git installed! How can I work around this?
A:
There is a special lambda layer that brings in git to lambda functions.
Check this and this reference. Basically,
Click on Layers and choose "Add a layer", and "Provide a layer version ARN"
and enter the following ARN (replace us-east-1 with the region of your Lambda):
arn:aws:lambda:us-east-1:553035198032:layer:git:6
|
{
"pile_set_name": "StackExchange"
}
|
Q:
One submit tag, two forms on same page
I am looking for the best approach to this problem. I have two search forms on the same page (they each search a different API for info) and I would like to have one submit button, with the relevant API called depending on which form has content. So I thought I could specify the controller action when submitting on each form like so:
<div class="container margin50">
<div class="row">
<div class="span6 offset3 cf formBackground">
<h1>CoverArt Finder</h1>
<h3>Search Movies</h3>
<%= form_tag main_results_path, :method => "get" %>
<%= text_field_tag 'search', nil, :placeholder => 'Enter Film Name Here.....' %>
<h1>OR<h1>
<h3>Search Albums</h3>
<%= form_tag album_album_results_path, :method => "get" %>
<%= text_field_tag 'search', nil, :placeholder => 'Enter Artist Name here.....' %>
<%= submit_tag "search" %>
</div>
</div>
</div>
Obviously this is not working, as I always get the results for the movie search parameters. Do I need a conditional statement in there to recognise which form is filled in? I'm a little unsure here.
Any other info needed please ask
Any help appreciated
Thanks
A:
If you're willing to do this client side (JavaScript / jQuery) it shouldn't be much hassle. On clicking the submit button you could check which form has a value. Some simple pseudo code:
on submitButton click:
if formA.someValue != null
post / submit formA
else if formB.someValue != null
post / submit formB
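Translated into actual jQuery, that could look roughly like the sketch below. The form ids and the single button outside the forms are assumptions for illustration, not part of the original markup:
$('#search-button').on('click', function (e) {
  e.preventDefault();
  // whichever search box has text decides which form gets submitted
  if ($.trim($('#movie-form input[name="search"]').val()).length > 0) {
    $('#movie-form').submit();   // goes to main_results_path
  } else {
    $('#album-form').submit();   // goes to album_album_results_path
  }
});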
|
{
"pile_set_name": "StackExchange"
}
|
Q:
I am having difficulties with back-reference in awk
Recently I have been working with security logs and want to handle them in a better way in a bash shell. I found out that in awk only 9 back-references are available.
But I need to use 10 back-references.
Tried
awk '{print gensub(/^([0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}).+?\sID\s(\[[0-9]{4}\]).+?\sTargetUserName\s=\s(.+?)\sTargetDomainName\s=\s(.+?)\sTargetLogonId\s=\s(.+?)\sLogonType\s=\s([0-9]{1,2})\s(.+?\sWorkstationName\s=\s(.+?)\sLogonGuid\s=\s.+?TransmittedServices\s=\s.+?\sLmPackageName\s=\s.+?KeyLength\s=\s.+?\sProcessId\s=\s.+?\sProcessName\s=\s.+?\sIpAddress\s=\s(.+?)\sIpPort\s\=\s([0-9]{1,}))?.+?$/,"\\5,\\4,\\3,\\2\\6,\\1,\\8,\\9,","g") }'
Target strings (actually there are thousands of strings)
2017-03-21T02:00:00 kornawesome Security/Microsoft-Windows-Security-Auditing ID [4624] :EventData/Data -> SubjectUserSid = S-1-5-18 SubjectUserName = PRETENDERS$ SubjectDomainName = WORKGROUP SubjectLogonId = 0x00000000000004j7 TargetUserSid = X-12-54-181 TargetUserName = SYSTEMS TargetDomainName = NT AUTHORITY TargetLogonId = 0x00000000000003e7 LogonType = 8 LogonProcessName = Lxxoi AuthenticationPackageName = Negotiate WorkstationName = - LogonGuid = {00344000-0000-0000-0000-0000000003440} TransmittedServices = - LmPackageName = Stainless KeyLength = 0 ProcessId = 0x0000000000000244 ProcessName = C:/Windows/System32/services.exe IpAddress = 10.0.0.0 IpPort = 10.5.3.2 ImpersonationLevel = %%1122
Is there another way to do this with awk? I would also like to use basic bash and associative arrays.
Please also give me a kind explanation... I'm a noob.
A:
A problem with security logs is that some of the text is probably under user control, so using regular expressions to break things apart is problematic. However you can potentially use more than one expression to break things apart, and this can work around the limit of 9 backreferences. For example if all your log entries start with a timestamp you can peel that off.
awk '{t=$1 ;$1="";
print gensub(/^.+?\sID\s(\[[0-9]{4}\]).+?\sTargetUserName\s=\s(.+?)\sTargetDomainName\s=\s(.+?)\sTargetLogonId\s=\s(.+?)\sLogonType\s=\s([0-9]{1,2})\s(.+?\sWorkstationName\s=\s(.+?)\sLogonGuid\s=\s.+?TransmittedServices\s=\s.+?\sLmPackageName\s=\s.+?KeyLength\s=\s.+?\sProcessId\s=\s.+?\sProcessName\s=\s.+?\sIpAddress\s=\s(.+?)\sIpPort\s\=\s([0-9]{1,}))?.+?$/,"\\4,\\3,\\2,\\1\\5" t ",\\7,\\8,","g") }'
You can be selective, so you have WorkstationName\s=\s(.+?)\sLogonGuid as part of you pattern, you could use
awk {t=$1; $1="" ; printf("%s", gensub(/^.+?WorkstationName\s=\s(.+?)\sLogonGuid.*$/,"\\1,")); printf("%s,", t)}
to pull out a field, and this can be repeated.
@cas notes in the comments that the data can be viewed in 2 parts, the stuff before the EventData/Data -> and the stuff after it, and that the stuff after it can be split on = (space equal space). I would go further and view it as key/value pairs and split on /\s\S+\s=\s/ and use optional 4th argument to split to get the keys. There are a couple of big assumptions in this, that the user doesn't get to put an equals sign into the line and that each piece of data has a single word key. Note the index of the keys and values differ by 1, and that the initial part of the line ends up in v[1].
/usr/bin/awk '{
n=split($0,v,/\s\S+\s=\s/,k)
printf("There are %d fields\n",n)
for(i=0;i<n;i++) { printf("%d key \"%s\" value \"%s\"\n",i,k[i],v[i+1]) }
}'
with your sample data gives
There are 22 fields
0 key "" value "2017-03-21T02:00:00 kornawesome Security/Microsoft-Windows-Security-Auditing ID [4624] :EventData/Data ->"
1 key " SubjectUserSid = " value "S-1-5-18"
2 key " SubjectUserName = " value "PRETENDERS$"
3 key " SubjectDomainName = " value "WORKGROUP"
4 key " SubjectLogonId = " value "0x00000000000004j7"
5 key " TargetUserSid = " value "X-12-54-181"
6 key " TargetUserName = " value "SYSTEMS"
7 key " TargetDomainName = " value "NT AUTHORITY"
8 key " TargetLogonId = " value "0x00000000000003e7"
9 key " LogonType = " value "8"
10 key " LogonProcessName = " value "Lxxoi "
11 key " AuthenticationPackageName = " value "Negotiate"
12 key " WorkstationName = " value "-"
13 key " LogonGuid = " value "{00344000-0000-0000-0000-0000000003440}"
14 key " TransmittedServices = " value "-"
15 key " LmPackageName = " value "Stainless"
16 key " KeyLength = " value "0"
17 key " ProcessId = " value "0x0000000000000244"
18 key " ProcessName = " value "C:/Windows/System32/services.exe"
19 key " IpAddress = " value "10.0.0.0"
20 key " IpPort = " value "10.5.3.2"
21 key " ImpersonationLevel = " value "%%1122"
From here you can go further, create an associative array called say data
for(i=1;i<n;i++) {gsub(/[ =]/,"",k[i]);data[k[i]]=v[i+1]}
and then you can print out things like data["IpPort"] rather than worrying if this is field 20 or 21.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Visitor pattern with multiple visit implementations
I need to do various operations over a Composite structure. I have this implementation of the Visitor pattern combined with the Composite pattern.
Interface:
public interface Visitor {
void visit(DayPart dayPart);
void visit(DayParting dayParting);
void visit(TimePart timePart);
}
The visitor method in the Composite Class:
public void accept(Visitor visitor) {
visitor.visit(this);
Iterator<AbstractComposite> itr = iterator();
Parting children;
while (itr.hasNext()) {
children = (Parting) itr.next();
children.accept(visitor);
}
}
The visitor method over the leaf:
@Override
public void accept(Visitor visitor) {
visitor.visit(this);
}
The visitor Class implementation:
public class myVisitor implements Visitor {
@Override
public void visit(DayPart dayPart) {
//do something (or not)
}
@Override
public void visit(DayParting dayParting){
// do something (or not)
}
@Override
public void visit(TimePart timePart){
//do something (or not)
}
}
The problem is that I need to do various operations over the Composite, and with this design I need to write a new class for every "visit" I need to do, which is pretty impractical.
So I was thinking of doing something like this (I put generic names) with inner classes.
public abstract class VisitorImpl implements Visitor {
//Hook methods
@Override
public void visit(Composite composite) {
//hok method
}
@Override
public void visit(Composite2 composite2) {
//hook method
}
@Override
public void visit(Leaf leaf) {
//hook method
}
public static Visitor ParticularVisitor(){
return new ParticularVisitor();
}
private static class ParticularVisitor extends VisitorImpl {
@Override
public void visit(Composite composite) {
//do something real.
}
@Override
public void visit(Composite2 composite2) {
//do something real.
}
@Override
public void visit(Leaf leaf) {
//do something real.
}
}
private static class ParticularVisitor_2 extends VisitorImpl {
public ParticularVisitor_2(){
}
@Override
public void visit(Leaf leaf){
//Just do something with this
}
}
}
Is this a good solution to my problem? Any improvements?
A:
Visitor Design Pattern
Intent
Represent an operation to be performed on the elements of an object structure.
Visitor lets you define a new operation without changing the classes of the elements on which it operates.
The classic technique for recovering lost type information.
Do the right thing based on the type of two objects.
Double dispatch
Problem
Many distinct and unrelated operations need to be performed on node
objects in a heterogeneous aggregate structure. You want to avoid
"polluting" the node classes with these operations. And, you don't want
to have to query the type of each node and cast the pointer to the correct
type before performing the desired operation.
From: Sourcemaking
So, your "problem"
"The problem is that i need do various operations over the Composition
and with this design i need to do a new class for every "visit" what i
need to do and this is pretty impractical."
is not exactly a problem, is how Visitor is supposed to work.
One operation to be performed = one visitor
If there is no relationship between your operations, that's the best you can do with Visitor.
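For instance, a minimal sketch with two unrelated operations (hypothetical names, each class in its own file) would simply be two separate visitors implementing the interface from the question:
public class PrintVisitor implements Visitor {
    @Override public void visit(DayPart dayPart)       { System.out.println("DayPart"); }
    @Override public void visit(DayParting dayParting) { System.out.println("DayParting"); }
    @Override public void visit(TimePart timePart)     { System.out.println("TimePart"); }
}
public class CountVisitor implements Visitor {
    private int count = 0;
    @Override public void visit(DayPart dayPart)       { count++; }
    @Override public void visit(DayParting dayParting) { count++; }
    @Override public void visit(TimePart timePart)     { count++; }
    public int getCount() { return count; }
}
Each visitor is passed to accept() exactly like myVisitor is, so the traversal code stays untouched when a new operation is added.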
|
{
"pile_set_name": "StackExchange"
}
|
Q:
grunt-contrib-less won't import .css files
My main less file @ private/less/app.less
@import 'a.less';
@import 'b.css';
which is importing:
private/less/a.less
& private/less/b.css
My grunt file:
module.exports = function(grunt) {
grunt.initConfig({
less: {
development: {
options: {
paths: ["/css"],
yuicompress: true
},
files: {
"public/css/app.css": "private/less/app.less"
}
},
},
// running `grunt watch` will watch for changes
watch: {
less: {
files: ['private/less/**/*.less'],
tasks: ['less'],
options: {
spawn: false
}
}
}
});
grunt.loadNpmTasks('grunt-contrib-less');
grunt.loadNpmTasks('grunt-contrib-watch');
grunt.registerTask('default', ['less', 'watch']);
};
My jade file has this for an import statement:
link(rel='stylesheet' href='css/app.css')
CSS Output @ public/css/app.css
// all the a.less styles ....
@import 'b.css';
which is giving me the error that the file isn't found because it's using the css @import method.
Anyone have any suggestions for ways of importing .css files in a .less file that's compiled with grunt-contrib-less?
A:
You can force less to import the css file as an less file regardless of the extension by using @import (less)
@import 'a.less';
@import (less) 'b.css';
which results in
// all the a.less styles ....
// all the b.css styles ....
Ref: import options: less
Use @import (less) to treat imported files as Less, regardless of file extension.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to count occurrences of several strings per row in a data frame in R
I have a question for which I thought I had found a solution, but when I double-checked by hand I got different numbers. I searched in other questions but couldn't find exactly what I am looking for.
I have a dataframe with pharmaceutical agents. Each row is a subject and up to 20 columns store an agent each. Then I have a list of agents that can be clustered for one purpose, e.g. beta blockers. What I would like to do is iterate over each row to count if and how many e.g. beta blockers or statins a subject is taking.
I have tried with:
BETA = c("METOPROLOL", "BISOPROLOL", "NEBILET", "METOHEXAL", "SOTALEX",
"QUERTO", "NEBIVOLOL", "CARVEDILOL", "METOPROLOLSUCCINAT", "BELOC")
for (i in 1:202) {
dat$betablock[i] <- sum(str_count(meds[i,], BETA ))
}
I don't get a warning but it doesn't count the correct number of occurrences.
Here is some sample data:
Med1 Med2 Med3 Med4 Med5 Med6 Med7 Med8 Med9 Med10 Med11 Med12 Med13 Med14 Med15
1 AMLODIPIN RAMIPRIL METOPROLOL
2 PLAVIX SIMVASTATIN MIRTAZAPIN
3 BISOPROLOL AMLODIPIN ASS VALSARTAN CHLORALDURAT Doxozosin TAMSULOSIN CIPRAMIL
4 ASS ENALAPRIL L-THYROXIN LITALIR LITALIR AMLODIPIN CETIRIZIN HCT NACL CARMEN PROTEIN 88 NOVALGIN
5 ASS ATORVASTATIN FOSAMAX CALCIUM PANTOZOL NOVAMINSULFON
6 ASS FRAGMIN TORASEMID SPIRONOLACTON LORZAAR PROTECT VESIKUR ROCALTROL ATORVASTATIN PREDNISOLON LACTULOSE MIRTAZAPIN LANTUS ACTRAPID PANTOZOL SALBUTAMOL
Med16 Med17 Med18 Med19 Med20
1
2
3
4
5
6 AMPHO MORONAL
As you can see in the first row third column the string 'METOPROLOL' is listed. But when I call the result of my for loop for the first subject it results '0'.
> dat$betablock[1]
[1] 0
Any suggestions?
A:
If I understand correctly, the OP has multiple lists of agents that can be clustered for one purpose, not just one list of beta blockers; the OP mentions statins, for example. The OP wants to count how many different agents belonging to each cluster are being taken by each subject. The counts for each agent cluster are to be appended to each row.
I suggest to compute the sums for all clusters at once rather than to do this manually list by list.
For this, we first need to set-up a data frame with the clustering:
cluster
Purpose Agent
1: BETA METOPROLOL
2: BETA BISOPROLOL
3: BETA NEBILET
4: BETA METOHEXAL
5: BETA SOTALEX
6: BETA QUERTO
7: BETA NEBIVOLOL
8: BETA CARVEDILOL
9: BETA METOPROLOLSUCCINAT
10: BETA BELOC
11: STATIN ATORVASTATIN
12: STATIN SIMVASTATIN
13: STATIN LOVASTATIN
14: STATIN PRAVASTATIN
15: STATIN FLUVASTATIN
16: STATIN PITAVASTIN
cluster can be created, e.g., by
library(data.table)
library(magrittr)
cluster <- list(
BETA = c("METOPROLOL", "BISOPROLOL", "NEBILET", "METOHEXAL", "SOTALEX",
"QUERTO", "NEBIVOLOL", "CARVEDILOL", "METOPROLOLSUCCINAT", "BELOC"),
STATIN = c("ATORVASTATIN", "SIMVASTATIN", "LOVASTATIN", "PRAVASTATIN",
"FLUVASTATIN", "PITAVASTIN")
) %>%
lapply(data.table) %>%
rbindlist(idcol = "Purpose") %>%
setnames("V1", "Agent")
For counting the occurrences, we need to join or merge this table with the list of agents each subject is taking dat after dat has been reshaped from wide to long format.
While data in spreadsheet-style wide format, i.e., with one row per subject and many columns, are often suitable for data entry and inspection the database-style long format is often more suitable for data processing.
taken <- melt(setDT(dat)[, ID := .I], "ID", value.name = "Agent", na.rm = TRUE)[
Agent != ""][
, Agent := toupper(Agent)][]
ID variable Agent
1: 1 Med1 AMLODIPIN
2: 2 Med1 PLAVIX
3: 3 Med1 BISOPROLOL
4: 4 Med1 ASS
5: 5 Med1 ASS
6: 6 Med1 ASS
7: 1 Med2 RAMIPRIL
8: 2 Med2 SIMVASTATIN
9: 3 Med2 AMLODIPIN
10: 4 Med2 ENALAPRIL
11: 5 Med2 ATORVASTATIN
12: 6 Med2 FRAGMIN
13: 1 Med3 METOPROLOL
14: 2 Med3 MIRTAZAPIN
15: 3 Med3 ASS
16: 4 Med3 L-THYROXIN
17: 5 Med3 FOSAMAX
18: 6 Med3 TORASEMID
19: 3 Med4 VALSARTAN
20: 4 Med4 LITALIR
21: 5 Med4 CALCIUM
22: 6 Med4 SPIRONOLACTON
23: 3 Med5 CHLORALDURAT
24: 4 Med5 LITALIR
25: 5 Med5 PANTOZOL
26: 6 Med5 LORZAAR PROTECT
27: 3 Med6 DOXOZOSIN
28: 4 Med6 AMLODIPIN
29: 5 Med6 NOVAMINSULFON
30: 6 Med6 VESIKUR
31: 3 Med7 TAMSULOSIN
32: 4 Med7 CETIRIZIN
33: 6 Med7 ROCALTROL
34: 3 Med8 CIPRAMIL
35: 4 Med8 HCT
36: 6 Med8 ATORVASTATIN
37: 4 Med9 NACL
38: 6 Med9 PREDNISOLON
39: 4 Med10 CARMEN
40: 6 Med10 LACTULOSE
41: 4 Med11 PROTEIN 88
42: 6 Med11 MIRTAZAPIN
43: 4 Med12 NOVALGIN
44: 6 Med12 LANTUS
45: 6 Med13 ACTRAPID
46: 6 Med14 PANTOZOL
47: 6 Med15 SALBUTAMOL
48: 6 Med16 AMPHO MORONAL
ID variable Agent
dat is modified by appending a row number which identifies each subject, then it is reshaped to long format using melt(). Missing or empty entries are removed and agent names are converted to uppercase for consistency.
Edit In long format it is also easy to check for duplicate agents per subject
taken[duplicated(taken, by = c("ID", "Agent"))]
ID variable Agent
1: 4 Med5 LITALIR
and remove the duplicates:
taken <- unique(taken, by = c("ID", "Agent"))
The final step creates what I believe is the expected result:
ID BETA STATIN Med1 Med2 Med3 Med4 Med5 Med6 Med7 Med8
1: 1 1 0 AMLODIPIN RAMIPRIL METOPROLOL
2: 2 0 1 PLAVIX SIMVASTATIN MIRTAZAPIN
3: 3 1 0 BISOPROLOL AMLODIPIN ASS VALSARTAN CHLORALDURAT Doxozosin TAMSULOSIN CIPRAMIL
4: 4 0 0 ASS ENALAPRIL L-THYROXIN LITALIR LITALIR AMLODIPIN CETIRIZIN HCT
5: 5 0 1 ASS ATORVASTATIN FOSAMAX CALCIUM PANTOZOL NOVAMINSULFON
6: 6 0 1 ASS FRAGMIN TORASEMID SPIRONOLACTON LORZAAR PROTECT VESIKUR ROCALTROL ATORVASTATIN
Please note the additional columns with the counts by cluster (due to limited space not all columns of the result are shown here). This is created by:
cluster[taken, on = .(Agent)][
, dcast(.SD, ID ~ Purpose, length)][
dat, on = "ID"][
, "NA" := NULL][]
using the following operations:
Join cluster and taken to have Purpose appended
Reshape to wide format, one row per subject and one column per purpose, thereby counting the number of occurrences
Join this result with the original data dat
Remove the superfluous column of NA counts
Data
dat <- structure(list(Med1 = c("AMLODIPIN", "PLAVIX", "BISOPROLOL",
"ASS", "ASS", "ASS"), Med2 = c("RAMIPRIL", "SIMVASTATIN", "AMLODIPIN",
"ENALAPRIL", "ATORVASTATIN", "FRAGMIN"), Med3 = c("METOPROLOL",
"MIRTAZAPIN", "ASS", "L-THYROXIN", "FOSAMAX", "TORASEMID"), Med4 = c("",
"", "VALSARTAN", "LITALIR", "CALCIUM", "SPIRONOLACTON"), Med5 = c("",
"", "CHLORALDURAT", "LITALIR", "PANTOZOL", "LORZAAR PROTECT"),
Med6 = c("", "", "Doxozosin", "AMLODIPIN", "NOVAMINSULFON",
"VESIKUR"), Med7 = c("", "", "TAMSULOSIN", "CETIRIZIN", "",
"ROCALTROL"), Med8 = c("", "", "CIPRAMIL", "HCT", "", "ATORVASTATIN"
), Med9 = c("", "", "", "NACL", "", "PREDNISOLON"), Med10 = c("",
"", "", "CARMEN", "", "LACTULOSE"), Med11 = c("", "", "",
"PROTEIN 88", "", "MIRTAZAPIN"), Med12 = c("", "", "", "NOVALGIN",
"", "LANTUS"), Med13 = c("", "", "", "", "", "ACTRAPID"),
Med14 = c("", "", "", "", "", "PANTOZOL"), Med15 = c("",
"", "", "", "", "SALBUTAMOL"), Med16 = c("", "", "", "",
"", "AMPHO MORONAL")), class = "data.frame", row.names = c(NA,
-6L))
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Wix Directory Problems
I have two questions about directories when using fragments in Windows Installer XML.
I got this fragment file from the heat.exe:
<?xml version="1.0" encoding="utf-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
<Fragment>
<DirectoryRef Id="TARGETDIR">
<Directory Id="dir08A07F5561FBEB6B9772467C730F6445" Name="Test" />
</DirectoryRef>
</Fragment>
<Fragment>
<ComponentGroup Id="InstallationFiles">
<Component Id="cmp071F7F8F6B6027C8D2841272FE526A2B" Directory="dir08A07F5561FBEB6B9772467C730F6445" Guid="{CCCB70AC-29F5-4DAA-B03E-1A2266649AB6}">
<File Id="fil63087E96FFB31F9E39B642CE8914F48B" KeyPath="yes" Source="SourceDir\dmedv.jpg" />
</Component>
<Component Id="cmpAE6CBEDA75641CF25BA9996AEB74A0DE" Directory="dir08A07F5561FBEB6B9772467C730F6445" Guid="{F5DABCAB-95D1-4197-A49F-E5F052A8E7EF}">
<File Id="filD27F2F6B26F5C14563865FE6C2AD5D50" KeyPath="yes" Source="SourceDir\Files.txt" />
</Component>
<Component Id="cmp25C5EADB5C0A9E779D20EC7B77BD42B0" Directory="dir08A07F5561FBEB6B9772467C730F6445" Guid="{E301B04A-6EA5-496B-A58A-8898110BE57C}">
<File Id="fil7C91C48D9AA0F2FE0EB37A21F108037F" KeyPath="yes" Source="SourceDir\readme.txt" />
</Component>
<Component Id="cmpD387AB4B40EDF14BF271ADDA7B71D2B7" Directory="dir08A07F5561FBEB6B9772467C730F6445" Guid="{6AF61DF4-32D0-4E7C-95B8-1DB9E7409029}">
<File Id="fil966691BA382AFC9343430FE162643432" KeyPath="yes" Source="SourceDir\readme1.txt" />
</Component>
<Component Id="cmpB86212407C1BEA12838C8C7B20495E9F" Directory="dir08A07F5561FBEB6B9772467C730F6445" Guid="{921E971E-E224-464C-9FBC-FBC5F78B3E5B}">
<File Id="fil61CD8EF43EA29DF58454E9A19F8C1EF9" KeyPath="yes" Source="SourceDir\readme2.txt" />
</Component>
<Component Id="cmpE4143B48FF854AE84F6054D4636FDE81" Directory="dir0ADF7E89B935DD39670130B4DC1D670E" Guid="{6F248718-93DD-4850-A18E-BD7079F738D5}">
<File Id="fil03847B355B6AADE5E4E04D143C92BC67" KeyPath="yes" Source="SourceDir\Test2\dmedv2.jpg" />
</Component>
</ComponentGroup>
</Fragment>
<Fragment>
<DirectoryRef Id="dir08A07F5561FBEB6B9772467C730F6445" />
</Fragment>
<Fragment>
<DirectoryRef Id="dir0ADF7E89B935DD39670130B4DC1D670E" />
</Fragment>
<Fragment>
<DirectoryRef Id="dir08A07F5561FBEB6B9772467C730F6445">
<Directory Id="dir0ADF7E89B935DD39670130B4DC1D670E" Name="Test2" />
</DirectoryRef>
</Fragment>
</Wix>
and I have this wix installer file:
<?xml version='1.0' encoding='windows-1252'?>
<?define ProductVersion="1.0.0.0"?>
<?define ProductName="DMServices Installer"?>
<?define Manufacturer="DM EDV- und Bürosysteme GmbH"?>
<Wix xmlns='http://schemas.microsoft.com/wix/2006/wi' xmlns:iis='http://schemas.microsoft.com/wix/IIsExtension'>
<Product Name="$(var.ProductName)" Id='BB7FBBE4-0A25-4cc7-A39C-AC916B665220' UpgradeCode='8A5311DE-A125-418f-B0E1-5A30B9C667BD'
Language='1033' Codepage='1252' Version="$(var.ProductVersion)" Manufacturer="$(var.Manufacturer)">
<Package Id='*' Keywords='Installer'
Description="DMService Installer Setup"
Manufacturer='DM EDV- und Bürosysteme GmbH'
InstallerVersion='100' Languages='1033' Compressed='yes' SummaryCodepage='1252' />
<Media Id='1' Cabinet='Sample.cab' EmbedCab='yes' DiskPrompt="CD-ROM #1" />
<Property Id='DiskPrompt' Value="the man" />
<PropertyRef Id="NETFRAMEWORK35"/>
<Condition Message='This setup requires the .NET Framework 3.5.'>
<![CDATA[Installed OR (NETFRAMEWORK35)]]>
</Condition>
<Directory Id='TARGETDIR' Name='SourceDir'>
<Directory Id='ProgramFilesFolder'>
<Directory Id='DM' Name='DM EDV'>
<Directory Id='INSTALLDIR' Name='DMServices'>
</Directory>
</Directory>
</Directory>
</Directory>
<Feature Id='InstallationFiles' Title='InstallationFiles' Level='1'>
<ComponentGroupRef Id='InstallationFiles' />
</Feature>
</Product>
</Wix>
So far.
Now when I compile these files to wixobj, the compiler shows errors because the files can't be found. The files are in a directory called "Test", but in the fragment it's named SourceDir.
As a little workaround I can copy the Test directory and call it SourceDir ;-). Then my setup will be created.
How can I do it without a second directory?
EDIT: That problem is solved.
Now I install my package, but whatever I do, the files get installed to C:\Test.
I want them to be installed in my Program Files directory.
In many examples I can do it, like in the file above, but I have to know the GUIDs.
The whole point of using WiX (heat) is to gather all the files from one directory automatically, without doing it by hand.
So how do I install the files into the Program Files directory?
A:
Take a closer look at -dr switch of heat.exe. You can put the necessary directory reference there. So, define your directory structure in the main file as you do now, and provide correct directory ID to heat.exe.
UPDATE:
Ok, the following works for me. The main directory structure:
<Directory Id="TARGETDIR" Name="SourceDir">
<Directory Id="INSTALLLOCATION" Name="My folder">
<Directory Id="WebsiteFolder" Name="Website">
...
</Directory>
</Directory>
</Directory>
The Feature references the ComponentGroup:
<Feature Id="ProductFeature" Title="!(loc.ProductFeature.Title)" Level="100">
...
<ComponentGroupRef Id="WebsiteFolderComponentGroup"/>
...
</Feature>
The heat.exe generates the following fragment:
<Fragment>
<DirectoryRef Id="WebsiteFolder">
<Component Id="cmp1" Guid="GUID-GOES-HERE">
<File Id="fil1" KeyPath="yes" Source="$(var.WebsiteFolderSource)\Default.aspx" />
</Component>
<Component Id="cmp2" Guid="GUID-GOES-HERE">
<File Id="fil2" KeyPath="yes" Source="$(var.WebsiteFolderSource)\default.css" />
</Component>
<Directory Id="dir1" Name="App_Browsers">
<Component Id="cmp3" Guid="GUID-GOES-HERE">
<File Id="fil3" KeyPath="yes" Source="$(var.WebsiteFolderSource)\App_Browsers\Form.browser" />
</Component>
</Directory>
<Directory Id="App_Config" Name="App_Config">
<Component Id="cmp4" Guid="GUID-GOES-HERE">
<File Id="fil4" KeyPath="yes" Source="$(var.WebsiteFolderSource)\App_Config\ConnectionStrings.config" />
</Component>
</Directory>
<Directory Id="bin" Name="bin">
<Component Id="cmp5" Guid="GUID-GOES-HERE">
<File Id="fil5" KeyPath="yes" Source="$(var.WebsiteFolderSource)\bin\MySystem.Web.UI.dll" />
</Component>
<Component Id="cmp6" Guid="GUID-GOES-HERE">
<File Id="fil6" KeyPath="yes" Source="$(var.WebsiteFolderSource)\bin\Another.dll" />
</Component>
...
</Directory>
...
</Directory>
...
<ComponentGroup Id="WebsiteFolderComponentGroup">
<ComponentRef Id="cmp1" />
<ComponentRef Id="cmp2" />
<ComponentRef Id="cmp3" />
<ComponentRef Id="cmp4" />
...
</ComponentGroup>
And finally, the heat command which generates necessary output looks like this (Nant sample):
<exec program="heat.exe" verbose="true" basedir="${paths.source}">
<arg line='dir "${paths.dist.website}"'/><!-- Notice the quotes inside the attributes -->
<arg line='-srd'/>
<arg line='-dr WebsiteFolder'/>
<arg line='-cg WebsiteFolderComponentGroup'/>
<arg line='-out "${paths.harvest}\website.wxs"'/>
<arg line='-ke -sfrag -scom -sreg -gg'/>
<arg line='-var var.WebsiteFolderSource'/>
</exec>
These snippets contain enough information to understand how it all works. Play with heat.exe switches to find out the combination you need. Good luck!
|
{
"pile_set_name": "StackExchange"
}
|
Q:
awakeFromNib, initWithCoder, initWithFrame not being called
None of awakeFromNib, initWithFrame, initWithCoder are being called in my custom TableViewCell when a cell is dequeued with dequeueReusableCellWithIdentifier.
Note that the cell is registered in code like:
[self.tableView registerClass:[RGTableViewCell class] forCellReuseIdentifier:@"1"];
An RGTableViewCell is being dequeued, but none of the initialization methods that I mentioned are called. I was hoping to do some setup in one of those methods.
Thanks for any hints,
Cheers
A:
The designated initializer for a UITableViewCell is initWithStyle:reuseIdentifier:, as is stated in the class reference. If you are not using a xib or storyboard to create your cell, that's the initializer that should be called.
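So for a cell registered in code, the usual place for one-time setup is an override of that designated initializer, roughly like this sketch:
// RGTableViewCell.m
- (instancetype)initWithStyle:(UITableViewCellStyle)style
              reuseIdentifier:(NSString *)reuseIdentifier
{
    self = [super initWithStyle:style reuseIdentifier:reuseIdentifier];
    if (self) {
        // one-time setup for the cell goes here
    }
    return self;
}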
|
{
"pile_set_name": "StackExchange"
}
|
Q:
In App Purchase restore Not Working - iOS Swift StoreKit
I am using SwiftyStoreKit to handle my in app purchasing. I have one non-consumable in-app purchase. I am unable to restore the purchase. I am testing this on a release version of my app where I have been charged before for it, but it will not restore. The code I am using to call SwiftyStoreKit is as follows:
SwiftyStoreKit.restorePurchases(atomically: true) { results in
if results.restoreFailedProducts.count > 0 {
print("Restore Failed: \(results.restoreFailedProducts)")
}
else if results.restoredProducts.count > 0 {
print("Restore Success: \(results.restoredProducts)")
self.defaults?.set(true, forKey: "UnlockApp")
NotificationCenter.default.post(name: Notification.Name(rawValue: "transition"), object: nil)
self.dismissView()
} else {
print("Nothing to Restore")
}
}
As you can see, I set a user default in order to unlock features in the app, but this never triggers; it always comes back with "Nothing to Restore".
Has anyone dealt with this, or know a possible reason for this behavior?
Edit: This is on a physical device, on a release version (not sandboxed purchase)
A:
The solution, when I have a developer version on the phone, was that I must:
Uninstall the developer version
Restart iPhone
Download production version from app store
Then the restore purchase option will work
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Creating arrays in C++ without knowing the length ahead of time
I'm working on a small program to help speed up some data analysis for my lab work. It's supposed to read in data from a text file, create a bunch of arrays containing this data and then do some maths.
The problem I keep running into is that I don't know how many lines the original text file will have, so I don't know how big to make my arrays. I'm very new to C++ and right now I don't feel comfortable with dynamically sized arrays. Here's a bit of the code:
// first determine the length of the file
ifstream dataFile ("xys_data.txt");
const int LENGTH = count(istreambuf_iterator<char>(dataFile), istreambuf_iterator<char>(), '\n'); // counts the number of new lines
// declare an array of type dataPoint
dataPoint data[LENGTH];
When I try and compile this i get the error
expected constant expression
cannot allocate an array of constant size 0
'data' : unknown size
But haven't I defined the LENGTH to be constant?
Any help will be appreciated.
EDIT
Following the advice of almost all of you, I have started using std::vector. I have one last issue that I'm a bit shaky on.
In the first attempt at the program I defined a data structure:
struct dataPoint
{
double x; // x values
double y; // y values
double s; // sigma values
};
Then when I read the data from the file, I sent it to this structure like so
int j = 0;
while (!dataFile.eof()) // this loop reads each row of data into x, y, s until it reaches the end of the file
{
dataFile >> data[j].x >> data[j].y >> data[j].s;
j++;
}
Is there a way I can do this using vectors? My first thought is to define the vectors x, y and s and replace the data[j].x with x in the loop, but this doesn't work.
A:
You say that you're not comfortable with dynamically-sized arrays, or std::vector.
I'm afraid that you'll have to figure out how to get comfortable here, because that's precisely what's std::vector is for.
The only other alternative is to use the gcc compiler. gcc does allow you to have variable-sized arrays. It's a compiler extension, it's not standard C or C++:
void method(size_t n)
{
dataPoint array[n];
// Do something with the array.
}
A:
First and foremost, built-in arrays in C++ have to have a compile-time size. It is not enough to declare your LENGTH variable const; it also has to be a compile-time constant. Your LENGTH is not a compile-time constant, so declaring an array of size LENGTH is not possible. This is what the compiler is trying to tell you.
When you need to build an array, whose size is not known in advance, you typically have at least three approaches to choose from:
Two-pass reading. Make a "dry run" over the data source to determine the exact size of the future array. Allocate the array. Make a second pass over the data source to fill the array with data.
Reallocation. Use a reallocatable array. Allocate an array of some fixed size and fill it with data. If the array proves to be too small, reallocate it to bigger size and continue to fill it. Continue to read and reallocate until all data is read.
Conversion. Read the data into a cheaply and easily expandable data structure (like linked list), then convert it to array.
Each of these approaches has its own pros and cons, limitations and areas of applicability.
It looks like in your case you are trying to use the first approach. It is not a very good idea to use it when you are working with files. Firstly, making two passes over a file is not very efficient. Secondly, in general case it might create a race condition: the file might change between the passes. If you still want to go that way, just use std::vector instead of a built-in array.
However, in your case I would recommend using the second approach. Again, use std::vector to store your data. Read your data line by line and append it to the vector, item by item. The vector will automatically reallocate itself as necessary.
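A rough sketch of that second approach, reusing the dataPoint struct from the question and assuming whitespace-separated x, y, sigma values per line:
#include <fstream>
#include <vector>

struct dataPoint { double x, y, s; };

int main()
{
    std::ifstream dataFile("xys_data.txt");
    std::vector<dataPoint> data;

    dataPoint p;
    while (dataFile >> p.x >> p.y >> p.s)  // stops cleanly at end of file or bad input
        data.push_back(p);                 // the vector grows as needed

    // data.size() now tells you how many points were read
}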
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why would Netflix switch from its five-star rating system to a like/dislike system?
Netflix used to base its suggestions on a user's submitted ratings of other movies/shows. This rating system had five stars.
Now, Netflix allows users to like/dislike (thumbs-up/thumbs-down) movies/shows. They claim it's easier to rate movies.
Wouldn't this 2-way classification be statistically less predictive than a 5-way classification system? Wouldn't it capture less variation?
A:
According to an article by Preston & Coleman (2000), the reliability of a 2-point scale does not differ markedly from that of a 5-point scale:
The subject of measurement was satisfaction with restaurants, but it translates well to movie ratings. Ease of use, how quick the scale is to use and how well a person can express their feelings on the different scales were measured as well. The results are as follows:
It is clear that users find the 2-point scale slightly easier and quicker to use than the 5-point scale, but also quite inadequate for expressing their true beliefs. This indicates that a 2-point scale does not capture the underlying variability very well and results in a loss of information. Discrimination indices are also markedly poorer for 2-point scales than for 5-point scales.
Taking all of the above into account, I would speculate that Netflix is willing to trade some voting precision to lure more users into voting. I think they prefer more people voting since it increases sample coverage, which can lead to a better understanding of less engaged users. The marginal value of additional information is likely much higher for less engaged users than for engaged ones.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
"OR " condition in where clause of linq query is returning null
The following conditions execute properly when used individually, but when an OR condition is included to combine both conditions in a single query, it returns null.
userSessionList = userSessionList.Where(u =>
(u.User.FirstName.ToLower().Contains(name)) ||
(u.User.LastName.ToLower().Contains(name))
)
.ToList();
A:
Use Null-conditional operators
userSessionList = userSessionList.Where(u =>
(u.User?.FirstName?.ToLower().Contains(name) == true) ||
(u.User?.LastName?.ToLower().Contains(name) == true)
)
.ToList();
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How to change "sender:" field in the header of Logwatch emails
I changed MailFrom="" in /usr/share/logwatch/default.conf/logwatch.conf
That altered "From:" in the headers, but "Sender:" is still "root@ip-xx-xx-xx-xx.domain"
EC2 Linux AMI beta,
Postfix is the mailer
A:
Arrived here via google with the same problem.
Adding the "-f user" option to sendmail in /usr/share/logwatch/default.conf/logwatch.conf had no effect.
Turns out logwatch.pl processes /usr/share/logwatch/dist.conf/logwatch.conf after /usr/share/logwatch/default.conf/logwatch.conf.
Edit or remove the MailFrom = root override in /usr/share/logwatch/dist.conf/logwatch.conf for it to work.
A:
There are several locations where Logwatch configuration details can be specified, with each one superseding the previous one:
/usr/share/logwatch/default.conf/*
/etc/logwatch/conf/dist.conf/*
/etc/logwatch/conf/*
The script / command line arguments
It is recommended to change: /etc/logwatch/conf/logwatch.conf
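For example, a minimal override in that file could be just this one line (the address is a placeholder):
# /etc/logwatch/conf/logwatch.conf
MailFrom = logwatch@example.com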
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Why the result of this java program is '44'?
I thought the result would be '43' because the type of q was 'poly1'. However, the result was '44'. I can't understand that. Please explain.
class poly1 {
int a;
public poly1(){
a = 3;
}
public void print_a(){
System.out.print(a);
}
}
public class poly2 extends poly1{
public poly2(){
a = 4;
}
public void print_a(){
System.out.print(a);
}
public static void main(String[] args){
poly2 p = new poly2();
p.print_a();
poly1 q = new poly2();
q.print_a();
}
}
A:
When you invoke a class' constructor, the class' super type constructor is invoked first (all the way up the chain of super types).
When you invoke
new poly2();
The poly1 constructor is invoked first (because poly1 is a super type of poly2), setting a to 3 and then the poly2 constructor is invoked, setting a to 4 which is what you see.
the type of q was 'poly 1'
What seems to confuse you is that in the following code
poly1 q = new poly2();
the variable q is declared as type poly1. That makes no difference in this case. What actually matters is the run time type of the object. That's determined by the new statement. In this case, the object is of dynamic type poly2.
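A quick way to see the order for yourself is a stripped-down pair of classes with trace prints in the constructors (hypothetical names):
class Base {
    Base() { System.out.println("Base constructor"); }
}
class Derived extends Base {
    Derived() { System.out.println("Derived constructor"); }
    public static void main(String[] args) {
        Base q = new Derived();
        // output:
        // Base constructor
        // Derived constructor
    }
}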
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Add and parse my own JSON object in my Shopify Liquid theme
I added a JSON file to the assets folder in my Shopify Liquid theme. I want to get and parse this JSON object in a jQuery method from a JavaScript file in my assets folder. I've tried including the JSON file as an asset_url and I've tried using jQuery's getJSON() method with the asset's path, but the file can't be found. Does anyone know a good approach for adding a custom data object to a Shopify Liquid theme and the best way to access it?
A:
You could save your JSON in a .liquid file and include it in your template. You'd define the JSON like this:
<script type="text/javascript">
window.my_json_obj = {
...
}
</script>
That way, you could access window.my_json_obj in your jQuery script. But if a key/value storage approach is enough for your needs, you should probably take a look at Shopify's metafields
|
{
"pile_set_name": "StackExchange"
}
|
Q:
How can I compare tuples using MySQL?
One more problem I need your help with.
Make a list of medications that have been entered as the same (identical_with) but differ in their association with the disease.
identical_with
association
I don't know how to do that.
The result should be in that case:
result
A:
To solve your problem, you need to use the association table twice. The following code should be OK:
select
i.Name_1, i.Name_2
from
association a
inner join
identical_with i
on i.Name_1 = a.Name
inner join
association a2
on i.Name_2 = a2.Name
where
a2.Fachname <> a.Fachname
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Network manager with NSURLConnection
I am trying to create something like a Network Manager using NSUrlConnections.
For that, I want to be able to send multiple requests, but I also want to be able to identify the client(delegate) that made the request when the response arrives.
I have created a NSDictionary like this:
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:SERVER_TIMEOUT];
....
[clients setObject:client forKey:connection];
in "- (void)connectionDidFinishLoading:(NSURLConnection *)connection" I have something like this:
client = (id<RTANetworkDelegate>)[clients objectForKey:connection];
[clients removeObjectForKey:connection];
The Network Manager is the delegate for all the connections, I do some preprocessing and then I send the (parsed) response to the right delegate, that sent the request in the first place.
Unfortunately, it appears that an NSURLConnection cannot be used as a dictionary key, since it does not implement copyWithZone: (NSCopying), and I get the error:
-[NSURLConnection copyWithZone:]:
unrecognized selector sent to
instance
Any help would be much appreciated!
Thanks!
=======================================
[Edit] I already found this in the meantime:
http://blog.emmerinc.be/index.php/2009/03/15/multiple-async-nsurlconnections-example/
It seems to solve my problem. I still don't know if it's the best solution though. I thought I would post it here since it might help others too.
A:
You could use the -hash value of the connection object, boxed as an NSNumber, as the key:
[clients setObject:client forKey:@([connection hash])];
I'd stay away from the actual URL or anything similar as two requests could potentially have the same URL.
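Looking the client up again on completion then mirrors the code from the question; the hash is boxed into an NSNumber so it can act as a dictionary key (sketch):
- (void)connectionDidFinishLoading:(NSURLConnection *)connection
{
    id<RTANetworkDelegate> client = [clients objectForKey:@([connection hash])];
    // ... forward the parsed response to this client ...
    [clients removeObjectForKey:@([connection hash])];
}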
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Fixing scrolling in nano running in tmux in mate-terminal
The problem:
I open a terminal (in Linux Mint, so mate-terminal)
zsh is the shell
Then I run tmux
Edit a file with nano
Scroll up and down that file with the cursor
Issue: When scrolling down in nano, only the bottom half of the terminal window gets refreshed
Issue: When scrolling up in nano, only the top half of the terminal window gets refreshed
The complete nano view of file does not get refreshed in my terminal window when scrolling. Any tips?
Edit: my .tmux.conf
It seems that this line specifically is the culprit (as commenting it out fixes the problem):
set -g default-terminal "xterm-256color"
I'm pretty sure I added that line because I have issues even running nano during an SSH session.
Here is the full file:
set-option -g default-shell /bin/zsh
# Make sure tmux knows we're using 256 colours, for
# correct colourised output
set -g default-terminal "xterm-256color"
# The following were marked as "unknown", so
# I don't know what I'm doing wrong.
#set -g mode-mouse on
#setw -g mouse-select-window on
#setw -g mouse-select-pane on
# Attempting to stop "alert" sound upon startup
# but none of these are working...
set-option bell-on-alert off
set-option bell-action none
set-option visual-bell off
A:
From the tmux FAQ:
******************************************************************************
* PLEASE NOTE: most display problems are due to incorrect TERM! Before *
* reporting problems make SURE that TERM settings are correct inside and *
* outside tmux. *
* *
* Inside tmux TERM must be "screen" or similar (such as "screen-256color"). *
* Don't bother reporting problems where it isn't! *
* *
* Outside, it must match your terminal: particularly, use "rxvt" for rxvt *
* and derivatives. *
******************************************************************************
http://tmux.git.sourceforge.net/git/gitweb.cgi?p=tmux/tmux;a=blob;f=FAQ
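In practice, replacing the offending line in .tmux.conf with something like the following usually fixes the redraw problem while keeping 256 colours (leave TERM=xterm-256color to the terminal emulator itself):
set -g default-terminal "screen-256color"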
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Set magic line width to the anchor not the list item
Hello, I am using a magic line and got it working, but the magic line is taking the width of the <li> element, and I want it to take the width of the <a> element. Can anyone help me edit the jQuery code to do this?
Here is the html:
<ul id="menu-main-menu">
<li><a href="#">text</a></li>
<li><a href="#">text text</a></li>
<li><a href="#">text text text</a></li>
<li><a href="#">text text text text</a></li>
</ul>
css:
#menu-main-menu {
display: flex;
}
#menu-main-menu li {
flex: 1 1 100%;
text-align: center;
list-style-type: none;
position: relative;
}
#menu-main-menu li a {
padding: 0 10px;
}
#magic-line {
position: absolute !important;
top: 0px;
left: 0;
width: 100px;
height: 2px;
background: #000;
padding-top: 0 !important;
margin-right: 0 !important;
}
jquery (sorry about the length)
jQuery(function() {
var $el, leftPos, newWidth,
$mainNav = jQuery("#menu-main-menu");
$mainNav.append("<li id='magic-line'></li>");
var $magicLine = jQuery("#magic-line");
if( jQuery('#menu-main-menu .current-menu-ancestor').length ) {
var currentPageWidth = jQuery('#menu-main-menu .current-menu-ancestor > a').parent().width();
var currentPageLeft = jQuery('#menu-main-menu .current-menu-ancestor > a').parent().position().left;
}
if( jQuery('#menu-main-menu .current-menu-item').length ) {
var currentPageWidth = jQuery('#menu-main-menu .current-menu-item > a').parent().width();
var currentPageLeft = jQuery('#menu-main-menu .current-menu-item > a').parent().position().left;
}
$magicLine
.width(currentPageWidth)
.css("left", currentPageLeft)
.data("origLeft", $magicLine.position().left)
.data("origWidth", $magicLine.width());
jQuery("#menu-main-menu li").hover(function() {
$el = jQuery(this);
leftPos = $el.position().left;
newWidth = $el.width();
$magicLine.stop().animate({
left: leftPos,
width: newWidth
});
}, function() {
$magicLine.stop().animate({
left: $magicLine.data("origLeft"),
width: $magicLine.data("origWidth")
});
});
jQuery("#menu-main-menu li .sub-menu li").hover(function() {
$magicLine.stop()
}, function() {
$magicLine.stop().animate({
left: $magicLine.data("origLeft"),
width: $magicLine.data("origWidth")
});
});
}, 1500);
and the jsfiddle
here
Thanks!
A:
You should change this
jQuery("#menu-main-menu li").hover(function() {
$el = jQuery(this);
leftPos = $el.position().left;
newWidth = $el.width();
$magicLine.stop().animate({
left: leftPos,
width: newWidth
});
}, function() {
$magicLine.stop().animate({
left: $magicLine.data("origLeft"),
width: $magicLine.data("origWidth")
});
});
into
jQuery("#menu-main-menu li").hover(function() {
$el = jQuery(this);
leftPos = $el.position().left + $el.children().position().left ;
newWidth = $el.children().width();
$magicLine.stop().animate({
left: leftPos,
width: newWidth
});
}, function() {
$magicLine.stop().animate({
left: $magicLine.data("origLeft"),
width: $magicLine.data("origWidth")
});
});
|
{
"pile_set_name": "StackExchange"
}
|
Q:
C - Compiling with -Wall doesn't warn about uninitialized variables
I have an example flawed program that should give exactly one warning about an uninitialized variable, but when I compile it gcc doesn't give me any warnings.
Here is the code:
#include <stdio.h>
int main()
{
int foo;
printf("I am a number: %d \n", foo);
return 0;
}
Here is what I run: cc -Wall testcase.c -o testcase
And I get no feedback. As far as I know this should produce:
testcase.c: In function 'main':
testcase.c:7: warning: 'foo' is used uninitialized in this function
It appears to warn correctly in a similar example in Zed Shaw's C tutorial. This is the example I had first tried, and I noticed that it wasn't working as expected.
Any ideas?
EDIT:
Version of gcc:
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)
A:
Are you compiling with optimisation turned on? Here's what my man gcc page says:
-Wuninitialized
Warn if an automatic variable is used without first being
initialized or if a variable may be clobbered by a "setjmp" call.
These warnings are possible only in optimizing compilation, because
they require data flow information that is computed only when
optimizing. If you do not specify -O, you will not get these
warnings. Instead, GCC will issue a warning about -Wuninitialized
requiring -O.
My version of gcc is:
i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)
Actually, I just tried this on a gcc 4.4.5 and I do get the warning without using -O. So it depends on your compiler version.
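So with an older gcc, compiling with optimisation enabled should surface the warning, e.g.:
cc -Wall -O2 testcase.c -o testcase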
A:
Use Clang and be done with it. Seems like a bug in GCC, because Clang warns like it should.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
linux email with Kerio
I have Kerio installed on Red Hat Enterprise Linux, which works well for the PC clients; however, I am not receiving Linux-generated emails (e.g. cron job output or email generated by biabam or mail).
Is there a way to have Linux use Kerio as an email server in the same way sendmail or postfix works?
A:
I found this link:
http://support.kerio.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=233&nav=0,1,8
which tells you to create a link to /opt/kerio/mailserver/sendmail, replacing the /usr/sbin/sendmail file with the Kerio version. I also had to add 127.0.0.1, port 25 to the SMTP service.
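The symlink step from that article boils down to roughly the following (backing up the original binary first is just a sensible precaution, not something the article mandates):
mv /usr/sbin/sendmail /usr/sbin/sendmail.orig
ln -s /opt/kerio/mailserver/sendmail /usr/sbin/sendmail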
This meant that I can send email using biabam and mail but the crontab emails (stdout) are not appearing anywhere.
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Vue-CLI webpack-simple template - How to run build
I have a Vue.js app that runs perfectly fine with npm run dev, but when doing npm run build and opening the index.html, there is no activity, only a blank page.
The script path referenced in the html is dist/build.js, which is loaded correctly.
This is the webpack config:
var path = require('path')
var webpack = require('webpack')
module.exports = {
entry: './src/main.js',
output: {
path: path.resolve(__dirname, 'dist'),
publicPath: 'dist/',
filename: 'build.js'
},
module: {
rules: [
{
test: /\.css$/,
use: [
'vue-style-loader',
'css-loader'
],
}, {
test: /\.vue$/,
loader: 'vue-loader',
options: {
loaders: {
}
// other vue-loader options go here
}
},
{
test: /\.js$/,
loader: 'babel-loader',
exclude: /node_modules/
},
{
test: /\.(png|jpg|gif|svg|png)$/,
loader: 'file-loader',
options: {
name: '[name].[ext]?[hash]'
}
}
]
},
resolve: {
alias: {
'vue$': 'vue/dist/vue.esm.js'
},
extensions: ['*', '.js', '.vue', '.json']
},
devtool: '#eval-source-map'
}
if (process.env.NODE_ENV === 'production') {
module.exports.devtool = '#source-map'
module.exports.plugins = (module.exports.plugins || []).concat([
new webpack.DefinePlugin({
'process.env': {
NODE_ENV: '"production"'
}
}),
new webpack.optimize.UglifyJsPlugin({
sourceMap: true,
compress: {
warnings: false
}
}),
new webpack.LoaderOptionsPlugin({
minimize: true
})
])
}
These are the dependencies in package.json:
"dependencies": {
"firebase": "^4.10.1",
"vue": "^2.5.11",
"vue-router": "^3.0.1",
"vuefire": "^1.4.5",
"vuex": "^3.0.1"
},
"devDependencies": {
"babel-core": "^6.26.0",
"babel-loader": "^7.1.2",
"babel-preset-env": "^1.6.0",
"babel-preset-stage-3": "^6.24.1",
"cross-env": "^5.0.5",
"css-loader": "^0.28.7",
"file-loader": "^1.1.4",
"vue-loader": "^13.0.5",
"vue-template-compiler": "^2.4.4",
"webpack": "^3.6.0",
"webpack-dev-server": "^2.9.1"
}
The dist folder and index.html are in the same directory. The index.html shows no content or errors in log.
A:
I'm assuming you're opening the index.html file directly, eg
file:///home/you/some-project-folder/index.html
The webpack-simple template assumes you'll be serving your app via an HTTP server with the app at the document root (ie /).
You can see this in the index.html...
<script src="/dist/build.js"></script>
Note the / prefix.
The idea is that you upload index.html and the dist folder to some hosting provider's server.
Now, you could edit this path to be dist/build.js and it may work but paths to assets will probably be wrong and any AJAX requests may not work due to browser limitations on resources loaded via file:///
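If you just want to test the production build locally, serving the project root over HTTP is usually enough; for example, assuming Python 3 is available:
cd my-project            # the folder containing index.html and dist/
python -m http.server 8000
# then open http://localhost:8000/ in the browser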
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Is it safe to use JavaScript's Math.max on an array of strings?
This seems to work, on an array of strings that look like numbers (they're numbers from a CSV file read in with csv-parse, which seems to convert everything into strings):
var a = ['123.1', '1234.0', '97.43', '5678'];
Math.max.apply(Math, a);
Returns 5678.
Does Math.max convert strings to numbers automatically?
Or should I do a + conversion myself first to be extra safe?
A:
Does Math.max convert strings to numbers automatically?
Quoting the ECMA Script 5.1 Specification for Math.max,
Given zero or more arguments, calls ToNumber on each of the arguments and returns the largest of the resulting values.
So, internally all the values are tried to convert to a number before finding the max value and you don't have to explicitly convert the strings to numbers.
But watch out for the NaN results if the string is not a valid number. For example, if the array had one invalid string like this
var a = ['123.1', '1234.0', '97.43', '5678', 'thefourtheye'];
console.log(Math.max.apply(Math, a));
// NaN
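If you want to be defensive about such values, one option is to convert and filter explicitly before taking the max, e.g.:
var nums = a.map(Number).filter(function (n) { return !isNaN(n); });
console.log(Math.max.apply(Math, nums));
// 5678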
|
{
"pile_set_name": "StackExchange"
}
|
Q:
Apache CXF issue on Glassfish
I created a web-service app based on Apache CXF (2.7.5), deployed it on a Glassfish 3.0.1 and it works fine till I turn on WS-Sec support. Then I get the following exception when I try to do a web-service request:
Caused by: javax.xml.crypto.NoSuchMechanismException: class configured for XMLSignatureFactory(provider: ApacheXMLDSig)cannot be found.
at javax.xml.crypto.dsig.XMLDSigSecurity.doGetImpl(Unknown Source) ~[webservices-osgi.jar:1.0]
at javax.xml.crypto.dsig.XMLDSigSecurity.getImpl(Unknown Source) ~[webservices-osgi.jar:1.0]
at javax.xml.crypto.dsig.XMLDSigSecurity.getImpl(Unknown Source) ~[webservices-osgi.jar:1.0]
at javax.xml.crypto.dsig.XMLSignatureFactory.findInstance(Unknown Source) ~[webservices-osgi.jar:1.0]
at javax.xml.crypto.dsig.XMLSignatureFactory.getInstance(Unknown Source) ~[webservices-osgi.jar:1.0]
at org.apache.ws.security.message.WSSecSignature.init(WSSecSignature.java:127) ~[wss4j-1.6.10.jar:1.6.10]
at org.apache.ws.security.message.WSSecSignature.<init>(WSSecSignature.java:120) ~[wss4j-1.6.10.jar:1.6.10]
at org.apache.cxf.ws.security.wss4j.policyhandlers.AbstractBindingBuilder.getSignatureBuilder(AbstractBindingBuilder.java:1730) ~[cxf-rt-ws-security-2.7.5.jar:2.7.5]
at org.apache.cxf.ws.security.wss4j.policyhandlers.AsymmetricBindingHandler.doSignature(AsymmetricBindingHandler.java:546) ~[cxf-rt-ws-security-2.7.5.jar:2.7.5]
at org.apache.cxf.ws.security.wss4j.policyhandlers.AsymmetricBindingHandler.doSignBeforeEncrypt(AsymmetricBindingHandler.java:147) ~[cxf-rt-ws-security-2.7.5.jar:2.7.5]
... 273 common frames omitted
Caused by: java.lang.ClassNotFoundException: org.apache.jcp.xml.dsig.internal.dom.DOMXMLSignatureFactory
at org.apache.felix.framework.ModuleImpl.findClassOrResourceByDelegation(ModuleImpl.java:744) ~[felix.jar:na]
at org.apache.felix.framework.ModuleImpl.access$100(ModuleImpl.java:61) ~[felix.jar:na]
at org.apache.felix.framework.ModuleImpl$ModuleClassLoader.loadClass(ModuleImpl.java:1656) ~[felix.jar:na]
at java.lang.ClassLoader.loadClass(ClassLoader.java:247) ~[na:1.6.0_43]
It seems that CXF invokes the XMLSignatureFactory class contained by Glassfish's default webservice provider implementation instead of invoking it's own one (it's in the xmlsec.jar file). All CXF files are packed into my war file and also have the <class-loader delegate="false" /> set in sun-web.xml.
Can someone explain why the Glassfish classloader works this way and how I could fix this?
A:
I managed to find out that Glassfish (at least the 3.0.1 version) modifies the default class loading behavior to "protect" some packages (mostly javax.* packages) on its classpath. That's the reason why it finds and uses classes in its modules directory instead of the ones in my war's lib.
To solve this a JVM option should be added to the domain.xml:
<jvm-options>-Dcom.sun.enterprise.overrideablejavaxpackages=javax.xml.crypto,javax.xml.crypto.dsig</jvm-options>
With this, Glassfish will allow you to use your own libs in your war file. But even with this setting it's problematic to use CXF with WS-Security alongside Metro. The better solution is to use a Glassfish with only the Web Profile, not the Full Profile, as the Web Profile doesn't have Metro included.
|
{
"pile_set_name": "StackExchange"
}
|