| anchor | positive | negative |
|---|---|---|
	Extremal Problems for t-Partite and t-Colorable Hypergraphs Fix integers $t \ge r \ge 2$ and an $r$-uniform hypergraph $F$. We prove that the maximum number of edges in a $t$-partite $r$-uniform hypergraph on $n$ vertices that contains no copy of $F$ is $c_{t,F}\binom{n}{r} + o(n^r)$, where $c_{t,F}$ can be determined by a finite computation. We explicitly define a sequence $F_1, F_2, \ldots$ of $r$-uniform hypergraphs, and prove that the maximum number of edges in a $t$-chromatic $r$-uniform hypergraph on $n$ vertices containing no copy of $F_i$ is $\alpha_{t,r,i}\binom{n}{r} + o(n^r)$, where $\alpha_{t,r,i}$ can be determined by a finite computation for each $i \ge 1$. In several cases, $\alpha_{t,r,i}$ is irrational. The main tool used in the proofs is the Lagrangian of a hypergraph.
 | 
	Lagrangian densities of enlargements of matchings in hypergraphs Abstract Determining the Turán density of a hypergraph is a central and challenging question in extremal combinatorics. We know very little about the Turán densities of hypergraphs; even Turán's conjecture on the Turán density of $K_4^3$, the smallest complete hypergraph, remains open. It turns out that the hypergraph Lagrangian method, a continuous optimization method, has been helpful in hypergraph extremal problems. In this paper, we explore this method further and try to understand the Turán densities of hypergraphs via Lagrangian densities. Given an integer $n$ and an $r$-uniform graph $H$, the Turán number of $H$, denoted by $ex(n, H)$, is the maximum number of edges in an $n$-vertex $r$-uniform graph not containing a copy of $H$. The Turán density of $H$, denoted by $\pi(H)$, is the limit of the function $ex(n,H)/\binom{n}{r}$ as $n \to \infty$. The Lagrangian density of $H$ is $\pi_\lambda(H) = \sup\{r!\lambda(F) : H \text{ is not contained in } F\}$, where $\lambda(F)$ is the Lagrangian of $F$. For any $r$-uniform graph $H$, Sidorenko showed that $\pi_\lambda(H)$ equals the Turán density of the extension of $H$, so studying Lagrangian densities of hypergraphs helps us better understand the behavior of their Turán densities. For a $t$-vertex $r$-uniform graph $H$, $\pi_\lambda(H) \ge r!\lambda(K_{t-1}^r)$ since $K_{t-1}^r$ does not contain $H$, where $K_{t-1}^r$ is the $(t-1)$-vertex complete $r$-uniform graph. We say that $H$ is $\lambda$-perfect if equality holds, i.e., $\pi_\lambda(H) = r!\lambda(K_{t-1}^r)$. A result of Motzkin and Straus shows that every graph is $\lambda$-perfect. It is natural and fundamental to explore which hypergraphs are $\lambda$-perfect. Sidorenko (1989) showed that the $(r-2)$-fold enlargement of a tree satisfying the Erdős–Sós conjecture with order at least $A_r$ is $\lambda$-perfect, where $A_r$ is the last maximum of the function $g_r(x) = (r+x-3)^{-r}\prod_{i=1}^{r-1}(i+x-2)$ for $x \ge 2$. Using the so-called generalised Lagrangian of hypergraphs, Jenssen (2017) showed that the $(r-2)$-fold enlargement of $M_2^2$ for $r = 5, 6$ or $7$ is $\lambda$-perfect, where $M_s^r$ is the $r$-uniform matching of size $s$. Sidorenko's (1989) result implies that the $(r-2)$-fold enlargement of $M_t^2$ for $r = 5$ and $t \ge 4$, or $r = 6$ and $t \ge 6$, or $r = 7$ and $t \ge 8$, is $\lambda$-perfect, so there are still some gaps between the results of Jenssen and Sidorenko. In this paper we fill the gaps for $r = 5$ or $6$. We also determine the Lagrangian densities of the $(r-3)$-fold enlargement of $M_t^3$ for $r = 5$ or $6$.
 | 
	Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first.
 | 
					
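Both hypergraph abstracts in the row above lean on the Lagrangian $\lambda(F)$ and the Lagrangian density $\pi_\lambda(H)$ without writing them out. For reference, the standard definitions from the extremal-combinatorics literature (collected here by the editor, not text from the dataset) are:

```latex
% The Lagrangian of an r-uniform hypergraph F, the Turan density of H,
% and the Lagrangian density of H (standard definitions).
\[
  \lambda(F) = \max\Bigl\{ \sum_{e \in E(F)} \prod_{i \in e} x_i \;:\;
      x_i \ge 0,\ \textstyle\sum_{i \in V(F)} x_i = 1 \Bigr\},
\]
\[
  \pi(H) = \lim_{n \to \infty} \frac{ex(n,H)}{\binom{n}{r}},
  \qquad
  \pi_\lambda(H) = \sup\bigl\{ r!\,\lambda(F) \;:\; F \text{ does not contain } H \bigr\}.
\]
```
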
	Extremal Problems for t-Partite and t-Colorable Hypergraphs Fix integers $t \ge r \ge 2$ and an $r$-uniform hypergraph $F$. We prove that the maximum number of edges in a $t$-partite $r$-uniform hypergraph on $n$ vertices that contains no copy of $F$ is $c_{t,F}\binom{n}{r} + o(n^r)$, where $c_{t,F}$ can be determined by a finite computation. We explicitly define a sequence $F_1, F_2, \ldots$ of $r$-uniform hypergraphs, and prove that the maximum number of edges in a $t$-chromatic $r$-uniform hypergraph on $n$ vertices containing no copy of $F_i$ is $\alpha_{t,r,i}\binom{n}{r} + o(n^r)$, where $\alpha_{t,r,i}$ can be determined by a finite computation for each $i \ge 1$. In several cases, $\alpha_{t,r,i}$ is irrational. The main tool used in the proofs is the Lagrangian of a hypergraph.
 | 
	Lagrangian densities of enlargements of matchings in hypergraphs Abstract Determining the Turán density of a hypergraph is a central and challenging question in extremal combinatorics. We know very little about the Turán densities of hypergraphs; even Turán's conjecture on the Turán density of $K_4^3$, the smallest complete hypergraph, remains open. It turns out that the hypergraph Lagrangian method, a continuous optimization method, has been helpful in hypergraph extremal problems. In this paper, we explore this method further and try to understand the Turán densities of hypergraphs via Lagrangian densities. Given an integer $n$ and an $r$-uniform graph $H$, the Turán number of $H$, denoted by $ex(n, H)$, is the maximum number of edges in an $n$-vertex $r$-uniform graph not containing a copy of $H$. The Turán density of $H$, denoted by $\pi(H)$, is the limit of the function $ex(n,H)/\binom{n}{r}$ as $n \to \infty$. The Lagrangian density of $H$ is $\pi_\lambda(H) = \sup\{r!\lambda(F) : H \text{ is not contained in } F\}$, where $\lambda(F)$ is the Lagrangian of $F$. For any $r$-uniform graph $H$, Sidorenko showed that $\pi_\lambda(H)$ equals the Turán density of the extension of $H$, so studying Lagrangian densities of hypergraphs helps us better understand the behavior of their Turán densities. For a $t$-vertex $r$-uniform graph $H$, $\pi_\lambda(H) \ge r!\lambda(K_{t-1}^r)$ since $K_{t-1}^r$ does not contain $H$, where $K_{t-1}^r$ is the $(t-1)$-vertex complete $r$-uniform graph. We say that $H$ is $\lambda$-perfect if equality holds, i.e., $\pi_\lambda(H) = r!\lambda(K_{t-1}^r)$. A result of Motzkin and Straus shows that every graph is $\lambda$-perfect. It is natural and fundamental to explore which hypergraphs are $\lambda$-perfect. Sidorenko (1989) showed that the $(r-2)$-fold enlargement of a tree satisfying the Erdős–Sós conjecture with order at least $A_r$ is $\lambda$-perfect, where $A_r$ is the last maximum of the function $g_r(x) = (r+x-3)^{-r}\prod_{i=1}^{r-1}(i+x-2)$ for $x \ge 2$. Using the so-called generalised Lagrangian of hypergraphs, Jenssen (2017) showed that the $(r-2)$-fold enlargement of $M_2^2$ for $r = 5, 6$ or $7$ is $\lambda$-perfect, where $M_s^r$ is the $r$-uniform matching of size $s$. Sidorenko's (1989) result implies that the $(r-2)$-fold enlargement of $M_t^2$ for $r = 5$ and $t \ge 4$, or $r = 6$ and $t \ge 6$, or $r = 7$ and $t \ge 8$, is $\lambda$-perfect, so there are still some gaps between the results of Jenssen and Sidorenko. In this paper we fill the gaps for $r = 5$ or $6$. We also determine the Lagrangian densities of the $(r-3)$-fold enlargement of $M_t^3$ for $r = 5$ or $6$.
 | 
	Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents on the streets of developing countries like Bangladesh. The lack of an overspeed alert, a rear camera and rear-obstacle detection, together with delayed maintenance, are well-known causes of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in several phases among passengers, drivers and even conductors to obtain useful and reliable results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
 | 
					
	HoloLens AR: Using Vuforia-Based Marker Tracking Together with Text Recognition in an Assembly Scenario Many real-world Microsoft HoloLens-based applications suffer from the problem of reliably recognizing and identifying movable objects within an environment. While the HoloLens is perfectly able to discern already known rooms, it still has trouble with reflective surfaces or identically shaped objects. Using dedicated recognition libraries for each task poses the issue of shared resource access in the rather controlled HoloLens environment. In this poster we present a solution for scenarios with hard-to-track and similarly shaped objects in an electrical cabinet assembly task, where the reflective cabinet is tagged with a marker and the prefabricated cables are differentiated by text-based labels.
 | 
	Evaluation of HoloLens Tracking and Depth Sensing for Indoor Mapping Applications The Microsoft HoloLens is a head-worn mobile augmented reality device that is capable of mapping its direct environment in real time as triangle meshes and simultaneously localizing itself within these three-dimensional meshes. The device is equipped with a variety of sensors, including four tracking cameras and a time-of-flight (ToF) range camera. Sensor images and their poses estimated by the built-in tracking system can be accessed by the user. This makes the HoloLens potentially interesting as an indoor mapping device. In this paper, we introduce the different sensors of the device and evaluate the complete system with respect to the task of mapping indoor environments. The overall quality of such a system depends mainly on the quality of the depth sensor together with its associated pose derived from the tracking system. For this purpose, we first evaluate the performance of the HoloLens depth sensor and its tracking system separately. Finally, we evaluate the overall system regarding its capability for mapping multi-room environments.
 | 
	On the Teaching Reform of Mathematics Course in Independent Colleges This paper summarizes the teaching reform of mathematics courses carried out by our higher mathematics teaching team, in view of the work carried out and the experience gained in recent years. Mathematics teachers in colleges and universities should change their educational concepts, incorporate the idea of mathematical models, encourage and guide students to participate in mathematical modeling competitions for college students, adopt a mixed online and offline teaching mode, innovate and flip the classroom teaching format, produce micro-classes, develop online teaching platforms, and strengthen the construction of the resource pool. These are effective ways to improve the mathematics level and ability of our students. The cultivation of students' innovative spirit and ability should be put first in mathematics teaching.
 | 
					
	HoloLens AR: Using Vuforia-Based Marker Tracking Together with Text Recognition in an Assembly Scenario Many real-world Microsoft HoloLens-based applications suffer from the problem of reliably recognizing and identifying movable objects within an environment. While the HoloLens is perfectly able to discern already known rooms, it still has trouble with reflective surfaces or identically shaped objects. Using dedicated recognition libraries for each task poses the issue of shared resource access in the rather controlled HoloLens environment. In this poster we present a solution for scenarios with hard-to-track and similarly shaped objects in an electrical cabinet assembly task, where the reflective cabinet is tagged with a marker and the prefabricated cables are differentiated by text-based labels.
 | 
	Evaluation of HoloLens Tracking and Depth Sensing for Indoor Mapping Applications The Microsoft HoloLens is a head-worn mobile augmented reality device that is capable of mapping its direct environment in real time as triangle meshes and simultaneously localizing itself within these three-dimensional meshes. The device is equipped with a variety of sensors, including four tracking cameras and a time-of-flight (ToF) range camera. Sensor images and their poses estimated by the built-in tracking system can be accessed by the user. This makes the HoloLens potentially interesting as an indoor mapping device. In this paper, we introduce the different sensors of the device and evaluate the complete system with respect to the task of mapping indoor environments. The overall quality of such a system depends mainly on the quality of the depth sensor together with its associated pose derived from the tracking system. For this purpose, we first evaluate the performance of the HoloLens depth sensor and its tracking system separately. Finally, we evaluate the overall system regarding its capability for mapping multi-room environments.
 | 
	How to Make a Medical Error Disclosure to Patients This paper aims to investigate the Chinese public's expectations of medical error disclosure and to develop guidelines for hospitals. A national questionnaire survey was conducted in 2019, collecting 1,008 valid responses. Respondents were asked their views on the severity of errors they would like disclosed, and on what, when, where and who they preferred in an error disclosure. Results showed that the Chinese public would like any error that reached them to be disclosed, even one causing no harm. For both moderate and severe outcome errors, they preferred to be told face-to-face, with all the information in as much detail as possible, immediately after the error was recognized, and in a prepared meeting room. Regarding attendance on the patient's side, disclosure was expected to be made to the patient and family. On the hospital's side, the healthcare provider who committed the error, his/her leader, the patient safety manager and a high-ranking person of the hospital were expected to be present. As for the person making the disclosure, respondents preferred the healthcare provider who committed the error in a moderate outcome case, and the leader or a high-ranking person in a severe case.
 | 
					
	HoloLens AR: Using Vuforia-Based Marker Tracking Together with Text Recognition in an Assembly Scenario Many real-world Microsoft HoloLens-based applications suffer from the problem of reliably recognizing and identifying movable objects within an environment. While the HoloLens is perfectly able to discern already known rooms, it still has trouble with reflective surfaces or identically shaped objects. Using dedicated recognition libraries for each task poses the issue of shared resource access in the rather controlled HoloLens environment. In this poster we present a solution for scenarios with hard-to-track and similarly shaped objects in an electrical cabinet assembly task, where the reflective cabinet is tagged with a marker and the prefabricated cables are differentiated by text-based labels.
 | 
	Evaluation of HoloLens Tracking and Depth Sensing for Indoor Mapping Applications The Microsoft HoloLens is a head-worn mobile augmented reality device that is capable of mapping its direct environment in real time as triangle meshes and simultaneously localizing itself within these three-dimensional meshes. The device is equipped with a variety of sensors, including four tracking cameras and a time-of-flight (ToF) range camera. Sensor images and their poses estimated by the built-in tracking system can be accessed by the user. This makes the HoloLens potentially interesting as an indoor mapping device. In this paper, we introduce the different sensors of the device and evaluate the complete system with respect to the task of mapping indoor environments. The overall quality of such a system depends mainly on the quality of the depth sensor together with its associated pose derived from the tracking system. For this purpose, we first evaluate the performance of the HoloLens depth sensor and its tracking system separately. Finally, we evaluate the overall system regarding its capability for mapping multi-room environments.
 | 
	A Critical Look at the 2019 College Admissions Scandal Discusses the 2019 college admissions scandal. Let me begin with a disclaimer: I am making no legal excuses for the participants in the current scandal. I am only offering contextual background that places it in the broader academic, cultural, and political perspective required for understanding. It is only the most recent installment of a well-worn narrative: the controlling elite make their own rules and live by them, if they can get away with it. Unfortunately, some of the participants, who are either serving or facing jail time, didn't know not to go into a gunfight with a sharp stick. Money alone is not enough to avoid prosecution for fraud: you need political clout. The best protection a defendant can have is a prosecutor who fears political reprisal. Compare how the Koch brothers escaped prosecution for stealing millions of oil dollars from Native American tribes [1,2] with the fate of actresses Lori Loughlin and Felicity Huffman, who, at the time of this writing, face jail time for paying bribes to get their children into good universities [3,4]. In the former case, the federal prosecutor who dared to empanel a grand jury to get at the truth was fired for cause, which put a quick end to the prosecution. In the latter case, the prosecutors pushed for jail terms and public admonishment with the zeal of Oliver Cromwell. There you have it: stealing oil from Native Americans versus trying to bribe your kids into a great university. Where is the greater crime? Admittedly, these actresses and their
 | 
					
	HoloLens AR: Using Vuforia-Based Marker Tracking Together with Text Recognition in an Assembly Scenario Many real-world Microsoft HoloLens-based applications suffer from the problem of reliably recognizing and identifying movable objects within an environment. While the HoloLens is perfectly able to discern already known rooms, it still has trouble with reflective surfaces or identically shaped objects. Using dedicated recognition libraries for each task poses the issue of shared resource access in the rather controlled HoloLens environment. In this poster we present a solution for scenarios with hard-to-track and similarly shaped objects in an electrical cabinet assembly task, where the reflective cabinet is tagged with a marker and the prefabricated cables are differentiated by text-based labels.
 | 
	Evaluation of HoloLens Tracking and Depth Sensing for Indoor Mapping Applications The Microsoft HoloLens is a head-worn mobile augmented reality device that is capable of mapping its direct environment in real time as triangle meshes and simultaneously localizing itself within these three-dimensional meshes. The device is equipped with a variety of sensors, including four tracking cameras and a time-of-flight (ToF) range camera. Sensor images and their poses estimated by the built-in tracking system can be accessed by the user. This makes the HoloLens potentially interesting as an indoor mapping device. In this paper, we introduce the different sensors of the device and evaluate the complete system with respect to the task of mapping indoor environments. The overall quality of such a system depends mainly on the quality of the depth sensor together with its associated pose derived from the tracking system. For this purpose, we first evaluate the performance of the HoloLens depth sensor and its tracking system separately. Finally, we evaluate the overall system regarding its capability for mapping multi-room environments.
 | 
	Fundamental Utilitarianism and Intergenerational Equity with Extinction Discounting Ramsey famously condemned discounting "future enjoyments" as "ethically indefensible". Suppes enunciated an equity criterion which, when social choice is utilitarian, implies giving equal weight to all individuals' utilities. By contrast, Arrow (Contemporary Economic Issues. International Economic Association Series. Palgrave Macmillan, London, 1999a; Discounting and Intergenerational Effects, Resources for the Future Press, Washington DC, 1999b) accepted, perhaps reluctantly, what he called Koopmans' (Econometrica 28(2):287–309, 1960) "strong argument" implying that no equitable preference ordering exists for a sufficiently unrestricted domain of infinite utility streams. Here we derive an equitable utilitarian objective for a finite population based on a version of the Vickrey–Harsanyi original position, where there is an equal probability of becoming each person. For a potentially infinite population facing an exogenous stochastic process of extinction, an equitable extinction-biased original position requires equal conditional probabilities, given that the individual's generation survives the extinction process. Such a position is well-defined if and only if survival probabilities decline fast enough for the expected total number of individuals who can ever live to be finite. Then, provided that each individual's utility is bounded both above and below, maximizing expected "extinction discounted" total utility, as advocated, inter alia, by the Stern Review on climate change, provides a coherent and dynamically consistent equitable objective, even when the population size of each generation can be chosen.
 | 
					
	HoloLens AR: Using Vuforia-Based Marker Tracking Together with Text Recognition in an Assembly Scenario Many real-world Microsoft HoloLens-based applications suffer from the problem of reliably recognizing and identifying movable objects within an environment. While the HoloLens is perfectly able to discern already known rooms, it still has trouble with reflective surfaces or identically shaped objects. Using dedicated recognition libraries for each task poses the issue of shared resource access in the rather controlled HoloLens environment. In this poster we present a solution for scenarios with hard-to-track and similarly shaped objects in an electrical cabinet assembly task, where the reflective cabinet is tagged with a marker and the prefabricated cables are differentiated by text-based labels.
 | 
	Evaluation of HoloLens Tracking and Depth Sensing for Indoor Mapping Applications The Microsoft HoloLens is a head-worn mobile augmented reality device that is capable of mapping its direct environment in real time as triangle meshes and simultaneously localizing itself within these three-dimensional meshes. The device is equipped with a variety of sensors, including four tracking cameras and a time-of-flight (ToF) range camera. Sensor images and their poses estimated by the built-in tracking system can be accessed by the user. This makes the HoloLens potentially interesting as an indoor mapping device. In this paper, we introduce the different sensors of the device and evaluate the complete system with respect to the task of mapping indoor environments. The overall quality of such a system depends mainly on the quality of the depth sensor together with its associated pose derived from the tracking system. For this purpose, we first evaluate the performance of the HoloLens depth sensor and its tracking system separately. Finally, we evaluate the overall system regarding its capability for mapping multi-room environments.
 | 
	On the Probabilistic Degrees of Symmetric Boolean Functions The probabilistic degree of a Boolean function $f : \{0,1\}^n \rightarrow \{0,1\}$ is defined to be the smallest $d$ such that there is a random polynomial $\mathbf{P}$ of degree at most $d$ that agrees with $f$ at each point with high probability. Introduced by Razborov (1987), upper and lower bounds on the probabilistic degrees of Boolean functions, specifically symmetric Boolean functions, have been used to prove explicit lower bounds, design pseudorandom generators, and devise algorithms for combinatorial problems. In this paper, we characterize the probabilistic degrees of all symmetric Boolean functions up to polylogarithmic factors over all fields of fixed characteristic (positive or zero).
 | 
					
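The probabilistic-degree abstract in the row above states the notion informally. One standard formalization, added here for reference (the error threshold 1/3 is a common convention and an editorial assumption, not taken from the abstract), is:

```latex
% Probabilistic degree: the least degree of a random polynomial that agrees
% with f pointwise, except with probability at most 1/3.
\[
  \operatorname{pdeg}(f) = \min\Bigl\{ d \;:\; \exists\,\text{random polynomial } \mathbf{P},\
  \deg(\mathbf{P}) \le d,\ \Pr\bigl[\mathbf{P}(x) \ne f(x)\bigr] \le \tfrac13
  \ \text{for every } x \in \{0,1\}^n \Bigr\}.
\]
```
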
	A quadratically convergent local algorithm on minimizing sums of the largest eigenvalues of a symmetric matrix In this paper, we consider the problem of minimizing sums of the largest eigenvalues of a symmetric matrix that depends affinely on the decision variable. An important application of this problem is the graph partitioning problem, which arises in the layout of circuit boards, computer logic partitioning, and paging of computer programs. Given $\varepsilon \ge 0$, we first derive an optimality condition which ensures that the objective function is within an $\varepsilon$ error bound of the solution. This condition may be used as a practical stopping criterion for any algorithm solving the underlying problem. We also show that, in a neighborhood of the minimizer, the optimization problem can be equivalently formulated as a smooth constrained problem. An existing algorithm for minimizing the largest eigenvalue of a symmetric matrix is shown to be applicable here. This algorithm enjoys the property that if started close enough to the minimizer, it will converge quadratically. To implement a practical algorithm, one needs to incorporate some technique to generate a good starting point. Since the problem is convex, this can be done by using an algorithm for general convex optimization problems (e.g., Kelley's cutting plane method or ellipsoid methods), or an algorithm specific to the optimization problem under consideration (e.g., the algorithm developed by Cullum, Donath, and Wolfe). Such a combination ensures that the overall algorithm has global convergence with a quadratic rate. Finally, the results presented in this paper readily extend to minimizing sums of the largest eigenvalues of a Hermitian matrix.
 | 
	Hermitian Laplacians and a Cheeger inequality for the Max-2-Lin problem We study spectral approaches for the MAX-2-LIN(k) problem, in which we are given a system of $m$ linear equations of the form $x_i - x_j \equiv c_{ij} \pmod{k}$ and are required to find an assignment to the $n$ variables $x_i$ that maximises the total number of satisfied equations. We consider Hermitian Laplacians related to this problem, and prove a Cheeger inequality that relates the smallest eigenvalue of a Hermitian Laplacian to the maximum number of satisfied equations of a MAX-2-LIN(k) instance $\mathcal{I}$. We develop an $\widetilde{O}(kn^2)$-time algorithm that, for any $(1-\varepsilon)$-satisfiable instance, produces an assignment satisfying a $\left(1 - O(k)\sqrt{\varepsilon}\right)$-fraction of equations. We also present a subquadratic-time algorithm that, when the graph associated with $\mathcal{I}$ is an expander, produces an assignment satisfying a $\left(1 - O(k^2)\varepsilon\right)$-fraction of the equations. Our Cheeger inequality and first algorithm can be seen as generalisations of the Cheeger inequality and algorithm for MAX-CUT developed by Trevisan.
 | 
	The long-term effect of media violence exposure on aggression of youngsters Abstract The effect of media violence on aggression has always been a much-debated issue, and a better understanding of the psychological mechanism of the impact of media violence on youth aggression is an extremely important research topic for preventing the negative impacts of media violence and juvenile delinquency. From the perspective of anger, this study explored the long-term effect of different degrees of media violence exposure on the aggression of youngsters, as well as the role of aggressive emotions. The studies found that individuals with a high degree of media violence exposure (HMVE) exhibited higher levels of proactive aggression in both irritation situations and higher levels of reactive aggression in low-irritation situations than did participants with a low degree of media violence exposure (LMVE). After being provoked, the anger of all participants was significantly increased, and the anger and proactive aggression levels of the HMVE group were significantly higher than those of the LMVE group. Additionally, rumination and anger played a mediating role in the relationship between media violence exposure and aggression. Overall, this study enriches the theoretical understanding of the long-term effect of media violence exposure on individual aggression, and deepens our understanding of the relatively new and relevant mechanism linking media violence exposure and individual aggression.
 | 
					
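As a concrete illustration of the Hermitian-Laplacian construction named in the Max-2-Lin abstract above: each equation $x_i - x_j \equiv c_{ij} \pmod{k}$ contributes the $k$-th root of unity $\omega^{c_{ij}}$ as an off-diagonal entry. The sketch below follows one common convention from the spectral literature; the toy instance and function names are editorial, not the paper's.

```python
# Minimal sketch: Hermitian Laplacian of a MAX-2-LIN(k) instance, with the
# smallest eigenvalue as a satisfiability proxy (small lam_min <=> highly
# satisfiable). Instance data below is made up for illustration.
import numpy as np

def hermitian_laplacian(n, equations, k):
    """equations: list of (i, j, c) encoding x_i - x_j = c (mod k)."""
    omega = np.exp(2j * np.pi / k)
    A = np.zeros((n, n), dtype=complex)
    deg = np.zeros(n)
    for i, j, c in equations:
        A[i, j] += omega ** c        # conjugate entry below keeps L Hermitian
        A[j, i] += omega ** (-c)
        deg[i] += 1
        deg[j] += 1
    return np.diag(deg) - A

# Toy instance over Z_3: a fully satisfiable 3-cycle, so lam_min is 0.
eqs = [(0, 1, 1), (1, 2, 1), (2, 0, 1)]
L = hermitian_laplacian(3, eqs, k=3)
print(round(np.linalg.eigvalsh(L).min(), 6))
```
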
	A quadratically convergent local algorithm on minimizing sums of the largest eigenvalues of a symmetric matrix In this paper, we consider the problem of minimizing sums of the largest eigenvalues of a symmetric matrix that depends affinely on the decision variable. An important application of this problem is the graph partitioning problem, which arises in the layout of circuit boards, computer logic partitioning, and paging of computer programs. Given $\varepsilon \ge 0$, we first derive an optimality condition which ensures that the objective function is within an $\varepsilon$ error bound of the solution. This condition may be used as a practical stopping criterion for any algorithm solving the underlying problem. We also show that, in a neighborhood of the minimizer, the optimization problem can be equivalently formulated as a smooth constrained problem. An existing algorithm for minimizing the largest eigenvalue of a symmetric matrix is shown to be applicable here. This algorithm enjoys the property that if started close enough to the minimizer, it will converge quadratically. To implement a practical algorithm, one needs to incorporate some technique to generate a good starting point. Since the problem is convex, this can be done by using an algorithm for general convex optimization problems (e.g., Kelley's cutting plane method or ellipsoid methods), or an algorithm specific to the optimization problem under consideration (e.g., the algorithm developed by Cullum, Donath, and Wolfe). Such a combination ensures that the overall algorithm has global convergence with a quadratic rate. Finally, the results presented in this paper readily extend to minimizing sums of the largest eigenvalues of a Hermitian matrix.
 | 
	Hermitian Laplacians and a Cheeger inequality for the Max-2-Lin problem We study spectral approaches for the MAX-2-LIN(k) problem, in which we are given a system of $m$ linear equations of the form $x_i - x_j \equiv c_{ij} \pmod{k}$ and are required to find an assignment to the $n$ variables $x_i$ that maximises the total number of satisfied equations. We consider Hermitian Laplacians related to this problem, and prove a Cheeger inequality that relates the smallest eigenvalue of a Hermitian Laplacian to the maximum number of satisfied equations of a MAX-2-LIN(k) instance $\mathcal{I}$. We develop an $\widetilde{O}(kn^2)$-time algorithm that, for any $(1-\varepsilon)$-satisfiable instance, produces an assignment satisfying a $\left(1 - O(k)\sqrt{\varepsilon}\right)$-fraction of equations. We also present a subquadratic-time algorithm that, when the graph associated with $\mathcal{I}$ is an expander, produces an assignment satisfying a $\left(1 - O(k^2)\varepsilon\right)$-fraction of the equations. Our Cheeger inequality and first algorithm can be seen as generalisations of the Cheeger inequality and algorithm for MAX-CUT developed by Trevisan.
 | 
	General Data Protection Regulation in Health Clinics The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals, by establishing rules for the processing of personal data. The GDPR considers a special category of personal data, health data, which is regarded as sensitive and subject to special conditions regarding treatment and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its application in Portuguese health clinics. The results of the present study are discussed in light of the literature, and directions for future work are identified.
 | 
					
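For the eigenvalue-sum abstract above: the sum of the $s$ largest eigenvalues of an affinely parameterized symmetric matrix is convex in the parameters, and the projector onto the top-$s$ eigenspace yields a subgradient. The sketch below uses synthetic matrices and plain subgradient descent as a stand-in for the paper's quadratically convergent local method, which is not reproduced here.

```python
# Convex objective f(x) = sum of s largest eigenvalues of A0 + x1*B1 + x2*B2.
# Subgradient coordinate i is <Bi, P> where P projects onto the top-s eigenspace.
import numpy as np

rng = np.random.default_rng(0)
def sym(M): return (M + M.T) / 2
A0, B1, B2 = (sym(rng.standard_normal((5, 5))) for _ in range(3))

def f_and_grad(x, s=2):
    A = A0 + x[0] * B1 + x[1] * B2
    w, V = np.linalg.eigh(A)            # eigenvalues in ascending order
    P = V[:, -s:] @ V[:, -s:].T         # projector onto top-s eigenspace
    return w[-s:].sum(), np.array([np.trace(B1 @ P), np.trace(B2 @ P)])

x = np.zeros(2)
for t in range(200):                    # diminishing-step subgradient descent
    _, g = f_and_grad(x)
    x -= 0.1 / (t + 1) ** 0.5 * g
print(round(f_and_grad(x)[0], 4))
```
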
	A quadratically convergent local algorithm on minimizing sums of the largest eigenvalues of a symmetric matrix In this paper, we consider the problem of minimizing sums of the largest eigenvalues of a symmetric matrix that depends affinely on the decision variable. An important application of this problem is the graph partitioning problem, which arises in the layout of circuit boards, computer logic partitioning, and paging of computer programs. Given $\varepsilon \ge 0$, we first derive an optimality condition which ensures that the objective function is within an $\varepsilon$ error bound of the solution. This condition may be used as a practical stopping criterion for any algorithm solving the underlying problem. We also show that, in a neighborhood of the minimizer, the optimization problem can be equivalently formulated as a smooth constrained problem. An existing algorithm for minimizing the largest eigenvalue of a symmetric matrix is shown to be applicable here. This algorithm enjoys the property that if started close enough to the minimizer, it will converge quadratically. To implement a practical algorithm, one needs to incorporate some technique to generate a good starting point. Since the problem is convex, this can be done by using an algorithm for general convex optimization problems (e.g., Kelley's cutting plane method or ellipsoid methods), or an algorithm specific to the optimization problem under consideration (e.g., the algorithm developed by Cullum, Donath, and Wolfe). Such a combination ensures that the overall algorithm has global convergence with a quadratic rate. Finally, the results presented in this paper readily extend to minimizing sums of the largest eigenvalues of a Hermitian matrix.
 | 
	Hermitian Laplacians and a Cheeger inequality for the Max-2-Lin problem We study spectral approaches for the MAX-2-LIN(k) problem, in which we are given a system of $m$ linear equations of the form $x_i - x_j \equiv c_{ij} \pmod{k}$ and are required to find an assignment to the $n$ variables $x_i$ that maximises the total number of satisfied equations. We consider Hermitian Laplacians related to this problem, and prove a Cheeger inequality that relates the smallest eigenvalue of a Hermitian Laplacian to the maximum number of satisfied equations of a MAX-2-LIN(k) instance $\mathcal{I}$. We develop an $\widetilde{O}(kn^2)$-time algorithm that, for any $(1-\varepsilon)$-satisfiable instance, produces an assignment satisfying a $\left(1 - O(k)\sqrt{\varepsilon}\right)$-fraction of equations. We also present a subquadratic-time algorithm that, when the graph associated with $\mathcal{I}$ is an expander, produces an assignment satisfying a $\left(1 - O(k^2)\varepsilon\right)$-fraction of the equations. Our Cheeger inequality and first algorithm can be seen as generalisations of the Cheeger inequality and algorithm for MAX-CUT developed by Trevisan.
 | 
	Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents on the streets of developing countries like Bangladesh. The lack of an overspeed alert, a rear camera and rear-obstacle detection, together with delayed maintenance, are well-known causes of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in several phases among passengers, drivers and even conductors to obtain useful and reliable results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
 | 
					
	A quadratically convergent local algorithm on minimizing sums of the largest eigenvalues of a symmetric matrix In this paper, we consider the problem of minimizing sums of the largest eigenvalues of a symmetric matrix that depends affinely on the decision variable. An important application of this problem is the graph partitioning problem, which arises in the layout of circuit boards, computer logic partitioning, and paging of computer programs. Given $\varepsilon \ge 0$, we first derive an optimality condition which ensures that the objective function is within an $\varepsilon$ error bound of the solution. This condition may be used as a practical stopping criterion for any algorithm solving the underlying problem. We also show that, in a neighborhood of the minimizer, the optimization problem can be equivalently formulated as a smooth constrained problem. An existing algorithm for minimizing the largest eigenvalue of a symmetric matrix is shown to be applicable here. This algorithm enjoys the property that if started close enough to the minimizer, it will converge quadratically. To implement a practical algorithm, one needs to incorporate some technique to generate a good starting point. Since the problem is convex, this can be done by using an algorithm for general convex optimization problems (e.g., Kelley's cutting plane method or ellipsoid methods), or an algorithm specific to the optimization problem under consideration (e.g., the algorithm developed by Cullum, Donath, and Wolfe). Such a combination ensures that the overall algorithm has global convergence with a quadratic rate. Finally, the results presented in this paper readily extend to minimizing sums of the largest eigenvalues of a Hermitian matrix.
 | 
	Hermitian Laplacians and a Cheeger inequality for the Max-2-Lin problem We study spectral approaches for the MAX-2-LIN(k) problem, in which we are given a system of $m$ linear equations of the form $x_i - x_j \equiv c_{ij} \pmod{k}$ and are required to find an assignment to the $n$ variables $x_i$ that maximises the total number of satisfied equations. We consider Hermitian Laplacians related to this problem, and prove a Cheeger inequality that relates the smallest eigenvalue of a Hermitian Laplacian to the maximum number of satisfied equations of a MAX-2-LIN(k) instance $\mathcal{I}$. We develop an $\widetilde{O}(kn^2)$-time algorithm that, for any $(1-\varepsilon)$-satisfiable instance, produces an assignment satisfying a $\left(1 - O(k)\sqrt{\varepsilon}\right)$-fraction of equations. We also present a subquadratic-time algorithm that, when the graph associated with $\mathcal{I}$ is an expander, produces an assignment satisfying a $\left(1 - O(k^2)\varepsilon\right)$-fraction of the equations. Our Cheeger inequality and first algorithm can be seen as generalisations of the Cheeger inequality and algorithm for MAX-CUT developed by Trevisan.
 | 
	Virtual Reality for training the public towards unexpected emergency situations Nowadays, unexpected situations in public spaces are quite frequent; for this reason, there is a need to provide valid decision-making tools to support people's behavior in emergency situations. The aim of these support tools is to train the public on how to behave when something unexpected happens, in order to make them aware of how to manage and control their own emotions. Thanks to the introduction of new technologies, training is also feasible in Virtual Reality (VR), exploiting the chance to create virtual environments and situations that reflect real ones, and to test different scenarios on a sample of people in order to verify and validate training procedures. Virtual simulations in this context are paramount, because they offer the possibility to analyse reactions and behaviors in a safe, simulated environment without health concerns. Three scenarios (a fire, a person in the environment having a heart attack, and a terrorist attack) have been reproduced in VR, analyzing how to define the context for emergency situations. Users approaching the training only know they are going to face a situation, without having details on what is happening; this is fundamental for testing the training's effect on people's reactions.
 | 
					
	Importance of Feature Weighing in Cervical Cancer Subtypes Identification Cancer subtype identification is very important for the advancement of precision cancer diagnosis and therapy, and it is one of the key components of the personalized medicine framework. Cervical cancer (CC) is one of the leading gynecological cancers causing death in women worldwide. However, there is a lack of studies identifying histological subtypes among patients suffering from tumors of the uterine cervix. Subtyping of cancer can help in analyzing shared molecular profiles between different histological subtypes of solid tumors of the uterine cervix. With the advancement of technology, large-scale multi-omics data are generated. The integration of genomics data generated from different platforms helps in capturing complementary information about the patients. Several computational approaches have been developed that integrate multi-omics data for cancer subtyping. In this study, mRNA (messenger RNA) and miRNA (microRNA) expression data are integrated to identify the histological subtypes of CC. In this regard, a method is proposed that ranks the biomarkers (mRNA and miRNA) on the basis of their varying expression across samples. The ranking method generates a weight for every biomarker, which is further used to compute the similarity between samples. A well-known approach named Similarity Network Fusion (SNF) is then applied, followed by spectral clustering, to identify groups of related samples. This study focuses on the role of weighing the biomarkers prior to their integration and the application of the clustering algorithm. The weighing method proposed in this study is compared with several other methods and proved to be more efficient. The proposed method helps in identifying histological subtypes of CC and can also be applied to other types of cancer data where histological subtypes play a key role in designing treatments and therapies.
 | 
	Cancer Subtype Classification based on Superlayered Neural Network Targeted treatment of different cancer subtypes has been of clinical interest. However, accurate molecular identification of pathological cancer subtypes has been challenging. To meet this need, a new subtype classification model based on a deep neural network, the Sparse Cross-modal Superlayered Neural Network, is presented in this study. The model focuses on selecting biomarkers while integrating high-dimensional RNA sequencing data and DNA methylation data. For a real-data application, the multi-omics data of lung adenocarcinoma and squamous cell carcinoma from The Cancer Genome Atlas were fitted to the proposed model. Our model was compared to other existing methods such as principal component analysis, penalized logistic regression, and artificial neural networks. With only a small number of biomarkers, the proposed model was able to effectively classify these lung cancer subtypes. Gene set analysis of the selected biomarkers revealed significant differences in epidermis development and cornification pathway activation levels between the two lung cancer subtypes.
 | 
	Unmanned agricultural product sales system The invention relates to the field of agricultural product sales and provides an unmanned agricultural product sales system, aiming to solve the problem of agricultural product waste caused by the fact that most farmers can currently only prepare goods according to guesswork and experience when selling agricultural products. The unmanned agricultural product sales system comprises an acquisition module for acquiring customers' selection information; a storage module which pre-stores vegetable preparation schemes; a matching module which matches a corresponding side dish scheme from the storage module according to the client's selection information; a pushing module which pushes the matched side dish scheme back to the client; the acquisition module, which is also used for acquiring the client's confirmation information; an order module which generates order information according to the client's confirmation information, wherein the pushing module pushes the order information to the client and the seller, and the acquisition module is also used for acquiring the seller's delivery information; and a logistics tracking module which tracks the delivery information to obtain logistics information, wherein the pushing module pushes the logistics information to the client. The scheme is used for sales in unmanned agricultural product shops.
 | 
					
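A hedged sketch of the pipeline the cervical-cancer abstract above describes: variance-based feature weights, per-omics similarities, fusion, then spectral clustering. Plain averaging is used here as a simplified stand-in for full Similarity Network Fusion, and all data is synthetic.

```python
# Weight features by expression variability, build an RBF similarity per
# omics layer, fuse by averaging (stand-in for SNF), then cluster spectrally.
import numpy as np
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(1)
mrna  = rng.standard_normal((60, 200))   # 60 samples x 200 mRNA features
mirna = rng.standard_normal((60, 50))    # 60 samples x 50 miRNA features

def weighted_similarity(X, gamma=0.5):
    w = X.var(axis=0)                    # weight = variability across samples
    Xw = X * (w / w.sum())
    d2 = ((Xw[:, None, :] - Xw[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

fused = (weighted_similarity(mrna) + weighted_similarity(mirna)) / 2
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(fused)
print(np.bincount(labels))               # cluster sizes
```
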
	Importance of Feature Weighing in Cervical Cancer Subtypes Identification Cancer subtype identification is very important for the advancement of precision cancer diagnosis and therapy, and it is one of the key components of the personalized medicine framework. Cervical cancer (CC) is one of the leading gynecological cancers causing death in women worldwide. However, there is a lack of studies identifying histological subtypes among patients suffering from tumors of the uterine cervix. Subtyping of cancer can help in analyzing shared molecular profiles between different histological subtypes of solid tumors of the uterine cervix. With the advancement of technology, large-scale multi-omics data are generated. The integration of genomics data generated from different platforms helps in capturing complementary information about the patients. Several computational approaches have been developed that integrate multi-omics data for cancer subtyping. In this study, mRNA (messenger RNA) and miRNA (microRNA) expression data are integrated to identify the histological subtypes of CC. In this regard, a method is proposed that ranks the biomarkers (mRNA and miRNA) on the basis of their varying expression across samples. The ranking method generates a weight for every biomarker, which is further used to compute the similarity between samples. A well-known approach named Similarity Network Fusion (SNF) is then applied, followed by spectral clustering, to identify groups of related samples. This study focuses on the role of weighing the biomarkers prior to their integration and the application of the clustering algorithm. The weighing method proposed in this study is compared with several other methods and proved to be more efficient. The proposed method helps in identifying histological subtypes of CC and can also be applied to other types of cancer data where histological subtypes play a key role in designing treatments and therapies.
 | 
	Cancer Subtype Classification based on Superlayered Neural Network Targeted treatment of different cancer subtypes has been of clinical interest. However, accurate molecular identification of pathological cancer subtypes has been challenging. To meet this need, a new subtype classification model based on a deep neural network, the Sparse Cross-modal Superlayered Neural Network, is presented in this study. The model focuses on selecting biomarkers while integrating high-dimensional RNA sequencing data and DNA methylation data. For a real-data application, the multi-omics data of lung adenocarcinoma and squamous cell carcinoma from The Cancer Genome Atlas were fitted to the proposed model. Our model was compared to other existing methods such as principal component analysis, penalized logistic regression, and artificial neural networks. With only a small number of biomarkers, the proposed model was able to effectively classify these lung cancer subtypes. Gene set analysis of the selected biomarkers revealed significant differences in epidermis development and cornification pathway activation levels between the two lung cancer subtypes.
 | 
	Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents on the streets of developing countries like Bangladesh. The lack of an overspeed alert, a rear camera and rear-obstacle detection, together with delayed maintenance, are well-known causes of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in several phases among passengers, drivers and even conductors to obtain useful and reliable results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
 | 
					
	Importance of Feature Weighing in Cervical Cancer Subtypes Identification Cancer subtype identification is very important for the advancement of precision cancer diagnosis and therapy, and it is one of the key components of the personalized medicine framework. Cervical cancer (CC) is one of the leading gynecological cancers causing death in women worldwide. However, there is a lack of studies identifying histological subtypes among patients suffering from tumors of the uterine cervix. Subtyping of cancer can help in analyzing shared molecular profiles between different histological subtypes of solid tumors of the uterine cervix. With the advancement of technology, large-scale multi-omics data are generated. The integration of genomics data generated from different platforms helps in capturing complementary information about the patients. Several computational approaches have been developed that integrate multi-omics data for cancer subtyping. In this study, mRNA (messenger RNA) and miRNA (microRNA) expression data are integrated to identify the histological subtypes of CC. In this regard, a method is proposed that ranks the biomarkers (mRNA and miRNA) on the basis of their varying expression across samples. The ranking method generates a weight for every biomarker, which is further used to compute the similarity between samples. A well-known approach named Similarity Network Fusion (SNF) is then applied, followed by spectral clustering, to identify groups of related samples. This study focuses on the role of weighing the biomarkers prior to their integration and the application of the clustering algorithm. The weighing method proposed in this study is compared with several other methods and proved to be more efficient. The proposed method helps in identifying histological subtypes of CC and can also be applied to other types of cancer data where histological subtypes play a key role in designing treatments and therapies.
 | 
	Cancer Subtype Classification based on Superlayered Neural Network Targeted treatment of different cancer subtypes has been of clinical interest. However, accurate molecular identification of pathological cancer subtypes has been challenging. To meet this need, a new subtype classification model based on a deep neural network, the Sparse Cross-modal Superlayered Neural Network, is presented in this study. The model focuses on selecting biomarkers while integrating high-dimensional RNA sequencing data and DNA methylation data. For a real-data application, the multi-omics data of lung adenocarcinoma and squamous cell carcinoma from The Cancer Genome Atlas were fitted to the proposed model. Our model was compared to other existing methods such as principal component analysis, penalized logistic regression, and artificial neural networks. With only a small number of biomarkers, the proposed model was able to effectively classify these lung cancer subtypes. Gene set analysis of the selected biomarkers revealed significant differences in epidermis development and cornification pathway activation levels between the two lung cancer subtypes.
 | 
	Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first.
 | 
					
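The super-layered network abstract above suggests a two-branch architecture with one sub-network per omics modality. The sketch below is an illustrative PyTorch skeleton under that assumption; the layer sizes, names, and the simple L1 penalty are editorial placeholders, not the paper's actual model or sparsity scheme.

```python
# Two-branch classifier: one encoder per modality, concatenated before the head.
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, rna_dim=1000, meth_dim=800, hidden=64, classes=2):
        super().__init__()
        self.rna  = nn.Sequential(nn.Linear(rna_dim, hidden), nn.ReLU())
        self.meth = nn.Sequential(nn.Linear(meth_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, classes)

    def forward(self, x_rna, x_meth):
        h = torch.cat([self.rna(x_rna), self.meth(x_meth)], dim=1)
        return self.head(h)

model = TwoBranchNet()
logits = model(torch.randn(4, 1000), torch.randn(4, 800))
print(logits.shape)   # torch.Size([4, 2])
# An L1 penalty on first-layer weights would mimic biomarker selection:
l1 = model.rna[0].weight.abs().sum() + model.meth[0].weight.abs().sum()
```
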
	Importance of Feature Weighing in Cervical Cancer Subtypes Identification Cancer subtypes identification is very important for the advancement of precision cancer disease diagnosis and therapy. It is one of the important components of the personalized medicine framework. Cervical cancer (CC) is one of the leading gynecological cancers that causes deaths in women worldwide. However, there is a lack of studies to identify histological subtypes among the patients suffering from tumor of the uterine cervix. Hence, subtyping of cancer can help in analyzing shared molecular profiles between different histological subtypes of solid tumors of uterine cervix. With the advancement in technology, large scale multiomics data are generated. The integration of genomics data generated from different platforms helps in capturing complementary information about the patients. Several computational approaches have been developed that integrate mutiomics data for cancer subtyping. In this study, mRNA (messenger RNA) and miRNA (microRNA) expression data are integrated to identify the histological subtypes of CC. In this regard, a method is proposed that ranks the biomarkers (mRNA and miRNA) on the basis of their varying expression across the samples. The ranking method generates a weight for every biomarker which is further used to identify the similarity between the samples. A wellknown approach named Similarity Network Fusion (SNF) is then applied, followed by Spectral clustering, to identify groups of related samples. This study focuses on the role of weighing the biomarkers prior to their integration and application of the clustering algorithm. The weighing method proposed in this study is compared with some other methods and proved to be more efficient. The proposed method helps in identifying histological subtypes of CC and can also be applied to other types of cancer data where histological subtypes play a key role in designing treatments and therapies. 
 | 
	Cancer Subtype Classification based on Superlayered Neural Network Targeted treatment on different cancer subtype has been of clinical interest. However, accurate molecular identification of pathological cancer subtypes has been challenging. To meet such needs, a new subtype classification model based on deep neural network, Sparse Crossmodal Superlayered Neural Network is presented in this study. The model focuses on selecting biomarkers with considering integration of the high dimensional RNA sequencing data and DNA methylation data. For a real data application, the multiomics data of lung adenocarcinoma and squamous cell carcinoma from The Cancer Genomic Atlas was fitted to the proposed model. Our model was compared to other existing methods such as principal component analysis, penalized logistic regression, and artificial neural network. With only a small number of biomarkers, the proposed model was able to effectively classify these lung cancer subtypes. Gene set analysis of selected biomarkers revealed significant difference in epidermis development and cornification pathway activation level between two lung cancer subtypes. 
 | 
	Boundary state feedback exponential stabilization for a one-dimensional wave equation with velocity recirculation Abstract   In this paper, we consider boundary state feedback stabilization of a one-dimensional wave equation with in-domain feedback (recirculation) of an intermediate-point velocity. We first construct an auxiliary control system which has a nonlocal term in the displacement at the same intermediate point. Then, by choosing a well-known exponentially stable wave equation as its target system, we find one backstepping transformation from which a state feedback law for this auxiliary system is proposed (a standard choice of such a target system is displayed after this row). Finally, taking the resulting closed loop of the auxiliary system as a new target system, we obtain another backstepping transformation from which a boundary state feedback controller for the original system is designed. By the equivalence of the three systems, the closed loop of the original system is proved to be well-posed and exponentially stable. Some numerical simulations are presented to validate the theoretical results. 
 | 
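As context for the backstepping design in the row above, the sketch below records a classical exponentially stable target system for a one-dimensional wave equation with boundary damping. This is a generic textbook choice, not a quotation of the paper's exact target system; the damping gain c is an assumed free design parameter.

```latex
% A classical exponentially stable wave equation (assumed generic target,
% not necessarily the paper's exact choice):
\begin{aligned}
  & w_{tt}(x,t) = w_{xx}(x,t), && x \in (0,1),\; t > 0, \\
  & w(0,t) = 0, \qquad w_x(1,t) = -c\, w_t(1,t), && c > 0,
\end{aligned}
% whose energy E(t) = \tfrac{1}{2} \int_0^1 \left( w_t^2 + w_x^2 \right) dx
% decays exponentially in t.
```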
					
	Importance of Feature Weighing in Cervical Cancer Subtypes Identification Cancer subtype identification is very important for the advancement of precision cancer diagnosis and therapy. It is one of the important components of the personalized medicine framework. Cervical cancer (CC) is one of the leading gynecological cancers causing deaths in women worldwide. However, there is a lack of studies identifying histological subtypes among patients suffering from tumors of the uterine cervix. Hence, subtyping can help in analyzing shared molecular profiles between different histological subtypes of solid tumors of the uterine cervix. With the advancement in technology, large-scale multiomics data are generated. The integration of genomics data generated from different platforms helps in capturing complementary information about the patients. Several computational approaches have been developed that integrate multiomics data for cancer subtyping. In this study, mRNA (messenger RNA) and miRNA (microRNA) expression data are integrated to identify the histological subtypes of CC. In this regard, a method is proposed that ranks the biomarkers (mRNA and miRNA) on the basis of their varying expression across the samples. The ranking method generates a weight for every biomarker, which is further used to identify the similarity between the samples. A well-known approach named Similarity Network Fusion (SNF) is then applied, followed by spectral clustering, to identify groups of related samples. This study focuses on the role of weighing the biomarkers prior to their integration and the application of the clustering algorithm. The weighing method proposed in this study is compared with several other methods and proves more effective. The proposed method helps in identifying histological subtypes of CC and can also be applied to other types of cancer data where histological subtypes play a key role in designing treatments and therapies. (A minimal sketch of the weighting-and-fusion pipeline follows this row.) 
 | 
	Cancer Subtype Classification based on Superlayered Neural Network Targeted treatment of different cancer subtypes has been of clinical interest. However, accurate molecular identification of pathological cancer subtypes has been challenging. To meet this need, a new subtype classification model based on a deep neural network, the Sparse Crossmodal Superlayered Neural Network, is presented in this study. The model focuses on selecting biomarkers while integrating high-dimensional RNA sequencing data and DNA methylation data. For a real data application, the multiomics data of lung adenocarcinoma and squamous cell carcinoma from The Cancer Genome Atlas were fitted to the proposed model. Our model was compared to other existing methods such as principal component analysis, penalized logistic regression, and artificial neural networks. With only a small number of biomarkers, the proposed model was able to effectively classify these lung cancer subtypes. Gene set analysis of the selected biomarkers revealed significant differences in epidermis development and cornification pathway activation levels between the two lung cancer subtypes. (A generic two-modality network sketch follows this row.) 
 | 
	What Makes a Social Robot Good at Interacting with Humans This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans and also reports potential solutions to several important questions around the futuristic design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?" and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal as well as nonverbal social and conversational cues that could be incorporated into the design of social robots, and also briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding the acceptance of social robots by humans and also ethical/moral concerns have also been discussed. 
 | 
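The following is a minimal, illustrative Python sketch of the pipeline described in the first cell of the row above: variance-based biomarker weighting, a weighted sample-similarity matrix per omics layer, a naive averaging stand-in for the SNF fusion step, and spectral clustering. The function names, the RBF affinity, and the averaging "fusion" are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch (assumed pipeline, not the paper's implementation):
# weight biomarkers by variance, build per-omics sample similarities,
# average them as a naive stand-in for SNF, then spectrally cluster.
import numpy as np
from sklearn.cluster import SpectralClustering

def weighted_similarity(X, gamma=1.0):
    """Sample-by-sample RBF affinity with variance-derived feature weights.
    X has shape (n_samples, n_biomarkers)."""
    w = X.var(axis=0)
    w = w / w.sum()                                  # normalized biomarker weights
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * w).sum(axis=-1)
    return np.exp(-gamma * d2)

def cluster_subtypes(mrna, mirna, n_subtypes=3):
    # Naive fusion by averaging; real SNF iteratively diffuses the networks.
    S = 0.5 * (weighted_similarity(mrna) + weighted_similarity(mirna))
    model = SpectralClustering(n_clusters=n_subtypes,
                               affinity="precomputed", random_state=0)
    return model.fit_predict(S)
```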
					
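As a loose companion to the classification abstract in the row above, here is a generic two-modality network sketch in PyTorch: modality-specific branches are concatenated, and an L1 penalty on the input layers encourages biomarker selection. The architecture, layer sizes, and penalty are illustrative assumptions; they do not reproduce the paper's Sparse Crossmodal Superlayered Neural Network.

```python
# Generic two-modality classifier sketch (assumed architecture, not the
# paper's model): one branch per omics layer, concatenation, and L1
# sparsity on input weights so near-zero columns mark dropped biomarkers.
import torch
import torch.nn as nn

class TwoModalityNet(nn.Module):
    def __init__(self, d_rna, d_meth, hidden=64, n_classes=2):
        super().__init__()
        self.rna = nn.Sequential(nn.Linear(d_rna, hidden), nn.ReLU())
        self.meth = nn.Sequential(nn.Linear(d_meth, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_rna, x_meth):
        h = torch.cat([self.rna(x_rna), self.meth(x_meth)], dim=1)
        return self.head(h)

def l1_penalty(model, lam=1e-4):
    # Added to the classification loss during training.
    return lam * (model.rna[0].weight.abs().sum()
                  + model.meth[0].weight.abs().sum())
```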
	Codes from Cubic Curves and their Extensions We study the linear codes and their extensions associated with sets of points in the plane corresponding to cubic curves. Instead of merely studying linear extensions, all possible extensions of the code are studied. In this way several new results are obtained and some existing results are strengthened. This type of analysis was carried out by Alderson, Bruen, and Silverman [J. Combin. Theory Ser. A, 114(6), 2007] for the case of MDS codes and by the present authors [Des. Codes Cryptogr., 47(1-3), 2008] for a broader range of codes. The methods cast some light on the question as to when a linear code can be extended to a nonlinear code. For example, for p prime, it is shown that a linear $[n, 3, n-3]_p$ code corresponding to a nonsingular cubic curve comprising $n > p-4$ points admits only extensions that are equivalent to linear codes. The methods involve the theory of Rédei blocking sets and the use of the Bruen–Silverman model of linear codes. 
 | 
	Asymptotically good $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes Abstract   We construct a class of $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes generated by pairs of polynomials, where p is a prime number. Based on probabilistic arguments, we determine the asymptotic rates and relative distances of this class of codes: the asymptotic Gilbert–Varshamov bound at $\frac{1+p^{s-r}}{2}\delta$ is greater than $\frac{1}{2}$, and the relative distance of the code converges to $\delta$, while the rate converges to $\frac{1}{1+p^{s-r}}$, for $0 < \delta < \frac{1}{1+p^{s-r}}$ and $1 \le r < s$. As a consequence, we prove that there exist numerous asymptotically good $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes. (The asymptotic statement is displayed after this row.) 
 | 
	The long-term effect of media violence exposure on the aggression of youngsters Abstract   The effect of media violence on aggression has always been a trending issue, and a better understanding of the psychological mechanism of the impact of media violence on youth aggression is an extremely important research topic for preventing the negative impacts of media violence and juvenile delinquency. From the perspective of anger, this study explored the long-term effect of different degrees of media violence exposure on the aggression of youngsters, as well as the role of aggressive emotions. The study found that individuals with a high degree of media violence exposure (HMVE) exhibited higher levels of proactive aggression in both irritation situations and higher levels of reactive aggression in low-irritation situations than did participants with a low degree of media violence exposure (LMVE). After being provoked, the anger of all participants was significantly increased, and the anger and proactive aggression levels of the HMVE group were significantly higher than those of the LMVE group. Additionally, rumination and anger played a mediating role in the relationship between media violence exposure and aggression. Overall, this study enriches the theoretical understanding of the long-term effect of media violence exposure on individual aggression, and it deepens our understanding of the relatively new and relevant mechanism linking media violence exposure and individual aggression. 
 | 
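Because the formulas in the additive-cyclic-codes cell above arrive badly flattened, the display below records the asymptotic claim as reconstructed here; the exact placement of the fractions is an assumption inferred from the surviving tokens.

```latex
% Reconstructed reading (assumption) of the asymptotic statement:
% for a prime p, 1 \le r < s, and 0 < \delta < \frac{1}{1 + p^{s-r}},
\text{relative distance} \longrightarrow \delta,
\qquad
\text{rate} \longrightarrow \frac{1}{1 + p^{s-r}},
\qquad
\text{GV bound at } \frac{1 + p^{s-r}}{2}\,\delta \;>\; \frac{1}{2}.
```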
					
	Codes from Cubic Curves and their Extensions We study the linear codes and their extensions associated with sets of points in the plane corresponding to cubic curves. Instead of merely studying linear extensions, all possible extensions of the code are studied. In this way several new results are obtained and some existing results are strengthened. This type of analysis was carried out by Alderson, Bruen, and Silverman [J. Combin. Theory Ser. A, 114(6), 2007] for the case of MDS codes and by the present authors [Des. Codes Cryptogr., 47(1-3), 2008] for a broader range of codes. The methods cast some light on the question as to when a linear code can be extended to a nonlinear code. For example, for p prime, it is shown that a linear $[n, 3, n-3]_p$ code corresponding to a nonsingular cubic curve comprising $n > p-4$ points admits only extensions that are equivalent to linear codes. The methods involve the theory of Rédei blocking sets and the use of the Bruen–Silverman model of linear codes. 
 | 
	Asymptotically good $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes Abstract   We construct a class of $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes generated by pairs of polynomials, where p is a prime number. Based on probabilistic arguments, we determine the asymptotic rates and relative distances of this class of codes: the asymptotic Gilbert–Varshamov bound at $\frac{1+p^{s-r}}{2}\delta$ is greater than $\frac{1}{2}$, and the relative distance of the code converges to $\delta$, while the rate converges to $\frac{1}{1+p^{s-r}}$, for $0 < \delta < \frac{1}{1+p^{s-r}}$ and $1 \le r < s$. As a consequence, we prove that there exist numerous asymptotically good $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes. 
 | 
	The Use of Remote and Traditional Facilitation to Evaluate Telesimulation to Support Interprofessional Education and Processing in Healthcare Simulation Training This pilot study appraised traditional versus remote facilitation via telesimulation for an established interprofessional training at two geographically separate sites. Participant feedback was captured via 5-point Likert-scale surveys. Results demonstrate that learners supported the use of remote facilitation: to meet the interprofessional learning objectives, as an adequate replacement for live facilitation, and to implement simulation education in low-resource or low-facilitator areas. Improvements were suggested for audio connectivity between participants. In conclusion, the program evaluation suggests that telesimulation, with remote and traditional facilitation, is an effective strategy to provide interprofessional simulation education. Improvements identified are to standardize the setup of audiovisual technology and tailor participant orientation to encourage meaningful dialogue between sites. 
 | 
					
	Codes from Cubic Curves and their Extensions We study the linear codes and their extensions associated with sets of points in the plane corresponding to cubic curves. Instead of merely studying linear extensions, all possible extensions of the code are studied. In this way several new results are obtained and some existing results are strengthened. This type of analysis was carried out by Alderson, Bruen, and Silverman [J. Combin. Theory Ser. A, 114(6), 2007] for the case of MDS codes and by the present authors [Des. Codes Cryptogr., 47(1-3), 2008] for a broader range of codes. The methods cast some light on the question as to when a linear code can be extended to a nonlinear code. For example, for p prime, it is shown that a linear $[n, 3, n-3]_p$ code corresponding to a nonsingular cubic curve comprising $n > p-4$ points admits only extensions that are equivalent to linear codes. The methods involve the theory of Rédei blocking sets and the use of the Bruen–Silverman model of linear codes. 
 | 
	Asymptotically good $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes Abstract   We construct a class of $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes generated by pairs of polynomials, where p is a prime number. Based on probabilistic arguments, we determine the asymptotic rates and relative distances of this class of codes: the asymptotic Gilbert–Varshamov bound at $\frac{1+p^{s-r}}{2}\delta$ is greater than $\frac{1}{2}$, and the relative distance of the code converges to $\delta$, while the rate converges to $\frac{1}{1+p^{s-r}}$, for $0 < \delta < \frac{1}{1+p^{s-r}}$ and $1 \le r < s$. As a consequence, we prove that there exist numerous asymptotically good $\mathbb{Z}_{p^r}\mathbb{Z}_{p^s}$-additive cyclic codes. 
 | 
	Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of overspeed alerts, rear cameras, rear-obstacle detection and timely maintenance are well-known causes of fatal accidents, and these systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in different phases among passengers, drivers and even conductors to ensure a useful and successful result. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority. 
 | 
					
	Novel Coding Tools Based on Characteristics for Short Videos Currently, short videos occupy a large proportion of live streaming media, and short video coding attracts intense attention. Unlike traditional online videos, short videos feature frequent scene switching and allow various complex special effects to be added. For the unique characteristics of these short videos, we propose an advanced video compression platform, which includes Library Picture based Cross Random Access Point Reference (LPCRAPR), Affine-based Intra Block Copy (AIBC) and Local Illumination Compensation (LIC), to meet the challenges of short video coding. The experimental results show that our proposed platform achieves 5.84% and 2.74% gains under the Random Access (RA) and All Intra (AI) configurations compared to VTM 4.0, with approximately a twofold increase in encoding complexity and no additional decoding time. For our proposed short video coding, high coding efficiency and improved subjective results are achieved. 
 | 
	Next generation video coding for mobile applications: Industry requirements and technologies Handheld battery-operated consumer electronics devices such as camera phones, digital still cameras, digital camcorders, and personal media players have become very popular in recent years. Video codecs are extensively used in these devices for video capture and/or playback. The annual shipment of such devices already exceeds a hundred million units and is growing, which makes mobile battery-operated video device requirements very important to focus on in video coding research and development. This paper highlights the following unique set of requirements for video coding for these applications: low power consumption, high video quality at low complexity, and low cost, and motivates the need for a new video coding standard that enables better tradeoffs of power consumption, complexity, and coding efficiency to meet the challenging requirements of portable video devices. This paper also provides a brief overview of some of the video coding technologies being presented in the ITU-T Video Coding Experts Group (VCEG) standardization body for computational complexity reduction and for coding efficiency improvement in a future video coding standard. 
 | 
	Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons, massive gravity could also be produced without gravitons as well. Gravity divided into an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the nonmediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found. 
 | 
					
	Novel Coding Tools Based on Characteristics for Short Videos Currently, short videos occupy a large proportion of live streaming media, and short video coding attracts intense attention. Unlike traditional online videos, short videos feature frequent scene switching and allow various complex special effects to be added. For the unique characteristics of these short videos, we propose an advanced video compression platform, which includes Library Picture based Cross Random Access Point Reference (LPCRAPR), Affine-based Intra Block Copy (AIBC) and Local Illumination Compensation (LIC), to meet the challenges of short video coding. The experimental results show that our proposed platform achieves 5.84% and 2.74% gains under the Random Access (RA) and All Intra (AI) configurations compared to VTM 4.0, with approximately a twofold increase in encoding complexity and no additional decoding time. For our proposed short video coding, high coding efficiency and improved subjective results are achieved. 
 | 
	Next generation video coding for mobile applications: Industry requirements and technologies Handheld battery-operated consumer electronics devices such as camera phones, digital still cameras, digital camcorders, and personal media players have become very popular in recent years. Video codecs are extensively used in these devices for video capture and/or playback. The annual shipment of such devices already exceeds a hundred million units and is growing, which makes mobile battery-operated video device requirements very important to focus on in video coding research and development. This paper highlights the following unique set of requirements for video coding for these applications: low power consumption, high video quality at low complexity, and low cost, and motivates the need for a new video coding standard that enables better tradeoffs of power consumption, complexity, and coding efficiency to meet the challenging requirements of portable video devices. This paper also provides a brief overview of some of the video coding technologies being presented in the ITU-T Video Coding Experts Group (VCEG) standardization body for computational complexity reduction and for coding efficiency improvement in a future video coding standard. 
 | 
	Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such powerups, all of which have different properties, such as speed boosts, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation to gain a competitive advantage, or position themselves strategically to reach the spawned powerups first. 
 | 
					
	Novel Coding Tools Based on Characteristics for Short Videos Currently, short videos occupy a large proportion of live streaming media, and short video coding attracts intense attention. Unlike traditional online videos, short videos feature frequent scene switching and allow various complex special effects to be added. For the unique characteristics of these short videos, we propose an advanced video compression platform, which includes Library Picture based Cross Random Access Point Reference (LPCRAPR), Affine-based Intra Block Copy (AIBC) and Local Illumination Compensation (LIC), to meet the challenges of short video coding. The experimental results show that our proposed platform achieves 5.84% and 2.74% gains under the Random Access (RA) and All Intra (AI) configurations compared to VTM 4.0, with approximately a twofold increase in encoding complexity and no additional decoding time. For our proposed short video coding, high coding efficiency and improved subjective results are achieved. 
 | 
	Next generation video coding for mobile applications: Industry requirements and technologies Handheld battery-operated consumer electronics devices such as camera phones, digital still cameras, digital camcorders, and personal media players have become very popular in recent years. Video codecs are extensively used in these devices for video capture and/or playback. The annual shipment of such devices already exceeds a hundred million units and is growing, which makes mobile battery-operated video device requirements very important to focus on in video coding research and development. This paper highlights the following unique set of requirements for video coding for these applications: low power consumption, high video quality at low complexity, and low cost, and motivates the need for a new video coding standard that enables better tradeoffs of power consumption, complexity, and coding efficiency to meet the challenging requirements of portable video devices. This paper also provides a brief overview of some of the video coding technologies being presented in the ITU-T Video Coding Experts Group (VCEG) standardization body for computational complexity reduction and for coding efficiency improvement in a future video coding standard. 
 | 
	Effects of Brownfield Remediation on Total Gaseous Mercury Concentrations in an Urban Landscape In order to obtain a better perspective of the impacts of brownfields on the land–atmosphere exchange of mercury in urban areas, total gaseous mercury (TGM) was measured at two heights (1.8 m and 42.7 m) before (2011–2012) and after (2015–2016) the remediation of a brownfield and installation of a parking lot adjacent to the Syracuse Center of Excellence in Syracuse, NY, USA. Prior to brownfield remediation, the annual average TGM concentrations were 1.6 ± 0.6 and 1.4 ± 0.4 ng·m⁻³ at the ground and upper heights, respectively. After brownfield remediation, the annual average TGM concentrations decreased by 32% and 22% at the ground and the upper height, respectively. Mercury soil flux measurements during summer after remediation showed a net TGM deposition of 1.7 ng·m⁻²·day⁻¹, suggesting that the site transitioned from a mercury source to a net mercury sink. Measurements from the Atmospheric Mercury Network (AMNet) indicate that there was no regional decrease in TGM concentrations during the study period. This study demonstrates that evasion from mercury-contaminated soil significantly increased local TGM concentrations, which was subsequently mitigated after soil restoration. Considering the large number of brownfields, they may be an important source of mercury emissions to local urban ecosystems and warrant future study at additional locations. 
 | 
					
	Novel Coding Tools Based on Characteristics for Short Videos Currently, short videos occupy a large proportion of live streaming media, and short video coding attracts intense attention. Unlike traditional online videos, short videos feature frequent scene switching and allow various complex special effects to be added. For the unique characteristics of these short videos, we propose an advanced video compression platform, which includes Library Picture based Cross Random Access Point Reference (LPCRAPR), Affine-based Intra Block Copy (AIBC) and Local Illumination Compensation (LIC), to meet the challenges of short video coding. The experimental results show that our proposed platform achieves 5.84% and 2.74% gains under the Random Access (RA) and All Intra (AI) configurations compared to VTM 4.0, with approximately a twofold increase in encoding complexity and no additional decoding time. For our proposed short video coding, high coding efficiency and improved subjective results are achieved. 
 | 
	Next generation video coding for mobile applications: Industry requirements and technologies Handheld battery-operated consumer electronics devices such as camera phones, digital still cameras, digital camcorders, and personal media players have become very popular in recent years. Video codecs are extensively used in these devices for video capture and/or playback. The annual shipment of such devices already exceeds a hundred million units and is growing, which makes mobile battery-operated video device requirements very important to focus on in video coding research and development. This paper highlights the following unique set of requirements for video coding for these applications: low power consumption, high video quality at low complexity, and low cost, and motivates the need for a new video coding standard that enables better tradeoffs of power consumption, complexity, and coding efficiency to meet the challenging requirements of portable video devices. This paper also provides a brief overview of some of the video coding technologies being presented in the ITU-T Video Coding Experts Group (VCEG) standardization body for computational complexity reduction and for coding efficiency improvement in a future video coding standard. 
 | 
	Symmetric Simplicial Pseudoline Arrangements A simplicial arrangement of pseudolines is a collection of topological lines in the projective plane where each region that is formed is triangular. This paper refines and develops David Eppstein's notion of a kaleidoscope construction for symmetric pseudoline arrangements to construct and analyze several infinite families of simplicial pseudoline arrangements with high degrees of geometric symmetry. In particular, all simplicial pseudoline arrangements with the symmetries of a regular k-gon and three symmetry classes of pseudolines, consisting of the mirrors of the k-gon and two other symmetry classes, plus sometimes the line at infinity, are classified, and other interesting families (with more symmetry classes of pseudolines) are discussed. 
 | 
					
	Novel Coding Tools Based on Characteristics for Short Videos Currently, short videos occupy a large proportion of live streaming media, and short video coding attracts intense attention. Unlike traditional online videos, short videos feature frequent scene switching and allow various complex special effects to be added. For the unique characteristics of these short videos, we propose an advanced video compression platform, which includes Library Picture based Cross Random Access Point Reference (LPCRAPR), Affine-based Intra Block Copy (AIBC) and Local Illumination Compensation (LIC), to meet the challenges of short video coding. The experimental results show that our proposed platform achieves 5.84% and 2.74% gains under the Random Access (RA) and All Intra (AI) configurations compared to VTM 4.0, with approximately a twofold increase in encoding complexity and no additional decoding time. For our proposed short video coding, high coding efficiency and improved subjective results are achieved. 
 | 
	Next generation video coding for mobile applications: Industry requirements and technologies Handheld battery-operated consumer electronics devices such as camera phones, digital still cameras, digital camcorders, and personal media players have become very popular in recent years. Video codecs are extensively used in these devices for video capture and/or playback. The annual shipment of such devices already exceeds a hundred million units and is growing, which makes mobile battery-operated video device requirements very important to focus on in video coding research and development. This paper highlights the following unique set of requirements for video coding for these applications: low power consumption, high video quality at low complexity, and low cost, and motivates the need for a new video coding standard that enables better tradeoffs of power consumption, complexity, and coding efficiency to meet the challenging requirements of portable video devices. This paper also provides a brief overview of some of the video coding technologies being presented in the ITU-T Video Coding Experts Group (VCEG) standardization body for computational complexity reduction and for coding efficiency improvement in a future video coding standard. 
 | 
	What Makes a Social Robot Good at Interacting with Humans This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans and also reports potential solutions to several important questions around the futuristic design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?" and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal as well as nonverbal social and conversational cues that could be incorporated into the design of social robots, and also briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding the acceptance of social robots by humans and also ethical/moral concerns have also been discussed. 
 | 
					
	Eyes on you: field study of robot vendor using humanlike eye component "Akagachi" Eye gaze is an important nonverbal behavior for communication robots as it serves as the onset of communication. Existing communication robots have various eyes because design choices for an appropriate eye have yet to be determined, so many robots are designed on the basis of individual designers' ideas. Thus, this study focuses on humanlike eye gaze in a real environment. We developed an independent humanlike eye gaze component called Akagachi for various robots and conducted an observational field study by implementing it in a vendor robot called Reika. We conducted a field study in a theme park where Reika sells soft-serve ice cream in a food stall and analyzed the behaviors of 984 visitors. Our results indicate that Reika elicits significantly more interaction from people with eye gaze than without it. 
 | 
	Measuring engagement elicited by eye contact in Human-Robot Interaction The present study aims at investigating how eye contact established by a humanoid robot affects engagement in human-robot interaction (HRI). To this end, we combined explicit subjective evaluations with implicit measures, i.e. reaction times and eye tracking. More specifically, we employed a gaze-cueing paradigm in an HRI protocol involving the iCub robot. Critically, before moving its gaze, iCub either established eye contact or not with the user. We investigated the patterns of fixations of participants' gaze on the robot's face, joint attention and the subjective ratings of engagement as a function of eye contact or no eye contact. We found that eye contact affected implicit measures of engagement, i.e. longer fixation times on the robot's face during eye contact. Moreover, we showed that joint attention was elicited only when the robot established eye contact, whereas no joint attention occurred when it did not. On the contrary, explicit measures of engagement with the robot did not vary across conditions. Our results highlight the value of combining explicit with implicit measures in an HRI protocol in order to unveil underlying human cognitive mechanisms, which might be at stake during the interactions. These mechanisms could be crucial for establishing an effective and engaging HRI, and provide guidelines to the robotics community with respect to better robot design. 
 | 
	A generalization of generalized Paley graphs and new lower bounds for R(3, q) Generalized Paley graphs are cyclic graphs constructed from quadratic or higher residues of finite fields. Using this type of cyclic graph to study lower bounds for classical Ramsey numbers has high computing efficiency in both looking for parameter sets and computing clique numbers. We have found a new generalization of generalized Paley graphs, i.e. automorphism cyclic graphs, which has the same advantages. In this paper we study the properties of the parameter sets of automorphism cyclic graphs, and develop an algorithm to compute the order of the maximum independent set, based on which we obtain new lower bounds for 8 classical Ramsey numbers: $R(3,22) \geq 131$, $R(3,23) \geq 137$, $R(3,25) \geq 154$, $R(3,28) \geq 173$, $R(3,29) \geq 184$, $R(3,30) \geq 190$, $R(3,31) \geq 199$, $R(3,32) \geq 214$. Furthermore, we also obtain $R(5,23) \geq 521$ based on $R(3,22) \geq 131$. These nine results improve their corresponding best known lower bounds. (A toy residue-graph construction is sketched after this row.) 
 | 
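To make the construction in the last cell of the row above concrete, here is a toy Python sketch that builds the classical Paley graph (the quadratic-residue case of a generalized Paley graph) and finds its independence number by brute force. This is only a small-scale illustration, not the paper's algorithm for automorphism cyclic graphs.

```python
# Toy illustration (not the paper's algorithm): Paley graph from quadratic
# residues mod a prime p with p % 4 == 1, plus a brute-force independence
# number via maximal cliques of the complement. Small p only.
import networkx as nx

def paley_graph(p):
    residues = {(x * x) % p for x in range(1, p)}
    G = nx.Graph()
    G.add_nodes_from(range(p))
    for i in range(p):
        for j in range(i + 1, p):
            if (i - j) % p in residues:
                G.add_edge(i, j)
    return G

G = paley_graph(13)
alpha = max(len(c) for c in nx.find_cliques(nx.complement(G)))
print(alpha)  # independence number; Ramsey bounds come from far larger searches
```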
					
	Eyes on you: field study of robot vendor using humanlike eye component "Akagachi" Eye gaze is an important nonverbal behavior for communication robots as it serves as the onset of communication. Existing communication robots have various eyes because design choices for an appropriate eye have yet to be determined, so many robots are designed on the basis of individual designers' ideas. Thus, this study focuses on humanlike eye gaze in a real environment. We developed an independent humanlike eye gaze component called Akagachi for various robots and conducted an observational field study by implementing it in a vendor robot called Reika. We conducted a field study in a theme park where Reika sells soft-serve ice cream in a food stall and analyzed the behaviors of 984 visitors. Our results indicate that Reika elicits significantly more interaction from people with eye gaze than without it. 
 | 
	Measuring engagement elicited by eye contact in Human-Robot Interaction The present study aims at investigating how eye contact established by a humanoid robot affects engagement in human-robot interaction (HRI). To this end, we combined explicit subjective evaluations with implicit measures, i.e. reaction times and eye tracking. More specifically, we employed a gaze-cueing paradigm in an HRI protocol involving the iCub robot. Critically, before moving its gaze, iCub either established eye contact or not with the user. We investigated the patterns of fixations of participants' gaze on the robot's face, joint attention and the subjective ratings of engagement as a function of eye contact or no eye contact. We found that eye contact affected implicit measures of engagement, i.e. longer fixation times on the robot's face during eye contact. Moreover, we showed that joint attention was elicited only when the robot established eye contact, whereas no joint attention occurred when it did not. On the contrary, explicit measures of engagement with the robot did not vary across conditions. Our results highlight the value of combining explicit with implicit measures in an HRI protocol in order to unveil underlying human cognitive mechanisms, which might be at stake during the interactions. These mechanisms could be crucial for establishing an effective and engaging HRI, and provide guidelines to the robotics community with respect to better robot design. 
 | 
	Instrument Design and Performance of the High-Frequency Airborne Microwave and Millimeter-Wave Radiometer The high-frequency airborne microwave and millimeter-wave radiometer (HAMMR) is a cross-track scanning airborne radiometer instrument with 25 channels from 18.7 to 183.3 GHz. HAMMR includes: low-frequency microwave channels at 18.7, 23.8, and 34.0 GHz at two linear-orthogonal polarizations; high-frequency millimeter-wave channels at 90, 130 and 168 GHz; and millimeter-wave sounding channels consisting of eight channels near the 118.75 GHz oxygen absorption line for temperature profiling and eight additional channels near the 183.31 GHz water vapor absorption line for water vapor profiling. HAMMR was deployed on a Twin Otter aircraft for a west coast flight campaign (WCFC) from November 4–17, 2014. During the WCFC, HAMMR collected radiometric observations for more than 53.5 h under diverse atmospheric conditions, including clear sky, scattered and dense clouds, as well as over a variety of surface types, including coastal ocean areas, inland water and land. These measurements provide a comprehensive dataset to validate the instrument. 
 | 
					
	Eyes on you: field study of robot vendor using humanlike eye component "Akagachi" Eye gaze is an important nonverbal behavior for communication robots as it serves as the onset of communication. Existing communication robots have various eyes because design choices for an appropriate eye have yet to be determined, so many robots are designed on the basis of individual designers' ideas. Thus, this study focuses on humanlike eye gaze in a real environment. We developed an independent humanlike eye gaze component called Akagachi for various robots and conducted an observational field study by implementing it in a vendor robot called Reika. We conducted a field study in a theme park where Reika sells soft-serve ice cream in a food stall and analyzed the behaviors of 984 visitors. Our results indicate that Reika elicits significantly more interaction from people with eye gaze than without it. 
 | 
	Measuring engagement elicited by eye contact in Human-Robot Interaction The present study aims at investigating how eye contact established by a humanoid robot affects engagement in human-robot interaction (HRI). To this end, we combined explicit subjective evaluations with implicit measures, i.e. reaction times and eye tracking. More specifically, we employed a gaze-cueing paradigm in an HRI protocol involving the iCub robot. Critically, before moving its gaze, iCub either established eye contact or not with the user. We investigated the patterns of fixations of participants' gaze on the robot's face, joint attention and the subjective ratings of engagement as a function of eye contact or no eye contact. We found that eye contact affected implicit measures of engagement, i.e. longer fixation times on the robot's face during eye contact. Moreover, we showed that joint attention was elicited only when the robot established eye contact, whereas no joint attention occurred when it did not. On the contrary, explicit measures of engagement with the robot did not vary across conditions. Our results highlight the value of combining explicit with implicit measures in an HRI protocol in order to unveil underlying human cognitive mechanisms, which might be at stake during the interactions. These mechanisms could be crucial for establishing an effective and engaging HRI, and provide guidelines to the robotics community with respect to better robot design. 
 | 
	Crossing Number for Graphs with Bounded Pathwidth The crossing number is the smallest number of pairwise edge crossings when drawing a graph into the plane. There are only very few graph classes for which the exact crossing number is known or for which there at least exist constant approximation ratios. Furthermore, up to now, general crossing number computations have never been successfully tackled using bounded width of graph decompositions, like treewidth or pathwidth. In this paper, we show that the crossing number is tractable (even in linear time) for maximal graphs of bounded pathwidth 3. The technique also shows that the crossing number and the rectilinear (a.k.a. straight-line) crossing number are identical for this graph class, and that we require only an $O(n) \times O(n)$ grid to achieve such a drawing. Our techniques can further be extended to devise a 2-approximation for general graphs with pathwidth 3. One crucial ingredient here is that the crossing number of a graph with a separation pair can be lower-bounded using the crossing numbers of its cut-components, a result that may be interesting in its own right. Finally, we give a $4\mathbf{w}^3$-approximation of the crossing number for maximal graphs of pathwidth $\mathbf{w}$. This is a constant approximation for bounded pathwidth. We complement this with an NP-hardness proof of the weighted crossing number already for pathwidth-3 graphs and bicliques $K_{3,n}$. 
 | 
					
	Eyes on you: field study of robot vendor using humanlike eye component "Akagachi" Eye gaze is an important nonverbal behavior for communication robots as it serves as the onset of communication. Existing communication robots have various eyes because design choices for an appropriate eye have yet to be determined, so many robots are designed on the basis of individual designers' ideas. Thus, this study focuses on humanlike eye gaze in a real environment. We developed an independent humanlike eye gaze component called Akagachi for various robots and conducted an observational field study by implementing it in a vendor robot called Reika. We conducted a field study in a theme park where Reika sells soft-serve ice cream in a food stall and analyzed the behaviors of 984 visitors. Our results indicate that Reika elicits significantly more interaction from people with eye gaze than without it. 
 | 
	Measuring engagement elicited by eye contact in Human-Robot Interaction The present study aims at investigating how eye contact established by a humanoid robot affects engagement in human-robot interaction (HRI). To this end, we combined explicit subjective evaluations with implicit measures, i.e. reaction times and eye tracking. More specifically, we employed a gaze-cueing paradigm in an HRI protocol involving the iCub robot. Critically, before moving its gaze, iCub either established eye contact or not with the user. We investigated the patterns of fixations of participants' gaze on the robot's face, joint attention and the subjective ratings of engagement as a function of eye contact or no eye contact. We found that eye contact affected implicit measures of engagement, i.e. longer fixation times on the robot's face during eye contact. Moreover, we showed that joint attention was elicited only when the robot established eye contact, whereas no joint attention occurred when it did not. On the contrary, explicit measures of engagement with the robot did not vary across conditions. Our results highlight the value of combining explicit with implicit measures in an HRI protocol in order to unveil underlying human cognitive mechanisms, which might be at stake during the interactions. These mechanisms could be crucial for establishing an effective and engaging HRI, and provide guidelines to the robotics community with respect to better robot design. 
 | 
	Classifying unavoidable Tverberg partitions Let $T(d,r) = (d+1)(r-1)+1$ be the parameter in Tverberg's theorem, and call a partition $\mathcal{I}$ of $\{1,2,\ldots,T(d,r)\}$ into r parts a Tverberg type. We say that $\mathcal{I}$ occurs in an ordered point sequence P if P contains a subsequence P' of T(d,r) points such that the partition of P' that is order-isomorphic to $\mathcal{I}$ is a Tverberg partition. We say that $\mathcal{I}$ is unavoidable if it occurs in every sufficiently long point sequence. In this paper we study the problem of determining which Tverberg types are unavoidable. We conjecture a complete characterization of the unavoidable Tverberg types, and we prove some cases of our conjecture for $d \le 4$. Along the way, we study the avoidability of many other geometric predicates. Our techniques also yield a large family of T(d,r)-point sets for which the number of Tverberg partitions is exactly $((r-1)!)^d$. This lends further support for Sierksma's conjecture on the number of Tverberg partitions. (Tverberg's theorem is recalled after this row.) 
 | 
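For reference alongside the Tverberg-partition cell above, Tverberg's theorem itself (a standard statement from the literature, not quoted from this abstract) reads:

```latex
% Tverberg's theorem: any T(d,r) = (d+1)(r-1) + 1 points in R^d admit
% a partition into r parts whose convex hulls share a common point.
x_1, \ldots, x_{T(d,r)} \in \mathbb{R}^d
\;\Longrightarrow\;
\exists\; I_1 \sqcup \cdots \sqcup I_r = \{1, \ldots, T(d,r)\} :\;
\bigcap_{j=1}^{r} \operatorname{conv}\{ x_i : i \in I_j \} \neq \emptyset.
```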
					
	Eyes on you: field study of robot vendor using humanlike eye component "Akagachi" Eye gaze is an important nonverbal behavior for communication robots as it serves as the onset of communication. Existing communication robots have various eyes because design choices for an appropriate eye have yet to be determined, so many robots are designed on the basis of individual designers' ideas. Thus, this study focuses on humanlike eye gaze in a real environment. We developed an independent humanlike eye gaze component called Akagachi for various robots and conducted an observational field study by implementing it in a vendor robot called Reika. We conducted a field study in a theme park where Reika sells soft-serve ice cream in a food stall and analyzed the behaviors of 984 visitors. Our results indicate that Reika elicits significantly more interaction from people with eye gaze than without it. 
 | 
	Measuring engagement elicited by eye contact in Human-Robot Interaction The present study aims at investigating how eye contact established by a humanoid robot affects engagement in human-robot interaction (HRI). To this end, we combined explicit subjective evaluations with implicit measures, i.e. reaction times and eye tracking. More specifically, we employed a gaze-cueing paradigm in an HRI protocol involving the iCub robot. Critically, before moving its gaze, iCub either established eye contact or not with the user. We investigated the patterns of fixations of participants' gaze on the robot's face, joint attention and the subjective ratings of engagement as a function of eye contact or no eye contact. We found that eye contact affected implicit measures of engagement, i.e. longer fixation times on the robot's face during eye contact. Moreover, we showed that joint attention was elicited only when the robot established eye contact, whereas no joint attention occurred when it did not. On the contrary, explicit measures of engagement with the robot did not vary across conditions. Our results highlight the value of combining explicit with implicit measures in an HRI protocol in order to unveil underlying human cognitive mechanisms, which might be at stake during the interactions. These mechanisms could be crucial for establishing an effective and engaging HRI, and provide guidelines to the robotics community with respect to better robot design. 
 | 
	Shifted Set Families, Degree Sequences, and Plethysm We study, in three parts, degree sequences of k-families (or k-uniform hypergraphs) and shifted k-families. • The first part collects, for the first time in one place, various implications such as Threshold $\Rightarrow$ Uniquely Realizable $\Rightarrow$ Degree-Maximal $\Rightarrow$ Shifted, which are equivalent concepts for 2-families (= simple graphs), but strict implications for k-families with $k \geq 3$. The implication that uniquely realizable implies degree-maximal seems to be new. • The second part recalls Merris and Roby's reformulation of the characterization due to Ruch and Gutman for graphical degree sequences and shifted 2-families. It then introduces two generalizations which are characterizations of shifted k-families. • The third part recalls the connection between degree sequences of k-families of size m and the plethysm of elementary symmetric functions $e_m[e_k]$. It then uses highest weight theory to explain how shifted k-families provide the top part of these plethysm expansions, along with offering a conjecture about a further relation. (A small shiftedness check is sketched after this row.) 
 | 
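The notion of a shifted k-family used in the last cell of the row above admits a direct computational check; the small Python sketch below encodes the standard definition (replacing an element of a member set by any smaller absent element must stay in the family). The function name and the examples are illustrative.

```python
# Minimal sketch of the standard shiftedness test for a k-family on
# {1, ..., n}: swapping any element down to a smaller absent element
# must produce another member set.
def is_shifted(family):
    fam = {frozenset(s) for s in family}
    for s in fam:
        for x in s:
            for y in range(1, x):
                if y not in s and frozenset((s - {x}) | {y}) not in fam:
                    return False
    return True

print(is_shifted([{1, 2}, {1, 3}, {2, 3}]))  # True: all 2-subsets of {1,2,3}
print(is_shifted([{1, 3}, {2, 3}]))          # False: shifting 3 -> 2 in {1,3} gives {1,2}, which is absent
```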
					
	What Makes a Social Robot Good at Interacting with Humans This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans and also reports potential solutions to several important questions around the futuristic design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?" and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal as well as nonverbal social and conversational cues that could be incorporated into the design of social robots, and also briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding the acceptance of social robots by humans and also ethical/moral concerns have also been discussed. 
 | 
	Enthusiastic Robots Make Better Contact This paper presents the design and evaluation of humanlike welcoming behaviors for a humanoid robot to draw the attention of passersby by following a three-step model: (1) selecting a target (person) to engage, (2) executing behaviors to draw the target's attention, and (3) monitoring the attentive response. A computer vision algorithm was developed to select the person, start the behaviors and monitor the response automatically. To vary the robot's enthusiasm when engaging passersby, a waving gesture was designed as the basic welcoming behavioral element, which could be successively combined with an utterance and an approach movement. This way, three levels of enthusiasm were implemented: mild (waving), moderate (waving and utterance) and high (waving, utterance and approach movement). The three levels of welcoming behaviors were tested with a Pepper robot at the entrance of a university building. We recorded data and observation sheets from several hundred passersby (N = 364) and conducted post-interviews with randomly selected passersby (N = 28). The level selection was done at random for each participant. The passersby indicated that they appreciated the robot at the entrance and clearly recognized its role as a welcoming robot. In addition, the robot proved to draw more attention when showing high enthusiasm (i.e., more welcoming behaviors), particularly for female passersby. 
 | 
	Instrument Design and Performance of the High-Frequency Airborne Microwave and Millimeter-Wave Radiometer The high-frequency airborne microwave and millimeter-wave radiometer (HAMMR) is a cross-track scanning airborne radiometer instrument with 25 channels from 18.7 to 183.3 GHz. HAMMR includes: low-frequency microwave channels at 18.7, 23.8, and 34.0 GHz at two linear-orthogonal polarizations; high-frequency millimeter-wave channels at 90, 130 and 168 GHz; and millimeter-wave sounding channels consisting of eight channels near the 118.75 GHz oxygen absorption line for temperature profiling and eight additional channels near the 183.31 GHz water vapor absorption line for water vapor profiling. HAMMR was deployed on a Twin Otter aircraft for a west coast flight campaign (WCFC) from November 4–17, 2014. During the WCFC, HAMMR collected radiometric observations for more than 53.5 h under diverse atmospheric conditions, including clear sky, scattered and dense clouds, as well as over a variety of surface types, including coastal ocean areas, inland water and land. These measurements provide a comprehensive dataset to validate the instrument. 
 | 
					
	What Makes a Social Robot Good at Interacting with Humans This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans and also reports potential solutions to several important questions around the futuristic design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?" and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal as well as nonverbal social and conversational cues that could be incorporated into the design of social robots, and also briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding the acceptance of social robots by humans and also ethical/moral concerns have also been discussed. 
 | 
	Enthusiastic Robots Make Better Contact This paper presents the design and evaluation of humanlike welcoming behaviors for a humanoid robot to draw the attention of passersby by following a three-step model: (1) selecting a target (person) to engage, (2) executing behaviors to draw the target's attention, and (3) monitoring the attentive response. A computer vision algorithm was developed to select the person, start the behaviors and monitor the response automatically. To vary the robot's enthusiasm when engaging passersby, a waving gesture was designed as the basic welcoming behavioral element, which could be successively combined with an utterance and an approach movement. This way, three levels of enthusiasm were implemented: mild (waving), moderate (waving and utterance) and high (waving, utterance and approach movement). The three levels of welcoming behaviors were tested with a Pepper robot at the entrance of a university building. We recorded data and observation sheets from several hundred passersby (N = 364) and conducted post-interviews with randomly selected passersby (N = 28). The level selection was done at random for each participant. The passersby indicated that they appreciated the robot at the entrance and clearly recognized its role as a welcoming robot. In addition, the robot proved to draw more attention when showing high enthusiasm (i.e., more welcoming behaviors), particularly for female passersby. 
 | 
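The three enthusiasm levels form a simple cumulative design: each level adds one behavior on top of the previous one. A minimal sketch in Python; the names `Behavior` and `behaviors_for` are illustrative, not from the paper:

```python
from enum import Enum
import random

class Behavior(Enum):
    WAVE = "waving gesture"
    UTTERANCE = "spoken greeting"
    APPROACH = "approach movement"

# Cumulative design from the paper: each enthusiasm level adds one
# behavior on top of the previous level.
LEVELS = {
    "mild": [Behavior.WAVE],
    "moderate": [Behavior.WAVE, Behavior.UTTERANCE],
    "high": [Behavior.WAVE, Behavior.UTTERANCE, Behavior.APPROACH],
}

def behaviors_for(level: str) -> list:
    """Welcoming behaviors the robot executes at a given level."""
    return LEVELS[level]

# Level selection was randomized per passerby in the field study.
print(behaviors_for(random.choice(list(LEVELS))))
```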
Constitutive Model of Stress-Dependent Seepage in Columnar Jointed Rock Mass Columnar jointed rock mass (CJRM) is a highly symmetrical natural fractured structure. Because CJRM forms the dam foundation of the Baihetan Hydropower Station, the study of its permeability anisotropy is of great significance to engineering safety. Based on the theory of composite mechanics and Goodman's joint superposition principle, the constitutive model of joints of CJRM is derived for the quadrangular prism, pentagonal prism, and hexagonal prism models; combined with Singh's research results on intermittent joint stress concentration, and considering column deflection angles, the joint constitutive model of CJRM in three-dimensional space is established. For the CJRM in the Baihetan dam site area, the quadrangular prism, pentagonal prism, and hexagonal prism constitutive models were used to calculate the permeability coefficients of CJRM under different deflection angles. The permeability anisotropy characteristics of the three models were compared and verified against numerical simulation results. The results show that the calculations of the pentagonal prism model are in good agreement with the numerical simulations. The variation of the permeability coefficient under different confining pressures is compared, and the relationship between permeability coefficient and confining pressure is obtained, which follows a negative exponential function and conforms to the general rule of joint seepage. 
 | 
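The reported negative exponential relationship between permeability coefficient and confining pressure can be illustrated with a least-squares fit. A minimal sketch, assuming the common form k(σ) = k₀·exp(−bσ); the data points and fitted parameters are made up for demonstration:

```python
import numpy as np
from scipy.optimize import curve_fit

def neg_exp(sigma, k0, b):
    # Empirical joint-seepage law: permeability decays exponentially
    # with confining pressure.
    return k0 * np.exp(-b * sigma)

# Hypothetical (confining pressure [MPa], permeability coefficient [m/s]) data.
sigma = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
k = np.array([3.2e-5, 2.1e-5, 1.0e-5, 5.5e-6, 3.0e-6])

(k0, b), _ = curve_fit(neg_exp, sigma, k, p0=(k.max(), 0.3))
print(f"fitted k(sigma) = {k0:.3e} * exp(-{b:.3f} * sigma)")
```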
					
What Makes a Social Robot Good at Interacting with Humans This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans and discusses potential answers to several important questions around the future design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?"; and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding human acceptance of social robots, as well as ethical and moral concerns, are also discussed. 
 | 
Enthusiastic Robots Make Better Contact This paper presents the design and evaluation of humanlike welcoming behaviors for a humanoid robot to draw the attention of passersby by following a three-step model: (1) selecting a target (person) to engage, (2) executing behaviors to draw the target's attention, and (3) monitoring the attentive response. A computer vision algorithm was developed to select the person, start the behaviors, and monitor the response automatically. To vary the robot's enthusiasm when engaging passersby, a waving gesture was designed as the basic welcoming behavioral element, which could be successively combined with an utterance and an approach movement. This way, three levels of enthusiasm were implemented: mild (waving), moderate (waving and utterance), and high (waving, utterance, and approach movement). The three levels of welcoming behaviors were tested with a Pepper robot at the entrance of a university building. We recorded data and observation sheets from several hundred passersby (N = 364) and conducted post-interviews with randomly selected passersby (N = 28). The level selection was done at random for each participant. The passersby indicated that they appreciated the robot at the entrance and clearly recognized its role as a welcoming robot. In addition, the robot proved to draw more attention when showing high enthusiasm (i.e., more welcoming behaviors), particularly for female passersby. 
 | 
On the Utility of the Inverse Gamma Distribution in Modeling Composite Fading Channels. We introduce a general approach to characterize composite fading models based on inverse gamma (IG) shadowing. We first determine to what extent the IG distribution is an adequate choice for modeling shadow fading, by means of a comprehensive test with field measurements and other distributions conventionally used for this purpose. Then, we prove that the probability density function and cumulative distribution function of any IG-based composite fading model are directly expressed in terms of a Laplace-domain statistic of the underlying fast fading model, and in some relevant cases, as a mixture of well-known state-of-the-art distributions. We exemplify our approach by presenting a composite IG two-wave with diffuse power fading model, for which the statistical characterization is directly attained in a simple form. 
 | 
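The composite construction described above is straightforward to simulate: an inverse gamma shadowing term modulates the mean power of the fast fading. A minimal Monte Carlo sketch, assuming Rayleigh fast fading and arbitrary shape/scale values; the paper's analytical PDF/CDF expressions are not reproduced here:

```python
import numpy as np
from scipy.stats import invgamma, rayleigh

rng = np.random.default_rng(0)
n = 100_000

# Inverse gamma shadowing of the mean power (shape > 1 so the mean exists).
shape, scale = 3.0, 2.0
shadow_power = invgamma.rvs(shape, scale=scale, size=n, random_state=rng)

# Rayleigh fast-fading envelope, scaled so its power follows the shadowing.
envelope = rayleigh.rvs(size=n, random_state=rng) * np.sqrt(shadow_power)

# Empirical moments of the composite envelope.
print("mean envelope:", envelope.mean())
print("mean power  :", (envelope**2).mean())
```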
					
What Makes a Social Robot Good at Interacting with Humans This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans and discusses potential answers to several important questions around the future design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?"; and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding human acceptance of social robots, as well as ethical and moral concerns, are also discussed. 
 | 
Enthusiastic Robots Make Better Contact This paper presents the design and evaluation of humanlike welcoming behaviors for a humanoid robot to draw the attention of passersby by following a three-step model: (1) selecting a target (person) to engage, (2) executing behaviors to draw the target's attention, and (3) monitoring the attentive response. A computer vision algorithm was developed to select the person, start the behaviors, and monitor the response automatically. To vary the robot's enthusiasm when engaging passersby, a waving gesture was designed as the basic welcoming behavioral element, which could be successively combined with an utterance and an approach movement. This way, three levels of enthusiasm were implemented: mild (waving), moderate (waving and utterance), and high (waving, utterance, and approach movement). The three levels of welcoming behaviors were tested with a Pepper robot at the entrance of a university building. We recorded data and observation sheets from several hundred passersby (N = 364) and conducted post-interviews with randomly selected passersby (N = 28). The level selection was done at random for each participant. The passersby indicated that they appreciated the robot at the entrance and clearly recognized its role as a welcoming robot. In addition, the robot proved to draw more attention when showing high enthusiasm (i.e., more welcoming behaviors), particularly for female passersby. 
 | 
Spatio-Temporal Evolution Analysis of Drought Based on a Cloud Transformation Algorithm over Northern Anhui Province Drought is one of the most typical and serious natural disasters, occurring frequently over most of mainland China, and it is crucial to explore its evolution characteristics in order to develop effective schemes and strategies for drought disaster risk management. Building on the application of cloud theory in drought evolution research, a cloud transformation algorithm and a conception-zooming coupling model were proposed to refit the distribution pattern of the SPI in place of the Pearson-III distribution. The spatio-temporal evolution features of drought were then summarized using the cloud characteristics: average, entropy, and hyper-entropy. Application results for Northern Anhui province revealed that drought conditions were most serious from 1957 to 1970, with the SPI-12 index below −0.5 in 49 months, 12 of which reached the extreme drought level. The overall drought intensity varied with the highest certainty level but the lowest stability level in winter, with the opposite pattern in summer. Moreover, drought hazard intensified significantly with increasing latitude in Northern Anhui province. The overall drought hazard in Suzhou and Huaibei was the most serious, followed by Bozhou, Bengbu, and Fuyang; drought intensity in Huainan was the lightest. The results of the drought evolution analysis are reasonable and reliable, and supply an effective decision-making basis for establishing drought risk management strategies. 
 | 
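For context, the classical SPI recipe fits a distribution to rolling precipitation totals and maps its CDF onto standard-normal quantiles; the paper's contribution is to refit that distribution with a cloud transformation algorithm instead of the usual gamma/Pearson-III fit. A minimal sketch of the classical version, with synthetic precipitation data:

```python
import numpy as np
from scipy import stats

def spi(precip, window=12):
    """Classical SPI: fit a gamma distribution to rolling precipitation
    totals and map their CDF to standard-normal quantiles."""
    totals = np.convolve(precip, np.ones(window), mode="valid")
    a, loc, scale = stats.gamma.fit(totals, floc=0)  # location fixed at 0
    cdf = stats.gamma.cdf(totals, a, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

monthly_precip = np.random.default_rng(1).gamma(2.0, 40.0, size=240)
print((spi(monthly_precip) < -0.5).sum(), "drought months (SPI-12 < -0.5)")
```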
					
What Makes a Social Robot Good at Interacting with Humans This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans and discusses potential answers to several important questions around the future design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?"; and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding human acceptance of social robots, as well as ethical and moral concerns, are also discussed. 
 | 
Enthusiastic Robots Make Better Contact This paper presents the design and evaluation of humanlike welcoming behaviors for a humanoid robot to draw the attention of passersby by following a three-step model: (1) selecting a target (person) to engage, (2) executing behaviors to draw the target's attention, and (3) monitoring the attentive response. A computer vision algorithm was developed to select the person, start the behaviors, and monitor the response automatically. To vary the robot's enthusiasm when engaging passersby, a waving gesture was designed as the basic welcoming behavioral element, which could be successively combined with an utterance and an approach movement. This way, three levels of enthusiasm were implemented: mild (waving), moderate (waving and utterance), and high (waving, utterance, and approach movement). The three levels of welcoming behaviors were tested with a Pepper robot at the entrance of a university building. We recorded data and observation sheets from several hundred passersby (N = 364) and conducted post-interviews with randomly selected passersby (N = 28). The level selection was done at random for each participant. The passersby indicated that they appreciated the robot at the entrance and clearly recognized its role as a welcoming robot. In addition, the robot proved to draw more attention when showing high enthusiasm (i.e., more welcoming behaviors), particularly for female passersby. 
 | 
A Spatial–Temporal Subspace-Based Compressive Channel Estimation Technique in Unknown Interference MIMO Channels Spatial–temporal (ST) subspace-based channel estimation techniques formulated with the $\ell_2$ minimum mean square error (MMSE) criterion alleviate the multi-access interference (MAI) problem when the signals of interest exhibit a low-rank property. However, the conventional $\ell_2$ ST subspace-based methods suffer from mean squared error (MSE) deterioration in unknown interference channels, owing to the difficulty of separating the signals of interest from channel covariance matrices (CCMs) contaminated with unknown interference. As a solution to this problem, we propose a new $\ell_1$-regularized ST channel estimation algorithm that applies the expectation-maximization (EM) algorithm to iteratively examine the signal subspace and the corresponding sparse supports. The new algorithm updates the CCM independently of the slot-dependent $\ell_1$ regularization, which enables it to correctly perform sparse independent component analysis (ICA) with a reasonable complexity order. Simulation results verify that the proposed technique significantly improves MSE performance in unknown interference MIMO channels and hence solves the BER floor problems from which conventional receivers suffer. 
 | 
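The core of the $\ell_1$-regularized estimation step can be conveyed by iterative soft-thresholding on a sparse channel model y = Ah + n; the paper embeds such a step inside an EM loop that also re-estimates the channel covariance matrix, which this sketch does not attempt. All names and parameters here are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=300):
    """Iterative soft-thresholding for min ||y - A h||^2 + lam*||h||_1,
    i.e., the l1-regularized sparse-support estimation step."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    h = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        g = h - (A.conj().T @ (A @ h - y)) / L   # gradient step
        h = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam / L, 0.0)
    return h

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
A /= np.linalg.norm(A, axis=0)                   # unit-norm columns
h_true = np.zeros(128, dtype=complex)
h_true[[5, 40, 90]] = [1.0, -0.5j, 0.8]
y = A @ h_true + 0.01 * rng.standard_normal(64)
print(np.flatnonzero(np.abs(ista(A, y)) > 0.1))  # expected support: [5 40 90]
```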
					
Numerical Investigation of Forced Convective Heat Transfer and Performance Evaluation Criterion of Al2O3-Water Nanofluid Flow inside an Axisymmetric Microchannel Al2O3-water nanofluid conjugate heat transfer inside a microchannel is studied numerically. The fluid flow is laminar, a constant heat flux is applied to the axisymmetric microchannel's outer wall, and the two ends of the microchannel's wall are considered adiabatic. The problem is inherently three-dimensional; however, to reduce the computational cost of the solution, it is rational to consider only half of the axisymmetric microchannel, with the domain revolved about its axis. Hence, the problem is reduced to a two-dimensional domain, requiring a smaller computational grid. At the centerline (r = 0), as the flow is axisymmetric, there is no radial gradient (∂u/∂r = 0, v = 0, ∂T/∂r = 0). The effects of four Reynolds numbers (500, 1000, 1500, and 2000), particle volume fractions of 0% (pure water), 2%, 4%, and 6%, and nanoparticle diameters of 10 nm, 30 nm, 50 nm, and 70 nm on forced convective heat transfer as well as the performance evaluation criterion are studied. The performance evaluation criterion provides valuable information on heat transfer augmentation together with the pressure losses and pumping power needed in a system. One goal of the study is to quantify the expense of increased pressure loss incurred for the increment of the heat transfer coefficient. Furthermore, it is shown that, in contrast to the macroscale problem, the viscous dissipation effect in microchannels cannot be ignored and acts like an energy source in the fluid, affecting the temperature distribution as well as the heat transfer coefficient. In fact, at the microscale, an increase in inlet velocity leads to higher viscous dissipation rates and, as the friction between the wall and fluid is considerable, the wall temperature grows more intensely than the bulk temperature of the fluid. Consequently, the thermal behavior of the fluid in microchannels is substantially different from that at the macroscale. 
 | 
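Simulations like the one above need effective nanofluid properties as inputs. A minimal sketch using two standard closures, the Maxwell model for thermal conductivity and the Brinkman model for viscosity; the abstract does not state which correlations the paper actually uses, so these are assumptions:

```python
def nanofluid_properties(phi, k_f=0.613, k_p=40.0, mu_f=1.0e-3):
    """Effective conductivity (Maxwell) and viscosity (Brinkman) of an
    Al2O3-water nanofluid at particle volume fraction phi (water at ~25 C)."""
    k_eff = k_f * (k_p + 2*k_f + 2*phi*(k_p - k_f)) / (k_p + 2*k_f - phi*(k_p - k_f))
    mu_eff = mu_f / (1.0 - phi) ** 2.5
    return k_eff, mu_eff

for phi in (0.0, 0.02, 0.04, 0.06):   # volume fractions studied in the paper
    k_eff, mu_eff = nanofluid_properties(phi)
    print(f"phi={phi:.2f}: k={k_eff:.3f} W/m-K, mu={mu_eff:.2e} Pa-s")
```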
Characterization of Liquid Cooled Cold Plates for a Multi Chip Module (MCM) and their Impact on Data Center Chiller Operation Miniaturization of microelectronic components comes at the price of high heat flux density. By adopting liquid cooling, the rising demand of high-heat-flux devices can be met while the reliability of the microelectronic devices is also improved to a great extent. Liquid cooled cold plates are largely replacing air-based heat sinks for electronics in data center applications, thanks to the large heat-carrying capacity of liquid coolants. A bench-level study was carried out to characterize the thermohydraulic performance of microchannel cold plates that use warm deionized (DI) water for cooling Multi Chip Modules (MCM). A laboratory-built mock package housing mock chips and a heat spreader was employed to assess the thermal performance of three different cold plate designs at varying coolant flow rates and temperatures. The case temperatures measured at the heat spreader across varying flow rates and input powers were essential in identifying the convective resistance corresponding to each cold plate. The flow performance was evaluated by measuring the pressure drop across the cold plate module at varying flow rates. The cold plate with the enhanced microchannel design yielded better results than a traditional parallel microchannel design. The experimental results were validated using a numerical model, which was further optimized for improved geometric designs. Finally, an estimate of chiller operating cost was obtained for a 100% air-cooled facility and compared with that of a 60% warm-water-cooled facility. 
 | 
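The convective-resistance extraction mentioned above reduces to a simple energy balance per operating point. A minimal sketch; the flow rates, temperatures, and power are hypothetical values, not measurements from the paper:

```python
def thermal_resistance(t_case, t_inlet, power_w):
    """Bench-style resistance estimate: R = (T_case - T_inlet) / Q."""
    return (t_case - t_inlet) / power_w

# Hypothetical sweep: higher coolant flow rate -> lower resistance.
for flow_lpm, t_case in [(0.5, 62.0), (1.0, 55.0), (1.5, 51.5)]:
    r = thermal_resistance(t_case, t_inlet=45.0, power_w=200.0)
    print(f"{flow_lpm:.1f} L/min: R = {r:.3f} K/W")
```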
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of overspeed alerts, rear cameras, rear-obstacle detection, and timely maintenance is a recognized cause of fatal accidents; these systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in different phases among passengers, drivers, and even conductors to ensure a useful and successful result. Because the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority. 
 | 
					
Numerical Investigation of Forced Convective Heat Transfer and Performance Evaluation Criterion of Al2O3-Water Nanofluid Flow inside an Axisymmetric Microchannel Al2O3-water nanofluid conjugate heat transfer inside a microchannel is studied numerically. The fluid flow is laminar, a constant heat flux is applied to the axisymmetric microchannel's outer wall, and the two ends of the microchannel's wall are considered adiabatic. The problem is inherently three-dimensional; however, to reduce the computational cost of the solution, it is rational to consider only half of the axisymmetric microchannel, with the domain revolved about its axis. Hence, the problem is reduced to a two-dimensional domain, requiring a smaller computational grid. At the centerline (r = 0), as the flow is axisymmetric, there is no radial gradient (∂u/∂r = 0, v = 0, ∂T/∂r = 0). The effects of four Reynolds numbers (500, 1000, 1500, and 2000), particle volume fractions of 0% (pure water), 2%, 4%, and 6%, and nanoparticle diameters of 10 nm, 30 nm, 50 nm, and 70 nm on forced convective heat transfer as well as the performance evaluation criterion are studied. The performance evaluation criterion provides valuable information on heat transfer augmentation together with the pressure losses and pumping power needed in a system. One goal of the study is to quantify the expense of increased pressure loss incurred for the increment of the heat transfer coefficient. Furthermore, it is shown that, in contrast to the macroscale problem, the viscous dissipation effect in microchannels cannot be ignored and acts like an energy source in the fluid, affecting the temperature distribution as well as the heat transfer coefficient. In fact, at the microscale, an increase in inlet velocity leads to higher viscous dissipation rates and, as the friction between the wall and fluid is considerable, the wall temperature grows more intensely than the bulk temperature of the fluid. Consequently, the thermal behavior of the fluid in microchannels is substantially different from that at the macroscale. 
 | 
Characterization of Liquid Cooled Cold Plates for a Multi Chip Module (MCM) and their Impact on Data Center Chiller Operation Miniaturization of microelectronic components comes at the price of high heat flux density. By adopting liquid cooling, the rising demand of high-heat-flux devices can be met while the reliability of the microelectronic devices is also improved to a great extent. Liquid cooled cold plates are largely replacing air-based heat sinks for electronics in data center applications, thanks to the large heat-carrying capacity of liquid coolants. A bench-level study was carried out to characterize the thermohydraulic performance of microchannel cold plates that use warm deionized (DI) water for cooling Multi Chip Modules (MCM). A laboratory-built mock package housing mock chips and a heat spreader was employed to assess the thermal performance of three different cold plate designs at varying coolant flow rates and temperatures. The case temperatures measured at the heat spreader across varying flow rates and input powers were essential in identifying the convective resistance corresponding to each cold plate. The flow performance was evaluated by measuring the pressure drop across the cold plate module at varying flow rates. The cold plate with the enhanced microchannel design yielded better results than a traditional parallel microchannel design. The experimental results were validated using a numerical model, which was further optimized for improved geometric designs. Finally, an estimate of chiller operating cost was obtained for a 100% air-cooled facility and compared with that of a 60% warm-water-cooled facility. 
 | 
The Complete Picture of the Twitter Social Graph In this work, we collected the entire Twitter social graph, which consists of 537 million Twitter accounts connected by 23.95 billion links, and performed a preliminary analysis of the collected data. To collect the social graph, we implemented a distributed crawler on the PlanetLab infrastructure that gathered all the information in 4 months. Our preliminary analysis already reveals some interesting properties. Whereas there are 537 million Twitter accounts, only 268 million have sent at least one tweet, and no more than 54 million have been recently active. In addition, 40% of the accounts are not followed by anybody and 25% do not follow anybody. Finally, we found that Twitter policies, but also social conventions (like the follow-back convention), have a huge impact on the structure of the Twitter social graph. 
 | 
					
Numerical Investigation of Forced Convective Heat Transfer and Performance Evaluation Criterion of Al2O3-Water Nanofluid Flow inside an Axisymmetric Microchannel Al2O3-water nanofluid conjugate heat transfer inside a microchannel is studied numerically. The fluid flow is laminar, a constant heat flux is applied to the axisymmetric microchannel's outer wall, and the two ends of the microchannel's wall are considered adiabatic. The problem is inherently three-dimensional; however, to reduce the computational cost of the solution, it is rational to consider only half of the axisymmetric microchannel, with the domain revolved about its axis. Hence, the problem is reduced to a two-dimensional domain, requiring a smaller computational grid. At the centerline (r = 0), as the flow is axisymmetric, there is no radial gradient (∂u/∂r = 0, v = 0, ∂T/∂r = 0). The effects of four Reynolds numbers (500, 1000, 1500, and 2000), particle volume fractions of 0% (pure water), 2%, 4%, and 6%, and nanoparticle diameters of 10 nm, 30 nm, 50 nm, and 70 nm on forced convective heat transfer as well as the performance evaluation criterion are studied. The performance evaluation criterion provides valuable information on heat transfer augmentation together with the pressure losses and pumping power needed in a system. One goal of the study is to quantify the expense of increased pressure loss incurred for the increment of the heat transfer coefficient. Furthermore, it is shown that, in contrast to the macroscale problem, the viscous dissipation effect in microchannels cannot be ignored and acts like an energy source in the fluid, affecting the temperature distribution as well as the heat transfer coefficient. In fact, at the microscale, an increase in inlet velocity leads to higher viscous dissipation rates and, as the friction between the wall and fluid is considerable, the wall temperature grows more intensely than the bulk temperature of the fluid. Consequently, the thermal behavior of the fluid in microchannels is substantially different from that at the macroscale. 
 | 
Characterization of Liquid Cooled Cold Plates for a Multi Chip Module (MCM) and their Impact on Data Center Chiller Operation Miniaturization of microelectronic components comes at the price of high heat flux density. By adopting liquid cooling, the rising demand of high-heat-flux devices can be met while the reliability of the microelectronic devices is also improved to a great extent. Liquid cooled cold plates are largely replacing air-based heat sinks for electronics in data center applications, thanks to the large heat-carrying capacity of liquid coolants. A bench-level study was carried out to characterize the thermohydraulic performance of microchannel cold plates that use warm deionized (DI) water for cooling Multi Chip Modules (MCM). A laboratory-built mock package housing mock chips and a heat spreader was employed to assess the thermal performance of three different cold plate designs at varying coolant flow rates and temperatures. The case temperatures measured at the heat spreader across varying flow rates and input powers were essential in identifying the convective resistance corresponding to each cold plate. The flow performance was evaluated by measuring the pressure drop across the cold plate module at varying flow rates. The cold plate with the enhanced microchannel design yielded better results than a traditional parallel microchannel design. The experimental results were validated using a numerical model, which was further optimized for improved geometric designs. Finally, an estimate of chiller operating cost was obtained for a 100% air-cooled facility and compared with that of a 60% warm-water-cooled facility. 
 | 
Managing Information Managing Information highlights the increasing value of information and IT within organizations and shows how organizations use it. It also deals with the crucial relationship between information and personal effectiveness. The use of computer software and communications in a management context is discussed in detail, including how to mould an information system to your needs. The book explains the basics using real-life examples and brings managers up to date with the latest developments in electronic commerce and the Internet. The book is based on the Management Charter Initiative's Occupational Standards for Management NVQs and SVQs at level 4. It is particularly suitable for managers on the Certificate in Management, or Part I of the Diploma, especially those accredited by the IM and BTEC. 
 | 
					
Numerical Investigation of Forced Convective Heat Transfer and Performance Evaluation Criterion of Al2O3-Water Nanofluid Flow inside an Axisymmetric Microchannel Al2O3-water nanofluid conjugate heat transfer inside a microchannel is studied numerically. The fluid flow is laminar, a constant heat flux is applied to the axisymmetric microchannel's outer wall, and the two ends of the microchannel's wall are considered adiabatic. The problem is inherently three-dimensional; however, to reduce the computational cost of the solution, it is rational to consider only half of the axisymmetric microchannel, with the domain revolved about its axis. Hence, the problem is reduced to a two-dimensional domain, requiring a smaller computational grid. At the centerline (r = 0), as the flow is axisymmetric, there is no radial gradient (∂u/∂r = 0, v = 0, ∂T/∂r = 0). The effects of four Reynolds numbers (500, 1000, 1500, and 2000), particle volume fractions of 0% (pure water), 2%, 4%, and 6%, and nanoparticle diameters of 10 nm, 30 nm, 50 nm, and 70 nm on forced convective heat transfer as well as the performance evaluation criterion are studied. The performance evaluation criterion provides valuable information on heat transfer augmentation together with the pressure losses and pumping power needed in a system. One goal of the study is to quantify the expense of increased pressure loss incurred for the increment of the heat transfer coefficient. Furthermore, it is shown that, in contrast to the macroscale problem, the viscous dissipation effect in microchannels cannot be ignored and acts like an energy source in the fluid, affecting the temperature distribution as well as the heat transfer coefficient. In fact, at the microscale, an increase in inlet velocity leads to higher viscous dissipation rates and, as the friction between the wall and fluid is considerable, the wall temperature grows more intensely than the bulk temperature of the fluid. Consequently, the thermal behavior of the fluid in microchannels is substantially different from that at the macroscale. 
 | 
Characterization of Liquid Cooled Cold Plates for a Multi Chip Module (MCM) and their Impact on Data Center Chiller Operation Miniaturization of microelectronic components comes at the price of high heat flux density. By adopting liquid cooling, the rising demand of high-heat-flux devices can be met while the reliability of the microelectronic devices is also improved to a great extent. Liquid cooled cold plates are largely replacing air-based heat sinks for electronics in data center applications, thanks to the large heat-carrying capacity of liquid coolants. A bench-level study was carried out to characterize the thermohydraulic performance of microchannel cold plates that use warm deionized (DI) water for cooling Multi Chip Modules (MCM). A laboratory-built mock package housing mock chips and a heat spreader was employed to assess the thermal performance of three different cold plate designs at varying coolant flow rates and temperatures. The case temperatures measured at the heat spreader across varying flow rates and input powers were essential in identifying the convective resistance corresponding to each cold plate. The flow performance was evaluated by measuring the pressure drop across the cold plate module at varying flow rates. The cold plate with the enhanced microchannel design yielded better results than a traditional parallel microchannel design. The experimental results were validated using a numerical model, which was further optimized for improved geometric designs. Finally, an estimate of chiller operating cost was obtained for a 100% air-cooled facility and compared with that of a 60% warm-water-cooled facility. 
 | 
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps, and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions, and elevation to gain competitive advantage, or position themselves strategically to reach the spawned power-ups first. 
 | 
					
Numerical Investigation of Forced Convective Heat Transfer and Performance Evaluation Criterion of Al2O3-Water Nanofluid Flow inside an Axisymmetric Microchannel Al2O3-water nanofluid conjugate heat transfer inside a microchannel is studied numerically. The fluid flow is laminar, a constant heat flux is applied to the axisymmetric microchannel's outer wall, and the two ends of the microchannel's wall are considered adiabatic. The problem is inherently three-dimensional; however, to reduce the computational cost of the solution, it is rational to consider only half of the axisymmetric microchannel, with the domain revolved about its axis. Hence, the problem is reduced to a two-dimensional domain, requiring a smaller computational grid. At the centerline (r = 0), as the flow is axisymmetric, there is no radial gradient (∂u/∂r = 0, v = 0, ∂T/∂r = 0). The effects of four Reynolds numbers (500, 1000, 1500, and 2000), particle volume fractions of 0% (pure water), 2%, 4%, and 6%, and nanoparticle diameters of 10 nm, 30 nm, 50 nm, and 70 nm on forced convective heat transfer as well as the performance evaluation criterion are studied. The performance evaluation criterion provides valuable information on heat transfer augmentation together with the pressure losses and pumping power needed in a system. One goal of the study is to quantify the expense of increased pressure loss incurred for the increment of the heat transfer coefficient. Furthermore, it is shown that, in contrast to the macroscale problem, the viscous dissipation effect in microchannels cannot be ignored and acts like an energy source in the fluid, affecting the temperature distribution as well as the heat transfer coefficient. In fact, at the microscale, an increase in inlet velocity leads to higher viscous dissipation rates and, as the friction between the wall and fluid is considerable, the wall temperature grows more intensely than the bulk temperature of the fluid. Consequently, the thermal behavior of the fluid in microchannels is substantially different from that at the macroscale. 
 | 
Characterization of Liquid Cooled Cold Plates for a Multi Chip Module (MCM) and their Impact on Data Center Chiller Operation Miniaturization of microelectronic components comes at the price of high heat flux density. By adopting liquid cooling, the rising demand of high-heat-flux devices can be met while the reliability of the microelectronic devices is also improved to a great extent. Liquid cooled cold plates are largely replacing air-based heat sinks for electronics in data center applications, thanks to the large heat-carrying capacity of liquid coolants. A bench-level study was carried out to characterize the thermohydraulic performance of microchannel cold plates that use warm deionized (DI) water for cooling Multi Chip Modules (MCM). A laboratory-built mock package housing mock chips and a heat spreader was employed to assess the thermal performance of three different cold plate designs at varying coolant flow rates and temperatures. The case temperatures measured at the heat spreader across varying flow rates and input powers were essential in identifying the convective resistance corresponding to each cold plate. The flow performance was evaluated by measuring the pressure drop across the cold plate module at varying flow rates. The cold plate with the enhanced microchannel design yielded better results than a traditional parallel microchannel design. The experimental results were validated using a numerical model, which was further optimized for improved geometric designs. Finally, an estimate of chiller operating cost was obtained for a 100% air-cooled facility and compared with that of a 60% warm-water-cooled facility. 
 | 
Sentiment Analysis of Events in Social Media The growing popularity of Online Social Networks has opened new research directions and perspectives for content analysis, namely Network Analysis and Natural Language Processing. From the perspective of information spread, the Network Analysis community proposes Event Detection. This approach focuses on network features without an in-depth analysis of the textual content, with summarization being a preferred method. Natural Language Processing analyzes only the textual content, without integrating the graph-based structure of the network. To address these limitations, we propose a method that bridges the two directions and integrates content-awareness into network-awareness. Our method uses event detection to extract topics of interest and then applies sentiment analysis to each event. The obtained results have high accuracy, showing that our method determines with high precision the overall sentiment of the detected events. 
 | 
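The detect-then-score structure of the method above can be shown with a toy pipeline: group posts by detected event, score each post, and average per event. A minimal sketch with a tiny hand-made lexicon; the paper's actual event detection and sentiment models are not specified here:

```python
from collections import defaultdict
from statistics import mean

# Toy lexicon; a real system would use a trained sentiment model.
LEXICON = {"great": 1.0, "love": 1.0, "bad": -1.0, "terrible": -1.0}

def score(text: str) -> float:
    return sum(LEXICON.get(w, 0.0) for w in text.lower().split())

def event_sentiment(posts):
    """posts: iterable of (event_id, text) pairs, as produced by an
    upstream event detection step. Returns mean sentiment per event."""
    by_event = defaultdict(list)
    for event_id, text in posts:
        by_event[event_id].append(score(text))
    return {event: mean(scores) for event, scores in by_event.items()}

posts = [(1, "love the new release"), (1, "great event"), (2, "terrible delays")]
print(event_sentiment(posts))   # {1: 1.0, 2: -1.0}
```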
					
Design and Implementation of a Virtual Sensor Network for Smart Waste Water Monitoring Monitoring and analysis of open-air basins is a critical task in wastewater plant management. These tasks generally require sampling waters at several hard-to-access points, whether in real time with multiparametric sensor probes or by retrieving water samples. Full automation of these processes would require deploying hundreds (if not thousands) of fixed sensors, unless the sensors can be repositioned. This work proposes the use of robotized unmanned aerial vehicle (UAV) platforms to act as a virtual high-density sensor network, which can analyze in real time or capture samples depending on the robotic UAV equipment. To check the validity of the concept, an instance of the robotized UAV platform has been fully designed and implemented. A multiagent system approach has been used (implemented over a Robot Operating System, ROS, middleware layer) to define a software architecture able to deal with the different problems while optimizing the modularity of the software; in terms of hardware, the UAV platform has been designed and built, as has a sample-capturing probe. A description of the main features of the proposed multiagent system, its architecture, and the behavior of several components is provided. The experimental validation and performance evaluation of the system components have been performed independently for the sake of safety: autonomous flight performance has been tested on-site; the accuracy of the localization technologies deemed deployable options has been evaluated in controlled flights; and the viability of the sample capture device designed and built has been experimentally tested. 
 | 
Coverage Sampling Planner for UAV-enabled Environmental Exploration and Field Mapping Unmanned Aerial Vehicles (UAVs) have been implemented for environmental monitoring by using their capabilities of mobile sensing, autonomous navigation, and remote operation. However, in real-world applications, the limitations of onboard resources (e.g., power supply) of UAVs constrain the coverage of the monitored area and the number of acquired samples, which hinders the performance of field estimation and mapping. Therefore, the issue of constrained resources calls for an efficient sampling planner to schedule UAV-based sensing tasks in environmental monitoring. This paper presents a mission planner for coverage sampling and path planning that enables a UAV-borne mobile sensor to effectively explore and map an unknown environment modeled as a random field. The proposed planner can generate a coverage path with an optimal coverage density for exploratory sampling, with the associated energy cost subject to a power supply constraint. The performance of the developed framework is evaluated and compared with existing state-of-the-art algorithms, using a real-world dataset collected from an environmental monitoring program as well as physical field experiments. The experimental results illustrate the reliability and accuracy of the presented coverage sampling planner in a prior survey for environmental exploration and field mapping. 
 | 
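The trade-off between coverage density and energy budget described above can be illustrated with a boustrophedon (lawnmower) pattern: choose the densest track spacing whose total path length fits the budget. A minimal sketch in which the energy budget is modeled, as an assumption, by a maximum travel distance:

```python
def lawnmower_path(width, height, spacing):
    """Boustrophedon waypoints covering a width x height field; spacing
    stands in for the paper's coverage density parameter."""
    path, x, upward = [], 0.0, True
    while x <= width:
        ys = [0.0, height] if upward else [height, 0.0]
        path += [(x, y) for y in ys]
        upward, x = not upward, x + spacing
    return path

def path_length(path):
    return sum(((x2 - x1)**2 + (y2 - y1)**2) ** 0.5
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

budget_m = 2500.0                        # assumed energy budget as distance
for spacing in (5.0, 10.0, 20.0, 40.0):  # densest first
    p = lawnmower_path(200.0, 100.0, spacing)
    if path_length(p) <= budget_m:
        print(f"chosen spacing: {spacing} m, path length {path_length(p):.0f} m")
        break
```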
The long-term effect of media violence exposure on aggression of youngsters Abstract The effect of media violence on aggression has always been a trending issue, and a better understanding of the psychological mechanism of the impact of media violence on youth aggression is an extremely important research topic for preventing the negative impacts of media violence and juvenile delinquency. From the perspective of anger, this study explored the long-term effect of different degrees of media violence exposure on the aggression of youngsters, as well as the role of aggressive emotions. The studies found that individuals with a high degree of media violence exposure (HMVE) exhibited higher levels of proactive aggression in both irritation conditions, and higher levels of reactive aggression in low-irritation conditions, than did participants with a low degree of media violence exposure (LMVE). After being provoked, the anger of all participants was significantly increased, and the anger and proactive aggression levels of the HMVE group were significantly higher than those of the LMVE group. Additionally, rumination and anger played a mediating role in the relationship between media violence exposure and aggression. Overall, this study enriches the theoretical understanding of the long-term effect of media violence exposure on individual aggression and deepens our understanding of the mechanism linking media violence exposure and individual aggression. 
 | 
					
Design and Implementation of a Virtual Sensor Network for Smart Waste Water Monitoring Monitoring and analysis of open-air basins is a critical task in wastewater plant management. These tasks generally require sampling waters at several hard-to-access points, whether in real time with multiparametric sensor probes or by retrieving water samples. Full automation of these processes would require deploying hundreds (if not thousands) of fixed sensors, unless the sensors can be repositioned. This work proposes the use of robotized unmanned aerial vehicle (UAV) platforms to act as a virtual high-density sensor network, which can analyze in real time or capture samples depending on the robotic UAV equipment. To check the validity of the concept, an instance of the robotized UAV platform has been fully designed and implemented. A multiagent system approach has been used (implemented over a Robot Operating System, ROS, middleware layer) to define a software architecture able to deal with the different problems while optimizing the modularity of the software; in terms of hardware, the UAV platform has been designed and built, as has a sample-capturing probe. A description of the main features of the proposed multiagent system, its architecture, and the behavior of several components is provided. The experimental validation and performance evaluation of the system components have been performed independently for the sake of safety: autonomous flight performance has been tested on-site; the accuracy of the localization technologies deemed deployable options has been evaluated in controlled flights; and the viability of the sample capture device designed and built has been experimentally tested. 
 | 
Coverage Sampling Planner for UAV-enabled Environmental Exploration and Field Mapping Unmanned Aerial Vehicles (UAVs) have been implemented for environmental monitoring by using their capabilities of mobile sensing, autonomous navigation, and remote operation. However, in real-world applications, the limitations of onboard resources (e.g., power supply) of UAVs constrain the coverage of the monitored area and the number of acquired samples, which hinders the performance of field estimation and mapping. Therefore, the issue of constrained resources calls for an efficient sampling planner to schedule UAV-based sensing tasks in environmental monitoring. This paper presents a mission planner for coverage sampling and path planning that enables a UAV-borne mobile sensor to effectively explore and map an unknown environment modeled as a random field. The proposed planner can generate a coverage path with an optimal coverage density for exploratory sampling, with the associated energy cost subject to a power supply constraint. The performance of the developed framework is evaluated and compared with existing state-of-the-art algorithms, using a real-world dataset collected from an environmental monitoring program as well as physical field experiments. The experimental results illustrate the reliability and accuracy of the presented coverage sampling planner in a prior survey for environmental exploration and field mapping. 
 | 
Shifted Set Families, Degree Sequences, and Plethysm We study, in three parts, degree sequences of k-families (or k-uniform hypergraphs) and shifted k-families. (1) The first part collects, for the first time in one place, various implications such as $\text{Threshold} \Rightarrow \text{Uniquely Realizable} \Rightarrow \text{Degree-Maximal} \Rightarrow \text{Shifted}$, which are equivalent concepts for 2-families (= simple graphs) but strict implications for k-families with $k \geq 3$. The implication that uniquely realizable implies degree-maximal seems to be new. (2) The second part recalls Merris and Roby's reformulation of the characterization, due to Ruch and Gutman, of graphical degree sequences and shifted 2-families. It then introduces two generalizations which are characterizations of shifted k-families. (3) The third part recalls the connection between degree sequences of k-families of size m and the plethysm of elementary symmetric functions $e_m[e_k]$. It then uses highest weight theory to explain how shifted k-families provide the top part of these plethysm expansions, along with offering a conjecture about a further relation. 
 | 
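Shiftedness, the final property in the implication chain above, is directly checkable: a k-family on {1, 2, ...} is shifted if replacing any element of a member set with a smaller element not already in the set yields another member. A minimal sketch (the function name is illustrative):

```python
def is_shifted(family):
    """Check shiftedness of a k-family given as an iterable of sets of
    positive integers."""
    fam = {frozenset(s) for s in family}
    for s in fam:
        for x in s:
            for y in range(1, x):
                # Replacing x by the smaller unused y must stay in the family.
                if y not in s and (s - {x}) | {y} not in fam:
                    return False
    return True

print(is_shifted([{1, 2}, {1, 3}, {2, 3}]))  # True
print(is_shifted([{1, 2}, {3, 4}]))          # False: {3,4} -> {1,4} missing
```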
					
Design and Implementation of a Virtual Sensor Network for Smart Waste Water Monitoring Monitoring and analysis of open-air basins is a critical task in wastewater plant management. These tasks generally require sampling waters at several hard-to-access points, whether in real time with multiparametric sensor probes or by retrieving water samples. Full automation of these processes would require deploying hundreds (if not thousands) of fixed sensors, unless the sensors can be repositioned. This work proposes the use of robotized unmanned aerial vehicle (UAV) platforms to act as a virtual high-density sensor network, which can analyze in real time or capture samples depending on the robotic UAV equipment. To check the validity of the concept, an instance of the robotized UAV platform has been fully designed and implemented. A multiagent system approach has been used (implemented over a Robot Operating System, ROS, middleware layer) to define a software architecture able to deal with the different problems while optimizing the modularity of the software; in terms of hardware, the UAV platform has been designed and built, as has a sample-capturing probe. A description of the main features of the proposed multiagent system, its architecture, and the behavior of several components is provided. The experimental validation and performance evaluation of the system components have been performed independently for the sake of safety: autonomous flight performance has been tested on-site; the accuracy of the localization technologies deemed deployable options has been evaluated in controlled flights; and the viability of the sample capture device designed and built has been experimentally tested. 
 | 
Coverage Sampling Planner for UAV-enabled Environmental Exploration and Field Mapping Unmanned Aerial Vehicles (UAVs) have been implemented for environmental monitoring by using their capabilities of mobile sensing, autonomous navigation, and remote operation. However, in real-world applications, the limitations of onboard resources (e.g., power supply) of UAVs constrain the coverage of the monitored area and the number of acquired samples, which hinders the performance of field estimation and mapping. Therefore, the issue of constrained resources calls for an efficient sampling planner to schedule UAV-based sensing tasks in environmental monitoring. This paper presents a mission planner for coverage sampling and path planning that enables a UAV-borne mobile sensor to effectively explore and map an unknown environment modeled as a random field. The proposed planner can generate a coverage path with an optimal coverage density for exploratory sampling, with the associated energy cost subject to a power supply constraint. The performance of the developed framework is evaluated and compared with existing state-of-the-art algorithms, using a real-world dataset collected from an environmental monitoring program as well as physical field experiments. The experimental results illustrate the reliability and accuracy of the presented coverage sampling planner in a prior survey for environmental exploration and field mapping. 
 | 
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well. Gravity divided by an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would always quantize gravity. So the non-mediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; this makes gravitons unproven and unnecessary, and explains why gravitons have never been found. 
 | 
					
Design and Implementation of a Virtual Sensor Network for Smart Waste Water Monitoring Monitoring and analysis of open-air basins is a critical task in wastewater plant management. These tasks generally require sampling waters at several hard-to-access points, whether in real time with multiparametric sensor probes or by retrieving water samples. Full automation of these processes would require deploying hundreds (if not thousands) of fixed sensors, unless the sensors can be repositioned. This work proposes the use of robotized unmanned aerial vehicle (UAV) platforms to act as a virtual high-density sensor network, which can analyze in real time or capture samples depending on the robotic UAV equipment. To check the validity of the concept, an instance of the robotized UAV platform has been fully designed and implemented. A multiagent system approach has been used (implemented over a Robot Operating System, ROS, middleware layer) to define a software architecture able to deal with the different problems while optimizing the modularity of the software; in terms of hardware, the UAV platform has been designed and built, as has a sample-capturing probe. A description of the main features of the proposed multiagent system, its architecture, and the behavior of several components is provided. The experimental validation and performance evaluation of the system components have been performed independently for the sake of safety: autonomous flight performance has been tested on-site; the accuracy of the localization technologies deemed deployable options has been evaluated in controlled flights; and the viability of the sample capture device designed and built has been experimentally tested. 
 | 
Coverage Sampling Planner for UAV-enabled Environmental Exploration and Field Mapping Unmanned Aerial Vehicles (UAVs) have been implemented for environmental monitoring by using their capabilities of mobile sensing, autonomous navigation, and remote operation. However, in real-world applications, the limitations of onboard resources (e.g., power supply) of UAVs constrain the coverage of the monitored area and the number of acquired samples, which hinders the performance of field estimation and mapping. Therefore, the issue of constrained resources calls for an efficient sampling planner to schedule UAV-based sensing tasks in environmental monitoring. This paper presents a mission planner for coverage sampling and path planning that enables a UAV-borne mobile sensor to effectively explore and map an unknown environment modeled as a random field. The proposed planner can generate a coverage path with an optimal coverage density for exploratory sampling, with the associated energy cost subject to a power supply constraint. The performance of the developed framework is evaluated and compared with existing state-of-the-art algorithms, using a real-world dataset collected from an environmental monitoring program as well as physical field experiments. The experimental results illustrate the reliability and accuracy of the presented coverage sampling planner in a prior survey for environmental exploration and field mapping. 
 | 
How to Make a Medical Error Disclosure to Patients This paper aims to investigate the Chinese public's expectations of medical error disclosure and to develop guidelines for hospitals. A national questionnaire survey was conducted in 2019, collecting 1,008 valid responses. Respondents were asked their views on the severity of error they would like disclosed, and what, when, where, and by whom they preferred an error disclosure to be made. Results showed that the Chinese public would like to be told about any error that reached them, even one causing no harm. For both moderate and severe outcome errors, they preferred to be informed face-to-face, with all the information in as much detail as possible, immediately after the error was recognized, and in a prepared meeting room. Regarding attendance from the patient side, disclosure was expected to be made to the patient and family. From the hospital side, the healthcare provider who committed the error, his/her leader, the patient safety manager, and a high-positioned person of the hospital were expected to be present. Regarding who should make the disclosure, respondents preferred the healthcare provider who committed the error in a moderate outcome case, and the leader or a high-positioned person in a severe case. 
 | 
					
A Multimodal Advanced Approach for the Stratification of Carotid Artery Disease The scope of this paper is to present the novel risk stratification framework for carotid artery disease that is under development in the TAXINOMISIS study. The study is implementing a multimodal strategy, integrating big data and advanced modeling approaches, to improve the stratification and management of patients with carotid artery disease, who are at risk of manifesting cerebrovascular events such as stroke. Advanced image processing tools for 3D reconstruction of the carotid artery bifurcation, together with hybrid computational models of plaque growth based on fluid dynamics and agent-based modeling, are under development. Model predictions of plaque growth, rupture, or erosion, combined with big data from unique longitudinal cohorts and biobanks, including multi-omics, will be used as inputs to machine learning and data mining algorithms to develop a new risk stratification platform able to identify patients at high risk of cerebrovascular events in a precise and personalized manner. Successful completion of the TAXINOMISIS platform will lead to advances beyond the state of the art in risk stratification of carotid artery disease, rationally reduce unnecessary operations, refine medical treatment, and open new directions for therapeutic interventions, with high socioeconomic impact. 
 | 
Vessel lumen segmentation in internal carotid artery ultrasounds with deep convolutional neural networks Carotid ultrasound is a screening modality used by physicians to direct treatment in the prevention of ischemic stroke in high-risk patients. It is a time-intensive process that requires highly trained technicians and physicians. Evaluation of a carotid ultrasound requires identification of the vessel wall, lumen, and plaque of the carotid artery. Automated machine learning methods for these tasks are highly limited. We propose and evaluate here single- and multi-path convolutional U-Nets for lumen identification from ultrasound images. We obtained de-identified images under IRB approval from 98 patients. We isolated just the internal carotid artery ultrasound images for these patients, giving us a total of 302 images. We manually segmented the vessel lumen, which we use as ground truth to develop and validate our model. With a basic convolutional U-Net we obtained a 10-fold cross-validation accuracy of 95%. We also evaluated a dual-path U-Net where we modified the original image and used it as a synthetic modality, but we found no improvement in accuracy. We found that the sample size made a considerable difference, and we thus expect our accuracy to rise as we add more training samples to the model. Our work here represents a first successful step towards the automated identification of the vessel lumen in carotid artery ultrasound images and is an important first step in creating a system that can independently evaluate carotid ultrasounds. 
 | 
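The abstract above applies single- and multi-path U-Nets to lumen masks. A minimal single-path sketch in PyTorch follows; the channel widths, single encoder level, and 256x256 grayscale input are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # two 3x3 convs with ReLU, the basic U-Net building block
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Single-path U-Net with one down/up level for binary lumen masks."""
    def __init__(self):
        super().__init__()
        self.enc = double_conv(1, 16)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = double_conv(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = double_conv(32, 16)      # 16 skip channels + 16 upsampled
        self.head = nn.Conv2d(16, 1, 1)     # per-pixel lumen logit

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        u = self.up(b)
        return self.head(self.dec(torch.cat([e, u], dim=1)))

# e.g. a batch of two 256x256 grayscale ultrasound frames
logits = TinyUNet()(torch.randn(2, 1, 256, 256))   # -> (2, 1, 256, 256)
mask = torch.sigmoid(logits) > 0.5
```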
Unmanned agricultural product sales system The invention relates to the field of agricultural product sales, provides an unmanned agricultural product sales system, and aims to solve the problem of agricultural product waste caused by the fact that most farmers can only prepare goods according to guesswork and experience when selling agricultural products at present. The unmanned agricultural product sales system comprises an acquisition module for acquiring selection information of customers; a storage module which pre-stores vegetable preparation schemes; a matching module which is used for matching a corresponding side dish scheme from the storage module according to the selection information of the client; a pushing module which is used for pushing the matched side dish scheme back to the client, the acquisition module also being used for acquiring confirmation information of the client; an order module which is used for generating order information according to the confirmation information of the client, wherein the pushing module is used for pushing the order information to the client and the seller, and the acquisition module is also used for acquiring the delivery information of the seller; and a logistics tracking module which is used for tracking the delivery information to obtain logistics information, wherein the pushing module is used for pushing the logistics information to the client. The scheme is used for sales in unmanned agricultural product shops. 
 | 
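The patent above enumerates interacting modules (acquisition, storage, matching, pushing, order, logistics tracking). A toy sketch of that module flow follows; every class and method name here is invented for illustration and is not part of the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    customer_id: str
    scheme: str
    status: str = "pending"
    tracking: list = field(default_factory=list)

class SalesSystem:
    """Toy flow: selection -> matched scheme -> confirmation -> order -> tracking."""
    def __init__(self, schemes):
        self.schemes = schemes               # storage module: pre-stored schemes
        self.orders = []

    def match(self, selection):              # matching module
        return self.schemes.get(selection, "default scheme")

    def confirm(self, customer_id, selection):  # acquisition + order modules
        order = Order(customer_id, self.match(selection))
        self.orders.append(order)
        return order                          # pushing module would notify both sides

    def track(self, order, event):            # logistics tracking module
        order.tracking.append(event)

shop = SalesSystem({"hotpot": "hotpot side dishes"})
o = shop.confirm("c1", "hotpot")
shop.track(o, "picked up by courier")
print(o)
```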
					
A Multimodal Advanced Approach for the Stratification of Carotid Artery Disease The scope of this paper is to present the novel risk stratification framework for carotid artery disease which is under development in the TAXINOMISIS study. The study implements a multimodal strategy, integrating big data and advanced modeling approaches, in order to improve the stratification and management of patients with carotid artery disease, who are at risk of manifesting cerebrovascular events such as stroke. Advanced image processing tools for 3D reconstruction of the carotid artery bifurcation, together with hybrid computational models of plaque growth based on fluid dynamics and agent-based modeling, are under development. Model predictions on plaque growth, rupture or erosion, combined with big data from unique longitudinal cohorts and biobanks, including multi-omics, will be utilized as inputs to machine learning and data mining algorithms in order to develop a new risk stratification platform able to identify patients at high risk of cerebrovascular events in a precise and personalized manner. Successful completion of the TAXINOMISIS platform will lead to advances beyond the state of the art in risk stratification of carotid artery disease, rationally reduce unnecessary operations, refine medical treatment and open new directions for therapeutic interventions, with high socioeconomic impact. 
 | 
Vessel lumen segmentation in internal carotid artery ultrasounds with deep convolutional neural networks Carotid ultrasound is a screening modality used by physicians to direct treatment in the prevention of ischemic stroke in high-risk patients. It is a time-intensive process that requires highly trained technicians and physicians. Evaluation of a carotid ultrasound requires identification of the vessel wall, lumen, and plaque of the carotid artery. Automated machine learning methods for these tasks are highly limited. We propose and evaluate here single- and multi-path convolutional U-Nets for lumen identification from ultrasound images. We obtained de-identified images under IRB approval from 98 patients. We isolated just the internal carotid artery ultrasound images for these patients, giving us a total of 302 images. We manually segmented the vessel lumen, which we use as ground truth to develop and validate our model. With a basic convolutional U-Net we obtained a 10-fold cross-validation accuracy of 95%. We also evaluated a dual-path U-Net where we modified the original image and used it as a synthetic modality, but we found no improvement in accuracy. We found that the sample size made a considerable difference, and we thus expect our accuracy to rise as we add more training samples to the model. Our work here represents a first successful step towards the automated identification of the vessel lumen in carotid artery ultrasound images and is an important first step in creating a system that can independently evaluate carotid ultrasounds. 
 | 
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produce. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation in order to gain competitive advantage, or position themselves strategically to reach the spawned power-ups first. 
 | 
					
A Multimodal Advanced Approach for the Stratification of Carotid Artery Disease The scope of this paper is to present the novel risk stratification framework for carotid artery disease which is under development in the TAXINOMISIS study. The study implements a multimodal strategy, integrating big data and advanced modeling approaches, in order to improve the stratification and management of patients with carotid artery disease, who are at risk of manifesting cerebrovascular events such as stroke. Advanced image processing tools for 3D reconstruction of the carotid artery bifurcation, together with hybrid computational models of plaque growth based on fluid dynamics and agent-based modeling, are under development. Model predictions on plaque growth, rupture or erosion, combined with big data from unique longitudinal cohorts and biobanks, including multi-omics, will be utilized as inputs to machine learning and data mining algorithms in order to develop a new risk stratification platform able to identify patients at high risk of cerebrovascular events in a precise and personalized manner. Successful completion of the TAXINOMISIS platform will lead to advances beyond the state of the art in risk stratification of carotid artery disease, rationally reduce unnecessary operations, refine medical treatment and open new directions for therapeutic interventions, with high socioeconomic impact. 
 | 
Vessel lumen segmentation in internal carotid artery ultrasounds with deep convolutional neural networks Carotid ultrasound is a screening modality used by physicians to direct treatment in the prevention of ischemic stroke in high-risk patients. It is a time-intensive process that requires highly trained technicians and physicians. Evaluation of a carotid ultrasound requires identification of the vessel wall, lumen, and plaque of the carotid artery. Automated machine learning methods for these tasks are highly limited. We propose and evaluate here single- and multi-path convolutional U-Nets for lumen identification from ultrasound images. We obtained de-identified images under IRB approval from 98 patients. We isolated just the internal carotid artery ultrasound images for these patients, giving us a total of 302 images. We manually segmented the vessel lumen, which we use as ground truth to develop and validate our model. With a basic convolutional U-Net we obtained a 10-fold cross-validation accuracy of 95%. We also evaluated a dual-path U-Net where we modified the original image and used it as a synthetic modality, but we found no improvement in accuracy. We found that the sample size made a considerable difference, and we thus expect our accuracy to rise as we add more training samples to the model. Our work here represents a first successful step towards the automated identification of the vessel lumen in carotid artery ultrasound images and is an important first step in creating a system that can independently evaluate carotid ultrasounds. 
 | 
Classifying unavoidable Tverberg partitions Let $T(d,r) = (r-1)(d-1)+1$ be the parameter in Tverberg's theorem, and call a partition $\mathcal{I}$ of $\{1,2,\ldots,T(d,r)\}$ into $r$ parts a Tverberg type. We say that $\mathcal{I}$ occurs in an ordered point sequence $P$ if $P$ contains a subsequence $P'$ of $T(d,r)$ points such that the partition of $P'$ that is order-isomorphic to $\mathcal{I}$ is a Tverberg partition. We say that $\mathcal{I}$ is unavoidable if it occurs in every sufficiently long point sequence. In this paper we study the problem of determining which Tverberg types are unavoidable. We conjecture a complete characterization of the unavoidable Tverberg types, and we prove some cases of our conjecture for $d \le 4$. Along the way, we study the avoidability of many other geometric predicates. Our techniques also yield a large family of $T(d,r)$-point sets for which the number of Tverberg partitions is exactly $((r-1)!)^d$. This lends further support to Sierksma's conjecture on the number of Tverberg partitions. 
 | 
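For reference, the theorem behind the parameter $T(d,r)$ in the abstract above, in its standard formulation:

```latex
% Tverberg's theorem: any T(d,r) points in R^d can be partitioned into
% r parts P_1, ..., P_r whose convex hulls have a common point.
\[
  T(d,r) = (r-1)(d-1) + 1, \qquad
  \bigcap_{j=1}^{r} \operatorname{conv}(P_j) \neq \emptyset .
\]
% Sierksma's conjecture: T(d,r) points in general position admit at
% least ((r-1)!)^d such Tverberg partitions.
```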
					
A Multimodal Advanced Approach for the Stratification of Carotid Artery Disease The scope of this paper is to present the novel risk stratification framework for carotid artery disease which is under development in the TAXINOMISIS study. The study implements a multimodal strategy, integrating big data and advanced modeling approaches, in order to improve the stratification and management of patients with carotid artery disease, who are at risk of manifesting cerebrovascular events such as stroke. Advanced image processing tools for 3D reconstruction of the carotid artery bifurcation, together with hybrid computational models of plaque growth based on fluid dynamics and agent-based modeling, are under development. Model predictions on plaque growth, rupture or erosion, combined with big data from unique longitudinal cohorts and biobanks, including multi-omics, will be utilized as inputs to machine learning and data mining algorithms in order to develop a new risk stratification platform able to identify patients at high risk of cerebrovascular events in a precise and personalized manner. Successful completion of the TAXINOMISIS platform will lead to advances beyond the state of the art in risk stratification of carotid artery disease, rationally reduce unnecessary operations, refine medical treatment and open new directions for therapeutic interventions, with high socioeconomic impact. 
 | 
Vessel lumen segmentation in internal carotid artery ultrasounds with deep convolutional neural networks Carotid ultrasound is a screening modality used by physicians to direct treatment in the prevention of ischemic stroke in high-risk patients. It is a time-intensive process that requires highly trained technicians and physicians. Evaluation of a carotid ultrasound requires identification of the vessel wall, lumen, and plaque of the carotid artery. Automated machine learning methods for these tasks are highly limited. We propose and evaluate here single- and multi-path convolutional U-Nets for lumen identification from ultrasound images. We obtained de-identified images under IRB approval from 98 patients. We isolated just the internal carotid artery ultrasound images for these patients, giving us a total of 302 images. We manually segmented the vessel lumen, which we use as ground truth to develop and validate our model. With a basic convolutional U-Net we obtained a 10-fold cross-validation accuracy of 95%. We also evaluated a dual-path U-Net where we modified the original image and used it as a synthetic modality, but we found no improvement in accuracy. We found that the sample size made a considerable difference, and we thus expect our accuracy to rise as we add more training samples to the model. Our work here represents a first successful step towards the automated identification of the vessel lumen in carotid artery ultrasound images and is an important first step in creating a system that can independently evaluate carotid ultrasounds. 
 | 
Extremal Problems for t-Partite and t-Colorable Hypergraphs Fix integers $t \ge r \ge 2$ and an $r$-uniform hypergraph $F$. We prove that the maximum number of edges in a $t$-partite $r$-uniform hypergraph on $n$ vertices that contains no copy of $F$ is $c_{t,F}\binom{n}{r} + o(n^r)$, where $c_{t,F}$ can be determined by a finite computation. We explicitly define a sequence $F_1, F_2, \ldots$ of $r$-uniform hypergraphs, and prove that the maximum number of edges in a $t$-chromatic $r$-uniform hypergraph on $n$ vertices containing no copy of $F_i$ is $\alpha_{t,r,i}\binom{n}{r} + o(n^r)$, where $\alpha_{t,r,i}$ can be determined by a finite computation for each $i \ge 1$. In several cases, $\alpha_{t,r,i}$ is irrational. The main tool used in the proofs is the Lagrangian of a hypergraph. 
 | 
					
A Multimodal Advanced Approach for the Stratification of Carotid Artery Disease The scope of this paper is to present the novel risk stratification framework for carotid artery disease which is under development in the TAXINOMISIS study. The study implements a multimodal strategy, integrating big data and advanced modeling approaches, in order to improve the stratification and management of patients with carotid artery disease, who are at risk of manifesting cerebrovascular events such as stroke. Advanced image processing tools for 3D reconstruction of the carotid artery bifurcation, together with hybrid computational models of plaque growth based on fluid dynamics and agent-based modeling, are under development. Model predictions on plaque growth, rupture or erosion, combined with big data from unique longitudinal cohorts and biobanks, including multi-omics, will be utilized as inputs to machine learning and data mining algorithms in order to develop a new risk stratification platform able to identify patients at high risk of cerebrovascular events in a precise and personalized manner. Successful completion of the TAXINOMISIS platform will lead to advances beyond the state of the art in risk stratification of carotid artery disease, rationally reduce unnecessary operations, refine medical treatment and open new directions for therapeutic interventions, with high socioeconomic impact. 
 | 
Vessel lumen segmentation in internal carotid artery ultrasounds with deep convolutional neural networks Carotid ultrasound is a screening modality used by physicians to direct treatment in the prevention of ischemic stroke in high-risk patients. It is a time-intensive process that requires highly trained technicians and physicians. Evaluation of a carotid ultrasound requires identification of the vessel wall, lumen, and plaque of the carotid artery. Automated machine learning methods for these tasks are highly limited. We propose and evaluate here single- and multi-path convolutional U-Nets for lumen identification from ultrasound images. We obtained de-identified images under IRB approval from 98 patients. We isolated just the internal carotid artery ultrasound images for these patients, giving us a total of 302 images. We manually segmented the vessel lumen, which we use as ground truth to develop and validate our model. With a basic convolutional U-Net we obtained a 10-fold cross-validation accuracy of 95%. We also evaluated a dual-path U-Net where we modified the original image and used it as a synthetic modality, but we found no improvement in accuracy. We found that the sample size made a considerable difference, and we thus expect our accuracy to rise as we add more training samples to the model. Our work here represents a first successful step towards the automated identification of the vessel lumen in carotid artery ultrasound images and is an important first step in creating a system that can independently evaluate carotid ultrasounds. 
 | 
A Spatial–Temporal Subspace-Based Compressive Channel Estimation Technique in Unknown Interference MIMO Channels Spatial–temporal (ST) subspace-based channel estimation techniques formulated with the $\ell_2$ minimum mean square error (MMSE) criterion alleviate the multi-access interference (MAI) problem when the signals of interest exhibit a low-rank property. However, the conventional $\ell_2$ ST subspace-based methods suffer from mean squared error (MSE) deterioration in unknown interference channels, due to the difficulty of separating the signals of interest from channel covariance matrices (CCMs) contaminated with unknown interference. As a solution to the problem, we propose a new $\ell_1$-regularized ST channel estimation algorithm that applies the expectation-maximization (EM) algorithm to iteratively examine the signal subspace and the corresponding sparse supports. The new algorithm updates the CCM independently of the slot-dependent $\ell_1$ regularization, which enables it to correctly perform sparse independent component analysis (ICA) with a reasonable complexity order. Simulation results shown in this paper verify that the proposed technique significantly improves MSE performance in unknown interference MIMO channels and, hence, solves the BER floor problems from which conventional receivers suffer. 
 | 
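The paper's EM-based estimator is not reproduced here; as a generic stand-in for the $\ell_1$-regularized sparse-support step that such methods rely on, the sketch below solves an $\ell_1$-regularized least-squares channel estimate by iterative soft thresholding (ISTA). The measurement model, regularization weight, and tap positions are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # complex-valued soft threshold used in l1-regularized estimates
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0.0)

def ista_channel_estimate(A, y, lam=0.05, n_iter=300):
    """Generic l1-regularized least squares via ISTA:
    argmin_h ||y - A h||^2 + lam * ||h||_1,
    with A the pilot/measurement matrix and h the sparse channel taps."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    h = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ h - y)
        h = soft_threshold(h - grad / L, lam / L)
    return h

rng = np.random.default_rng(0)
m, n = 64, 128
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
h_true = np.zeros(n, complex)
h_true[[3, 40, 97]] = [1.0, -0.5j, 0.8]      # three active taps (assumed)
y = A @ h_true + 0.01 * rng.standard_normal(m)
print(np.flatnonzero(np.abs(ista_channel_estimate(A, y)) > 0.1))  # recovered support
```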
					
Social and Governance Implications of Improved Data Efficiency. Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socio-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher-performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency (as more actors gain access to any level of capability), the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the AI production function, will be key to understanding the development of the AI industry and its societal impacts. 
 | 
The Impact of Data Preparation on the Fairness of Software Systems Machine learning models are widely adopted in scenarios that directly affect people. The development of software systems based on these models raises societal and legal concerns, as their decisions may lead to the unfair treatment of individuals based on attributes like race or gender. Data preparation is key in any machine learning pipeline, but its effect on fairness is yet to be studied in detail. In this paper, we evaluate how the fairness and effectiveness of the learned models are affected by the removal of the sensitive attribute, the encoding of the categorical attributes, and instance selection methods (including cross-validators and random undersampling). We used the Adult Income and the German Credit Data datasets, which are widely studied and known to have fairness concerns. We applied each data preparation technique individually to analyse the difference in predictive performance and fairness, using statistical parity difference, disparate impact, and the normalised prejudice index. The results show that fairness is affected by transformations made to the training data, particularly in imbalanced datasets. Removing the sensitive attribute is insufficient to eliminate all the unfairness in the predictions, as expected, but it is key to achieving fairer models. Additionally, the standard random undersampling with respect to the true labels is sometimes more prejudicial than performing no random undersampling. 
 | 
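Two of the fairness measures named above have short closed forms. A minimal sketch of both, assuming a binary favourable outcome and a binary sensitive attribute where 1 marks the privileged group; the toy data is invented for illustration.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """P(y=1 | unprivileged) - P(y=1 | privileged); 0 means parity."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    return y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean()

def disparate_impact(y_pred, sensitive):
    """Ratio of positive rates; the 80% rule flags values below 0.8."""
    y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
    return y_pred[sensitive == 0].mean() / y_pred[sensitive == 1].mean()

# toy predictions: 1 = favourable outcome; sensitive: 1 = privileged group
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0])
s     = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(statistical_parity_difference(y_hat, s))   # 0.25 - 0.75 = -0.5
print(disparate_impact(y_hat, s))                # 0.25 / 0.75 ~= 0.33
```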
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents on the streets of developing countries like Bangladesh. The lack of an over-speed alert, a back camera, rear obstacle detection, and timely maintenance are known causes of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transports. For this system, surveys have been done in different phases among passengers, drivers and even conductors for a useful and successful result. Since the system is very cheap, low-income drivers and owners of vehicles will be able to afford it easily, making road safety the first and foremost priority. 
 | 
					
Social and Governance Implications of Improved Data Efficiency. Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socio-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher-performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency (as more actors gain access to any level of capability), the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the AI production function, will be key to understanding the development of the AI industry and its societal impacts. 
 | 
The Impact of Data Preparation on the Fairness of Software Systems Machine learning models are widely adopted in scenarios that directly affect people. The development of software systems based on these models raises societal and legal concerns, as their decisions may lead to the unfair treatment of individuals based on attributes like race or gender. Data preparation is key in any machine learning pipeline, but its effect on fairness is yet to be studied in detail. In this paper, we evaluate how the fairness and effectiveness of the learned models are affected by the removal of the sensitive attribute, the encoding of the categorical attributes, and instance selection methods (including cross-validators and random undersampling). We used the Adult Income and the German Credit Data datasets, which are widely studied and known to have fairness concerns. We applied each data preparation technique individually to analyse the difference in predictive performance and fairness, using statistical parity difference, disparate impact, and the normalised prejudice index. The results show that fairness is affected by transformations made to the training data, particularly in imbalanced datasets. Removing the sensitive attribute is insufficient to eliminate all the unfairness in the predictions, as expected, but it is key to achieving fairer models. Additionally, the standard random undersampling with respect to the true labels is sometimes more prejudicial than performing no random undersampling. 
 | 
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well. Gravity divided by an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the non-mediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found. 
 | 
					
Social and Governance Implications of Improved Data Efficiency. Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socio-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher-performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency (as more actors gain access to any level of capability), the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the AI production function, will be key to understanding the development of the AI industry and its societal impacts. 
 | 
The Impact of Data Preparation on the Fairness of Software Systems Machine learning models are widely adopted in scenarios that directly affect people. The development of software systems based on these models raises societal and legal concerns, as their decisions may lead to the unfair treatment of individuals based on attributes like race or gender. Data preparation is key in any machine learning pipeline, but its effect on fairness is yet to be studied in detail. In this paper, we evaluate how the fairness and effectiveness of the learned models are affected by the removal of the sensitive attribute, the encoding of the categorical attributes, and instance selection methods (including cross-validators and random undersampling). We used the Adult Income and the German Credit Data datasets, which are widely studied and known to have fairness concerns. We applied each data preparation technique individually to analyse the difference in predictive performance and fairness, using statistical parity difference, disparate impact, and the normalised prejudice index. The results show that fairness is affected by transformations made to the training data, particularly in imbalanced datasets. Removing the sensitive attribute is insufficient to eliminate all the unfairness in the predictions, as expected, but it is key to achieving fairer models. Additionally, the standard random undersampling with respect to the true labels is sometimes more prejudicial than performing no random undersampling. 
 | 
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produce. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation in order to gain competitive advantage, or position themselves strategically to reach the spawned power-ups first. 
 | 
					
Social and Governance Implications of Improved Data Efficiency. Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socio-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher-performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency (as more actors gain access to any level of capability), the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the AI production function, will be key to understanding the development of the AI industry and its societal impacts. 
 | 
The Impact of Data Preparation on the Fairness of Software Systems Machine learning models are widely adopted in scenarios that directly affect people. The development of software systems based on these models raises societal and legal concerns, as their decisions may lead to the unfair treatment of individuals based on attributes like race or gender. Data preparation is key in any machine learning pipeline, but its effect on fairness is yet to be studied in detail. In this paper, we evaluate how the fairness and effectiveness of the learned models are affected by the removal of the sensitive attribute, the encoding of the categorical attributes, and instance selection methods (including cross-validators and random undersampling). We used the Adult Income and the German Credit Data datasets, which are widely studied and known to have fairness concerns. We applied each data preparation technique individually to analyse the difference in predictive performance and fairness, using statistical parity difference, disparate impact, and the normalised prejudice index. The results show that fairness is affected by transformations made to the training data, particularly in imbalanced datasets. Removing the sensitive attribute is insufficient to eliminate all the unfairness in the predictions, as expected, but it is key to achieving fairer models. Additionally, the standard random undersampling with respect to the true labels is sometimes more prejudicial than performing no random undersampling. 
 | 
Air-Coupled Reception of a Slow Ultrasonic A0 Mode Wave Propagating in Thin Plastic Film At low frequencies, the phase velocity of the guided A0 mode in thin plates can become slower than the ultrasound velocity in air. Such waves do not excite leaky waves in the surrounding air, and therefore it is impossible to excite and receive them by conventional air-coupled methods. The objective of this research was the development of an air-coupled technique for the reception of the slow A0 mode in thin plastic films. This study demonstrates the feasibility of picking up a subsonic A0 mode in plastic films by air-coupled ultrasonic arrays. The air-coupled reception was based on an evanescent wave in air accompanying the A0 mode propagating in a film. The efficiency of the reception was enhanced by using a virtual array which was arranged from the data collected by a single air-coupled receiver. The signals measured at the points corresponding to the positions of the phase-matched array were recorded and processed. The transmitting array excited not only the A0 mode in the film, but also a direct wave in air. This wave propagated at the ultrasound velocity in air and was faster than the evanescent wave. For efficient reception of the A0 mode, an additional signal-processing procedure based on the application of the 2D Fourier transform in the spatial–temporal domain was therefore applied. The obtained results can be useful for the development of novel air-coupled ultrasonic non-destructive testing techniques. 
 | 
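The 2D Fourier transform step above separates wave components by their phase velocity in the frequency-wavenumber (f-k) domain. A sketch of that idea on synthetic data follows; the velocities (300 m/s for the slow mode, 343 m/s for the direct air wave), the 40 kHz tone, and the 320 m/s cutoff are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Synthetic space-time record u(x, t): a slow A0-like mode plus a faster
# direct air wave, both modeled as plane waves for illustration.
nx, nt, dx, dt = 128, 1024, 2e-3, 1e-6
x = np.arange(nx) * dx
t = np.arange(nt) * dt
f0 = 40e3                                      # 40 kHz tone (assumed)
X, T = np.meshgrid(x, t, indexing="ij")
u = (np.sin(2 * np.pi * f0 * (T - X / 300.0))  # slow A0-like mode
     + np.sin(2 * np.pi * f0 * (T - X / 343.0)))  # direct air wave

# 2D FFT into the f-k domain
U = np.fft.fft2(u)
k = np.fft.fftfreq(nx, dx)                     # cycles/m
f = np.fft.fftfreq(nt, dt)                     # Hz
K, F = np.meshgrid(k, f, indexing="ij")

# Keep only components whose phase velocity |f/k| is below the cutoff,
# i.e. the subsonic A0 mode; suppress the direct air wave.
with np.errstate(divide="ignore", invalid="ignore"):
    v_phase = np.abs(F / K)
mask = v_phase < 320.0
u_a0 = np.real(np.fft.ifft2(U * mask))
print("residual energy ratio:", np.linalg.norm(u_a0) / np.linalg.norm(u))
```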
					
Social and Governance Implications of Improved Data Efficiency. Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socio-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher-performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency (as more actors gain access to any level of capability), the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the AI production function, will be key to understanding the development of the AI industry and its societal impacts. 
 | 
The Impact of Data Preparation on the Fairness of Software Systems Machine learning models are widely adopted in scenarios that directly affect people. The development of software systems based on these models raises societal and legal concerns, as their decisions may lead to the unfair treatment of individuals based on attributes like race or gender. Data preparation is key in any machine learning pipeline, but its effect on fairness is yet to be studied in detail. In this paper, we evaluate how the fairness and effectiveness of the learned models are affected by the removal of the sensitive attribute, the encoding of the categorical attributes, and instance selection methods (including cross-validators and random undersampling). We used the Adult Income and the German Credit Data datasets, which are widely studied and known to have fairness concerns. We applied each data preparation technique individually to analyse the difference in predictive performance and fairness, using statistical parity difference, disparate impact, and the normalised prejudice index. The results show that fairness is affected by transformations made to the training data, particularly in imbalanced datasets. Removing the sensitive attribute is insufficient to eliminate all the unfairness in the predictions, as expected, but it is key to achieving fairer models. Additionally, the standard random undersampling with respect to the true labels is sometimes more prejudicial than performing no random undersampling. 
 | 
Symmetry Group Classification and Conservation Laws of the Nonlinear Fractional Diffusion Equation with the Riesz Potential Symmetry properties of a nonlinear two-dimensional space-fractional diffusion equation with the Riesz potential of order $\alpha \in (0,1)$ are studied. Lie point symmetry group classification of this equation is performed with respect to the diffusivity function. To construct conservation laws for the considered equation, the concept of nonlinear self-adjointness is adopted for a certain class of space-fractional differential equations with the Riesz potential. It is proved that the equation in question is nonlinearly self-adjoint. An extension of Ibragimov's constructive algorithm for finding conservation laws is proposed, and the corresponding Noether operators for fractional differential equations with the Riesz potential are presented in explicit form. To illustrate the proposed approach, conservation laws for the considered nonlinear space-fractional diffusion equation are constructed by using its Lie point symmetries. 
 | 
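For reference, the standard definition of the two-dimensional Riesz potential of order $\alpha$ entering such equations; the paper's normalization constant may differ.

```latex
% Riesz potential of order \alpha in R^2 (standard form, up to the
% normalizing constant c(\alpha)):
\[
  (I^{\alpha} u)(x) \;=\; \frac{1}{c(\alpha)}
  \int_{\mathbb{R}^{2}} \frac{u(y)}{|x-y|^{\,2-\alpha}} \, dy ,
  \qquad \alpha \in (0,1).
\]
```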
					
Ischemic Stroke Lesion Prediction in CT Perfusion Scans Using Multiple Parallel U-Nets Followed by a Pixel-Level Classifier It is critical to know which brain regions are affected by an ischemic stroke, as this enables doctors to make more effective decisions about stroke patient therapy. These regions are often identified by segmenting computed tomography perfusion (CTP) images. Previously, this task has been done manually by an expert. However, manual segmentation is an extremely tedious and time-consuming process that is not suitable for ischemic stroke lesion segmentation, which is highly time-sensitive. In addition, such approaches require an expert to do the segmentation task, who may not be available, and they are prone to errors. Several automatic medical image analysis methods have been proposed for ischemic stroke lesion segmentation. These approaches typically use handcrafted features that are predefined to represent the input data. However, because of their irregular physiological shapes, ischemic stroke lesions cannot be properly predicted in an automatic way using simple predefined features. In this work, we propose an automatic prediction algorithm that learns an effective model for segmenting the ischemic stroke lesion. This learned model first uses four 2D U-Nets to separately extract valuable information about the location of the stroke lesion from four CTP maps (CBV, CBF, MTT, Tmax). The model then combines the probability maps extracted by the U-Nets to decide whether each pixel belongs to lesion or healthy tissue. This approach uses information about each pixel, as well as its neighborhood, to learn the stroke lesion despite its varying shape. The segmentation performance is evaluated using the dice similarity coefficient (DSC), volume similarity (VS), and Recall. We have used this new algorithm on the ISLES 2018 challenge dataset and found that our approach achieved results that are better than state-of-the-art approaches. 
 | 
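The fusion stage described above (four parallel U-Nets followed by a pixel-level classifier) can be sketched in isolation. Below, the four probability maps are simulated rather than produced by real U-Nets, logistic regression stands in for the unspecified pixel-level classifier, and the per-pixel plus neighborhood-mean features are an illustrative choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

def pixel_features(prob_maps, size=3):
    """Stack per-pixel probabilities from the four map-specific U-Nets plus a
    local neighborhood mean for each, giving 8 features per pixel."""
    feats = list(prob_maps) + [uniform_filter(m, size) for m in prob_maps]
    return np.stack(feats, axis=-1).reshape(-1, 2 * len(prob_maps))

rng = np.random.default_rng(0)
gt = np.zeros((64, 64))
gt[20:40, 25:45] = 1                                    # toy lesion mask
# pretend outputs of the CBV/CBF/MTT/Tmax U-Nets: noisy copies of the mask
maps = [np.clip(gt + 0.3 * rng.standard_normal(gt.shape), 0, 1) for _ in range(4)]

X, y = pixel_features(maps), gt.ravel()
clf = LogisticRegression(max_iter=1000).fit(X, y)       # pixel-level classifier
pred = clf.predict(X).reshape(gt.shape)

dice = 2 * (pred * gt).sum() / (pred.sum() + gt.sum())  # DSC, as in the paper
print(f"DSC = {dice:.3f}")
```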
CRUNet: Cascaded U-Net with Residual Mapping for Liver Segmentation in CT Images Abdominal computed tomography (CT) is a common modality to detect liver lesions. Liver segmentation in CT scans is important for the diagnosis and analysis of liver lesions. However, the accuracy of existing liver segmentation methods is slightly insufficient. In this paper, we propose a liver segmentation architecture named CRUNet, which is composed of cascaded U-Nets combined with residual mapping. We make use of the MDice loss function for training CRUNet, and the second-level network of the cascade is deeper than the first level, to extract more detailed image features. Morphological algorithms are utilized as an intermediate processing step to improve the segmentation accuracy. In addition, we evaluate our proposed CRUNet on the liver segmentation task on the dataset provided by the 2017 ISBI LiTS Challenge. The experimental results demonstrate that our proposed CRUNet can outperform the state-of-the-art methods in terms of performance measures such as the Dice score, VOE, and so on. 
 | 
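The abstract trains with an "MDice" loss, whose exact form is not given here. As a point of reference, a standard soft Dice loss (which MDice presumably modifies) looks like the sketch below; shapes and the sigmoid output are illustrative assumptions.

```python
import torch

def soft_dice_loss(logits, target, eps=1e-6):
    """Standard soft Dice loss for binary segmentation; minimising it
    maximises the overlap term 2|P intersect G| / (|P| + |G|)."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    denom = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

logits = torch.randn(2, 1, 32, 32, requires_grad=True)
target = (torch.rand(2, 1, 32, 32) > 0.5).float()
loss = soft_dice_loss(logits, target)
loss.backward()        # differentiable, so usable as a training loss
print(loss.item())
```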
	Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter; it does not appear that gravitons are compatible with Swartzchildu0027s spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity; the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter; and because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Swartzchild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space; and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well.  Gravity divided by an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length; thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the nonmediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found. 
 | 
					
Ischemic Stroke Lesion Prediction in CT Perfusion Scans Using Multiple Parallel U-Nets Followed by a Pixel-Level Classifier It is critical to know which brain regions are affected by an ischemic stroke, as this enables doctors to make more effective decisions about stroke patient therapy. These regions are often identified by segmenting computed tomography perfusion (CTP) images. Previously, this task has been done manually by an expert. However, manual segmentation is an extremely tedious and time-consuming process that is not suitable for ischemic stroke lesion segmentation, which is highly time-sensitive. In addition, such approaches require an expert to do the segmentation task, who may not be available, and they are prone to errors. Several automatic medical image analysis methods have been proposed for ischemic stroke lesion segmentation. These approaches typically use handcrafted features that are predefined to represent the input data. However, because of their irregular physiological shapes, ischemic stroke lesions cannot be properly predicted in an automatic way using simple predefined features. In this work, we propose an automatic prediction algorithm that learns an effective model for segmenting the ischemic stroke lesion. This learned model first uses four 2D U-Nets to separately extract valuable information about the location of the stroke lesion from four CTP maps (CBV, CBF, MTT, Tmax). The model then combines the probability maps extracted by the U-Nets to decide whether each pixel belongs to lesion or healthy tissue. This approach uses information about each pixel, as well as its neighborhood, to learn the stroke lesion despite its varying shape. The segmentation performance is evaluated using the dice similarity coefficient (DSC), volume similarity (VS), and Recall. We have used this new algorithm on the ISLES 2018 challenge dataset and found that our approach achieved results that are better than state-of-the-art approaches. 
 | 
CRUNet: Cascaded U-Net with Residual Mapping for Liver Segmentation in CT Images Abdominal computed tomography (CT) is a common modality to detect liver lesions. Liver segmentation in CT scans is important for the diagnosis and analysis of liver lesions. However, the accuracy of existing liver segmentation methods is slightly insufficient. In this paper, we propose a liver segmentation architecture named CRUNet, which is composed of cascaded U-Nets combined with residual mapping. We make use of the MDice loss function for training CRUNet, and the second-level network of the cascade is deeper than the first level, to extract more detailed image features. Morphological algorithms are utilized as an intermediate processing step to improve the segmentation accuracy. In addition, we evaluate our proposed CRUNet on the liver segmentation task on the dataset provided by the 2017 ISBI LiTS Challenge. The experimental results demonstrate that our proposed CRUNet can outperform the state-of-the-art methods in terms of performance measures such as the Dice score, VOE, and so on. 
 | 
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produce. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation in order to gain competitive advantage, or position themselves strategically to reach the spawned power-ups first. 
 | 
					
Ischemic Stroke Lesion Prediction in CT Perfusion Scans Using Multiple Parallel U-Nets Followed by a Pixel-Level Classifier It is critical to know which brain regions are affected by an ischemic stroke, as this enables doctors to make more effective decisions about stroke patient therapy. These regions are often identified by segmenting computed tomography perfusion (CTP) images. Previously, this task has been done manually by an expert. However, manual segmentation is an extremely tedious and time-consuming process that is not suitable for ischemic stroke lesion segmentation, which is highly time-sensitive. In addition, such approaches require an expert to do the segmentation task, who may not be available, and they are prone to errors. Several automatic medical image analysis methods have been proposed for ischemic stroke lesion segmentation. These approaches typically use handcrafted features that are predefined to represent the input data. However, because of their irregular physiological shapes, ischemic stroke lesions cannot be properly predicted in an automatic way using simple predefined features. In this work, we propose an automatic prediction algorithm that learns an effective model for segmenting the ischemic stroke lesion. This learned model first uses four 2D U-Nets to separately extract valuable information about the location of the stroke lesion from four CTP maps (CBV, CBF, MTT, Tmax). The model then combines the probability maps extracted by the U-Nets to decide whether each pixel belongs to lesion or healthy tissue. This approach uses information about each pixel, as well as its neighborhood, to learn the stroke lesion despite its varying shape. The segmentation performance is evaluated using the dice similarity coefficient (DSC), volume similarity (VS), and Recall. We have used this new algorithm on the ISLES 2018 challenge dataset and found that our approach achieved results that are better than state-of-the-art approaches. 
 | 
CRUNet: Cascaded U-Net with Residual Mapping for Liver Segmentation in CT Images Abdominal computed tomography (CT) is a common modality to detect liver lesions. Liver segmentation in CT scans is important for the diagnosis and analysis of liver lesions. However, the accuracy of existing liver segmentation methods is slightly insufficient. In this paper, we propose a liver segmentation architecture named CRUNet, which is composed of cascaded U-Nets combined with residual mapping. We make use of the MDice loss function for training CRUNet, and the second-level network of the cascade is deeper than the first level, to extract more detailed image features. Morphological algorithms are utilized as an intermediate processing step to improve the segmentation accuracy. In addition, we evaluate our proposed CRUNet on the liver segmentation task on the dataset provided by the 2017 ISBI LiTS Challenge. The experimental results demonstrate that our proposed CRUNet can outperform the state-of-the-art methods in terms of performance measures such as the Dice score, VOE, and so on. 
 | 
Analysis of Charging Continuous Energy System and Stable Current Collection for Pantograph and Catenary of Pure Electric LHD Aiming at the problem of the limited power battery capacity of the pure electric Load-Haul-Dump (LHD) vehicle, a method of charging and supplying sufficient power through a pantograph-catenary current collection system is proposed, which avoids the poor flexibility and mobility of towed-cable electric LHDs. In this paper, we introduce the research and application status of the pantograph and catenary, describe the latest methods and techniques for studying the dynamics of the pantograph-catenary system, elaborate on and analyze the various methods and technologies, and outline the important indicators for analyzing and evaluating the stability of current collection in the pantograph-catenary system. At the same time, various control strategies for the pantograph-catenary system are introduced. Finally, the application of the pantograph-catenary system in high-speed railways and urban electric buses is discussed to illustrate the advantages of pantograph-catenary charging and energy supply, and it is applied to pure electric LHD charging and energy supply to ensure power adequacy. 
 | 
					
Ischemic Stroke Lesion Prediction in CT Perfusion Scans Using Multiple Parallel U-Nets Followed by a Pixel-Level Classifier It is critical to know which brain regions are affected by an ischemic stroke, as this enables doctors to make more effective decisions about stroke patient therapy. These regions are often identified by segmenting computed tomography perfusion (CTP) images. Previously, this task has been done manually by an expert. However, manual segmentation is an extremely tedious and time-consuming process that is not suitable for ischemic stroke lesion segmentation, which is highly time-sensitive. In addition, such approaches require an expert to do the segmentation task, who may not be available, and they are prone to errors. Several automatic medical image analysis methods have been proposed for ischemic stroke lesion segmentation. These approaches typically use handcrafted features that are predefined to represent the input data. However, because of their irregular physiological shapes, ischemic stroke lesions cannot be properly predicted in an automatic way using simple predefined features. In this work, we propose an automatic prediction algorithm that learns an effective model for segmenting the ischemic stroke lesion. This learned model first uses four 2D U-Nets to separately extract valuable information about the location of the stroke lesion from four CTP maps (CBV, CBF, MTT, Tmax). The model then combines the probability maps extracted by the U-Nets to decide whether each pixel belongs to lesion or healthy tissue. This approach uses information about each pixel, as well as its neighborhood, to learn the stroke lesion despite its varying shape. The segmentation performance is evaluated using the dice similarity coefficient (DSC), volume similarity (VS), and Recall. We have used this new algorithm on the ISLES 2018 challenge dataset and found that our approach achieved results that are better than state-of-the-art approaches. 
 | 
CRUNet: Cascaded U-Net with Residual Mapping for Liver Segmentation in CT Images Abdominal computed tomography (CT) is a common modality to detect liver lesions. Liver segmentation in CT scans is important for the diagnosis and analysis of liver lesions. However, the accuracy of existing liver segmentation methods is slightly insufficient. In this paper, we propose a liver segmentation architecture named CRUNet, which is composed of cascaded U-Nets combined with residual mapping. We make use of the MDice loss function for training CRUNet, and the second-level network of the cascade is deeper than the first level, to extract more detailed image features. Morphological algorithms are utilized as an intermediate processing step to improve the segmentation accuracy. In addition, we evaluate our proposed CRUNet on the liver segmentation task on the dataset provided by the 2017 ISBI LiTS Challenge. The experimental results demonstrate that our proposed CRUNet can outperform the state-of-the-art methods in terms of performance measures such as the Dice score, VOE, and so on. 
 | 
What Makes a Social Robot Good at Interacting with Humans This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. This paper also reflects on the current design of social robots as a means of interaction with humans and reports potential solutions to several important questions around the futuristic design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?"; and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and also briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding the acceptance of social robots by humans and ethical/moral concerns are also discussed. 
 | 
					
Ischemic Stroke Lesion Prediction in CT Perfusion Scans Using Multiple Parallel U-Nets Followed by a Pixel-Level Classifier It is critical to know which brain regions are affected by an ischemic stroke, as this enables doctors to make more effective decisions about stroke patient therapy. These regions are often identified by segmenting computed tomography perfusion (CTP) images. Previously, this task has been done manually by an expert. However, manual segmentation is an extremely tedious and time-consuming process that is not suitable for ischemic stroke lesion segmentation, which is highly time-sensitive. In addition, such approaches require an expert to do the segmentation task, who may not be available, and they are prone to errors. Several automatic medical image analysis methods have been proposed for ischemic stroke lesion segmentation. These approaches typically use handcrafted features that are predefined to represent the input data. However, because of their irregular physiological shapes, ischemic stroke lesions cannot be properly predicted in an automatic way using simple predefined features. In this work, we propose an automatic prediction algorithm that learns an effective model for segmenting the ischemic stroke lesion. This learned model first uses four 2D U-Nets to separately extract valuable information about the location of the stroke lesion from four CTP maps (CBV, CBF, MTT, Tmax). The model then combines the probability maps extracted by the U-Nets to decide whether each pixel belongs to lesion or healthy tissue. This approach uses information about each pixel, as well as its neighborhood, to learn the stroke lesion despite its varying shape. The segmentation performance is evaluated using the dice similarity coefficient (DSC), volume similarity (VS), and Recall. We have used this new algorithm on the ISLES 2018 challenge dataset and found that our approach achieved results that are better than state-of-the-art approaches. 
 | 
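A minimal sketch of the fusion step described above: the four per-map probability outputs are stacked into per-pixel features and fed to a pixel-level classifier. The logistic-regression stand-in, the array shapes, and the placeholder labels are illustrative assumptions, not the paper's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical shapes: four probability maps (one per CTP parameter map),
# each H x W, produced by separately trained 2D U-Nets.
H, W = 128, 128
prob_maps = {m: np.random.rand(H, W) for m in ["CBV", "CBF", "MTT", "Tmax"]}

# Stack per-pixel features: each pixel is described by the four U-Net outputs.
features = np.stack([prob_maps[m] for m in ["CBV", "CBF", "MTT", "Tmax"]], axis=-1)
X = features.reshape(-1, 4)  # (H*W, 4) feature matrix

# A simple logistic regression stands in for the pixel-level classifier;
# the labels below are random placeholders for ground-truth lesion masks.
y = (np.random.rand(H * W) > 0.9).astype(int)
clf = LogisticRegression().fit(X, y)
lesion_mask = clf.predict(X).reshape(H, W)  # final per-pixel lesion decision
```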
CRUNet: Cascaded U-Net with Residual Mapping for Liver Segmentation in CT Images Abdominal computed tomography (CT) is a common modality for detecting liver lesions. Liver segmentation in CT scans is important for the diagnosis and analysis of liver lesions. However, the accuracy of existing liver segmentation methods is still insufficient. In this paper, we propose a liver segmentation architecture named CRUNet, which is composed of cascaded U-Nets combined with residual mapping. We train CRUNet with the MDice loss function, and the second level of the cascade network is deeper than the first level so as to extract more detailed image features. Morphological algorithms are utilized as an intermediate processing step to improve the segmentation accuracy. In addition, we evaluate the proposed CRUNet on the liver segmentation task using the dataset provided by the 2017 ISBI LiTS Challenge. The experimental results demonstrate that CRUNet outperforms state-of-the-art methods in terms of performance measures such as the Dice score and VOE.
 | 
Robust cluster consensus of general fractional-order nonlinear multi-agent systems via an adaptive sliding mode controller Abstract   In this paper, robust cluster consensus is investigated for general fractional-order multi-agent systems with nonlinear dynamics, dynamic uncertainty, and external disturbances via an adaptive sliding mode controller. First, robust cluster consensus is investigated for general fractional-order nonlinear multi-agent systems with dynamic uncertainty and external disturbances, in which the multi-agent systems are weakly heterogeneous because they have identical nominal dynamics with different norm-bounded parameter uncertainties. Then, robust cluster consensus is investigated for fractional-order nonlinear multi-agent systems with general-form dynamics by using the adaptive sliding mode controller. Robust cluster consensus for general fractional-order nonlinear multi-agent systems is achieved asymptotically in the absence of disturbances. It is shown, based on linear matrix inequalities (LMIs) and Mittag-Leffler stability theory, that the errors between agents converge to a small region in the presence of disturbances. Finally, a simulation example is presented for a general-form multi-agent system, namely a single-link flexible-joint manipulator, which demonstrates the efficiency of the proposed adaptive controller.
 | 
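For readers unfamiliar with the setting, a generic fractional-order nonlinear multi-agent model of the kind described above can be written as follows; the notation is illustrative rather than the authors' own.

```latex
% Generic agent dynamics (illustrative notation):
\[
  D^{\alpha} x_i(t) = A x_i(t) + f\bigl(x_i(t), t\bigr)
                      + \Delta_i(t) + d_i(t) + u_i(t),
  \qquad 0 < \alpha < 1,
\]
% where $D^{\alpha}$ is the Caputo fractional derivative, $f$ the nonlinear
% dynamics, $\Delta_i$ the norm-bounded dynamic uncertainty, $d_i$ the
% external disturbance, and $u_i$ the adaptive sliding-mode control input.
```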
					
AdaBoost neural network and cyclopean view for no-reference stereoscopic image quality assessment Abstract   Stereoscopic imaging has been widely used in many fields. In many scenarios, stereo image quality can be affected by various degradations, such as asymmetric distortion. Accordingly, to guarantee the best quality of experience, robust and accurate referenceless metrics are required for the quality assessment of stereoscopic content. Most existing stereo no-reference Image Quality Assessment (IQA) models are not consistent with asymmetric distortions. This paper presents a new no-reference stereoscopic image quality assessment metric using human visual system (HVS) modeling and an advanced machine-learning algorithm. The proposed approach consists of two stages. In the first stage, a cyclopean image is constructed, taking into account the presence of binocular rivalry in order to cover the asymmetrically distorted part. In the second stage, the gradient magnitude, relative gradient magnitude, and gradient orientation are extracted. These are used as a predictive source of information for the quality. In order to obtain the best overall performance across different databases, the Adaptive Boosting (AdaBoost) idea from machine learning combined with an artificial neural network model has been adopted. The benchmark LIVE 3D Phase I, Phase II, and IRCCyN/IVC 3D databases have been used to evaluate the performance of the proposed approach. Experimental results demonstrate that the proposed metric achieves high consistency with subjective assessment and outperforms blind stereo IQA metrics over various types of distortion.
 | 
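The second stage above extracts gradient-based features. A minimal NumPy/SciPy sketch of such features (magnitude, relative magnitude, orientation) follows; the paper's exact operators and neighborhood sizes are assumptions here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_features(img):
    """Gradient features of the kind used as quality-predictive information.

    Returns the gradient magnitude, a relative magnitude (local magnitude
    minus its 3x3 neighborhood mean -- an assumed definition), and the
    gradient orientation.
    """
    gy, gx = np.gradient(img.astype(float))
    magnitude = np.hypot(gx, gy)
    relative = magnitude - uniform_filter(magnitude, size=3)
    orientation = np.arctan2(gy, gx)
    return magnitude, relative, orientation

# toy usage on a random "cyclopean image"
mag, rel, ori = gradient_features(np.random.rand(64, 64))
```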
Stereoscopic Image Quality Assessment Weighted Guidance by Disparity Map Using Convolutional Neural Network In this paper, we propose a new two-column dense Convolutional Neural Network (CNN) for stereoscopic image quality assessment. The input of one column is the cyclopean image, which conforms to the binocular combination and rivalry mechanisms in our brain. The input of the other column is the disparity map, which provides compensating information for the cyclopean image. More importantly, we employ the features of the disparity map to guide and weight the feature maps obtained from the cyclopean image, which is implemented by modifying the structure of the Squeeze-and-Excitation block. This weighting strategy recalibrates the importance of the feature maps extracted from the cyclopean image. At the end of the CNN, we combine the outputs of the two columns through 'Concat', and then process them to get the final quality score of the stereoscopic image. Experimental results demonstrate that the proposed method achieves highly consistent alignment with subjective assessment.
 | 
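A minimal PyTorch sketch of the disparity-guided Squeeze-and-Excitation idea described above: channel weights are computed from the disparity branch and applied to the cyclopean feature maps. Layer sizes and the module name are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DisparityGuidedSE(nn.Module):
    """SE-style recalibration where channel weights come from the disparity
    branch rather than from the cyclopean features themselves (a sketch of
    the modification the abstract describes)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, cyclopean_feats, disparity_feats):
        b, c, _, _ = disparity_feats.shape
        # squeeze the disparity features, then excite the cyclopean maps
        w = self.fc(self.pool(disparity_feats).view(b, c)).view(b, c, 1, 1)
        return cyclopean_feats * w

# usage: both inputs are (B, 64, H, W) feature maps from the two columns
block = DisparityGuidedSE(64)
out = block(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```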
It's time to rethink DDoS protection When you think of distributed denial of service (DDoS) attacks, chances are you conjure up an image of an overwhelming flood of traffic that incapacitates a network. This kind of cyber attack is all about overt, brute force used to take a target down. Some hackers are a little smarter, using DDoS as a distraction while they simultaneously attempt a more targeted strike, as was the case with a Carphone Warehouse hack in 2015. 1  But in general, DDoS isn't subtle.  Retailers are having to rethink how they approach distributed denial of service (DDoS) protection following the rise of a stealthier incarnation of the threat.  There has been a significant increase in small-scale DDoS attacks and a corresponding reduction in conventional large-scale events. The hacker's aim is to remain below the conventional 'detect and alert' threshold that could trigger a DDoS mitigation strategy. Roy Reynolds of Vodat International explains the nature of the threat and the steps organisations can take to protect themselves.
 | 
					
AdaBoost neural network and cyclopean view for no-reference stereoscopic image quality assessment Abstract   Stereoscopic imaging has been widely used in many fields. In many scenarios, stereo image quality can be affected by various degradations, such as asymmetric distortion. Accordingly, to guarantee the best quality of experience, robust and accurate referenceless metrics are required for the quality assessment of stereoscopic content. Most existing stereo no-reference Image Quality Assessment (IQA) models are not consistent with asymmetric distortions. This paper presents a new no-reference stereoscopic image quality assessment metric using human visual system (HVS) modeling and an advanced machine-learning algorithm. The proposed approach consists of two stages. In the first stage, a cyclopean image is constructed, taking into account the presence of binocular rivalry in order to cover the asymmetrically distorted part. In the second stage, the gradient magnitude, relative gradient magnitude, and gradient orientation are extracted. These are used as a predictive source of information for the quality. In order to obtain the best overall performance across different databases, the Adaptive Boosting (AdaBoost) idea from machine learning combined with an artificial neural network model has been adopted. The benchmark LIVE 3D Phase I, Phase II, and IRCCyN/IVC 3D databases have been used to evaluate the performance of the proposed approach. Experimental results demonstrate that the proposed metric achieves high consistency with subjective assessment and outperforms blind stereo IQA metrics over various types of distortion.
 | 
Stereoscopic Image Quality Assessment Weighted Guidance by Disparity Map Using Convolutional Neural Network In this paper, we propose a new two-column dense Convolutional Neural Network (CNN) for stereoscopic image quality assessment. The input of one column is the cyclopean image, which conforms to the binocular combination and rivalry mechanisms in our brain. The input of the other column is the disparity map, which provides compensating information for the cyclopean image. More importantly, we employ the features of the disparity map to guide and weight the feature maps obtained from the cyclopean image, which is implemented by modifying the structure of the Squeeze-and-Excitation block. This weighting strategy recalibrates the importance of the feature maps extracted from the cyclopean image. At the end of the CNN, we combine the outputs of the two columns through 'Concat', and then process them to get the final quality score of the stereoscopic image. Experimental results demonstrate that the proposed method achieves highly consistent alignment with subjective assessment.
 | 
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first.
 | 
					
AdaBoost neural network and cyclopean view for no-reference stereoscopic image quality assessment Abstract   Stereoscopic imaging has been widely used in many fields. In many scenarios, stereo image quality can be affected by various degradations, such as asymmetric distortion. Accordingly, to guarantee the best quality of experience, robust and accurate referenceless metrics are required for the quality assessment of stereoscopic content. Most existing stereo no-reference Image Quality Assessment (IQA) models are not consistent with asymmetric distortions. This paper presents a new no-reference stereoscopic image quality assessment metric using human visual system (HVS) modeling and an advanced machine-learning algorithm. The proposed approach consists of two stages. In the first stage, a cyclopean image is constructed, taking into account the presence of binocular rivalry in order to cover the asymmetrically distorted part. In the second stage, the gradient magnitude, relative gradient magnitude, and gradient orientation are extracted. These are used as a predictive source of information for the quality. In order to obtain the best overall performance across different databases, the Adaptive Boosting (AdaBoost) idea from machine learning combined with an artificial neural network model has been adopted. The benchmark LIVE 3D Phase I, Phase II, and IRCCyN/IVC 3D databases have been used to evaluate the performance of the proposed approach. Experimental results demonstrate that the proposed metric achieves high consistency with subjective assessment and outperforms blind stereo IQA metrics over various types of distortion.
 | 
Stereoscopic Image Quality Assessment Weighted Guidance by Disparity Map Using Convolutional Neural Network In this paper, we propose a new two-column dense Convolutional Neural Network (CNN) for stereoscopic image quality assessment. The input of one column is the cyclopean image, which conforms to the binocular combination and rivalry mechanisms in our brain. The input of the other column is the disparity map, which provides compensating information for the cyclopean image. More importantly, we employ the features of the disparity map to guide and weight the feature maps obtained from the cyclopean image, which is implemented by modifying the structure of the Squeeze-and-Excitation block. This weighting strategy recalibrates the importance of the feature maps extracted from the cyclopean image. At the end of the CNN, we combine the outputs of the two columns through 'Concat', and then process them to get the final quality score of the stereoscopic image. Experimental results demonstrate that the proposed method achieves highly consistent alignment with subjective assessment.
 | 
General Data Protection Regulation in Health Clinics The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of the personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals, by establishing rules for the processing of personal data. The GDPR considers a special category of personal data, health data, which is regarded as sensitive and subject to special conditions regarding treatment and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its entry into application in Portuguese health clinics. The results of the present study are discussed in the light of the literature, and future work is identified.
 | 
					
	AdaBoost neural network and cyclopean view for noreference stereoscopic image quality assessment Abstract   Stereoscopic imaging has been widely used in many fields. In many scenarios, stereo images quality could be affected by various degradations, such as asymmetric distortion. Accordingly, to guarantee the best quality of experience, robust and accurate referenceless metrics are required for quality assessment of stereoscopic content. Most existing stereo noreference Image Quality Assessment (IQA) models are not consistent with asymmetrical distortions. This paper presents a new noreference stereoscopic image quality assessment metric using a human visual system (HVS) modeling and an advanced machinelearning algorithm. The proposed approach consists of two stages. In the first stage, cyclopean image is constructed considering the presence of binocular rivalry in order to cover the asymmetrically distorted part. In the second stage, gradient magnitude, relative gradient magnitude, and gradient orientation are extracted. These are used as a predictive source of information for the quality. In order to obtain the best overall performance against different databases, Adaptive Boosting (AdaBoost) idea of machine learning combined with artificial neural network model has been adopted. The benchmark LIVE 3D phaseI, phaseII, and IRCCyNIVC 3D databases have been used to evaluate the performance of the proposed approach. Experimental results have demonstrated that the proposed metric performance achieves high consistency with subjective assessment and outperforms the blind stereo IQA over various types of distortion. 
 | 
	Stereoscopic Image Quality Assessment Weighted Guidance by Disparity Map Using Convolutional Neural Network In this paper, we propose a new twocolumn dense Convolutional Neural Network (CNN) for stereoscopic image quality assessment. The input of one column is the cyclopean image which conforms to the binocular combination and rival mechanism in our brain. The input of other column is the disparity map which provides some compensation information for the cyclopean image. More importantly, we employ the features of disparity map to guide and weight the feature maps obtained from the cyclopean image, which is implemented by modifying the structure of Squeeze and Excitation block. This weighting strategy recalibrates the importance of feature maps extracted from cyclopean image. At the end of CNN, we combine the outputs from the twocolumn through xe2x80x99Concatxe2x80x99, and then process them to get the final quality score of the stereoscopic image. Experimental results demonstrate that the proposed method can achieve high consistent alignment with subjective assessment. 
 | 
Extremal Problems for t-Partite and t-Colorable Hypergraphs Fix integers $t \ge r \ge 2$ and an $r$-uniform hypergraph $F$. We prove that the maximum number of edges in a $t$-partite $r$-uniform hypergraph on $n$ vertices that contains no copy of $F$ is $c_{t,F}\binom{n}{r} + o(n^r)$, where $c_{t,F}$ can be determined by a finite computation.   We explicitly define a sequence $F_1, F_2, \ldots$ of $r$-uniform hypergraphs, and prove that the maximum number of edges in a $t$-chromatic $r$-uniform hypergraph on $n$ vertices containing no copy of $F_i$ is $\alpha_{t,r,i}\binom{n}{r} + o(n^r)$, where $\alpha_{t,r,i}$ can be determined by a finite computation for each $i \ge 1$. In several cases, $\alpha_{t,r,i}$ is irrational. The main tool used in the proofs is the Lagrangian of a hypergraph.
 | 
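Since the main tool named in the abstract above is the Lagrangian of a hypergraph, it may help to recall the standard definition:

```latex
% The Lagrangian of an $r$-uniform hypergraph $F$ on vertex set $[n]$
% (standard definition, stated here for the reader's convenience):
\[
  \lambda(F) \;=\; \max\Bigl\{\, \sum_{e \in E(F)} \prod_{i \in e} x_i
  \;:\; x_i \ge 0,\ \sum_{i=1}^{n} x_i = 1 \Bigr\}.
\]
```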
					
AdaBoost neural network and cyclopean view for no-reference stereoscopic image quality assessment Abstract   Stereoscopic imaging has been widely used in many fields. In many scenarios, stereo image quality can be affected by various degradations, such as asymmetric distortion. Accordingly, to guarantee the best quality of experience, robust and accurate referenceless metrics are required for the quality assessment of stereoscopic content. Most existing stereo no-reference Image Quality Assessment (IQA) models are not consistent with asymmetric distortions. This paper presents a new no-reference stereoscopic image quality assessment metric using human visual system (HVS) modeling and an advanced machine-learning algorithm. The proposed approach consists of two stages. In the first stage, a cyclopean image is constructed, taking into account the presence of binocular rivalry in order to cover the asymmetrically distorted part. In the second stage, the gradient magnitude, relative gradient magnitude, and gradient orientation are extracted. These are used as a predictive source of information for the quality. In order to obtain the best overall performance across different databases, the Adaptive Boosting (AdaBoost) idea from machine learning combined with an artificial neural network model has been adopted. The benchmark LIVE 3D Phase I, Phase II, and IRCCyN/IVC 3D databases have been used to evaluate the performance of the proposed approach. Experimental results demonstrate that the proposed metric achieves high consistency with subjective assessment and outperforms blind stereo IQA metrics over various types of distortion.
 | 
Stereoscopic Image Quality Assessment Weighted Guidance by Disparity Map Using Convolutional Neural Network In this paper, we propose a new two-column dense Convolutional Neural Network (CNN) for stereoscopic image quality assessment. The input of one column is the cyclopean image, which conforms to the binocular combination and rivalry mechanisms in our brain. The input of the other column is the disparity map, which provides compensating information for the cyclopean image. More importantly, we employ the features of the disparity map to guide and weight the feature maps obtained from the cyclopean image, which is implemented by modifying the structure of the Squeeze-and-Excitation block. This weighting strategy recalibrates the importance of the feature maps extracted from the cyclopean image. At the end of the CNN, we combine the outputs of the two columns through 'Concat', and then process them to get the final quality score of the stereoscopic image. Experimental results demonstrate that the proposed method achieves highly consistent alignment with subjective assessment.
 | 
Exit Regions of Cavities in Proteins Proteins have a complex three-dimensional structure with empty cavities and tunnels in the interatomic space, and these spatial features are often essential for correct biological function. Many discrete and analytical methods have been developed for the computation, analysis and visualization of these features. In this paper, we focus on the connection of cavities with the space outside a protein. This connection would normally be described by tunnels. However, the number of possible solutions can be very high, and therefore a non-trivial pruning of solutions is needed to deliver only a few representatives. We therefore propose an alternative kind of spatial feature called exit regions of cavities. These regions capture the critical locations where a spherical probe, initially placed in a cavity, can leave the protein if the probe is allowed to shrink. The shape of an exit region is more detailed than the simple circular profile of a tunnel. Tunnels, on the other hand, provide more information about the exact path.
 | 
					
Human Trust After Robot Mistakes: Study of the Effects of Different Forms of Robot Communication Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and the type of initiation used by a robot. We investigate the effect that these forms of communication have on trust, with and without robot mistakes, during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when the robot offered advice only upon request, as opposed to when the robot took the initiative to give advice.
 | 
Human-Robot Team: Effects of Communication in Analyzing Trust Trust is related to the performance of human teams, making it a significant characteristic that needs to be analyzed inside human-robot teams. Trust has long been researched in other domains such as the social sciences, psychology, and economics. Trust within a team is built through common tasks and depends on team performance and communication. Using an online game-based task for human-robot teams, the effects of three communication conditions (communication without text and verbal interaction, and communication with text and verbal interaction related/not related to the task) on trust are analyzed. Additionally, we found that the participants' background is linked to their trust in the interaction with the robot. The results show that in a human-robot team, human trust increases more over time when a person is working with a robot that uses text and verbal communication related to the task. They further suggest that human trust decreases to a lesser extent when the robot fails at its tasks if it uses text and verbal communication with the human.
 | 
Structure and expression of the gene coding for the alpha-subunit of DNA-dependent RNA polymerase from the chloroplast genome of Zea mays. The rpoA gene coding for the alpha-subunit of DNA-dependent RNA polymerase located on the DNA of Zea mays chloroplasts has been characterized with respect to its position on the chloroplast genome and its nucleotide sequence. The amino acid sequence derived for a 39 kDa polypeptide shows strong homology with sequences derived from the rpoA genes of other chloroplast species and with the amino acid sequence of the alpha-subunit of E. coli RNA polymerase. Transcripts of the rpoA gene were identified by Northern hybridization and characterized by S1 mapping using total RNA isolated from maize chloroplasts. Antibodies raised against a synthetic C-terminal heptapeptide show cross-reactivity with a 39 kDa polypeptide contained in the stroma fraction of maize chloroplasts. It is concluded that the rpoA gene is a functional gene and that, therefore, at least the alpha-subunit of the plastid RNA polymerase is expressed in chloroplasts.
 | 
					
Human Trust After Robot Mistakes: Study of the Effects of Different Forms of Robot Communication Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and the type of initiation used by a robot. We investigate the effect that these forms of communication have on trust, with and without robot mistakes, during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when the robot offered advice only upon request, as opposed to when the robot took the initiative to give advice.
 | 
Human-Robot Team: Effects of Communication in Analyzing Trust Trust is related to the performance of human teams, making it a significant characteristic that needs to be analyzed inside human-robot teams. Trust has long been researched in other domains such as the social sciences, psychology, and economics. Trust within a team is built through common tasks and depends on team performance and communication. Using an online game-based task for human-robot teams, the effects of three communication conditions (communication without text and verbal interaction, and communication with text and verbal interaction related/not related to the task) on trust are analyzed. Additionally, we found that the participants' background is linked to their trust in the interaction with the robot. The results show that in a human-robot team, human trust increases more over time when a person is working with a robot that uses text and verbal communication related to the task. They further suggest that human trust decreases to a lesser extent when the robot fails at its tasks if it uses text and verbal communication with the human.
 | 
Design of a 28 GHz differential GaAs power amplifier with capacitive neutralization for 5G mm-wave applications This paper describes the design of a 28 GHz power amplifier (PA) in a commercial GaAs mHEMT technology using concepts that are typical of mm-wave CMOS design. Simulations show a 1 dB output compression point of around 23 dBm with a 30% power-added efficiency (PAE) at 28 GHz, while providing a gain of 12 dB. Comparison with the performance of a similar 28 GHz fully-depleted Silicon-On-Insulator (FD-SOI) PA shows an increase of the compression point by 10 dB, while the efficiency is comparable. The high compression point of this GaAs PA offers a margin for system optimization, such as a reduction of the number of antennas for beamforming.
 | 
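As a quick numeric check of the figures quoted above, the standard power-added efficiency definition PAE = (P_out - P_in) / P_dc can be evaluated as below; the DC power value is a back-computed assumption, not a number reported in the paper.

```python
# Power-added efficiency from its standard definition.
def dbm_to_w(p_dbm):
    return 10 ** (p_dbm / 10) / 1000.0  # dBm -> watts

p_out = dbm_to_w(23.0)         # reported ~1 dB output compression point
p_in = dbm_to_w(23.0 - 12.0)   # input level implied by the 12 dB gain
p_dc = 0.62                    # assumed DC consumption in watts

pae = (p_out - p_in) / p_dc
print(f"PAE = {100 * pae:.0f} %")  # ~30 %, consistent with the quoted PAE
```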
					
Human Trust After Robot Mistakes: Study of the Effects of Different Forms of Robot Communication Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and the type of initiation used by a robot. We investigate the effect that these forms of communication have on trust, with and without robot mistakes, during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when the robot offered advice only upon request, as opposed to when the robot took the initiative to give advice.
 | 
Human-Robot Team: Effects of Communication in Analyzing Trust Trust is related to the performance of human teams, making it a significant characteristic that needs to be analyzed inside human-robot teams. Trust has long been researched in other domains such as the social sciences, psychology, and economics. Trust within a team is built through common tasks and depends on team performance and communication. Using an online game-based task for human-robot teams, the effects of three communication conditions (communication without text and verbal interaction, and communication with text and verbal interaction related/not related to the task) on trust are analyzed. Additionally, we found that the participants' background is linked to their trust in the interaction with the robot. The results show that in a human-robot team, human trust increases more over time when a person is working with a robot that uses text and verbal communication related to the task. They further suggest that human trust decreases to a lesser extent when the robot fails at its tasks if it uses text and verbal communication with the human.
 | 
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well.  Gravity divided into an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the non-mediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found.
 | 
					
Human Trust After Robot Mistakes: Study of the Effects of Different Forms of Robot Communication Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and the type of initiation used by a robot. We investigate the effect that these forms of communication have on trust, with and without robot mistakes, during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when the robot offered advice only upon request, as opposed to when the robot took the initiative to give advice.
 | 
Human-Robot Team: Effects of Communication in Analyzing Trust Trust is related to the performance of human teams, making it a significant characteristic that needs to be analyzed inside human-robot teams. Trust has long been researched in other domains such as the social sciences, psychology, and economics. Trust within a team is built through common tasks and depends on team performance and communication. Using an online game-based task for human-robot teams, the effects of three communication conditions (communication without text and verbal interaction, and communication with text and verbal interaction related/not related to the task) on trust are analyzed. Additionally, we found that the participants' background is linked to their trust in the interaction with the robot. The results show that in a human-robot team, human trust increases more over time when a person is working with a robot that uses text and verbal communication related to the task. They further suggest that human trust decreases to a lesser extent when the robot fails at its tasks if it uses text and verbal communication with the human.
 | 
Symmetric Simplicial Pseudoline Arrangements A simplicial arrangement of pseudolines is a collection of topological lines in the projective plane where each region that is formed is triangular. This paper refines and develops David Eppstein's notion of a kaleidoscope construction for symmetric pseudoline arrangements to construct and analyze several infinite families of simplicial pseudoline arrangements with high degrees of geometric symmetry. In particular, all simplicial pseudoline arrangements with the symmetries of a regular k-gon and three symmetry classes of pseudolines, consisting of the mirrors of the k-gon and two other symmetry classes, plus sometimes the line at infinity, are classified, and other interesting families (with more symmetry classes of pseudolines) are discussed.
 | 
					
Human Trust After Robot Mistakes: Study of the Effects of Different Forms of Robot Communication Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and the type of initiation used by a robot. We investigate the effect that these forms of communication have on trust, with and without robot mistakes, during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when the robot offered advice only upon request, as opposed to when the robot took the initiative to give advice.
 | 
Human-Robot Team: Effects of Communication in Analyzing Trust Trust is related to the performance of human teams, making it a significant characteristic that needs to be analyzed inside human-robot teams. Trust has long been researched in other domains such as the social sciences, psychology, and economics. Trust within a team is built through common tasks and depends on team performance and communication. Using an online game-based task for human-robot teams, the effects of three communication conditions (communication without text and verbal interaction, and communication with text and verbal interaction related/not related to the task) on trust are analyzed. Additionally, we found that the participants' background is linked to their trust in the interaction with the robot. The results show that in a human-robot team, human trust increases more over time when a person is working with a robot that uses text and verbal communication related to the task. They further suggest that human trust decreases to a lesser extent when the robot fails at its tasks if it uses text and verbal communication with the human.
 | 
Extremal Problems for t-Partite and t-Colorable Hypergraphs Fix integers $t \ge r \ge 2$ and an $r$-uniform hypergraph $F$. We prove that the maximum number of edges in a $t$-partite $r$-uniform hypergraph on $n$ vertices that contains no copy of $F$ is $c_{t,F}\binom{n}{r} + o(n^r)$, where $c_{t,F}$ can be determined by a finite computation.   We explicitly define a sequence $F_1, F_2, \ldots$ of $r$-uniform hypergraphs, and prove that the maximum number of edges in a $t$-chromatic $r$-uniform hypergraph on $n$ vertices containing no copy of $F_i$ is $\alpha_{t,r,i}\binom{n}{r} + o(n^r)$, where $\alpha_{t,r,i}$ can be determined by a finite computation for each $i \ge 1$. In several cases, $\alpha_{t,r,i}$ is irrational. The main tool used in the proofs is the Lagrangian of a hypergraph.
 | 
					
The Effect of Participants' Interactions on the Sustainability of Online Communities Social media has become an important online social venue where people can connect and communicate with each other. However, despite the increasing value of social media, researchers have noticed that participants are not necessarily as active as has been believed. It is also not uncommon that some online communities have not attracted enough participants and have turned into "cyber ghost towns." In this paper, we concentrate on investigating the effect of participants' interactions on the sustainability of online communities. Social network analysis is adopted as the underlying analytical method and used to estimate diverse social network measures as indicators of participants' interactions for sustainability analysis. Three types of social network indicators are examined. Moreover, Reddit, a leading social news and media aggregation website, is adopted as our data source for empirical evaluation. Some interesting and promising results are identified and discussed.
 | 
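A minimal sketch of the kind of social-network indicators such a study computes, using networkx; the example graph and the specific centrality measures are stand-ins, since the abstract above does not enumerate its three indicator types.

```python
import networkx as nx

# Stand-in for an interaction network built from Reddit reply data.
G = nx.karate_club_graph()

# Node-level indicators of participant interaction (illustrative choices):
indicators = {
    "degree": nx.degree_centrality(G),           # how many users one interacts with
    "betweenness": nx.betweenness_centrality(G), # brokerage between subgroups
    "closeness": nx.closeness_centrality(G),     # reachability within the community
}

# A community-level cohesion indicator.
density = nx.density(G)
print(f"network density = {density:.3f}")
```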
Tweeting the United Nations Climate Change Conference in Paris (COP21): An analysis of a social network and factors determining the network influence Abstract   To understand the Twitter network of an environmental and political event and to extend the network theory of social capital, we first performed a network analysis of the English tweets during the first 10 days of the United Nations' Conference of the Parties in Paris in 2015. Accounts of nonprofit and government agencies were more likely to be influential in the Twitter network and to be retweeted, whereas individual accounts were more likely to retweet others. Based on a quota sample of 133 Twitter accounts and using both manual and machine coding, we further found that the number of followers (but not the size of following) and the common-goal frame (i.e., mitigation/adaptation) positively predicted an account's influence in the Twitter network, whereas the conflict frame negatively predicted an account's network influence.
 | 
Development of a foldable five-finger robotic hand for assisting in laparoscopic surgery This study aims to develop a robotic hand that can be inserted into the body through a small incision and can handle large organs in laparoscopic surgery. We determined the requirements for the proposed hand based on a surgeon's motions in hand-assisted laparoscopic surgery (HALS). We identified four basic motions: "grasp," "pinch," "exclusion," and "spread." The proposed hand has the degrees of freedom (DoFs) necessary for performing these movements, five fingers, as in a human hand, and a palm that can be folded into a bellows when the surgeon inserts the hand into the abdominal cavity. We evaluated the proposed robotic hand in a performance test, and confirmed that it can be inserted through a 20 mm incision and grasp simulated organs.
 | 
					
The Effect of Participants' Interactions on the Sustainability of Online Communities Social media has become an important online social venue where people can connect and communicate with each other. However, despite the increasing value of social media, researchers have noticed that participants are not necessarily as active as has been believed. It is also not uncommon that some online communities have not attracted enough participants and have turned into "cyber ghost towns." In this paper, we concentrate on investigating the effect of participants' interactions on the sustainability of online communities. Social network analysis is adopted as the underlying analytical method and used to estimate diverse social network measures as indicators of participants' interactions for sustainability analysis. Three types of social network indicators are examined. Moreover, Reddit, a leading social news and media aggregation website, is adopted as our data source for empirical evaluation. Some interesting and promising results are identified and discussed.
 | 
Tweeting the United Nations Climate Change Conference in Paris (COP21): An analysis of a social network and factors determining the network influence Abstract   To understand the Twitter network of an environmental and political event and to extend the network theory of social capital, we first performed a network analysis of the English tweets during the first 10 days of the United Nations' Conference of the Parties in Paris in 2015. Accounts of nonprofit and government agencies were more likely to be influential in the Twitter network and to be retweeted, whereas individual accounts were more likely to retweet others. Based on a quota sample of 133 Twitter accounts and using both manual and machine coding, we further found that the number of followers (but not the size of following) and the common-goal frame (i.e., mitigation/adaptation) positively predicted an account's influence in the Twitter network, whereas the conflict frame negatively predicted an account's network influence.
 | 
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well.  Gravity divided into an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the non-mediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found.
 | 
					
The Effect of Participants' Interactions on the Sustainability of Online Communities Social media has become an important online social venue where people can connect and communicate with each other. However, despite the increasing value of social media, researchers have noticed that participants are not necessarily as active as has been believed. It is also not uncommon that some online communities have not attracted enough participants and have turned into "cyber ghost towns." In this paper, we concentrate on investigating the effect of participants' interactions on the sustainability of online communities. Social network analysis is adopted as the underlying analytical method and used to estimate diverse social network measures as indicators of participants' interactions for sustainability analysis. Three types of social network indicators are examined. Moreover, Reddit, a leading social news and media aggregation website, is adopted as our data source for empirical evaluation. Some interesting and promising results are identified and discussed.
 | 
Tweeting the United Nations Climate Change Conference in Paris (COP21): An analysis of a social network and factors determining the network influence Abstract   To understand the Twitter network of an environmental and political event and to extend the network theory of social capital, we first performed a network analysis of the English tweets during the first 10 days of the United Nations' Conference of the Parties in Paris in 2015. Accounts of nonprofit and government agencies were more likely to be influential in the Twitter network and to be retweeted, whereas individual accounts were more likely to retweet others. Based on a quota sample of 133 Twitter accounts and using both manual and machine coding, we further found that the number of followers (but not the size of following) and the common-goal frame (i.e., mitigation/adaptation) positively predicted an account's influence in the Twitter network, whereas the conflict frame negatively predicted an account's network influence.
 | 
Air-Coupled Reception of a Slow Ultrasonic A0 Mode Wave Propagating in Thin Plastic Film At low frequencies, the phase velocity of the guided A0 mode in thin plates can become slower than the ultrasound velocity in air. Such waves do not excite leaky waves in the surrounding air, and therefore it is impossible to excite and receive them by conventional air-coupled methods. The objective of this research was the development of an air-coupled technique for the reception of the slow A0 mode in thin plastic films. This study demonstrates the feasibility of picking up a subsonic A0 mode in plastic films with air-coupled ultrasonic arrays. The air-coupled reception was based on the evanescent wave in air accompanying the A0 mode propagating in a film. The efficiency of the reception was enhanced by using a virtual array, which was arranged from the data collected by a single air-coupled receiver. The signals measured at the points corresponding to the positions of the phase-matched array were recorded and processed. The transmitting array excited not only the A0 mode in the film, but also a direct wave in air. This wave propagated at the ultrasound velocity in air and was faster than the evanescent wave. For efficient reception of the A0 mode, an additional signal-processing procedure based on the application of a 2D Fourier transform in the spatial–temporal domain was used. The obtained results can be useful for the development of novel air-coupled ultrasonic non-destructive testing techniques.
 | 
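A rough sketch of the 2D Fourier-domain step described above: transforming the spatial-temporal wavefield and masking out components faster than the air wave separates the slow A0 mode from the direct wave. The array geometry, sampling parameters, and the placeholder wavefield are assumptions, not the paper's setup.

```python
import numpy as np

dx, dt = 1e-3, 1e-6                      # receiver spacing [m], sampling period [s]
wavefield = np.random.randn(64, 1024)    # placeholder: (positions, time samples)

# 2D FFT over the spatial-temporal wavefield -> frequency-wavenumber domain
spectrum = np.fft.fft2(wavefield)
k = np.fft.fftfreq(wavefield.shape[0], d=dx)  # spatial frequency [1/m]
f = np.fft.fftfreq(wavefield.shape[1], d=dt)  # temporal frequency [Hz]
K, F = np.meshgrid(k, f, indexing="ij")

c_air = 343.0  # sound speed in air [m/s]
# keep only components with phase velocity |f/k| below the air-wave speed,
# i.e. the slow (subsonic) A0 mode, discarding the faster direct wave
with np.errstate(divide="ignore", invalid="ignore"):
    phase_velocity = np.abs(F / K)
mask = phase_velocity < c_air
filtered = np.real(np.fft.ifft2(spectrum * mask))
```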
					
The Effect of Participants' Interactions on the Sustainability of Online Communities Social media has become an important online social venue where people can connect and communicate with each other. However, despite the increasing value of social media, researchers have noticed that participants are not necessarily as active as has been believed. It is also not uncommon that some online communities have not attracted enough participants and have turned into "cyber ghost towns." In this paper, we concentrate on investigating the effect of participants' interactions on the sustainability of online communities. Social network analysis is adopted as the underlying analytical method and used to estimate diverse social network measures as indicators of participants' interactions for sustainability analysis. Three types of social network indicators are examined. Moreover, Reddit, a leading social news and media aggregation website, is adopted as our data source for empirical evaluation. Some interesting and promising results are identified and discussed.
 | 
Tweeting the United Nations Climate Change Conference in Paris (COP21): An analysis of a social network and factors determining the network influence Abstract   To understand the Twitter network of an environmental and political event and to extend the network theory of social capital, we first performed a network analysis of the English tweets during the first 10 days of the United Nations' Conference of the Parties in Paris in 2015. Accounts of nonprofit and government agencies were more likely to be influential in the Twitter network and to be retweeted, whereas individual accounts were more likely to retweet others. Based on a quota sample of 133 Twitter accounts and using both manual and machine coding, we further found that the number of followers (but not the size of following) and the common-goal frame (i.e., mitigation/adaptation) positively predicted an account's influence in the Twitter network, whereas the conflict frame negatively predicted an account's network influence.
 | 
Symmetry Group Classification and Conservation Laws of the Nonlinear Fractional Diffusion Equation with the Riesz Potential Symmetry properties of a nonlinear two-dimensional space-fractional diffusion equation with the Riesz potential of order $\alpha \in (0,1)$ are studied. A Lie point symmetry group classification of this equation is performed with respect to the diffusivity function. To construct conservation laws for the considered equation, the concept of nonlinear self-adjointness is adapted to a certain class of space-fractional differential equations with the Riesz potential. It is proved that the equation in question is nonlinearly self-adjoint. An extension of Ibragimov's constructive algorithm for finding conservation laws is proposed, and the corresponding Noether operators for fractional differential equations with the Riesz potential are presented in explicit form. To illustrate the proposed approach, conservation laws for the considered nonlinear space-fractional diffusion equation are constructed by using its Lie point symmetries.
 | 
					
The Effect of Participants' Interactions on the Sustainability of Online Communities Social media has become an important online social venue where people can connect and communicate with each other. However, despite the increasing value of social media, researchers have noticed that participants are not necessarily as active as has been believed. It is also not uncommon that some online communities have not attracted enough participants and have turned into "cyber ghost towns." In this paper, we concentrate on investigating the effect of participants' interactions on the sustainability of online communities. Social network analysis is adopted as the underlying analytical method and used to estimate diverse social network measures as indicators of participants' interactions for sustainability analysis. Three types of social network indicators are examined. Moreover, Reddit, a leading social news and media aggregation website, is adopted as our data source for empirical evaluation. Some interesting and promising results are identified and discussed.
 | 
Tweeting the United Nations Climate Change Conference in Paris (COP21): An analysis of a social network and factors determining the network influence Abstract   To understand the Twitter network of an environmental and political event and to extend the network theory of social capital, we first performed a network analysis of the English tweets during the first 10 days of the United Nations' Conference of the Parties in Paris in 2015. Accounts of nonprofit and government agencies were more likely to be influential in the Twitter network and to be retweeted, whereas individual accounts were more likely to retweet others. Based on a quota sample of 133 Twitter accounts and using both manual and machine coding, we further found that the number of followers (but not the size of following) and the common-goal frame (i.e., mitigation/adaptation) positively predicted an account's influence in the Twitter network, whereas the conflict frame negatively predicted an account's network influence.
 | 
A Spatial–Temporal Subspace-Based Compressive Channel Estimation Technique in Unknown Interference MIMO Channels Spatial–temporal (ST) subspace-based channel estimation techniques formulated with the $\ell_2$ minimum mean square error (MMSE) criterion alleviate the multi-access interference (MAI) problem when the signals of interest exhibit the low-rank property. However, the conventional $\ell_2$ ST subspace-based methods suffer from mean squared error (MSE) deterioration in unknown interference channels, due to the difficulty of separating the signals of interest from channel covariance matrices (CCMs) contaminated with unknown interference. As a solution to the problem, we propose a new $\ell_1$-regularized ST channel estimation algorithm that applies the expectation-maximization (EM) algorithm to iteratively examine the signal subspace and the corresponding sparse supports. The new algorithm updates the CCM independently of the slot-dependent $\ell_1$ regularization, which enables it to correctly perform sparse independent component analysis (ICA) with a reasonable complexity order. Simulation results shown in this paper verify that the proposed technique significantly improves the MSE performance in unknown interference MIMO channels and hence solves the BER floor problems from which conventional receivers suffer.
 | 
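For intuition about the ℓ1 regularization used above, the soft-thresholding operator (the proximal map of the ℓ1 norm) is the basic building block behind sparse updates of this kind; the paper's EM update is more involved than this sketch.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: shrinks entries toward zero and
    zeroes out those below the threshold tau, producing sparse supports."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

# toy usage: small entries are eliminated, large ones are shrunk
print(soft_threshold(np.array([0.05, -0.3, 1.2]), tau=0.1))
```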
					
Parallelization of Direct-Forcing Immersed Boundary Method Using OpenACC A parallel Poisson equation solver is developed using the parallel red-black SOR algorithm executed on a GPU via OpenACC. We use this parallel solver to solve the fluid-structure interaction problem with the direct-forcing immersed boundary method. The speedup of the parallel solver is up to 8.62 times. In addition, we show that an effective optimization of the red-black SOR (RBSOR) algorithm for problems with complex-geometry objects is to allocate the same memory to two variables, which takes advantage of accessing data by both random and formatted indexes. The implementation of the parallel RBSOR with OpenACC efficiently offloads the simulation of the fluid-structure interaction problem to the GPU.
 | 
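A serial Python sketch of the red-black SOR half-sweeps described above; because all points of one color depend only on points of the other color, each half-sweep is fully parallel, which is exactly the independence that OpenACC loop directives exploit on the GPU. Grid handling and parameters are illustrative.

```python
import numpy as np

def red_black_sor(p, b, h, omega=1.8, iters=100):
    """Red-black SOR for the 2-D Poisson equation (lap p = b, grid spacing h).

    Each color sweep updates only points where (i + j) has a fixed parity;
    those updates are mutually independent, hence trivially parallelizable.
    """
    for _ in range(iters):
        for color in (0, 1):  # 0 = red, 1 = black
            for i in range(1, p.shape[0] - 1):
                for j in range(1, p.shape[1] - 1):
                    if (i + j) % 2 != color:
                        continue
                    gs = 0.25 * (p[i-1, j] + p[i+1, j] + p[i, j-1] + p[i, j+1]
                                 - h * h * b[i, j])
                    p[i, j] += omega * (gs - p[i, j])  # over-relaxed update
    return p

# toy usage on a 32x32 grid with zero boundary values
p = red_black_sor(np.zeros((32, 32)), np.ones((32, 32)), h=1.0 / 31)
```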
GPU Acceleration of Communication Avoiding Chebyshev Basis Conjugate Gradient Solver for Multiphase CFD Simulations Iterative methods for solving large linear systems are common parts of computational fluid dynamics (CFD) codes. The Preconditioned Conjugate Gradient (PCG) method is one of the most widely used iterative methods. However, in the PCG method, global collective communication is a crucial bottleneck, especially on accelerated computing platforms. To resolve this issue, communication avoiding (CA) variants of the PCG method are becoming increasingly important. In this paper, the PCG and Preconditioned Chebyshev Basis CA CG (PCBCG) solvers in the multiphase CFD code JUPITER are ported to the latest V100 GPUs. All GPU kernels are highly optimized to achieve about 90% of the roofline performance, the block Jacobi preconditioner is redesigned to extract the high computing power of GPUs, and the remaining bottleneck of halo data communication is avoided by overlapping communication and computation. The overall performance of the PCG and PCBCG solvers is determined by the competition between the CA properties of the global collective communication and the halo data communication, indicating the importance of the inter-node interconnect bandwidth per GPU. The developed GPU solvers are accelerated by up to 2x compared with the former CPU solvers on KNLs, and excellent strong scaling is achieved up to 7,680 GPUs on the Summit.
 | 
	Erkundung und Erforschung. Alexander von Humboldts Amerikareise [Exploration and Research: Alexander von Humboldt's American Journey] Abstract: Like Adalbert Stifter's narrator in the novel "Der Nachsommer" ("Late Summer"), A. v. Humboldt combined exploration with research, and a fondness for travelling with a striving for knowledge, on his journey through the Americas. Humboldt clearly stated his double aim: to make the visited countries known, and to collect facts in order to extend physical geography. The treatise is organized in five sections: aims, route, methods, results, evaluation.
 | 
					
	Parallelization of DirectForcing Immersed Boundary Method Using OpenACC A parallelization of Poisson equation solver is developed by using the parallel algorithm of redblack SOR method performed on GPU by OpenACC. We use the parallel computing solver to solve the fluidstructure interaction problem with directforcing immersed boundary method. The speedup for the parallel computing solver is up to 8.62 times. In addition, we present that the optimization of the redblack SOR (RBSOR) algorithm for the problem with the complexgeometry objects is to allocate the same memory on two variables which is to take advantages of accessing data by random indexes and formatted indexes. The implementation of parallel computing RBSOR with OpenACC is efficient to delegate the simulation of fluidstructure interaction problem on the GPU. 
 | 
	GPU Acceleration of Communication Avoiding Chebyshev Basis Conjugate Gradient Solver for Multiphase CFD Simulations Iterative methods for solving large linear systems are common parts of computational fluid dynamics (CFD) codes. The Preconditioned Conjugate Gradient (PCG) method is one of the most widely used iterative methods. However, in the PCG method, global collective communication is a crucial bottleneck especially on accelerated computing platforms. To resolve this issue, communication avoiding (CA) variants of the PCG method are becoming increasingly important. In this paper, the PCG and Preconditioned Chebyshev Basis CA CG (PCBCG) solvers in the multiphase CFD code JUPITER are ported to the latest V100 GPUs. All GPU kernels are highly optimized to achieve about 90% of the roofline performance, the block Jacobi preconditioner is redesigned to extract high computing power of GPUs, and the remaining bottleneck of halo data communication is avoided by overlapping communication and computation. The overall performance of the PCG and PCBCG solvers is determined by the competition between the CA properties of the global collective communication and the halo data communication, indicating an importance of the internode interconnect bandwidth per GPU. The developed GPU solvers are accelerated up to 2x compared with the former CPU solvers on KNLs, and excellent strong scaling is achieved up to 7,680 GPUs on the Summit. 
 | 
	Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of an over-speed alert, a rear camera, rear-obstacle detection, and timely maintenance are known causes of fatal accidents, and these systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in several phases among passengers, drivers, and conductors to obtain useful and reliable results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
 | 
					
	Parallelization of DirectForcing Immersed Boundary Method Using OpenACC A parallelization of Poisson equation solver is developed by using the parallel algorithm of redblack SOR method performed on GPU by OpenACC. We use the parallel computing solver to solve the fluidstructure interaction problem with directforcing immersed boundary method. The speedup for the parallel computing solver is up to 8.62 times. In addition, we present that the optimization of the redblack SOR (RBSOR) algorithm for the problem with the complexgeometry objects is to allocate the same memory on two variables which is to take advantages of accessing data by random indexes and formatted indexes. The implementation of parallel computing RBSOR with OpenACC is efficient to delegate the simulation of fluidstructure interaction problem on the GPU. 
 | 
	GPU Acceleration of Communication Avoiding Chebyshev Basis Conjugate Gradient Solver for Multiphase CFD Simulations Iterative methods for solving large linear systems are common parts of computational fluid dynamics (CFD) codes. The Preconditioned Conjugate Gradient (PCG) method is one of the most widely used iterative methods. However, in the PCG method, global collective communication is a crucial bottleneck especially on accelerated computing platforms. To resolve this issue, communication avoiding (CA) variants of the PCG method are becoming increasingly important. In this paper, the PCG and Preconditioned Chebyshev Basis CA CG (PCBCG) solvers in the multiphase CFD code JUPITER are ported to the latest V100 GPUs. All GPU kernels are highly optimized to achieve about 90% of the roofline performance, the block Jacobi preconditioner is redesigned to extract high computing power of GPUs, and the remaining bottleneck of halo data communication is avoided by overlapping communication and computation. The overall performance of the PCG and PCBCG solvers is determined by the competition between the CA properties of the global collective communication and the halo data communication, indicating an importance of the internode interconnect bandwidth per GPU. The developed GPU solvers are accelerated up to 2x compared with the former CPU solvers on KNLs, and excellent strong scaling is achieved up to 7,680 GPUs on the Summit. 
 | 
	Virtually perfect democracy In the 2009 Security Protocols Workshop, the Pretty Good Democracy scheme was presented. This scheme has the appeal of allowing voters to cast votes remotely, e.g. via the Internet, and to confirm correct receipt in a single session. The scheme provides a degree of end-to-end verifiability: receipt of the correct acknowledgement code provides assurance that the vote will be accurately included in the final tally. The scheme does not require any trust in a voter client device. It does, however, have a number of vulnerabilities: privacy and accuracy depend on the vote codes being kept secret, and it suffers the usual coercion-style threats common to most remote voting schemes.
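A toy illustration, not the actual Pretty Good Democracy protocol, of the vote-code/acknowledgement-code idea the abstract refers to: the server releases the acknowledgement code only for a code printed on the voter's code sheet, so secrecy of the codes is what carries both privacy and accuracy. All candidates and codes below are invented:

```python
# Toy vote-code / ack-code lookup (illustrative only, no cryptographic machinery).
import random
import secrets

def make_code_sheet(candidates, rng=random.SystemRandom()):
    """One ballot: distinct random 4-digit vote codes per candidate plus one ack code."""
    codes = rng.sample(range(10**4), len(candidates) + 1)
    return {"votes": dict(zip(candidates, (f"{c:04d}" for c in codes[:-1]))),
            "ack": f"{codes[-1]:04d}"}

def cast(sheet, submitted_code):
    """Server side: return the ack code only for a valid vote code."""
    for candidate, code in sheet["votes"].items():
        if secrets.compare_digest(code, submitted_code):
            return candidate, sheet["ack"]          # vote registered, ack released
    return None, None                               # invalid code: no acknowledgement

sheet = make_code_sheet(["Alice", "Bob"])
choice, ack = cast(sheet, sheet["votes"]["Alice"])
print(choice, ack)
```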
 | 
					
	Parallelization of DirectForcing Immersed Boundary Method Using OpenACC A parallelization of Poisson equation solver is developed by using the parallel algorithm of redblack SOR method performed on GPU by OpenACC. We use the parallel computing solver to solve the fluidstructure interaction problem with directforcing immersed boundary method. The speedup for the parallel computing solver is up to 8.62 times. In addition, we present that the optimization of the redblack SOR (RBSOR) algorithm for the problem with the complexgeometry objects is to allocate the same memory on two variables which is to take advantages of accessing data by random indexes and formatted indexes. The implementation of parallel computing RBSOR with OpenACC is efficient to delegate the simulation of fluidstructure interaction problem on the GPU. 
 | 
	GPU Acceleration of Communication Avoiding Chebyshev Basis Conjugate Gradient Solver for Multiphase CFD Simulations Iterative methods for solving large linear systems are common parts of computational fluid dynamics (CFD) codes. The Preconditioned Conjugate Gradient (PCG) method is one of the most widely used iterative methods. However, in the PCG method, global collective communication is a crucial bottleneck especially on accelerated computing platforms. To resolve this issue, communication avoiding (CA) variants of the PCG method are becoming increasingly important. In this paper, the PCG and Preconditioned Chebyshev Basis CA CG (PCBCG) solvers in the multiphase CFD code JUPITER are ported to the latest V100 GPUs. All GPU kernels are highly optimized to achieve about 90% of the roofline performance, the block Jacobi preconditioner is redesigned to extract high computing power of GPUs, and the remaining bottleneck of halo data communication is avoided by overlapping communication and computation. The overall performance of the PCG and PCBCG solvers is determined by the competition between the CA properties of the global collective communication and the halo data communication, indicating an importance of the internode interconnect bandwidth per GPU. The developed GPU solvers are accelerated up to 2x compared with the former CPU solvers on KNLs, and excellent strong scaling is achieved up to 7,680 GPUs on the Summit. 
 | 
	Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter; this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by that same interconnectedness; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons. Gravity divided into an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, the Planck length, resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would always quantize gravity. So the non-mediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; this makes gravitons unproven and unnecessary, and explains why gravitons have never been found.
 | 
					
	Parallelization of DirectForcing Immersed Boundary Method Using OpenACC A parallelization of Poisson equation solver is developed by using the parallel algorithm of redblack SOR method performed on GPU by OpenACC. We use the parallel computing solver to solve the fluidstructure interaction problem with directforcing immersed boundary method. The speedup for the parallel computing solver is up to 8.62 times. In addition, we present that the optimization of the redblack SOR (RBSOR) algorithm for the problem with the complexgeometry objects is to allocate the same memory on two variables which is to take advantages of accessing data by random indexes and formatted indexes. The implementation of parallel computing RBSOR with OpenACC is efficient to delegate the simulation of fluidstructure interaction problem on the GPU. 
 | 
	GPU Acceleration of Communication Avoiding Chebyshev Basis Conjugate Gradient Solver for Multiphase CFD Simulations Iterative methods for solving large linear systems are common parts of computational fluid dynamics (CFD) codes. The Preconditioned Conjugate Gradient (PCG) method is one of the most widely used iterative methods. However, in the PCG method, global collective communication is a crucial bottleneck especially on accelerated computing platforms. To resolve this issue, communication avoiding (CA) variants of the PCG method are becoming increasingly important. In this paper, the PCG and Preconditioned Chebyshev Basis CA CG (PCBCG) solvers in the multiphase CFD code JUPITER are ported to the latest V100 GPUs. All GPU kernels are highly optimized to achieve about 90% of the roofline performance, the block Jacobi preconditioner is redesigned to extract high computing power of GPUs, and the remaining bottleneck of halo data communication is avoided by overlapping communication and computation. The overall performance of the PCG and PCBCG solvers is determined by the competition between the CA properties of the global collective communication and the halo data communication, indicating an importance of the internode interconnect bandwidth per GPU. The developed GPU solvers are accelerated up to 2x compared with the former CPU solvers on KNLs, and excellent strong scaling is achieved up to 7,680 GPUs on the Summit. 
 | 
	General Data Protection Regulation in Health Clinics The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of the personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals, by establishing rules for the processing of personal data. The GDPR treats health data as a special category of personal data, considered sensitive and subject to special conditions regarding treatment and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its entry into application in Portuguese health clinics. The results of the present study are discussed in the light of the literature, and future work is identified.
 | 
					
	Medial prefrontal decoupling from the default mode network benefits memory Abstract In the last few years, the involvement of the medial prefrontal cortex (mPFC) in memory processing has received increased attention. It has been shown to be centrally involved when we use prior knowledge (schemas) to improve learning of related material. Since the mPFC is also one of the core hubs of the default mode network (DMN), and given the DMN's role in memory retrieval, we investigated whether the mPFC in a schema paradigm acts independently of the DMN. We tested this with data from a cross-sectional developmental study with a schema paradigm. During retrieval of schema items, the mPFC decoupled from the DMN, with the degree of decoupling predicting memory performance. This finding suggests that a demand-specific reconfiguration of the DMN supports schema memory. Additionally, we found that in the control condition, which relied on episodic memory, activity in the parahippocampal gyrus was positively related to memory performance. We interpret these results as a demand-specific network reconfiguration of the DMN: a decoupling of the mPFC to support schema memory, and a decoupling of the parahippocampal gyrus facilitating episodic memory.
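One simple, hypothetical way to operationalize such decoupling, shown only to make the notion concrete (the study's actual connectivity measure is not specified here): the negated correlation between the mPFC time series and the mean of the remaining DMN nodes:

```python
# Illustrative decoupling index: higher = mPFC less coupled to the rest of the DMN.
import numpy as np

def decoupling_index(mpfc_ts, dmn_ts):
    """mpfc_ts: (T,) time series; dmn_ts: (T, n_nodes) for the other DMN nodes."""
    dmn_mean = dmn_ts.mean(axis=1)
    r = np.corrcoef(mpfc_ts, dmn_mean)[0, 1]
    return -r                                   # decoupling = negative coupling

rng = np.random.default_rng(1)
shared = rng.standard_normal(200)               # common DMN signal
dmn = shared[:, None] + 0.5 * rng.standard_normal((200, 5))
mpfc = 0.2 * shared + rng.standard_normal(200)  # only weakly coupled mPFC
print(round(decoupling_index(mpfc, dmn), 2))
```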
 | 
	Precuneus stimulation alters the neural dynamics of autobiographical memory retrieval Abstract Autobiographical memory (AM) unfolds over time, but little is known about the dynamics of its retrieval. Space-based models of memory implicate the hippocampus, retrosplenial cortex, and precuneus in early memory computations. Here we used transcranial magnetic stimulation (TMS) and magnetoencephalography (MEG) to investigate the causal role of the precuneus in the dynamics of AM retrieval. During early memory search and construction, precuneus stimulation compared to vertex stimulation led to delayed evoked neural activity within 1000 ms after cue presentation. During later memory elaboration, stimulation led to decreased sustained positivity. We further identified a parietal late positive component during memory elaboration, the amplitude of which was associated with spatial perspective during recollection. This association was disrupted following precuneus stimulation, suggesting that this region plays an important role in the neural representation of spatial perspective during AM. These findings demonstrate a causal role for the precuneus in early AM retrieval, during memory search before a specific memory is accessed, and in spatial context reinstatement during the initial stages of memory elaboration and re-experiencing. By utilizing the high temporal resolution of MEG and the causality of TMS, this study helps clarify the neural correlates of early naturalistic memory retrieval.
 | 
	Unmanned agricultural product sales system The invention relates to the field of agricultural product sales and provides an unmanned agricultural product sales system, aiming to solve the problem of agricultural product waste caused by the fact that, at present, most farmers can only prepare goods based on guesswork and experience when selling agricultural products. The unmanned agricultural product sales system comprises an acquisition module for acquiring the selection information of customers; a storage module which pre-stores side-dish (vegetable preparation) schemes; a matching module which matches a corresponding side-dish scheme from the storage module according to the selection information of the client; a pushing module which pushes the matched side-dish scheme back to the client, the acquisition module also acquiring the confirmation information of the client; an order module which generates order information according to the confirmation information of the client, wherein the pushing module pushes the order information to the client and the seller, and the acquisition module also acquires the delivery information of the seller; and a logistics tracking module which tracks the delivery information to obtain logistics information, wherein the pushing module pushes the logistics information to the client. The scheme is used for sales in unmanned agricultural product shops.
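A minimal sketch of the module flow described in the claim (selection acquisition, scheme matching, confirmation, order generation, logistics tracking); all class and field names are invented for illustration:

```python
# Toy module flow: acquire selection -> match stored scheme -> confirm -> order -> track.
from dataclasses import dataclass, field

@dataclass
class SalesSystem:
    schemes: dict                                  # storage module: pre-stored side-dish schemes
    orders: list = field(default_factory=list)

    def match(self, selection):                    # matching module
        return self.schemes.get(selection, "default scheme")

    def confirm(self, client, selection):          # acquisition + order modules
        order = {"client": client, "scheme": self.match(selection),
                 "status": "awaiting delivery"}
        self.orders.append(order)
        return order                               # pushed to client and seller

    def track(self, order, logistics_update):      # logistics tracking module
        order["status"] = logistics_update         # pushed back to the client
        return order

system = SalesSystem({"hotpot vegetables": "mushrooms + tofu + greens"})
order = system.confirm("client-42", "hotpot vegetables")
print(system.track(order, "shipped"))
```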
 | 
					
	Medial prefrontal decoupling from the default mode network benefits memory Abstract In the last few years, the involvement of the medial prefrontal cortex (mPFC) in memory processing has received increased attention. It has been shown to be centrally involved when we use prior knowledge (schemas) to improve learning of related material. Since the mPFC is also one of the core hubs of the default mode network (DMN), and given the DMN's role in memory retrieval, we investigated whether the mPFC in a schema paradigm acts independently of the DMN. We tested this with data from a cross-sectional developmental study with a schema paradigm. During retrieval of schema items, the mPFC decoupled from the DMN, with the degree of decoupling predicting memory performance. This finding suggests that a demand-specific reconfiguration of the DMN supports schema memory. Additionally, we found that in the control condition, which relied on episodic memory, activity in the parahippocampal gyrus was positively related to memory performance. We interpret these results as a demand-specific network reconfiguration of the DMN: a decoupling of the mPFC to support schema memory, and a decoupling of the parahippocampal gyrus facilitating episodic memory.
 | 
	Precuneus stimulation alters the neural dynamics of autobiographical memory retrieval Abstract Autobiographical memory (AM) unfolds over time, but little is known about the dynamics of its retrieval. Space-based models of memory implicate the hippocampus, retrosplenial cortex, and precuneus in early memory computations. Here we used transcranial magnetic stimulation (TMS) and magnetoencephalography (MEG) to investigate the causal role of the precuneus in the dynamics of AM retrieval. During early memory search and construction, precuneus stimulation compared to vertex stimulation led to delayed evoked neural activity within 1000 ms after cue presentation. During later memory elaboration, stimulation led to decreased sustained positivity. We further identified a parietal late positive component during memory elaboration, the amplitude of which was associated with spatial perspective during recollection. This association was disrupted following precuneus stimulation, suggesting that this region plays an important role in the neural representation of spatial perspective during AM. These findings demonstrate a causal role for the precuneus in early AM retrieval, during memory search before a specific memory is accessed, and in spatial context reinstatement during the initial stages of memory elaboration and re-experiencing. By utilizing the high temporal resolution of MEG and the causality of TMS, this study helps clarify the neural correlates of early naturalistic memory retrieval.
 | 
	Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of an over-speed alert, a rear camera, rear-obstacle detection, and timely maintenance are known causes of fatal accidents, and these systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in several phases among passengers, drivers, and conductors to obtain useful and reliable results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
 | 
					
	Medial prefrontal decoupling from the default mode network benefits memory Abstract In the last few years, the involvement of the medial prefrontal cortex (mPFC) in memory processing has received increased attention. It has been shown to be centrally involved when we use prior knowledge (schemas) to improve learning of related material. Since the mPFC is also one of the core hubs of the default mode network (DMN), and given the DMN's role in memory retrieval, we investigated whether the mPFC in a schema paradigm acts independently of the DMN. We tested this with data from a cross-sectional developmental study with a schema paradigm. During retrieval of schema items, the mPFC decoupled from the DMN, with the degree of decoupling predicting memory performance. This finding suggests that a demand-specific reconfiguration of the DMN supports schema memory. Additionally, we found that in the control condition, which relied on episodic memory, activity in the parahippocampal gyrus was positively related to memory performance. We interpret these results as a demand-specific network reconfiguration of the DMN: a decoupling of the mPFC to support schema memory, and a decoupling of the parahippocampal gyrus facilitating episodic memory.
 | 
	Precuneus stimulation alters the neural dynamics of autobiographical memory retrieval Abstract Autobiographical memory (AM) unfolds over time, but little is known about the dynamics of its retrieval. Space-based models of memory implicate the hippocampus, retrosplenial cortex, and precuneus in early memory computations. Here we used transcranial magnetic stimulation (TMS) and magnetoencephalography (MEG) to investigate the causal role of the precuneus in the dynamics of AM retrieval. During early memory search and construction, precuneus stimulation compared to vertex stimulation led to delayed evoked neural activity within 1000 ms after cue presentation. During later memory elaboration, stimulation led to decreased sustained positivity. We further identified a parietal late positive component during memory elaboration, the amplitude of which was associated with spatial perspective during recollection. This association was disrupted following precuneus stimulation, suggesting that this region plays an important role in the neural representation of spatial perspective during AM. These findings demonstrate a causal role for the precuneus in early AM retrieval, during memory search before a specific memory is accessed, and in spatial context reinstatement during the initial stages of memory elaboration and re-experiencing. By utilizing the high temporal resolution of MEG and the causality of TMS, this study helps clarify the neural correlates of early naturalistic memory retrieval.
 | 
	Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework within which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed arena during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation in order to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first.
 | 
					
	Medial prefrontal decoupling from the default mode network benefits memory Abstract   In the last few years the involvement of the medial prefrontal cortex (mPFC) in memory processing has received increased attention. It has been shown to be centrally involved when we use prior knowledge (schemas) to improve learning of related material. With the mPFC also being one of the core hubs of the default mode network (DMN) and the DMNxe2x80x99s role in memory retrieval, we decided to investigate whether the mPFC in a schema paradigm acts independent of the DMN. We tested this with data from a crosssectional developmental study with a schema paradigm. During retrieval of schema items, the mPFC decoupled from the DMN with the degree of decoupling predicting memory performance. This finding suggests that a demand specific reconfiguration of the DMN supports schema memory. Additionally, we found that in the control condition, which relied on episodic memory, activity in the parahippocampal gyrus was positively related to memory performance. We interpret these results as a demand specific network reconfiguration of the DMN: a decoupling of the mPFC to support schema memory and a decoupling of the parahippocampal gyrus facilitating episodic memory. 
 | 
	Precuneus stimulation alters the neural dynamics of autobiographical memory retrieval Abstract   Autobiographical memory (AM) unfolds over time, but little is known about the dynamics of its retrieval. Spacebased models of memory implicate the hippocampus, retrosplenial cortex, and precuneus in early memory computations. Here we used transcranial magnetic stimulation (TMS) and magnetoencephalography (MEG) to investigate the causal role of the precuneus in the dynamics of AM retrieval. During early memory search and construction, precuneus stimulation compared to vertex stimulation led to delayed evoked neural activity within 1000xc2xa0xe2x80x8bms after cue presentation. During later memory elaboration, stimulation led to decreased sustained positivity. We further identified a parietal late positive component during memory elaboration, the amplitude of which was associated with spatial perspective during recollection. This association was disrupted following precuneus stimulation, suggesting that this region plays an important role in the neural representation of spatial perspective during AM. These findings demonstrate a causal role for the precuneus in early AM retrieval, during memory search before a specific memory is accessed, and in spatial context reinstatement during the initial stages of memory elaboration and reexperiencing. By utilizing the high temporal resolution of MEG and the causality of TMS, this study helps clarify the neural correlates of early naturalistic memory retrieval. 
 | 
	Classifying unavoidable Tverberg partitions Let $T(d,r) = (r-1)(d+1)+1$ be the parameter in Tverberg's theorem, and call a partition $\mathcal{I}$ of $\{1,2,\ldots,T(d,r)\}$ into $r$ parts a Tverberg type. We say that $\mathcal{I}$ occurs in an ordered point sequence $P$ if $P$ contains a subsequence $P'$ of $T(d,r)$ points such that the partition of $P'$ that is order-isomorphic to $\mathcal{I}$ is a Tverberg partition. We say that $\mathcal{I}$ is unavoidable if it occurs in every sufficiently long point sequence. In this paper we study the problem of determining which Tverberg types are unavoidable. We conjecture a complete characterization of the unavoidable Tverberg types, and we prove some cases of our conjecture for $d \le 4$. Along the way, we study the avoidability of many other geometric predicates. Our techniques also yield a large family of $T(d,r)$-point sets for which the number of Tverberg partitions is exactly $((r-1)!)^d$. This lends further support to Sierksma's conjecture on the number of Tverberg partitions.
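For concreteness, the Tverberg parameter and its smallest instances, with the $d = r = 2$ case reducing to Radon's theorem:

```latex
\[
  T(d,r) = (r-1)(d+1) + 1, \qquad
  T(2,2) = 4, \quad T(2,3) = 7, \quad T(3,2) = 5.
\]
% For d = r = 2 this is Radon's theorem: any 4 points in the plane can be
% partitioned into two parts whose convex hulls intersect. Sierksma's
% conjecture asserts at least ((r-1)!)^d Tverberg partitions in general.
```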
 | 
					
	Medial prefrontal decoupling from the default mode network benefits memory Abstract In the last few years, the involvement of the medial prefrontal cortex (mPFC) in memory processing has received increased attention. It has been shown to be centrally involved when we use prior knowledge (schemas) to improve learning of related material. Since the mPFC is also one of the core hubs of the default mode network (DMN), and given the DMN's role in memory retrieval, we investigated whether the mPFC in a schema paradigm acts independently of the DMN. We tested this with data from a cross-sectional developmental study with a schema paradigm. During retrieval of schema items, the mPFC decoupled from the DMN, with the degree of decoupling predicting memory performance. This finding suggests that a demand-specific reconfiguration of the DMN supports schema memory. Additionally, we found that in the control condition, which relied on episodic memory, activity in the parahippocampal gyrus was positively related to memory performance. We interpret these results as a demand-specific network reconfiguration of the DMN: a decoupling of the mPFC to support schema memory, and a decoupling of the parahippocampal gyrus facilitating episodic memory.
 | 
	Precuneus stimulation alters the neural dynamics of autobiographical memory retrieval Abstract Autobiographical memory (AM) unfolds over time, but little is known about the dynamics of its retrieval. Space-based models of memory implicate the hippocampus, retrosplenial cortex, and precuneus in early memory computations. Here we used transcranial magnetic stimulation (TMS) and magnetoencephalography (MEG) to investigate the causal role of the precuneus in the dynamics of AM retrieval. During early memory search and construction, precuneus stimulation compared to vertex stimulation led to delayed evoked neural activity within 1000 ms after cue presentation. During later memory elaboration, stimulation led to decreased sustained positivity. We further identified a parietal late positive component during memory elaboration, the amplitude of which was associated with spatial perspective during recollection. This association was disrupted following precuneus stimulation, suggesting that this region plays an important role in the neural representation of spatial perspective during AM. These findings demonstrate a causal role for the precuneus in early AM retrieval, during memory search before a specific memory is accessed, and in spatial context reinstatement during the initial stages of memory elaboration and re-experiencing. By utilizing the high temporal resolution of MEG and the causality of TMS, this study helps clarify the neural correlates of early naturalistic memory retrieval.
 | 
	Analysis of Charging Continuous Energy System and Stable Current Collection for Pantograph and Catenary of Pure Electric LHD Aiming at the problem of the limited power battery capacity of the pure electric Load-Haul-Dump (LHD) vehicle, a method of charging and supplying sufficient power through a pantograph-catenary current collection system is proposed, which avoids the poor flexibility and mobility of towed-cable electric LHDs. In this paper, we introduce the research and application status of pantographs and catenaries, describe the latest methods and techniques for studying the dynamics of the pantograph-catenary system, elaborate and analyze the various methods and technologies, and outline the important indicators for analyzing and evaluating the stability of current collection between pantograph and catenary. Various control strategies for the pantograph-catenary system are also introduced. Finally, the application of the pantograph-catenary system in high-speed railways and urban electric buses is discussed to illustrate the advantages of pantograph-catenary charging and energy supply, which is then applied to pure electric LHD charging and energy supply to ensure power adequacy.
 | 
					
	Efficient 3D Reconstruction and Streaming for Group-Scale Multi-client Live Telepresence Sharing live telepresence experiences for teleconferencing or remote collaboration has received increasing interest with the recent progress in capturing and AR/VR technology. Whereas impressive telepresence systems have been built on top of on-the-fly scene capture, data transmission and visualization, these systems are restricted to immersing a single user or at most a small number of users in the respective scenarios. In this paper, we direct our attention to immersing significantly larger groups of people in live-captured scenes, as required in education, entertainment or collaboration scenarios. For this purpose, rather than abandoning previous approaches, we present a range of optimizations of the involved reconstruction and streaming components that allow the immersion of a group of more than 24 users within the same scene, about a factor of 6 more than in previous work, without introducing further latency or changing the involved consumer hardware setup. We demonstrate that our optimized system is capable of generating high-quality scene reconstructions as well as providing an immersive viewing experience to a large group of people within these live-captured scenes.
 | 
	A VR System for Immersive Teleoperation and Live Exploration with a Mobile Robot Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive, individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts navigation. In this paper, we present a novel practical VR-based system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients, where the rendering for, e.g., head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate the remote site completely independently of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study, revealing a higher degree of situation awareness and more precise navigation in challenging environments.
 | 
	Structure and expression of the gene coding for the alpha-subunit of DNA-dependent RNA polymerase from the chloroplast genome of Zea mays. The rpoA gene coding for the alpha-subunit of DNA-dependent RNA polymerase located on the DNA of Zea mays chloroplasts has been characterized with respect to its position on the chloroplast genome and its nucleotide sequence. The amino acid sequence derived for a 39 kDa polypeptide shows strong homology with sequences derived from the rpoA genes of other chloroplast species and with the amino acid sequence of the alpha-subunit of E. coli RNA polymerase. Transcripts of the rpoA gene were identified by Northern hybridization and characterized by S1 mapping using total RNA isolated from maize chloroplasts. Antibodies raised against a synthetic C-terminal heptapeptide show cross-reactivity with a 39 kDa polypeptide contained in the stroma fraction of maize chloroplasts. It is concluded that the rpoA gene is a functional gene and that, therefore, at least the alpha-subunit of the plastid RNA polymerase is expressed in chloroplasts.
 | 
					
	Efficient 3D Reconstruction and Streaming for Group-Scale Multi-client Live Telepresence Sharing live telepresence experiences for teleconferencing or remote collaboration has received increasing interest with the recent progress in capturing and AR/VR technology. Whereas impressive telepresence systems have been built on top of on-the-fly scene capture, data transmission and visualization, these systems are restricted to immersing a single user or at most a small number of users in the respective scenarios. In this paper, we direct our attention to immersing significantly larger groups of people in live-captured scenes, as required in education, entertainment or collaboration scenarios. For this purpose, rather than abandoning previous approaches, we present a range of optimizations of the involved reconstruction and streaming components that allow the immersion of a group of more than 24 users within the same scene, about a factor of 6 more than in previous work, without introducing further latency or changing the involved consumer hardware setup. We demonstrate that our optimized system is capable of generating high-quality scene reconstructions as well as providing an immersive viewing experience to a large group of people within these live-captured scenes.
 | 
	A VR System for Immersive Teleoperation and Live Exploration with a Mobile Robot Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive, individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts navigation. In this paper, we present a novel practical VR-based system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients, where the rendering for, e.g., head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate the remote site completely independently of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study, revealing a higher degree of situation awareness and more precise navigation in challenging environments.
 | 
	Adaptive fraud detection A computer-implemented method includes receiving a new data record associated with a transaction, and generating, using an adaptive model executed by the computer, a score to represent the likelihood that the transaction is associated with fraud. The adaptive model employs feedback from one or more external data sources, the feedback containing information about one or more previous data records labeled as fraud or non-fraud by at least one of the one or more external data sources. Further, the adaptive model uses the information about the one or more previous data records as input variables to update the scoring parameters used to generate the score for the new data record.
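A minimal sketch of one way such an adaptive scorer can work, using an online logistic model updated from external fraud/non-fraud feedback; the patent does not specify a model form, so this choice and all features are assumptions:

```python
# Online logistic fraud scorer: external feedback updates the scoring parameters.
import numpy as np

class AdaptiveScorer:
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def score(self, x):
        """Probability-like score that the transaction is fraudulent."""
        return 1.0 / (1.0 + np.exp(-(self.w @ x + self.b)))

    def feedback(self, x, is_fraud):
        """Update scoring parameters from an external fraud/non-fraud label."""
        err = self.score(x) - float(is_fraud)       # gradient of the log-loss
        self.w -= self.lr * err * x
        self.b -= self.lr * err

scorer = AdaptiveScorer(n_features=3)
record = np.array([1.0, 0.3, 5.2])                  # e.g. amount, velocity, risk flags
print(scorer.score(record))                         # score before feedback
scorer.feedback(record, is_fraud=True)              # external source labels it fraud
print(scorer.score(record))                         # score adapts upward
```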
 | 
					
	Efficient 3D Reconstruction and Streaming for Group-Scale Multi-client Live Telepresence Sharing live telepresence experiences for teleconferencing or remote collaboration has received increasing interest with the recent progress in capturing and AR/VR technology. Whereas impressive telepresence systems have been built on top of on-the-fly scene capture, data transmission and visualization, these systems are restricted to immersing a single user or at most a small number of users in the respective scenarios. In this paper, we direct our attention to immersing significantly larger groups of people in live-captured scenes, as required in education, entertainment or collaboration scenarios. For this purpose, rather than abandoning previous approaches, we present a range of optimizations of the involved reconstruction and streaming components that allow the immersion of a group of more than 24 users within the same scene, about a factor of 6 more than in previous work, without introducing further latency or changing the involved consumer hardware setup. We demonstrate that our optimized system is capable of generating high-quality scene reconstructions as well as providing an immersive viewing experience to a large group of people within these live-captured scenes.
 | 
	A VR System for Immersive Teleoperation and Live Exploration with a Mobile Robot Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive, individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts navigation. In this paper, we present a novel practical VR-based system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients, where the rendering for, e.g., head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate the remote site completely independently of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study, revealing a higher degree of situation awareness and more precise navigation in challenging environments.
 | 
	Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of an over-speed alert, a rear camera, rear-obstacle detection, and timely maintenance are known causes of fatal accidents, and these systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in several phases among passengers, drivers, and conductors to obtain useful and reliable results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
 | 
					
	Efficient 3D Reconstruction and Streaming for Group-Scale Multi-client Live Telepresence Sharing live telepresence experiences for teleconferencing or remote collaboration has received increasing interest with the recent progress in capturing and AR/VR technology. Whereas impressive telepresence systems have been built on top of on-the-fly scene capture, data transmission and visualization, these systems are restricted to immersing a single user or at most a small number of users in the respective scenarios. In this paper, we direct our attention to immersing significantly larger groups of people in live-captured scenes, as required in education, entertainment or collaboration scenarios. For this purpose, rather than abandoning previous approaches, we present a range of optimizations of the involved reconstruction and streaming components that allow the immersion of a group of more than 24 users within the same scene, about a factor of 6 more than in previous work, without introducing further latency or changing the involved consumer hardware setup. We demonstrate that our optimized system is capable of generating high-quality scene reconstructions as well as providing an immersive viewing experience to a large group of people within these live-captured scenes.
 | 
	A VR System for Immersive Teleoperation and Live Exploration with a Mobile Robot Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive, individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts navigation. In this paper, we present a novel practical VR-based system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients, where the rendering for, e.g., head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate the remote site completely independently of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study, revealing a higher degree of situation awareness and more precise navigation in challenging environments.
 | 
	A Critical Look at the 2019 College Admissions Scandal Discusses the 2019 college admissions scandal. Let me begin with a disclaimer: I am making no legal excuses for the participants in the current scandal. I am only offering contextual background that places it in the broader academic, cultural, and political perspective required for understanding. It is only the most recent installment of a well-worn narrative: the controlling elite make their own rules and live by them, if they can get away with it. Unfortunately, some of the participants, who are either serving or facing jail time, didn't know not to go into a gunfight with a sharp stick. Money alone is not enough to avoid prosecution for fraud: you need political clout. The best protection a defendant can have is a prosecutor who fears political reprisal. Compare how the Koch brothers escaped prosecution for stealing millions of oil dollars from Native American tribes [1,2] with the fate of actresses Lori Loughlin and Felicity Huffman, who, at the time of this writing, face jail time for paying bribes to get their children into good universities [3,4]. In the former case, the federal prosecutor who dared to empanel a grand jury to get at the truth was fired for cause, which put a quick end to the prosecution. In the latter case, the prosecutors pushed for jail terms and public admonishment with the zeal of Oliver Cromwell. There you have it: stealing oil from Native Americans versus trying to bribe your kids into a great university. Where is the greater crime? Admittedly, these actresses and their
 | 
					
	Efficient 3D Reconstruction and Streaming for Group-Scale Multi-client Live Telepresence Sharing live telepresence experiences for teleconferencing or remote collaboration has received increasing interest with the recent progress in capturing and AR/VR technology. Whereas impressive telepresence systems have been built on top of on-the-fly scene capture, data transmission and visualization, these systems are restricted to immersing a single user or at most a small number of users in the respective scenarios. In this paper, we direct our attention to immersing significantly larger groups of people in live-captured scenes, as required in education, entertainment or collaboration scenarios. For this purpose, rather than abandoning previous approaches, we present a range of optimizations of the involved reconstruction and streaming components that allow the immersion of a group of more than 24 users within the same scene, about a factor of 6 more than in previous work, without introducing further latency or changing the involved consumer hardware setup. We demonstrate that our optimized system is capable of generating high-quality scene reconstructions as well as providing an immersive viewing experience to a large group of people within these live-captured scenes.
 | 
	A VR System for Immersive Teleoperation and Live Exploration with a Mobile Robot Applications like disaster management and industrial inspection often require experts to enter contaminated places. To circumvent the need for physical presence, it is desirable to generate a fully immersive, individual live teleoperation experience. However, standard video-based approaches suffer from a limited degree of immersion and situation awareness due to the restriction to the camera view, which impacts navigation. In this paper, we present a novel practical VR-based system for immersive robot teleoperation and scene exploration. While being operated through the scene, a robot captures RGB-D data that is streamed to a SLAM-based live multi-client telepresence system. Here, a global 3D model of the already captured scene parts is reconstructed and streamed to the individual remote user clients, where the rendering for, e.g., head-mounted display devices (HMDs) is performed. We introduce a novel lightweight robot client component which transmits robot-specific data and enables quick integration into existing robotic systems. This way, in contrast to first-person exploration systems, the operators can explore and navigate the remote site completely independently of the current position and view of the capturing robot, complementing traditional input devices for teleoperation. We provide a proof-of-concept implementation and demonstrate the capabilities as well as the performance of our system regarding interactive object measurements and bandwidth-efficient data streaming and visualization. Furthermore, we show its benefits over purely video-based teleoperation in a user study, revealing a higher degree of situation awareness and more precise navigation in challenging environments.
 | 
	Distinguishing Cartesian Powers of Graphs Given a graph $G$, a labeling $c:V(G) \rightarrow \{1, 2, \ldots, d\}$ is said to be $d$-distinguishing if the only element in $\mathrm{Aut}(G)$ that preserves the labels is the identity. The distinguishing number of $G$, denoted by $D(G)$, is the minimum $d$ such that $G$ has a $d$-distinguishing labeling. If $G \square H$ denotes the Cartesian product of $G$ and $H$, let $G^2 = G \square G$ and $G^r = G \square G^{r-1}$. A graph $G$ is said to be prime with respect to the Cartesian product if whenever $G \cong G_1 \square G_2$, then either $G_1$ or $G_2$ is a singleton vertex. This paper proves that if $G$ is a connected, prime graph, then $D(G^r) = 2$ whenever $r \geq 4$.
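A brute-force check of $D(G)$ for tiny graphs, shown only to make the definitions concrete (exponential in the number of vertices, so for illustration only):

```python
# Brute-force distinguishing number: smallest d with a labeling no non-identity
# automorphism preserves.
from itertools import permutations, product

def automorphisms(adj):
    """All vertex permutations of the graph that preserve adjacency."""
    n = len(adj)
    return [p for p in permutations(range(n))
            if all(adj[i][j] == adj[p[i]][p[j]] for i in range(n) for j in range(n))]

def distinguishing_number(adj):
    n = len(adj)
    autos = [p for p in automorphisms(adj) if p != tuple(range(n))]
    d = 1
    while True:
        for labeling in product(range(d), repeat=n):
            # labeling is d-distinguishing if every non-identity automorphism moves
            # some vertex to a differently labeled one
            if all(any(labeling[i] != labeling[p[i]] for i in range(n)) for p in autos):
                return d
        d += 1

# C_4, the 4-cycle: its automorphism group has order 8, and D(C_4) = 3.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(distinguishing_number(C4))   # -> 3
```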
 | 
					
	Laboratory Calibration and Performance Evaluation of Low-Cost Capacitive and Very Low-Cost Resistive Soil Moisture Sensors Soil volumetric water content (VWC) is a vital parameter for understanding several ecohydrological and environmental processes. Its cost-effective measurement can potentially drive various technological tools to promote data-driven sustainable agriculture through supplemental irrigation solutions, the lack of which has contributed to severe agricultural distress, particularly for smallholder farmers. The cost of commercially available VWC sensors varies over four orders of magnitude. A laboratory study characterizing and testing sensors from this wide range of cost categories, which is a prerequisite to exploring their applicability for irrigation management, has not been conducted. Within this context, two low-cost capacitive sensors (SMEC300 and SM100, manufactured by Spectrum Technologies Inc., Aurora, IL, USA) and two very low-cost resistive sensors (the Soil Hygrometer Detection Module Soil Moisture Sensor, YL100, by Electronicfans, and the Generic Soil Moisture Sensor Module, YL69, by KitsGuru) were tested for performance in laboratory conditions. Each sensor was calibrated in different repacked soils and tested to evaluate accuracy, precision and sensitivity to variations in temperature and salinity. The capacitive sensors were additionally tested for their performance in liquids of known dielectric constants, and a comparative analysis of the calibration equations developed in-house and provided by the manufacturer was carried out. The value for money of the sensors is reflected in their precision performance, i.e., precision largely follows sensor cost; the other aspects of sensor performance do not necessarily follow sensor cost. The low-cost capacitive sensors were more accurate than the manufacturer specifications, and could match the performance of the secondary standard sensor after soil-specific calibration. SMEC300 is accurate (MAE, RMSE, and RAE of 2.12%, 2.88% and 0.28, respectively), precise, and performed well considering its price as well as its multipurpose sensing capabilities. The less expensive SM100 sensor had better accuracy (MAE, RMSE, and RAE of 1.67%, 2.36% and 0.21, respectively) but poorer precision than the SMEC300; however, it was established as a robust, field-ready, low-cost sensor due to its more consistent performance in soils (particularly the field soil) and superior performance in fluids. Both capacitive sensors responded reasonably to variations in temperature and salinity conditions. Though the resistive sensors were less accurate and precise than the capacitive sensors, they performed well considering their cost category. The YL100 was more accurate (MAE, RMSE, and RAE of 3.51%, 5.21% and 0.37, respectively) than the YL69 (MAE, RMSE, and RAE of 4.13%, 5.54%, and 0.41, respectively). However, the YL69 outperformed the YL100 in terms of precision and response to temperature and salinity variations, emerging as the more robust resistive sensor. These very low-cost sensors may be used in combination with more accurate sensors to better characterize the spatiotemporal variability of field-scale soil moisture. The laboratory characterization conducted in this study is a prerequisite to estimating the effect of low- and very low-cost sensor measurements on the efficiency of soil-moisture-based irrigation scheduling systems.
 | 
	Estimation of water table depth using DUALEM-2 system Abstract   Most agricultural fields are generally irrigated or drained uniformly, without considering the spatial and temporal variation in water table depth (WTD). Investigating WTD is important for scheduling irrigation, drainage system designs and water balance models. The objective of this study was to develop a software interface to estimate the variations in WTD via electromagnetic induction (EMI) methods using frequencies of a DUALEM-2 system. Two fields (Field 1: 45.38°N, 63.23°W and Field 2: 45.37°N, 63.25°W) were selected and thirty perforated observation wells were installed to calibrate the DUALEM-2 for predicting WTD. Boundaries of the selected sites and locations of the wells were marked using a real-time kinematics global positioning system (RTK-GPS). The user interface program was developed in Delphi 5.0 software and imported into a laptop computer to retrieve data from the DUALEM-2 system. The horizontal coplanar (HCP) geometry, perpendicular coplanar (PRP) geometry and WTD were recorded simultaneously from each well before and after every significant rainfall for three consecutive days. Comprehensive surveys were conducted to measure apparent ground conductivity (ECa) with the DUALEM-2 and the corresponding locations of the sampling points using a Trimble Ag GPS 332. The regression model showed significant correlation between the HCP and WTD, with coefficients of determination of R² = 0.71 for Field 1 and R² = 0.53 for Field 2. Maps were generated in ArcGIS 10 software to examine the accuracy of predicted WTD in comparison with actual values. Results indicated that the DUALEM-2 system was efficient in mapping variation in WTD rapidly and reliably in a non-destructive fashion, rather than following the conventional way of repeated well drilling for WTD determination. This information could be used for measuring the depletion of WTD during dry periods (droughts) and for site-specific irrigation and drainage design, with the added advantage of labor and time savings when observing precision water management practices at large fields. 
 | 
	Design of Human Resource Management System in the Background of Computer Big Data Under the background of big data, human resources management has not only ushered in new opportunities, but also faces increasingly severe challenges, so the effectiveness of human resources management must be ensured in a timely manner. This paper analyses the problems in the human resource management of enterprises in our country. Aiming at these problems, and drawing on an analysis of big data, this paper designs a human resource management system based on SOA architecture, which can effectively support the management of human resources under the modern enterprise system. The system has practical reference value for the management of human resources. 
 | 
					
	Laboratory Calibration and Performance Evaluation of Low-Cost Capacitive and Very Low-Cost Resistive Soil Moisture Sensors Soil volumetric water content (VWC) is a vital parameter for understanding several ecohydrological and environmental processes. Its cost-effective measurement can potentially drive various technological tools to promote data-driven sustainable agriculture through supplemental irrigation solutions, the lack of which has contributed to severe agricultural distress, particularly for smallholder farmers. The cost of commercially available VWC sensors varies over four orders of magnitude. A laboratory study characterizing and testing sensors from this wide range of cost categories, which is a prerequisite to exploring their applicability for irrigation management, has not been conducted. Within this context, two low-cost capacitive sensors—SMEC300 and SM100—manufactured by Spectrum Technologies Inc. (Aurora, IL, USA), and two very low-cost resistive sensors—the Soil Hygrometer Detection Module Soil Moisture Sensor (YL100) by Electronicfans and the Generic Soil Moisture Sensor Module (YL69) by KitsGuru—were tested for performance in laboratory conditions. Each sensor was calibrated in different repacked soils, and tested to evaluate accuracy, precision and sensitivity to variations in temperature and salinity. The capacitive sensors were additionally tested for their performance in liquids of known dielectric constants, and a comparative analysis of the calibration equations developed in-house and provided by the manufacturer was carried out. The value for money of the sensors is reflected in their precision performance, i.e., the precision performance largely follows sensor costs. The other aspects of sensor performance do not necessarily follow sensor costs. The low-cost capacitive sensors were more accurate than manufacturer specifications, and could match the performance of the secondary standard sensor after soil-specific calibration. The SMEC300 is accurate (MAE, RMSE, and RAE of 2.12%, 2.88% and 0.28, respectively), precise, and performed well considering its price as well as its multipurpose sensing capabilities. The less expensive SM100 sensor had better accuracy (MAE, RMSE, and RAE of 1.67%, 2.36% and 0.21, respectively) but poorer precision than the SMEC300. However, it was established as a robust, field-ready, low-cost sensor due to its more consistent performance in soils (particularly the field soil) and superior performance in fluids. Both capacitive sensors responded reasonably to variations in temperature and salinity conditions. Though the resistive sensors were less accurate and precise than the capacitive sensors, they performed well considering their cost category. The YL100 was more accurate (MAE, RMSE, and RAE of 3.51%, 5.21% and 0.37, respectively) than the YL69 (MAE, RMSE, and RAE of 4.13%, 5.54%, and 0.41, respectively). However, the YL69 outperformed the YL100 in terms of precision and response to temperature and salinity variations, emerging as the more robust resistive sensor. These very low-cost sensors may be used in combination with more accurate sensors to better characterize the spatiotemporal variability of field-scale soil moisture. The laboratory characterization conducted in this study is a prerequisite to estimating the effect of low- and very low-cost sensor measurements on the efficiency of soil-moisture-based irrigation scheduling systems. 
 | 
	Estimation of water table depth using DUALEM-2 system Abstract   Most agricultural fields are generally irrigated or drained uniformly, without considering the spatial and temporal variation in water table depth (WTD). Investigating WTD is important for scheduling irrigation, drainage system designs and water balance models. The objective of this study was to develop a software interface to estimate the variations in WTD via electromagnetic induction (EMI) methods using frequencies of a DUALEM-2 system. Two fields (Field 1: 45.38°N, 63.23°W and Field 2: 45.37°N, 63.25°W) were selected and thirty perforated observation wells were installed to calibrate the DUALEM-2 for predicting WTD. Boundaries of the selected sites and locations of the wells were marked using a real-time kinematics global positioning system (RTK-GPS). The user interface program was developed in Delphi 5.0 software and imported into a laptop computer to retrieve data from the DUALEM-2 system. The horizontal coplanar (HCP) geometry, perpendicular coplanar (PRP) geometry and WTD were recorded simultaneously from each well before and after every significant rainfall for three consecutive days. Comprehensive surveys were conducted to measure apparent ground conductivity (ECa) with the DUALEM-2 and the corresponding locations of the sampling points using a Trimble Ag GPS 332. The regression model showed significant correlation between the HCP and WTD, with coefficients of determination of R² = 0.71 for Field 1 and R² = 0.53 for Field 2. Maps were generated in ArcGIS 10 software to examine the accuracy of predicted WTD in comparison with actual values. Results indicated that the DUALEM-2 system was efficient in mapping variation in WTD rapidly and reliably in a non-destructive fashion, rather than following the conventional way of repeated well drilling for WTD determination. This information could be used for measuring the depletion of WTD during dry periods (droughts) and for site-specific irrigation and drainage design, with the added advantage of labor and time savings when observing precision water management practices at large fields. 
 | 
	Understanding How to Implement Privacy by Design, One Step at a Time While widely accepted as a game changer for protecting privacy, Privacy by Design (PbD) has also developed a reputation for being challenging for businesses to implement. The reality is much different. PbD was intended to form the foundation of how to proactively embed privacy into the design of products and services. This is why the 7 principles of PbD are called Foundational Principles. They can be actualized and customized in many different ways, depending on the particular requirements of an organization. This article will present a simplified discussion of practically implementing PbD and how PbD can enhance corporate interests, while offering the strongest privacy and data protection to achieve multiple goals. It's a clear win-win! 
 | 
					
	Laboratory Calibration and Performance Evaluation of Low-Cost Capacitive and Very Low-Cost Resistive Soil Moisture Sensors Soil volumetric water content (VWC) is a vital parameter for understanding several ecohydrological and environmental processes. Its cost-effective measurement can potentially drive various technological tools to promote data-driven sustainable agriculture through supplemental irrigation solutions, the lack of which has contributed to severe agricultural distress, particularly for smallholder farmers. The cost of commercially available VWC sensors varies over four orders of magnitude. A laboratory study characterizing and testing sensors from this wide range of cost categories, which is a prerequisite to exploring their applicability for irrigation management, has not been conducted. Within this context, two low-cost capacitive sensors—SMEC300 and SM100—manufactured by Spectrum Technologies Inc. (Aurora, IL, USA), and two very low-cost resistive sensors—the Soil Hygrometer Detection Module Soil Moisture Sensor (YL100) by Electronicfans and the Generic Soil Moisture Sensor Module (YL69) by KitsGuru—were tested for performance in laboratory conditions. Each sensor was calibrated in different repacked soils, and tested to evaluate accuracy, precision and sensitivity to variations in temperature and salinity. The capacitive sensors were additionally tested for their performance in liquids of known dielectric constants, and a comparative analysis of the calibration equations developed in-house and provided by the manufacturer was carried out. The value for money of the sensors is reflected in their precision performance, i.e., the precision performance largely follows sensor costs. The other aspects of sensor performance do not necessarily follow sensor costs. The low-cost capacitive sensors were more accurate than manufacturer specifications, and could match the performance of the secondary standard sensor after soil-specific calibration. The SMEC300 is accurate (MAE, RMSE, and RAE of 2.12%, 2.88% and 0.28, respectively), precise, and performed well considering its price as well as its multipurpose sensing capabilities. The less expensive SM100 sensor had better accuracy (MAE, RMSE, and RAE of 1.67%, 2.36% and 0.21, respectively) but poorer precision than the SMEC300. However, it was established as a robust, field-ready, low-cost sensor due to its more consistent performance in soils (particularly the field soil) and superior performance in fluids. Both capacitive sensors responded reasonably to variations in temperature and salinity conditions. Though the resistive sensors were less accurate and precise than the capacitive sensors, they performed well considering their cost category. The YL100 was more accurate (MAE, RMSE, and RAE of 3.51%, 5.21% and 0.37, respectively) than the YL69 (MAE, RMSE, and RAE of 4.13%, 5.54%, and 0.41, respectively). However, the YL69 outperformed the YL100 in terms of precision and response to temperature and salinity variations, emerging as the more robust resistive sensor. These very low-cost sensors may be used in combination with more accurate sensors to better characterize the spatiotemporal variability of field-scale soil moisture. The laboratory characterization conducted in this study is a prerequisite to estimating the effect of low- and very low-cost sensor measurements on the efficiency of soil-moisture-based irrigation scheduling systems. 
 | 
	Estimation of water table depth using DUALEM-2 system Abstract   Most agricultural fields are generally irrigated or drained uniformly, without considering the spatial and temporal variation in water table depth (WTD). Investigating WTD is important for scheduling irrigation, drainage system designs and water balance models. The objective of this study was to develop a software interface to estimate the variations in WTD via electromagnetic induction (EMI) methods using frequencies of a DUALEM-2 system. Two fields (Field 1: 45.38°N, 63.23°W and Field 2: 45.37°N, 63.25°W) were selected and thirty perforated observation wells were installed to calibrate the DUALEM-2 for predicting WTD. Boundaries of the selected sites and locations of the wells were marked using a real-time kinematics global positioning system (RTK-GPS). The user interface program was developed in Delphi 5.0 software and imported into a laptop computer to retrieve data from the DUALEM-2 system. The horizontal coplanar (HCP) geometry, perpendicular coplanar (PRP) geometry and WTD were recorded simultaneously from each well before and after every significant rainfall for three consecutive days. Comprehensive surveys were conducted to measure apparent ground conductivity (ECa) with the DUALEM-2 and the corresponding locations of the sampling points using a Trimble Ag GPS 332. The regression model showed significant correlation between the HCP and WTD, with coefficients of determination of R² = 0.71 for Field 1 and R² = 0.53 for Field 2. Maps were generated in ArcGIS 10 software to examine the accuracy of predicted WTD in comparison with actual values. Results indicated that the DUALEM-2 system was efficient in mapping variation in WTD rapidly and reliably in a non-destructive fashion, rather than following the conventional way of repeated well drilling for WTD determination. This information could be used for measuring the depletion of WTD during dry periods (droughts) and for site-specific irrigation and drainage design, with the added advantage of labor and time savings when observing precision water management practices at large fields. 
 | 
	The Relative Oriented Clique Number of Triangle-Free Planar Graphs Is 10. A vertex subset R of an oriented graph $\overrightarrow{G}$ is a relative oriented clique if each pair of non-adjacent vertices of R is connected by a directed 2-path. The relative oriented clique number $\omega_{ro}(\overrightarrow{G})$ of $\overrightarrow{G}$ is the maximum value of $|R|$ where R is a relative oriented clique of $\overrightarrow{G}$. Given a family $\mathcal{F}$ of oriented graphs, the relative oriented clique number is $\omega_{ro}(\mathcal{F}) = \max\{\omega_{ro}(\overrightarrow{G}) : \overrightarrow{G} \in \mathcal{F}\}$. For the family $\mathcal{P}_4$ of oriented triangle-free planar graphs, it was conjectured that $\omega_{ro}(\mathcal{P}_4) = 10$. In this article, we prove the conjecture. 
 | 
					
	Laboratory Calibration and Performance Evaluation of Low-Cost Capacitive and Very Low-Cost Resistive Soil Moisture Sensors Soil volumetric water content (VWC) is a vital parameter for understanding several ecohydrological and environmental processes. Its cost-effective measurement can potentially drive various technological tools to promote data-driven sustainable agriculture through supplemental irrigation solutions, the lack of which has contributed to severe agricultural distress, particularly for smallholder farmers. The cost of commercially available VWC sensors varies over four orders of magnitude. A laboratory study characterizing and testing sensors from this wide range of cost categories, which is a prerequisite to exploring their applicability for irrigation management, has not been conducted. Within this context, two low-cost capacitive sensors—SMEC300 and SM100—manufactured by Spectrum Technologies Inc. (Aurora, IL, USA), and two very low-cost resistive sensors—the Soil Hygrometer Detection Module Soil Moisture Sensor (YL100) by Electronicfans and the Generic Soil Moisture Sensor Module (YL69) by KitsGuru—were tested for performance in laboratory conditions. Each sensor was calibrated in different repacked soils, and tested to evaluate accuracy, precision and sensitivity to variations in temperature and salinity. The capacitive sensors were additionally tested for their performance in liquids of known dielectric constants, and a comparative analysis of the calibration equations developed in-house and provided by the manufacturer was carried out. The value for money of the sensors is reflected in their precision performance, i.e., the precision performance largely follows sensor costs. The other aspects of sensor performance do not necessarily follow sensor costs. The low-cost capacitive sensors were more accurate than manufacturer specifications, and could match the performance of the secondary standard sensor after soil-specific calibration. The SMEC300 is accurate (MAE, RMSE, and RAE of 2.12%, 2.88% and 0.28, respectively), precise, and performed well considering its price as well as its multipurpose sensing capabilities. The less expensive SM100 sensor had better accuracy (MAE, RMSE, and RAE of 1.67%, 2.36% and 0.21, respectively) but poorer precision than the SMEC300. However, it was established as a robust, field-ready, low-cost sensor due to its more consistent performance in soils (particularly the field soil) and superior performance in fluids. Both capacitive sensors responded reasonably to variations in temperature and salinity conditions. Though the resistive sensors were less accurate and precise than the capacitive sensors, they performed well considering their cost category. The YL100 was more accurate (MAE, RMSE, and RAE of 3.51%, 5.21% and 0.37, respectively) than the YL69 (MAE, RMSE, and RAE of 4.13%, 5.54%, and 0.41, respectively). However, the YL69 outperformed the YL100 in terms of precision and response to temperature and salinity variations, emerging as the more robust resistive sensor. These very low-cost sensors may be used in combination with more accurate sensors to better characterize the spatiotemporal variability of field-scale soil moisture. The laboratory characterization conducted in this study is a prerequisite to estimating the effect of low- and very low-cost sensor measurements on the efficiency of soil-moisture-based irrigation scheduling systems. 
 | 
	Estimation of water table depth using DUALEM-2 system Abstract   Most agricultural fields are generally irrigated or drained uniformly, without considering the spatial and temporal variation in water table depth (WTD). Investigating WTD is important for scheduling irrigation, drainage system designs and water balance models. The objective of this study was to develop a software interface to estimate the variations in WTD via electromagnetic induction (EMI) methods using frequencies of a DUALEM-2 system. Two fields (Field 1: 45.38°N, 63.23°W and Field 2: 45.37°N, 63.25°W) were selected and thirty perforated observation wells were installed to calibrate the DUALEM-2 for predicting WTD. Boundaries of the selected sites and locations of the wells were marked using a real-time kinematics global positioning system (RTK-GPS). The user interface program was developed in Delphi 5.0 software and imported into a laptop computer to retrieve data from the DUALEM-2 system. The horizontal coplanar (HCP) geometry, perpendicular coplanar (PRP) geometry and WTD were recorded simultaneously from each well before and after every significant rainfall for three consecutive days. Comprehensive surveys were conducted to measure apparent ground conductivity (ECa) with the DUALEM-2 and the corresponding locations of the sampling points using a Trimble Ag GPS 332. The regression model showed significant correlation between the HCP and WTD, with coefficients of determination of R² = 0.71 for Field 1 and R² = 0.53 for Field 2. Maps were generated in ArcGIS 10 software to examine the accuracy of predicted WTD in comparison with actual values. Results indicated that the DUALEM-2 system was efficient in mapping variation in WTD rapidly and reliably in a non-destructive fashion, rather than following the conventional way of repeated well drilling for WTD determination. This information could be used for measuring the depletion of WTD during dry periods (droughts) and for site-specific irrigation and drainage design, with the added advantage of labor and time savings when observing precision water management practices at large fields. 
 | 
	Symmetric Simplicial Pseudoline Arrangements A simplicial arrangement of pseudolines is a collection of topological lines in the projective plane where each region that is formed is triangular. This paper refines and develops David Eppstein's notion of a kaleidoscope construction for symmetric pseudoline arrangements to construct and analyze several infinite families of simplicial pseudoline arrangements with high degrees of geometric symmetry. In particular, all simplicial pseudoline arrangements with the symmetries of a regular k-gon and three symmetry classes of pseudolines, consisting of the mirrors of the k-gon and two other symmetry classes, plus sometimes the line at infinity, are classified, and other interesting families (with more symmetry classes of pseudolines) are discussed. 
 | 
					
	Laboratory Calibration and Performance Evaluation of Low-Cost Capacitive and Very Low-Cost Resistive Soil Moisture Sensors Soil volumetric water content (VWC) is a vital parameter for understanding several ecohydrological and environmental processes. Its cost-effective measurement can potentially drive various technological tools to promote data-driven sustainable agriculture through supplemental irrigation solutions, the lack of which has contributed to severe agricultural distress, particularly for smallholder farmers. The cost of commercially available VWC sensors varies over four orders of magnitude. A laboratory study characterizing and testing sensors from this wide range of cost categories, which is a prerequisite to exploring their applicability for irrigation management, has not been conducted. Within this context, two low-cost capacitive sensors—SMEC300 and SM100—manufactured by Spectrum Technologies Inc. (Aurora, IL, USA), and two very low-cost resistive sensors—the Soil Hygrometer Detection Module Soil Moisture Sensor (YL100) by Electronicfans and the Generic Soil Moisture Sensor Module (YL69) by KitsGuru—were tested for performance in laboratory conditions. Each sensor was calibrated in different repacked soils, and tested to evaluate accuracy, precision and sensitivity to variations in temperature and salinity. The capacitive sensors were additionally tested for their performance in liquids of known dielectric constants, and a comparative analysis of the calibration equations developed in-house and provided by the manufacturer was carried out. The value for money of the sensors is reflected in their precision performance, i.e., the precision performance largely follows sensor costs. The other aspects of sensor performance do not necessarily follow sensor costs. The low-cost capacitive sensors were more accurate than manufacturer specifications, and could match the performance of the secondary standard sensor after soil-specific calibration. The SMEC300 is accurate (MAE, RMSE, and RAE of 2.12%, 2.88% and 0.28, respectively), precise, and performed well considering its price as well as its multipurpose sensing capabilities. The less expensive SM100 sensor had better accuracy (MAE, RMSE, and RAE of 1.67%, 2.36% and 0.21, respectively) but poorer precision than the SMEC300. However, it was established as a robust, field-ready, low-cost sensor due to its more consistent performance in soils (particularly the field soil) and superior performance in fluids. Both capacitive sensors responded reasonably to variations in temperature and salinity conditions. Though the resistive sensors were less accurate and precise than the capacitive sensors, they performed well considering their cost category. The YL100 was more accurate (MAE, RMSE, and RAE of 3.51%, 5.21% and 0.37, respectively) than the YL69 (MAE, RMSE, and RAE of 4.13%, 5.54%, and 0.41, respectively). However, the YL69 outperformed the YL100 in terms of precision and response to temperature and salinity variations, emerging as the more robust resistive sensor. These very low-cost sensors may be used in combination with more accurate sensors to better characterize the spatiotemporal variability of field-scale soil moisture. The laboratory characterization conducted in this study is a prerequisite to estimating the effect of low- and very low-cost sensor measurements on the efficiency of soil-moisture-based irrigation scheduling systems. 
 | 
	Estimation of water table depth using DUALEM-2 system Abstract   Most agricultural fields are generally irrigated or drained uniformly, without considering the spatial and temporal variation in water table depth (WTD). Investigating WTD is important for scheduling irrigation, drainage system designs and water balance models. The objective of this study was to develop a software interface to estimate the variations in WTD via electromagnetic induction (EMI) methods using frequencies of a DUALEM-2 system. Two fields (Field 1: 45.38°N, 63.23°W and Field 2: 45.37°N, 63.25°W) were selected and thirty perforated observation wells were installed to calibrate the DUALEM-2 for predicting WTD. Boundaries of the selected sites and locations of the wells were marked using a real-time kinematics global positioning system (RTK-GPS). The user interface program was developed in Delphi 5.0 software and imported into a laptop computer to retrieve data from the DUALEM-2 system. The horizontal coplanar (HCP) geometry, perpendicular coplanar (PRP) geometry and WTD were recorded simultaneously from each well before and after every significant rainfall for three consecutive days. Comprehensive surveys were conducted to measure apparent ground conductivity (ECa) with the DUALEM-2 and the corresponding locations of the sampling points using a Trimble Ag GPS 332. The regression model showed significant correlation between the HCP and WTD, with coefficients of determination of R² = 0.71 for Field 1 and R² = 0.53 for Field 2. Maps were generated in ArcGIS 10 software to examine the accuracy of predicted WTD in comparison with actual values. Results indicated that the DUALEM-2 system was efficient in mapping variation in WTD rapidly and reliably in a non-destructive fashion, rather than following the conventional way of repeated well drilling for WTD determination. This information could be used for measuring the depletion of WTD during dry periods (droughts) and for site-specific irrigation and drainage design, with the added advantage of labor and time savings when observing precision water management practices at large fields. 
 | 
	Extremal Problems for t-Partite and t-Colorable Hypergraphs Fix integers $t \ge r \ge 2$ and an r-uniform hypergraph F. We prove that the maximum number of edges in a t-partite r-uniform hypergraph on n vertices that contains no copy of F is $c_{t,F}\binom{n}{r} + o(n^r)$, where $c_{t,F}$ can be determined by a finite computation. We explicitly define a sequence $F_1, F_2, \ldots$ of r-uniform hypergraphs, and prove that the maximum number of edges in a t-chromatic r-uniform hypergraph on n vertices containing no copy of $F_i$ is $\alpha_{t,r,i}\binom{n}{r} + o(n^r)$, where $\alpha_{t,r,i}$ can be determined by a finite computation for each $i \ge 1$. In several cases, $\alpha_{t,r,i}$ is irrational. The main tool used in the proofs is the Lagrangian of a hypergraph. 
 | 
					
	Sensor-Based Daily Physical Activity: Towards Prediction of the Level of Concern about Falling in Peripheral Neuropathy Concern about falling is prevalent and increases the risk of falling in people with peripheral neuropathy (PN). However, the assessment of concern about falling relies on self-report surveys, and thus continuous monitoring has not been possible. We investigated the influence of concern about falling on sensor-based daily physical activity among people with PN. Forty-nine people with PN and various levels of concern about falling participated in this study. Physical activity outcomes were measured over a period of 48 hours using a validated chest-worn sensor. The level of concern about falling was assessed using the Falls Efficacy Scale-International (FES-I). The low-concern group spent approximately 80 min more in walking and approximately 100 min less in sitting/lying compared to the high-concern group. In addition, the low-concern group had approximately 50% more walking bouts and step counts compared to the high-concern group. Across all participants, the duration of walking bouts and total step counts were significantly correlated with FES-I scores. The duration of walking bouts and total step counts may serve as eHealth targets and strategies for fall risk assessment among people with PN. 
 | 
	Using Accelerometer and GPS Data for Real-Life Physical Activity Type Detection This paper aims to examine the role of global positioning system (GPS) sensor data in real-life physical activity (PA) type detection. Thirty-three young participants wore devices including GPS and accelerometer sensors on five body positions and performed daily PAs in two protocols, namely semi-structured and real-life. One general random forest (RF) model integrating data from all sensors and five individual RF models using data from each sensor position were trained using semi-structured (Scenario 1) and combined (semi-structured + real-life) data (Scenario 2). The results showed that, in general, adding GPS features (speed and elevation difference) to accelerometer data improves classification performance, particularly for detecting non-level and level walking. Assessing the transferability of the models on real-life data showed that models from Scenario 2 are strongly transferable, particularly when adding GPS data to the training data. Comparing individual models indicated that knee models provide comparable classification performance (above 80%) to general models in both scenarios. In conclusion, adding GPS data improves real-life PA type classification performance if combined data are used for training the model. Moreover, the knee model provides the minimal device configuration with reliable accuracy for detecting real-life PA types. 
 | 
	Unmanned agricultural product sales system The invention relates to the field of agricultural product sales, provides an unmanned agricultural product sales system, and aims to solve the problem of agricultural product waste caused by the fact that, at present, most farmers can only prepare goods according to guesswork and experience when selling agricultural products. The unmanned agricultural product sales system comprises an acquisition module for acquiring selection information of customers; a storage module which pre-stores vegetable preparation schemes; a matching module which is used for matching a corresponding side dish scheme from the storage module according to the selection information of the client; a pushing module which is used for pushing the matched side dish scheme back to the client, wherein the acquisition module is also used for acquiring confirmation information of the client; an order module which is used for generating order information according to the confirmation information of the client, wherein the pushing module is used for pushing the order information to the client and the seller, and the acquisition module is also used for acquiring the delivery information of the seller; and a logistics tracking module which is used for tracking the delivery information to obtain logistics information, wherein the pushing module is used for pushing the logistics information to the client. The scheme is used for sales at unmanned agricultural product shops. 
 | 
					
	Sensor-Based Daily Physical Activity: Towards Prediction of the Level of Concern about Falling in Peripheral Neuropathy Concern about falling is prevalent and increases the risk of falling in people with peripheral neuropathy (PN). However, the assessment of concern about falling relies on self-report surveys, and thus continuous monitoring has not been possible. We investigated the influence of concern about falling on sensor-based daily physical activity among people with PN. Forty-nine people with PN and various levels of concern about falling participated in this study. Physical activity outcomes were measured over a period of 48 hours using a validated chest-worn sensor. The level of concern about falling was assessed using the Falls Efficacy Scale-International (FES-I). The low-concern group spent approximately 80 min more in walking and approximately 100 min less in sitting/lying compared to the high-concern group. In addition, the low-concern group had approximately 50% more walking bouts and step counts compared to the high-concern group. Across all participants, the duration of walking bouts and total step counts were significantly correlated with FES-I scores. The duration of walking bouts and total step counts may serve as eHealth targets and strategies for fall risk assessment among people with PN. 
 | 
	Using Accelerometer and GPS Data for Real-Life Physical Activity Type Detection This paper aims to examine the role of global positioning system (GPS) sensor data in real-life physical activity (PA) type detection. Thirty-three young participants wore devices including GPS and accelerometer sensors on five body positions and performed daily PAs in two protocols, namely semi-structured and real-life. One general random forest (RF) model integrating data from all sensors and five individual RF models using data from each sensor position were trained using semi-structured (Scenario 1) and combined (semi-structured + real-life) data (Scenario 2). The results showed that, in general, adding GPS features (speed and elevation difference) to accelerometer data improves classification performance, particularly for detecting non-level and level walking. Assessing the transferability of the models on real-life data showed that models from Scenario 2 are strongly transferable, particularly when adding GPS data to the training data. Comparing individual models indicated that knee models provide comparable classification performance (above 80%) to general models in both scenarios. In conclusion, adding GPS data improves real-life PA type classification performance if combined data are used for training the model. Moreover, the knee model provides the minimal device configuration with reliable accuracy for detecting real-life PA types. 
 | 
	Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide the framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation in order to gain competitive advantage, or position themselves strategically to reach the spawned power-ups first. 
 | 
					
	Sensor-Based Daily Physical Activity: Towards Prediction of the Level of Concern about Falling in Peripheral Neuropathy Concern about falling is prevalent and increases the risk of falling in people with peripheral neuropathy (PN). However, the assessment of concern about falling relies on self-report surveys, and thus continuous monitoring has not been possible. We investigated the influence of concern about falling on sensor-based daily physical activity among people with PN. Forty-nine people with PN and various levels of concern about falling participated in this study. Physical activity outcomes were measured over a period of 48 hours using a validated chest-worn sensor. The level of concern about falling was assessed using the Falls Efficacy Scale-International (FES-I). The low-concern group spent approximately 80 min more in walking and approximately 100 min less in sitting/lying compared to the high-concern group. In addition, the low-concern group had approximately 50% more walking bouts and step counts compared to the high-concern group. Across all participants, the duration of walking bouts and total step counts were significantly correlated with FES-I scores. The duration of walking bouts and total step counts may serve as eHealth targets and strategies for fall risk assessment among people with PN. 
 | 
	Using Accelerometer and GPS Data for Real-Life Physical Activity Type Detection This paper aims to examine the role of global positioning system (GPS) sensor data in real-life physical activity (PA) type detection. Thirty-three young participants wore devices including GPS and accelerometer sensors on five body positions and performed daily PAs in two protocols, namely semi-structured and real-life. One general random forest (RF) model integrating data from all sensors and five individual RF models using data from each sensor position were trained using semi-structured (Scenario 1) and combined (semi-structured + real-life) data (Scenario 2). The results showed that, in general, adding GPS features (speed and elevation difference) to accelerometer data improves classification performance, particularly for detecting non-level and level walking. Assessing the transferability of the models on real-life data showed that models from Scenario 2 are strongly transferable, particularly when adding GPS data to the training data. Comparing individual models indicated that knee models provide comparable classification performance (above 80%) to general models in both scenarios. In conclusion, adding GPS data improves real-life PA type classification performance if combined data are used for training the model. Moreover, the knee model provides the minimal device configuration with reliable accuracy for detecting real-life PA types. 
 | 
	Classifying unavoidable Tverberg partitions Let $T(d,r) = (r-1)(d+1)+1$ be the parameter in Tverberg's theorem, and call a partition $\mathcal{I}$ of $\{1,2,\ldots,T(d,r)\}$ into r parts a Tverberg type. We say that $\mathcal{I}$ occurs in an ordered point sequence P if P contains a subsequence $P'$ of $T(d,r)$ points such that the partition of $P'$ that is order-isomorphic to $\mathcal{I}$ is a Tverberg partition. We say that $\mathcal{I}$ is unavoidable if it occurs in every sufficiently long point sequence. In this paper we study the problem of determining which Tverberg types are unavoidable. We conjecture a complete characterization of the unavoidable Tverberg types, and we prove some cases of our conjecture for $d \le 4$. Along the way, we study the avoidability of many other geometric predicates. Our techniques also yield a large family of $T(d,r)$-point sets for which the number of Tverberg partitions is exactly $((r-1)!)^d$. This lends further support to Sierksma's conjecture on the number of Tverberg partitions. 
 | 
					
	Sensor-Based Daily Physical Activity: Towards Prediction of the Level of Concern about Falling in Peripheral Neuropathy Concern about falling is prevalent and increases the risk of falling in people with peripheral neuropathy (PN). However, the assessment of concern about falling relies on self-report surveys, and thus continuous monitoring has not been possible. We investigated the influence of concern about falling on sensor-based daily physical activity among people with PN. Forty-nine people with PN and various levels of concern about falling participated in this study. Physical activity outcomes were measured over a period of 48 hours using a validated chest-worn sensor. The level of concern about falling was assessed using the Falls Efficacy Scale-International (FES-I). The low-concern group spent approximately 80 min more in walking and approximately 100 min less in sitting/lying compared to the high-concern group. In addition, the low-concern group had approximately 50% more walking bouts and step counts compared to the high-concern group. Across all participants, the duration of walking bouts and total step counts were significantly correlated with FES-I scores. The duration of walking bouts and total step counts may serve as eHealth targets and strategies for fall risk assessment among people with PN. 
 | 
	Using Accelerometer and GPS Data for Real-Life Physical Activity Type Detection This paper aims to examine the role of global positioning system (GPS) sensor data in real-life physical activity (PA) type detection. Thirty-three young participants wore devices including GPS and accelerometer sensors on five body positions and performed daily PAs in two protocols, namely semi-structured and real-life. One general random forest (RF) model integrating data from all sensors and five individual RF models using data from each sensor position were trained using semi-structured (Scenario 1) and combined (semi-structured + real-life) data (Scenario 2). The results showed that, in general, adding GPS features (speed and elevation difference) to accelerometer data improves classification performance, particularly for detecting non-level and level walking. Assessing the transferability of the models on real-life data showed that models from Scenario 2 are strongly transferable, particularly when adding GPS data to the training data. Comparing individual models indicated that knee models provide comparable classification performance (above 80%) to general models in both scenarios. In conclusion, adding GPS data improves real-life PA type classification performance if combined data are used for training the model. Moreover, the knee model provides the minimal device configuration with reliable accuracy for detecting real-life PA types. 
 | 
	Symmetric Simplicial Pseudoline Arrangements A simplicial arrangement of pseudolines is a collection of topological lines in the projective plane where each region that is formed is triangular. This paper refines and develops David Eppstein's notion of a kaleidoscope construction for symmetric pseudoline arrangements to construct and analyze several infinite families of simplicial pseudoline arrangements with high degrees of geometric symmetry. In particular, all simplicial pseudoline arrangements with the symmetries of a regular k-gon and three symmetry classes of pseudolines, consisting of the mirrors of the k-gon and two other symmetry classes, plus sometimes the line at infinity, are classified, and other interesting families (with more symmetry classes of pseudolines) are discussed. 
 | 
					
	Sensor-Based Daily Physical Activity: Towards Prediction of the Level of Concern about Falling in Peripheral Neuropathy Concern about falling is prevalent and increases the risk of falling in people with peripheral neuropathy (PN). However, the assessment of concern about falling relies on self-report surveys, and thus continuous monitoring has not been possible. We investigated the influence of concern about falling on sensor-based daily physical activity among people with PN. Forty-nine people with PN and various levels of concern about falling participated in this study. Physical activity outcomes were measured over a period of 48 hours using a validated chest-worn sensor. The level of concern about falling was assessed using the Falls Efficacy Scale-International (FES-I). The low-concern group spent approximately 80 min more in walking and approximately 100 min less in sitting/lying compared to the high-concern group. In addition, the low-concern group had approximately 50% more walking bouts and step counts compared to the high-concern group. Across all participants, the duration of walking bouts and total step counts were significantly correlated with FES-I scores. The duration of walking bouts and total step counts may serve as eHealth targets and strategies for fall risk assessment among people with PN. 
 | 
	Using Accelerometer and GPS Data for Real-Life Physical Activity Type Detection This paper aims to examine the role of global positioning system (GPS) sensor data in real-life physical activity (PA) type detection. Thirty-three young participants wore devices including GPS and accelerometer sensors on five body positions and performed daily PAs in two protocols, namely semi-structured and real-life. One general random forest (RF) model integrating data from all sensors and five individual RF models using data from each sensor position were trained using semi-structured (Scenario 1) and combined (semi-structured + real-life) data (Scenario 2). The results showed that, in general, adding GPS features (speed and elevation difference) to accelerometer data improves classification performance, particularly for detecting non-level and level walking. Assessing the transferability of the models on real-life data showed that models from Scenario 2 are strongly transferable, particularly when adding GPS data to the training data. Comparing individual models indicated that knee models provide comparable classification performance (above 80%) to general models in both scenarios. In conclusion, adding GPS data improves real-life PA type classification performance if combined data are used for training the model. Moreover, the knee model provides the minimal device configuration with reliable accuracy for detecting real-life PA types. 
 | 
	Extremal Problems for t-Partite and t-Colorable Hypergraphs Fix integers $t \ge r \ge 2$ and an r-uniform hypergraph F. We prove that the maximum number of edges in a t-partite r-uniform hypergraph on n vertices that contains no copy of F is $c_{t,F}\binom{n}{r} + o(n^r)$, where $c_{t,F}$ can be determined by a finite computation. We explicitly define a sequence $F_1, F_2, \ldots$ of r-uniform hypergraphs, and prove that the maximum number of edges in a t-chromatic r-uniform hypergraph on n vertices containing no copy of $F_i$ is $\alpha_{t,r,i}\binom{n}{r} + o(n^r)$, where $\alpha_{t,r,i}$ can be determined by a finite computation for each $i \ge 1$. In several cases, $\alpha_{t,r,i}$ is irrational. The main tool used in the proofs is the Lagrangian of a hypergraph. 
 | 
					
	Bitcoin Fee Decisions in Transaction Confirmation Queueing Games Under Limited Multi-Priority Rule In the Bitcoin system, transaction fees serve not only as the fundamental economic incentive to stimulate miners, but also as an important tuner for the Bitcoin system to define the priorities in the transaction confirmation process. In this paper, we aim to study the priority rules for queueing transactions based on their associated fees, and in turn users' decision-making in formulating their fees in the transaction confirmation queueing game. Based on queueing theory, we first analyze the waiting time of users under the non-preemptive limited multi-priority (LMP) rule, which is formulated to adjust users' waiting time over different priorities. We then establish a game-theoretical model and analyze users' equilibrium fee decisions. Towards the end, we conduct computational experiments to validate the theoretical analysis. Our research findings can not only help understand users' fee decisions under the LMP rule, but also offer useful managerial insights for optimizing the queueing rules of Bitcoin transactions. 
 | 
	Selfish Mining in Ethereum As the second largest cryptocurrency by market capitalization and today's biggest decentralized platform that runs smart contracts, Ethereum has received much attention from both industry and academia. Nevertheless, there exist very few studies about the security of its mining strategies, especially from the selfish mining perspective. In this paper, we aim to fill this research gap by analyzing selfish mining in Ethereum and understanding its potential threat. First, we introduce a 2-dimensional Markov process to model the behavior of a selfish mining strategy inspired by a Bitcoin mining strategy proposed by Eyal and Sirer. Second, we derive the stationary distribution of our Markov model and compute long-term average mining rewards. This allows us to determine the threshold of computational power that makes selfish mining profitable in Ethereum. We find that this threshold is lower than that in Bitcoin mining (which is 25%, as discovered by Eyal and Sirer), suggesting that Ethereum is more vulnerable to selfish mining than Bitcoin. 
 | 
	The long-term effect of media violence exposure on aggression of youngsters Abstract   The effect of media violence on aggression has always been a trending issue, and a better understanding of the psychological mechanism of the impact of media violence on youth aggression is an extremely important research topic for preventing the negative impacts of media violence and juvenile delinquency. From the perspective of anger, this study explored the long-term effect of different degrees of media violence exposure on the aggression of youngsters, as well as the role of aggressive emotions. The studies found that individuals with a high degree of media violence exposure (HMVE) exhibited higher levels of proactive aggression in both irritation situations, and higher levels of reactive aggression in low-irritation situations, than did participants with a low degree of media violence exposure (LMVE). After being provoked, the anger of all participants was significantly increased, and the anger and proactive aggression levels of the HMVE group were significantly higher than those of the LMVE group. Additionally, rumination and anger played a mediating role in the relationship between media violence exposure and aggression. Overall, this study enriches the theoretical understanding of the long-term effect of media violence exposure on individual aggression, and deepens our understanding of the relatively new and relevant mechanism between media violence exposure and individual aggression. 
 | 
					
	Bitcoin Fee Decisions in Transaction Confirmation Queueing Games Under Limited Multi-Priority Rule In the Bitcoin system, transaction fees serve not only as the fundamental economic incentive to stimulate miners, but also as an important tuner for the Bitcoin system to define the priorities in the transaction confirmation process. In this paper, we aim to study the priority rules for queueing transactions based on their associated fees, and in turn users' decision-making in formulating their fees in the transaction confirmation queueing game. Based on queueing theory, we first analyze the waiting time of users under the non-preemptive limited multi-priority (LMP) rule, which is formulated to adjust users' waiting time over different priorities. We then establish a game-theoretical model and analyze users' equilibrium fee decisions. Towards the end, we conduct computational experiments to validate the theoretical analysis. Our research findings can not only help understand users' fee decisions under the LMP rule, but also offer useful managerial insights for optimizing the queueing rules of Bitcoin transactions. 
 | 
	Selfish Mining in Ethereum As the second largest cryptocurrency by market capitalization and today's biggest decentralized platform that runs smart contracts, Ethereum has received much attention from both industry and academia. Nevertheless, there exist very few studies about the security of its mining strategies, especially from the selfish mining perspective. In this paper, we aim to fill this research gap by analyzing selfish mining in Ethereum and understanding its potential threat. First, we introduce a 2-dimensional Markov process to model the behavior of a selfish mining strategy inspired by a Bitcoin mining strategy proposed by Eyal and Sirer. Second, we derive the stationary distribution of our Markov model and compute long-term average mining rewards. This allows us to determine the threshold of computational power that makes selfish mining profitable in Ethereum. We find that this threshold is lower than that in Bitcoin mining (which is 25%, as discovered by Eyal and Sirer), suggesting that Ethereum is more vulnerable to selfish mining than Bitcoin. 
 | 
	Air-Coupled Reception of a Slow Ultrasonic A0 Mode Wave Propagating in Thin Plastic Film At low frequencies, the phase velocity of the guided A0 mode in thin plates can become slower than the ultrasound velocity in air. Such waves do not excite leaky waves in the surrounding air, and therefore it is impossible to excite and receive them by conventional air-coupled methods. The objective of this research was the development of an air-coupled technique for the reception of the slow A0 mode in thin plastic films. This study demonstrates the feasibility of picking up a subsonic A0 mode in plastic films by air-coupled ultrasonic arrays. The air-coupled reception was based on an evanescent wave in air accompanying the propagating A0 mode in a film. The efficiency of the reception was enhanced by using a virtual array, which was arranged from the data collected by a single air-coupled receiver. The signals measured at the points corresponding to the positions of the phase-matched array were recorded and processed. The transmitting array excited not only the A0 mode in the film, but also a direct wave in air. This wave propagated at the ultrasound velocity in air and was faster than the evanescent wave. For efficient reception of the A0 mode, an additional signal-processing procedure based on the application of the 2D Fourier transform in the spatial–temporal domain was applied. The obtained results can be useful for the development of novel air-coupled ultrasonic non-destructive testing techniques. 
 | 
					