| anchor (string, lengths 579–4.42k) | positive (string, lengths 574–3.84k) | negative (string, 619 classes) |
|---|---|---|
| Bitcoin Fee Decisions in Transaction Confirmation Queueing Games Under the Limited Multi-Priority Rule. In the Bitcoin system, transaction fees serve not only as the fundamental economic incentive to stimulate miners, but also as an important tuner with which the Bitcoin system defines priorities in the transaction confirmation process. In this paper, we study the priority rules for queueing transactions based on their associated fees, and in turn users' decision-making in formulating their fees in the transaction confirmation queueing game. Based on queueing theory, we first analyze the waiting time of users under the non-preemptive limited multi-priority (LMP) rule, which is formulated to adjust users' waiting times across different priorities. We then establish a game-theoretical model and analyze users' equilibrium fee decisions. Finally, we conduct computational experiments to validate the theoretical analysis. Our findings not only help explain users' fee decisions under the LMP rule, but also offer useful managerial insights for optimizing the queueing rules of Bitcoin transactions. | Selfish Mining in Ethereum. As the second-largest cryptocurrency by market capitalization and today's biggest decentralized platform for running smart contracts, Ethereum has received much attention from both industry and academia. Nevertheless, there exist very few studies of the security of its mining strategies, especially from the selfish-mining perspective. In this paper, we aim to fill this research gap by analyzing selfish mining in Ethereum and understanding its potential threat. First, we introduce a 2-dimensional Markov process to model the behavior of a selfish mining strategy inspired by the Bitcoin mining strategy proposed by Eyal and Sirer. Second, we derive the stationary distribution of our Markov model and compute the long-term average mining rewards. This allows us to determine the threshold of computational power that makes selfish mining profitable in Ethereum. We find that this threshold is lower than that in Bitcoin mining (which is 25%, as discovered by Eyal and Sirer), suggesting that Ethereum is more vulnerable to selfish mining than Bitcoin. | Instrument Design and Performance of the High-Frequency Airborne Microwave and Millimeter-Wave Radiometer. The high-frequency airborne microwave and millimeter-wave radiometer (HAMMR) is a cross-track scanning airborne radiometer instrument with 25 channels from 18.7 to 183.3 GHz. HAMMR includes: low-frequency microwave channels at 18.7, 23.8, and 34.0 GHz at two linear-orthogonal polarizations; high-frequency millimeter-wave channels at 90, 130, and 168 GHz; and millimeter-wave sounding channels consisting of eight channels near the 118.75 GHz oxygen absorption line for temperature profiling and eight additional channels near the 183.31 GHz water vapor absorption line for water vapor profiling. HAMMR was deployed on a Twin Otter aircraft for a west coast flight campaign (WCFC) from November 4–17, 2014. During the WCFC, HAMMR collected radiometric observations for more than 53.5 h under diverse atmospheric conditions, including clear sky and scattered and dense clouds, and over a variety of surface types, including coastal ocean areas, inland water, and land. These measurements provide a comprehensive dataset for validating the instrument. |
|
| Bitcoin Fee Decisions in Transaction Confirmation Queueing Games Under the Limited Multi-Priority Rule. In the Bitcoin system, transaction fees serve not only as the fundamental economic incentive to stimulate miners, but also as an important tuner with which the Bitcoin system defines priorities in the transaction confirmation process. In this paper, we study the priority rules for queueing transactions based on their associated fees, and in turn users' decision-making in formulating their fees in the transaction confirmation queueing game. Based on queueing theory, we first analyze the waiting time of users under the non-preemptive limited multi-priority (LMP) rule, which is formulated to adjust users' waiting times across different priorities. We then establish a game-theoretical model and analyze users' equilibrium fee decisions. Finally, we conduct computational experiments to validate the theoretical analysis. Our findings not only help explain users' fee decisions under the LMP rule, but also offer useful managerial insights for optimizing the queueing rules of Bitcoin transactions. | Selfish Mining in Ethereum. As the second-largest cryptocurrency by market capitalization and today's biggest decentralized platform for running smart contracts, Ethereum has received much attention from both industry and academia. Nevertheless, there exist very few studies of the security of its mining strategies, especially from the selfish-mining perspective. In this paper, we aim to fill this research gap by analyzing selfish mining in Ethereum and understanding its potential threat. First, we introduce a 2-dimensional Markov process to model the behavior of a selfish mining strategy inspired by the Bitcoin mining strategy proposed by Eyal and Sirer. Second, we derive the stationary distribution of our Markov model and compute the long-term average mining rewards. This allows us to determine the threshold of computational power that makes selfish mining profitable in Ethereum. We find that this threshold is lower than that in Bitcoin mining (which is 25%, as discovered by Eyal and Sirer), suggesting that Ethereum is more vulnerable to selfish mining than Bitcoin. | Erkundung und Erforschung. Alexander von Humboldts Amerikareise (Exploration and Research: Alexander von Humboldt's American Journey). Summary: Much like Adalbert Stifter's narrator in the novel "Nachsommer", A. v. Humboldt combined exploration and research, the pleasure of travel and the pursuit of knowledge, on his American journey. Humboldt clearly stated his twofold aim: to make the visited countries known, and to collect facts for extending physical geography. The essay is organized into five sections: aims, route, methods, results, evaluation. Abstract: In a similar way to Adalbert Stifter's narrator in the novel "Late Summer", A. v. Humboldt combined exploration with research, and fondness for travelling with striving for findings, during his travels through South America. Humboldt clearly indicated his double aim: to report on the visited countries, and to collect facts in order to improve physical geography. The treatise consists of five sections: object, route, methods, results, evaluation. |
|
| Fractional meshfree linear diffusion method for image enhancement and segmentation for automatic tumor classification. Abstract: Computer-aided diagnostic (CAD) models have shown outstanding performance in identifying many kinds of diseases, and tumor identification is one of the most useful applications of CAD models. Benign and malignant are the two categories of tumor cells, and the two categories share some textural features, which makes tumor classification a complex and difficult task. In this manuscript, a novel CAD model is presented for classifying tumor cells automatically. The proposed model is divided into four modules (image enhancement, segmentation, feature extraction, and classification), all of which are equally important in tumor classification; the manuscript focuses on the effect of the first two modules, namely image enhancement and segmentation, on tumor classification. A novel linear fractional meshfree partial differential equation (FPDE) based image enhancement method is proposed to improve image quality; the proposed enhancement model preserves fine details in smooth areas and also nonlinearly increases high-frequency information. A novel fractional meshfree segmentation method is proposed to extract the tumor region, and it is found to segment the tumor region more accurately. Thirteen textural features are used for training and testing, and a support vector machine (SVM) classifier is used to classify the extracted tumor region. Quantitative analysis of the proposed image enhancement, segmentation, and CAD models against other popular models shows higher performance for the proposed models. | A Background-based Data Enhancement Method for Lymphoma Segmentation in 3D PET Images. Due to the poor resolution and low signal-to-noise ratio of PET images, and especially to the wide variation in size, shape, site, and SUV value among different patients, or even for the same patient, lymphoma segmentation in 3D PET images is still a challenging task in the field of medical image processing. In this work, a novel non-self background-based data enhancement method is proposed for the deep-learning-based lymphoma segmentation problem. Firstly, a lymphoma pool with 1991 lymphoid lesions is created. Then, when training networks, some lymphomas from the lymphoma pool are randomly selected and integrated into non-self images of the patients according to their respective coordinates. Finally, a series of comparison experiments among various network models and methods is conducted to verify the effectiveness of the proposed method. The results indicate that the proposed method is promising, and that it obtains better overall performance than other methods without any data enhancement on the lymphoma segmentation problem. | Death Ground. Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps, and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment, such as platforms, obstructions, and elevation, to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first. |
|
| Fractional meshfree linear diffusion method for image enhancement and segmentation for automatic tumor classification. Abstract: Computer-aided diagnostic (CAD) models have shown outstanding performance in identifying many kinds of diseases, and tumor identification is one of the most useful applications of CAD models. Benign and malignant are the two categories of tumor cells, and the two categories share some textural features, which makes tumor classification a complex and difficult task. In this manuscript, a novel CAD model is presented for classifying tumor cells automatically. The proposed model is divided into four modules (image enhancement, segmentation, feature extraction, and classification), all of which are equally important in tumor classification; the manuscript focuses on the effect of the first two modules, namely image enhancement and segmentation, on tumor classification. A novel linear fractional meshfree partial differential equation (FPDE) based image enhancement method is proposed to improve image quality; the proposed enhancement model preserves fine details in smooth areas and also nonlinearly increases high-frequency information. A novel fractional meshfree segmentation method is proposed to extract the tumor region, and it is found to segment the tumor region more accurately. Thirteen textural features are used for training and testing, and a support vector machine (SVM) classifier is used to classify the extracted tumor region. Quantitative analysis of the proposed image enhancement, segmentation, and CAD models against other popular models shows higher performance for the proposed models. | A Background-based Data Enhancement Method for Lymphoma Segmentation in 3D PET Images. Due to the poor resolution and low signal-to-noise ratio of PET images, and especially to the wide variation in size, shape, site, and SUV value among different patients, or even for the same patient, lymphoma segmentation in 3D PET images is still a challenging task in the field of medical image processing. In this work, a novel non-self background-based data enhancement method is proposed for the deep-learning-based lymphoma segmentation problem. Firstly, a lymphoma pool with 1991 lymphoid lesions is created. Then, when training networks, some lymphomas from the lymphoma pool are randomly selected and integrated into non-self images of the patients according to their respective coordinates. Finally, a series of comparison experiments among various network models and methods is conducted to verify the effectiveness of the proposed method. The results indicate that the proposed method is promising, and that it obtains better overall performance than other methods without any data enhancement on the lymphoma segmentation problem. | On the Round Complexity of Randomized Byzantine Agreement. We prove lower bounds on the round complexity of randomized Byzantine agreement (BA) protocols, bounding the halting probability of such protocols after one and two rounds. In particular, we prove that: (1) BA protocols resilient against n/3 [resp., n/4] corruptions terminate (under attack) at the end of the first round with probability at most o(1) [resp., 1/2 + o(1)]; (2) BA protocols resilient against n/4 corruptions terminate at the end of the second round with probability at most 1 − Θ(1); (3) for a large class of protocols (including all BA protocols used in practice) and under a plausible combinatorial conjecture, BA protocols resilient against n/3 [resp., n/4] corruptions terminate at the end of the second round with probability at most o(1) [resp., 1/2 + o(1)]. The above bounds hold even when the parties use a trusted setup phase, e.g., a public-key infrastructure (PKI). The third bound essentially matches the recent protocol of Micali (ITCS'17) that tolerates up to n/3 corruptions and terminates at the end of the third round with constant probability. |
|
| Fractional meshfree linear diffusion method for image enhancement and segmentation for automatic tumor classification. Abstract: Computer-aided diagnostic (CAD) models have shown outstanding performance in identifying many kinds of diseases, and tumor identification is one of the most useful applications of CAD models. Benign and malignant are the two categories of tumor cells, and the two categories share some textural features, which makes tumor classification a complex and difficult task. In this manuscript, a novel CAD model is presented for classifying tumor cells automatically. The proposed model is divided into four modules (image enhancement, segmentation, feature extraction, and classification), all of which are equally important in tumor classification; the manuscript focuses on the effect of the first two modules, namely image enhancement and segmentation, on tumor classification. A novel linear fractional meshfree partial differential equation (FPDE) based image enhancement method is proposed to improve image quality; the proposed enhancement model preserves fine details in smooth areas and also nonlinearly increases high-frequency information. A novel fractional meshfree segmentation method is proposed to extract the tumor region, and it is found to segment the tumor region more accurately. Thirteen textural features are used for training and testing, and a support vector machine (SVM) classifier is used to classify the extracted tumor region. Quantitative analysis of the proposed image enhancement, segmentation, and CAD models against other popular models shows higher performance for the proposed models. | A Background-based Data Enhancement Method for Lymphoma Segmentation in 3D PET Images. Due to the poor resolution and low signal-to-noise ratio of PET images, and especially to the wide variation in size, shape, site, and SUV value among different patients, or even for the same patient, lymphoma segmentation in 3D PET images is still a challenging task in the field of medical image processing. In this work, a novel non-self background-based data enhancement method is proposed for the deep-learning-based lymphoma segmentation problem. Firstly, a lymphoma pool with 1991 lymphoid lesions is created. Then, when training networks, some lymphomas from the lymphoma pool are randomly selected and integrated into non-self images of the patients according to their respective coordinates. Finally, a series of comparison experiments among various network models and methods is conducted to verify the effectiveness of the proposed method. The results indicate that the proposed method is promising, and that it obtains better overall performance than other methods without any data enhancement on the lymphoma segmentation problem. | Pooled Mining is Driving Blockchains Toward Centralized Systems. The decentralization property of blockchains stems from the fact that each miner accepts or refuses transactions and blocks based on its own verification results. However, pooled mining causes blockchains to evolve into centralized systems, because pool participants delegate their decision-making rights to pool managers. In this paper, we established and validated a model for Proof-of-Work mining, introduced the concept of equivalent blocks, and quantitatively derived that pooling effectively lowers the income variance of miners. We also analyzed Bitcoin and Ethereum data to show that pooled mining has become prevalent in the real world: the percentage of pool-mined blocks increased from 49.91% to 91.12% within four months in Bitcoin, and from 76.9% to 92.2% within five months in Ethereum. In July 2018, Bitcoin and Ethereum mining were dominated by only six and five pools, respectively. |
|
| Fractional meshfree linear diffusion method for image enhancement and segmentation for automatic tumor classification. Abstract: Computer-aided diagnostic (CAD) models have shown outstanding performance in identifying many kinds of diseases, and tumor identification is one of the most useful applications of CAD models. Benign and malignant are the two categories of tumor cells, and the two categories share some textural features, which makes tumor classification a complex and difficult task. In this manuscript, a novel CAD model is presented for classifying tumor cells automatically. The proposed model is divided into four modules (image enhancement, segmentation, feature extraction, and classification), all of which are equally important in tumor classification; the manuscript focuses on the effect of the first two modules, namely image enhancement and segmentation, on tumor classification. A novel linear fractional meshfree partial differential equation (FPDE) based image enhancement method is proposed to improve image quality; the proposed enhancement model preserves fine details in smooth areas and also nonlinearly increases high-frequency information. A novel fractional meshfree segmentation method is proposed to extract the tumor region, and it is found to segment the tumor region more accurately. Thirteen textural features are used for training and testing, and a support vector machine (SVM) classifier is used to classify the extracted tumor region. Quantitative analysis of the proposed image enhancement, segmentation, and CAD models against other popular models shows higher performance for the proposed models. | A Background-based Data Enhancement Method for Lymphoma Segmentation in 3D PET Images. Due to the poor resolution and low signal-to-noise ratio of PET images, and especially to the wide variation in size, shape, site, and SUV value among different patients, or even for the same patient, lymphoma segmentation in 3D PET images is still a challenging task in the field of medical image processing. In this work, a novel non-self background-based data enhancement method is proposed for the deep-learning-based lymphoma segmentation problem. Firstly, a lymphoma pool with 1991 lymphoid lesions is created. Then, when training networks, some lymphomas from the lymphoma pool are randomly selected and integrated into non-self images of the patients according to their respective coordinates. Finally, a series of comparison experiments among various network models and methods is conducted to verify the effectiveness of the proposed method. The results indicate that the proposed method is promising, and that it obtains better overall performance than other methods without any data enhancement on the lymphoma segmentation problem. | Trust Degree Calculation Method Based on Trust Blockchain Node. Due to the diversity and mobility of blockchain network nodes and the decentralized nature of blockchain networks, traditional trust-value evaluation indicators cannot be used directly. In order to obtain trusted nodes, a trustworthiness calculation method based on trust blockchain nodes is proposed. Unlike traditional P2P network trust-value calculation, the trust blockchain not only acquires the working state of a node, but also collects the node's special behavior information, and it synthesizes the trust value generated by the node's transactions with the trust value generated by the node's behavior, taking the joining time into account. After comprehensive evaluation with an attenuation factor, trusted nodes are selected, which effectively ensures the security of the blockchain network environment while reducing the average transaction delay and increasing the block rate. |
|
| Fractional meshfree linear diffusion method for image enhancement and segmentation for automatic tumor classification. Abstract: Computer-aided diagnostic (CAD) models have shown outstanding performance in identifying many kinds of diseases, and tumor identification is one of the most useful applications of CAD models. Benign and malignant are the two categories of tumor cells, and the two categories share some textural features, which makes tumor classification a complex and difficult task. In this manuscript, a novel CAD model is presented for classifying tumor cells automatically. The proposed model is divided into four modules (image enhancement, segmentation, feature extraction, and classification), all of which are equally important in tumor classification; the manuscript focuses on the effect of the first two modules, namely image enhancement and segmentation, on tumor classification. A novel linear fractional meshfree partial differential equation (FPDE) based image enhancement method is proposed to improve image quality; the proposed enhancement model preserves fine details in smooth areas and also nonlinearly increases high-frequency information. A novel fractional meshfree segmentation method is proposed to extract the tumor region, and it is found to segment the tumor region more accurately. Thirteen textural features are used for training and testing, and a support vector machine (SVM) classifier is used to classify the extracted tumor region. Quantitative analysis of the proposed image enhancement, segmentation, and CAD models against other popular models shows higher performance for the proposed models. | A Background-based Data Enhancement Method for Lymphoma Segmentation in 3D PET Images. Due to the poor resolution and low signal-to-noise ratio of PET images, and especially to the wide variation in size, shape, site, and SUV value among different patients, or even for the same patient, lymphoma segmentation in 3D PET images is still a challenging task in the field of medical image processing. In this work, a novel non-self background-based data enhancement method is proposed for the deep-learning-based lymphoma segmentation problem. Firstly, a lymphoma pool with 1991 lymphoid lesions is created. Then, when training networks, some lymphomas from the lymphoma pool are randomly selected and integrated into non-self images of the patients according to their respective coordinates. Finally, a series of comparison experiments among various network models and methods is conducted to verify the effectiveness of the proposed method. The results indicate that the proposed method is promising, and that it obtains better overall performance than other methods without any data enhancement on the lymphoma segmentation problem. | A Case for Dynamically Programmable Storage Background Tasks. Modern storage infrastructures feature long and complicated I/O paths composed of several layers, each employing its own optimizations to serve varied applications with fluctuating requirements. However, as these layers do not have global infrastructure visibility, they are unable to optimally tune their behavior to achieve maximum performance. Background storage tasks, in particular, can rapidly overload shared resources, but they are executed either periodically or whenever a certain threshold is hit, regardless of the overall load on the system. In this paper, we argue that to achieve optimal holistic performance, these tasks should be dynamically programmable and handled by a controller with global visibility. To support this argument, we evaluate the performance impact of compaction and checkpointing in the context of HBase and PostgreSQL. We find that these tasks can increase 99th-percentile latencies by 955.2% and 61.9%, respectively. We also identify future research directions for achieving programmable background tasks. |
|
| A least-squares particle model with other techniques for 2D viscoelastic fluid free-surface flow. Abstract: In this work, a Lagrangian finite pointset model (FPM) is first developed to solve the 2D viscoelastic fluid governing equations, and a corrected particle shifting technique (CPST) is then introduced to eliminate the tensile instability in long-time simulations and is added to the above FPM scheme (the result is named FPM-T). Subsequently, a coupled particle method (FPM-T-SPH) is proposed to simulate viscoelastic free-surface flow, in which the FPM-T method is used in the interior of the fluid domain and the SPH method is adopted to treat the free surface near the boundary. The proposed FPM-T and FPM-T-SPH methods are characterized as follows: (a) the spatial derivatives of the velocity and the viscoelastic stress are approximated by the weighted least-squares method; (b) the pressure is accurately solved by applying a projection method with a local iterative procedure; (c) a corrected particle shifting technique is added to remedy the tensile instability (FPM-T); and (d) an identification technique for free-surface particles is given in the FPM-T-SPH method. The accuracy and convergence of the proposed FPM-T scheme for viscoelastic flow are first examined by solving planar flow based on the Oldroyd-B model and comparing with analytical solutions. Secondly, the validity of the CPST is tested on several benchmarks and compared with other numerical results. Thirdly, viscoelastic lid-driven cavity flow at high Weissenberg number is simulated and compared with grid-based results, further illustrating the robustness and capability of the proposed FPM-T method. Finally, the coupled FPM-T-SPH method is used to simulate the challenging free-surface flow problem of a viscoelastic droplet impacting and spreading on a rigid surface. All the numerical results show that the proposed particle method for viscoelastic fluid and free-surface flow is robust and reliable. | A stable SPH discretization of the elliptic operator with heterogeneous coefficients. Abstract: Smoothed particle hydrodynamics (SPH) has been extensively used to model high and low Reynolds number flows, free-surface flows, and the collapse of dams, and to study pore-scale flow and dispersion, elasticity, and thermal problems. Different applications require a stable and accurate discretization of the elliptic operator with homogeneous and heterogeneous coefficients. In this paper, the stability and approximation analysis of different SPH discretization schemes (traditional and new) of the diagonal elliptic operator for homogeneous and heterogeneous media is presented. A new, optimal discretization scheme of specific shape satisfying the two-point flux approximation nature is also proposed. This scheme enhances the Laplace approximations (Brookshaw's scheme (1985) and Schwaiger's scheme (2008)) used in the SPH community for thermal, viscous, and pressure projection problems with an isotropic elliptic operator. The results are illustrated by numerical examples, in which different versions of the meshless discretization methods are compared. | A Critical Look at the 2019 College Admissions Scandal. Discusses the 2019 college admissions scandal. Let me begin with a disclaimer: I am making no legal excuses for the participants in the current scandal. I am only offering contextual background that places it in the broader academic, cultural, and political perspective required for understanding. It is only the most recent installment of a well-worn narrative: the controlling elite make their own rules and live by them, if they can get away with it. Unfortunately, some of the participants, who are either serving or facing jail time, didn't know not to go into a gunfight with a sharp stick. Money alone is not enough to avoid prosecution for fraud: you need political clout. The best protection a defendant can have is a prosecutor who fears political reprisal. Compare how the Koch brothers escaped prosecution for stealing millions of oil dollars from Native American tribes [1,2] with the fate of actresses Lori Loughlin and Felicity Huffman, who, at the time of this writing, face jail time for paying bribes to get their children into good universities [3,4]. In the former case, the federal prosecutor who dared to empanel a grand jury to get at the truth was fired for cause, which put a quick end to the prosecution. In the latter case, the prosecutors pushed for jail terms and public admonishment with the zeal of Oliver Cromwell. There you have it: stealing oil from Native Americans versus trying to bribe your kids into a great university. Where is the greater crime? Admittedly, these actresses and their |
|
| A least-squares particle model with other techniques for 2D viscoelastic fluid free-surface flow. Abstract: In this work, a Lagrangian finite pointset model (FPM) is first developed to solve the 2D viscoelastic fluid governing equations, and a corrected particle shifting technique (CPST) is then introduced to eliminate the tensile instability in long-time simulations and is added to the above FPM scheme (the result is named FPM-T). Subsequently, a coupled particle method (FPM-T-SPH) is proposed to simulate viscoelastic free-surface flow, in which the FPM-T method is used in the interior of the fluid domain and the SPH method is adopted to treat the free surface near the boundary. The proposed FPM-T and FPM-T-SPH methods are characterized as follows: (a) the spatial derivatives of the velocity and the viscoelastic stress are approximated by the weighted least-squares method; (b) the pressure is accurately solved by applying a projection method with a local iterative procedure; (c) a corrected particle shifting technique is added to remedy the tensile instability (FPM-T); and (d) an identification technique for free-surface particles is given in the FPM-T-SPH method. The accuracy and convergence of the proposed FPM-T scheme for viscoelastic flow are first examined by solving planar flow based on the Oldroyd-B model and comparing with analytical solutions. Secondly, the validity of the CPST is tested on several benchmarks and compared with other numerical results. Thirdly, viscoelastic lid-driven cavity flow at high Weissenberg number is simulated and compared with grid-based results, further illustrating the robustness and capability of the proposed FPM-T method. Finally, the coupled FPM-T-SPH method is used to simulate the challenging free-surface flow problem of a viscoelastic droplet impacting and spreading on a rigid surface. All the numerical results show that the proposed particle method for viscoelastic fluid and free-surface flow is robust and reliable. | A stable SPH discretization of the elliptic operator with heterogeneous coefficients. Abstract: Smoothed particle hydrodynamics (SPH) has been extensively used to model high and low Reynolds number flows, free-surface flows, and the collapse of dams, and to study pore-scale flow and dispersion, elasticity, and thermal problems. Different applications require a stable and accurate discretization of the elliptic operator with homogeneous and heterogeneous coefficients. In this paper, the stability and approximation analysis of different SPH discretization schemes (traditional and new) of the diagonal elliptic operator for homogeneous and heterogeneous media is presented. A new, optimal discretization scheme of specific shape satisfying the two-point flux approximation nature is also proposed. This scheme enhances the Laplace approximations (Brookshaw's scheme (1985) and Schwaiger's scheme (2008)) used in the SPH community for thermal, viscous, and pressure projection problems with an isotropic elliptic operator. The results are illustrated by numerical examples, in which different versions of the meshless discretization methods are compared. | Managing Information. "Managing Information" highlights the increasing value of information and IT within organizations and shows how organizations use it. It also deals with the crucial relationship between information and personal effectiveness. The use of computer software and communications in a management context is discussed in detail, including how to mould an information system to your needs. The book explains the basics using real-life examples and brings managers up to date with the latest developments in electronic commerce and the Internet. The book is based on the Management Charter Initiative's Occupational Standards for Management NVQs and SVQs at Level 4. It is particularly suitable for managers on the Certificate in Management, or Part I of the Diploma, especially those accredited by the IM and BTEC. |
|
| A least-squares particle model with other techniques for 2D viscoelastic fluid free-surface flow. Abstract: In this work, a Lagrangian finite pointset model (FPM) is first developed to solve the 2D viscoelastic fluid governing equations, and a corrected particle shifting technique (CPST) is then introduced to eliminate the tensile instability in long-time simulations and is added to the above FPM scheme (the result is named FPM-T). Subsequently, a coupled particle method (FPM-T-SPH) is proposed to simulate viscoelastic free-surface flow, in which the FPM-T method is used in the interior of the fluid domain and the SPH method is adopted to treat the free surface near the boundary. The proposed FPM-T and FPM-T-SPH methods are characterized as follows: (a) the spatial derivatives of the velocity and the viscoelastic stress are approximated by the weighted least-squares method; (b) the pressure is accurately solved by applying a projection method with a local iterative procedure; (c) a corrected particle shifting technique is added to remedy the tensile instability (FPM-T); and (d) an identification technique for free-surface particles is given in the FPM-T-SPH method. The accuracy and convergence of the proposed FPM-T scheme for viscoelastic flow are first examined by solving planar flow based on the Oldroyd-B model and comparing with analytical solutions. Secondly, the validity of the CPST is tested on several benchmarks and compared with other numerical results. Thirdly, viscoelastic lid-driven cavity flow at high Weissenberg number is simulated and compared with grid-based results, further illustrating the robustness and capability of the proposed FPM-T method. Finally, the coupled FPM-T-SPH method is used to simulate the challenging free-surface flow problem of a viscoelastic droplet impacting and spreading on a rigid surface. All the numerical results show that the proposed particle method for viscoelastic fluid and free-surface flow is robust and reliable. | A stable SPH discretization of the elliptic operator with heterogeneous coefficients. Abstract: Smoothed particle hydrodynamics (SPH) has been extensively used to model high and low Reynolds number flows, free-surface flows, and the collapse of dams, and to study pore-scale flow and dispersion, elasticity, and thermal problems. Different applications require a stable and accurate discretization of the elliptic operator with homogeneous and heterogeneous coefficients. In this paper, the stability and approximation analysis of different SPH discretization schemes (traditional and new) of the diagonal elliptic operator for homogeneous and heterogeneous media is presented. A new, optimal discretization scheme of specific shape satisfying the two-point flux approximation nature is also proposed. This scheme enhances the Laplace approximations (Brookshaw's scheme (1985) and Schwaiger's scheme (2008)) used in the SPH community for thermal, viscous, and pressure projection problems with an isotropic elliptic operator. The results are illustrated by numerical examples, in which different versions of the meshless discretization methods are compared. | Death Ground. Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps, and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment, such as platforms, obstructions, and elevation, to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first. |
|
| A least-squares particle model with other techniques for 2D viscoelastic fluid free-surface flow. Abstract: In this work, a Lagrangian finite pointset model (FPM) is first developed to solve the 2D viscoelastic fluid governing equations, and a corrected particle shifting technique (CPST) is then introduced to eliminate the tensile instability in long-time simulations and is added to the above FPM scheme (the result is named FPM-T). Subsequently, a coupled particle method (FPM-T-SPH) is proposed to simulate viscoelastic free-surface flow, in which the FPM-T method is used in the interior of the fluid domain and the SPH method is adopted to treat the free surface near the boundary. The proposed FPM-T and FPM-T-SPH methods are characterized as follows: (a) the spatial derivatives of the velocity and the viscoelastic stress are approximated by the weighted least-squares method; (b) the pressure is accurately solved by applying a projection method with a local iterative procedure; (c) a corrected particle shifting technique is added to remedy the tensile instability (FPM-T); and (d) an identification technique for free-surface particles is given in the FPM-T-SPH method. The accuracy and convergence of the proposed FPM-T scheme for viscoelastic flow are first examined by solving planar flow based on the Oldroyd-B model and comparing with analytical solutions. Secondly, the validity of the CPST is tested on several benchmarks and compared with other numerical results. Thirdly, viscoelastic lid-driven cavity flow at high Weissenberg number is simulated and compared with grid-based results, further illustrating the robustness and capability of the proposed FPM-T method. Finally, the coupled FPM-T-SPH method is used to simulate the challenging free-surface flow problem of a viscoelastic droplet impacting and spreading on a rigid surface. All the numerical results show that the proposed particle method for viscoelastic fluid and free-surface flow is robust and reliable. | A stable SPH discretization of the elliptic operator with heterogeneous coefficients. Abstract: Smoothed particle hydrodynamics (SPH) has been extensively used to model high and low Reynolds number flows, free-surface flows, and the collapse of dams, and to study pore-scale flow and dispersion, elasticity, and thermal problems. Different applications require a stable and accurate discretization of the elliptic operator with homogeneous and heterogeneous coefficients. In this paper, the stability and approximation analysis of different SPH discretization schemes (traditional and new) of the diagonal elliptic operator for homogeneous and heterogeneous media is presented. A new, optimal discretization scheme of specific shape satisfying the two-point flux approximation nature is also proposed. This scheme enhances the Laplace approximations (Brookshaw's scheme (1985) and Schwaiger's scheme (2008)) used in the SPH community for thermal, viscous, and pressure projection problems with an isotropic elliptic operator. The results are illustrated by numerical examples, in which different versions of the meshless discretization methods are compared. | General Data Protection Regulation in Health Clinics. The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of the personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals, by establishing rules for the processing of personal data. The GDPR treats health data as a special category of personal data, considered sensitive and subject to special conditions regarding processing and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its application in Portuguese health clinics. The results of the present study are discussed in light of the literature, and future work is identified. |
|
| A least-squares particle model with other techniques for 2D viscoelastic fluid free-surface flow. Abstract: In this work, a Lagrangian finite pointset model (FPM) is first developed to solve the 2D viscoelastic fluid governing equations, and a corrected particle shifting technique (CPST) is then introduced to eliminate the tensile instability in long-time simulations and is added to the above FPM scheme (the result is named FPM-T). Subsequently, a coupled particle method (FPM-T-SPH) is proposed to simulate viscoelastic free-surface flow, in which the FPM-T method is used in the interior of the fluid domain and the SPH method is adopted to treat the free surface near the boundary. The proposed FPM-T and FPM-T-SPH methods are characterized as follows: (a) the spatial derivatives of the velocity and the viscoelastic stress are approximated by the weighted least-squares method; (b) the pressure is accurately solved by applying a projection method with a local iterative procedure; (c) a corrected particle shifting technique is added to remedy the tensile instability (FPM-T); and (d) an identification technique for free-surface particles is given in the FPM-T-SPH method. The accuracy and convergence of the proposed FPM-T scheme for viscoelastic flow are first examined by solving planar flow based on the Oldroyd-B model and comparing with analytical solutions. Secondly, the validity of the CPST is tested on several benchmarks and compared with other numerical results. Thirdly, viscoelastic lid-driven cavity flow at high Weissenberg number is simulated and compared with grid-based results, further illustrating the robustness and capability of the proposed FPM-T method. Finally, the coupled FPM-T-SPH method is used to simulate the challenging free-surface flow problem of a viscoelastic droplet impacting and spreading on a rigid surface. All the numerical results show that the proposed particle method for viscoelastic fluid and free-surface flow is robust and reliable. | A stable SPH discretization of the elliptic operator with heterogeneous coefficients. Abstract: Smoothed particle hydrodynamics (SPH) has been extensively used to model high and low Reynolds number flows, free-surface flows, and the collapse of dams, and to study pore-scale flow and dispersion, elasticity, and thermal problems. Different applications require a stable and accurate discretization of the elliptic operator with homogeneous and heterogeneous coefficients. In this paper, the stability and approximation analysis of different SPH discretization schemes (traditional and new) of the diagonal elliptic operator for homogeneous and heterogeneous media is presented. A new, optimal discretization scheme of specific shape satisfying the two-point flux approximation nature is also proposed. This scheme enhances the Laplace approximations (Brookshaw's scheme (1985) and Schwaiger's scheme (2008)) used in the SPH community for thermal, viscous, and pressure projection problems with an isotropic elliptic operator. The results are illustrated by numerical examples, in which different versions of the meshless discretization methods are compared. | What Makes a Social Robot Good at Interacting with Humans. This paper discusses the nuances of a social robot, how and why social robots are becoming increasingly significant, and what they are currently being used for. It also reflects on the current design of social robots as a means of interaction with humans and reports potential answers to several important questions about the futuristic design of these robots. The specific questions explored in this paper are: "Do social robots need to look like living creatures that already exist in the world for humans to interact well with them?"; "Do social robots need to have animated faces for humans to interact well with them?"; "Do social robots need to have the ability to speak a coherent human language for humans to interact well with them?"; and "Do social robots need to have the capability to make physical gestures for humans to interact well with them?". This paper reviews both verbal and nonverbal social and conversational cues that could be incorporated into the design of social robots, and it briefly discusses the emotional bonds that may be built between humans and robots. Facets surrounding the acceptance of social robots by humans, as well as ethical and moral concerns, are also discussed. |
|
| Analyzing barriers for adopting sustainable online consumption: A rough hierarchical DEMATEL method. Abstract: Sustainable online consumption is attracting attention in both the academic and managerial communities. Identifying and analyzing the barriers to sustainable online consumption is critical to its successful implementation. However, previous approaches often assume that the barriers are equally important when manipulating the interrelationships among them, and they need auxiliary information or pre-assumptions (e.g., preset fuzzy membership functions) to deal with vague information, which affects the accuracy of the barrier analysis results. To solve these problems, an integrated method based on improved DEMATEL, ISM, and rough set theory is developed in this paper. The proposed method integrates the strength of the improved DEMATEL approach in exploring cause-effect relationships while considering the impact of barrier strength on those relationships, the advantage of the ISM method in building the hierarchical structure of elements, and the merit of rough numbers in manipulating vagueness without any auxiliary information or pre-assumptions. Finally, an application in a large e-commerce company is provided to show the feasibility and effectiveness of the proposed method. | A Fuzzy-AHP Approach for Strategic Evaluation and Selection of Digital Marketing Tools. The prevalence and rapid development of the Internet and mobile technology in recent decades have revamped our living styles and daily habits. Riding on the digital trend, more business activities have moved into the digital world, and marketing and advertising is one of the typical business areas being transformed digitally. The rise of Key Opinion Leaders (KOLs), social media platforms, and omnichannel retailing has attracted countless business entities to consider adopting digital marketing tools for promoting and advertising their brands and products. However, with the increasing diversity of digital marketing tools, such tools must be carefully selected based on multiple criteria. In this paper, a fuzzy-AHP method is proposed and developed to assist industry practitioners in systematically and effectively evaluating and selecting the proper digital marketing tool(s) for adoption. The developed method not only streamlines the internal business process of digital marketing tool selection, but also increases the practitioner's effectiveness in achieving predefined strategic marketing objectives. | Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter; this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by that same interconnectedness, and since spatial and relativistic gravities can apparently be produced without the aid of gravitons, massive gravity could also be produced without gravitons. Gravity divided into an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime; so spatial segments must have a minimum size, the Planck length, resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the non-mediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; this makes gravitons unproven and unnecessary, and explains why gravitons have never been found. |
|
| Analyzing barriers for adopting sustainable online consumption: A rough hierarchical DEMATEL method. Abstract: Sustainable online consumption is attracting attention in both the academic and managerial communities. Identifying and analyzing the barriers to sustainable online consumption is critical to its successful implementation. However, previous approaches often assume that the barriers are equally important when manipulating the interrelationships among them, and they need auxiliary information or pre-assumptions (e.g., preset fuzzy membership functions) to deal with vague information, which affects the accuracy of the barrier analysis results. To solve these problems, an integrated method based on improved DEMATEL, ISM, and rough set theory is developed in this paper. The proposed method integrates the strength of the improved DEMATEL approach in exploring cause-effect relationships while considering the impact of barrier strength on those relationships, the advantage of the ISM method in building the hierarchical structure of elements, and the merit of rough numbers in manipulating vagueness without any auxiliary information or pre-assumptions. Finally, an application in a large e-commerce company is provided to show the feasibility and effectiveness of the proposed method. | A Fuzzy-AHP Approach for Strategic Evaluation and Selection of Digital Marketing Tools. The prevalence and rapid development of the Internet and mobile technology in recent decades have revamped our living styles and daily habits. Riding on the digital trend, more business activities have moved into the digital world, and marketing and advertising is one of the typical business areas being transformed digitally. The rise of Key Opinion Leaders (KOLs), social media platforms, and omnichannel retailing has attracted countless business entities to consider adopting digital marketing tools for promoting and advertising their brands and products. However, with the increasing diversity of digital marketing tools, such tools must be carefully selected based on multiple criteria. In this paper, a fuzzy-AHP method is proposed and developed to assist industry practitioners in systematically and effectively evaluating and selecting the proper digital marketing tool(s) for adoption. The developed method not only streamlines the internal business process of digital marketing tool selection, but also increases the practitioner's effectiveness in achieving predefined strategic marketing objectives. | Death Ground. Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps, and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment, such as platforms, obstructions, and elevation, to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first. |
|
Analyzing barriers for adopting sustainable online consumption: A rough hierarchical DEMATEL method Abstract Sustainable online consumption is attracting attention in both the academic and managerial communities. Identifying and analyzing the barriers of sustainable online consumption is critical to its successful implementation. However, the previous approaches often assume that the barriers are equally important in manipulating the interrelationships among barriers, and they need auxiliary information or preassumptions (e.g. preset fuzzy membership functions) in dealing with vague information, which will influence the accuracy of the barrier analysis results. To solve the problems, an integrated method based on improved DEMATEL, ISM and rough set theory is developed in this paper. The proposed method integrates the strength of improved DEMATEL approach in exploring causeeffect relationships considering the impact of barrier strength on the relationships, the advantage of ISM method in building the hierarchical structure of elements, and the merit of rough number in manipulating the vagueness without any auxiliary information or preassumptions. Finally, an application in a large ecommerce company is provided to show the feasibility and effectiveness of the proposed method.
|
A FuzzyAHP Approach for Strategic Evaluation and Selection of Digital Marketing Tools The prevalence and rapid development of the Internet and mobile technology in recent decades has revamped our living styles and daily habits. To ride on the digital trend, more business activities have been engaged in the digital world. Marketing and advertising is one of typical business areas that is transformed digitally. The rise of Key Opinion Leaders (KOLs), social media platforms, and Omnichannel retailing have attracted countless business entities to consider the adoption of digital marketing tools for promoting and advertising their brands and products. However, with the increasing diversity of the types of digital marketing tools, they must be carefully selected based on a multiple number of criterion. In this paper, a fuzzyAHP method is proposed and developed for assisting industry practitioners in systematically and effectively evaluate and select proper digital marketing tool(s) for adoption. The developed method not only streamlines the internal business process of digital marketing tool selection, but it also increases the practitioneru0027s effectiveness of achieving the predefined strategic marketing objectives.
|
Real-time 3D light field transmission Although capturing and displaying stereo 3D content is now commonplace, information-rich light-field video capture, transmission and display are much more challenging, with at least an order-of-magnitude increase in complexity even in the simplest cases. We present an end-to-end system capable of capturing and displaying, in real time, high-quality light-field video content on various HoloVizio light-field displays, providing very high 3D image quality and continuous motion parallax. The system is compact in terms of the number of computers, and provides superior image quality, resolution and frame rate compared to other published systems. To generate light-field content, we built a camera system with a large number of cameras in an evenly spaced linear arrangement and connected them to PCs. The capture PC was directly connected through a single gigabit Ethernet connection to the demonstration 3D display, supported by a PC computation cluster. Displaying a dense light field requires massively parallel reordering and filtering of the original camera images, for which we utilize both CPU and GPU threads. On the GPU we perform the light-field conversion and reordering, the filtering, and the YUV-to-RGB conversion, using OpenGL 3.0 shaders and 2D texture arrays for easy access to individual camera images. A network-based synchronization scheme is used to present the final rendered images.
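A toy sketch of the reordering step described above, assuming the simplest possible geometry in which each display direction maps to a camera index plus a horizontal shear proportional to pixel column; real light-field displays need calibrated per-pixel ray geometry, so everything below (array sizes, the shear model) is an illustrative assumption.

```python
import numpy as np

# Hedged sketch of light-field reordering: a stack of images from a linear
# camera array is resampled into per-direction display images.
n_cams, H, W = 16, 120, 160
cams = np.random.rand(n_cams, H, W).astype(np.float32)  # stand-in captures

def display_view(direction: int, shear: float) -> np.ndarray:
    """Gather one display image: pixel (y, x) samples the camera whose ray
    best matches this direction, approximated by a sheared camera index."""
    cols = np.arange(W)
    cam_idx = np.clip(np.round(direction + shear * (cols - W / 2)).astype(int),
                      0, n_cams - 1)
    # Fancy-index per column: out[:, x] = cams[cam_idx[x], :, x]
    return cams[cam_idx, :, cols].T

views = np.stack([display_view(d, shear=0.02) for d in range(n_cams)])
print(views.shape)  # (16, 120, 160) reordered display images
```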
|
Air-Coupled Reception of a Slow Ultrasonic A0 Mode Wave Propagating in Thin Plastic Film At low frequencies, the phase velocity of the guided A0 mode in thin plates can become slower than the velocity of ultrasound in air. Such waves do not excite leaky waves in the surrounding air, and it is therefore impossible to excite and receive them by conventional air-coupled methods. The objective of this research was the development of an air-coupled technique for the reception of the slow A0 mode in thin plastic films. This study demonstrates the feasibility of picking up a subsonic A0 mode in plastic films with air-coupled ultrasonic arrays. The air-coupled reception was based on the evanescent wave in air that accompanies the A0 mode propagating in the film. The efficiency of the reception was enhanced by using a virtual array, arranged from the data collected by a single air-coupled receiver. The signals measured at the points corresponding to the positions of the phase-matched array were recorded and processed. The transmitting array excited not only the A0 mode in the film, but also a direct wave in air; this wave propagated at the ultrasound velocity in air and was faster than the evanescent wave. For efficient reception of the A0 mode, an additional signal-processing procedure based on a 2D Fourier transform in the spatial-temporal domain was therefore applied. The obtained results can be useful for the development of novel air-coupled ultrasonic nondestructive testing techniques.
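A minimal sketch of the 2D-FFT separation step mentioned at the end: array signals are transformed to the frequency-wavenumber (f-k) domain and components whose phase velocity exceeds the speed of sound in air (the direct air wave) are masked out, keeping the slower A0 mode. The sampling rate, receiver pitch and data are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

fs, dx = 1.0e6, 1.0e-3          # 1 MHz sampling, 1 mm receiver pitch (assumed)
n_t, n_x = 1024, 64
c_air = 343.0                    # speed of sound in air, m/s

data = np.random.randn(n_x, n_t)        # stand-in for the measured B-scan
F = np.fft.fft2(data)                   # axes: (wavenumber k, frequency f)
f = np.fft.fftfreq(n_t, d=1 / fs)       # Hz
k = np.fft.fftfreq(n_x, d=dx)           # cycles/m

fg, kg = np.meshgrid(f, k)              # shapes match data: (n_x, n_t)
with np.errstate(divide="ignore", invalid="ignore"):
    v_phase = np.abs(fg / kg)           # phase velocity f/k of each component
mask = v_phase < c_air                  # keep only subsonic (A0) components
mask[kg == 0] = False                   # drop k = 0, where velocity is undefined

a0_only = np.real(np.fft.ifft2(F * mask))
print(a0_only.shape)
```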
|
A Spatial-Temporal Subspace-Based Compressive Channel Estimation Technique in Unknown Interference MIMO Channels Spatial-temporal (ST) subspace-based channel estimation techniques formulated with the ℓ2 minimum mean square error (MMSE) criterion alleviate the multi-access interference (MAI) problem when the signals of interest exhibit a low-rank property. However, conventional ℓ2 ST subspace-based methods suffer from mean squared error (MSE) deterioration in unknown interference channels, due to the difficulty of separating the signals of interest from channel covariance matrices (CCMs) contaminated with unknown interference. As a solution to this problem, we propose a new ℓ1-regularized ST channel estimation algorithm that applies the expectation-maximization (EM) algorithm to iteratively examine the signal subspace and the corresponding sparse supports. The new algorithm updates the CCM independently of the slot-dependent ℓ1 regularization, which enables it to correctly perform sparse independent component analysis (ICA) with a reasonable complexity order. Simulation results shown in this paper verify that the proposed technique significantly improves the MSE performance in unknown interference MIMO channels and hence solves the BER floor problems from which conventional receivers suffer.
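This is not the paper's EM iteration, but a single-ingredient illustration of what ℓ1 regularization buys: one ISTA (soft-thresholding) loop that sparsifies a least-squares channel estimate and recovers the support. All dimensions and parameters below are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_taps = 64, 32
A = rng.standard_normal((n_obs, n_taps)) / np.sqrt(n_obs)  # pilot matrix
h_true = np.zeros(n_taps)
h_true[[3, 11, 20]] = [1.0, -0.7, 0.5]                     # sparse channel
y = A @ h_true + 0.05 * rng.standard_normal(n_obs)

def soft(x, t):
    """Soft-thresholding operator, the proximal map of the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

h = np.zeros(n_taps)
step, lam = 0.3, 0.02      # step below 1/L for this A; lam sets sparsity
for _ in range(200):       # ISTA: gradient step on the residual + shrinkage
    h = soft(h + step * A.T @ (y - A @ h), step * lam)

print("support found:", np.nonzero(np.abs(h) > 1e-3)[0])
```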
|
Design and Analysis of a Soft Bidirectional Bending Actuator for Human-Robot Interaction Applications The design of a novel soft bidirectional actuator that can improve human-robot interaction in collaborative applications is proposed in this paper. This actuator is advantageous over existing designs due to the additional degree of freedom it provides for the same number of pressure inputs as conventional designs. This significantly improves the workspace of the bidirectional actuator and enables higher bidirectional bending angles at much lower input pressures. This is achieved by eliminating the passive impedance offered by one side of the bending chamber in compression when the other side of the chamber is inflated. A simple kinematic model of the actuator is presented, and theoretical and finite element analyses are carried out to predict the fundamental behavior of the actuator. The results are validated through experiments on a fabricated model of the soft bidirectional bending actuator.
|
Modeling, Simulation and Experimental Validation of a Tendon-driven Soft-arm Robot Configuration: A Continuum Mechanics Method This paper presents the mathematical derivation and experimental validation of a computational model which accurately predicts static, large-strain deformations of tendon-driven, non-slender soft-arm manipulators subjected to gravity. The large-strain behavior is captured by employing the Green-Lagrange strain and by deriving analytical expressions for the variation of the equivalent Young's modulus of the structure due to the large strains. No simplifying assumptions are made regarding the curvature of the structure, the stretching or the compression. Furthermore, the paper proposes an iterative method for numerically solving the resulting nonlinear system of coupled differential equations and demonstrates a number of application scenarios. The model is experimentally validated using a setup comprising one segment of a tendon-driven soft arm, which integrates stretchable and compressible hyperelastic (rubber-type) materials into its non-homogeneous backbone structure.
|
A Critical Look at the 2019 College Admissions Scandal Discusses the 2019 college admissions scandal. Let me begin with a disclaimer: I am making no legal excuses for the participants in the current scandal. I am only offering contextual background that places it in the broader academic, cultural, and political perspective required for understanding. It is only the most recent installment of a well-worn narrative: the controlling elite make their own rules and live by them, if they can get away with it. Unfortunately, some of the participants, who are either serving or facing jail time, didn't know not to go into a gunfight with a sharp stick. Money alone is not enough to avoid prosecution for fraud: you need political clout. The best protection a defendant can have is a prosecutor who fears political reprisal. Compare how the Koch brothers escaped prosecution for stealing millions of oil dollars from Native American tribes [1,2] with the fate of actresses Lori Loughlin and Felicity Huffman, who, at the time of this writing, face jail time for paying bribes to get their children into good universities [3,4]. In the former case, the federal prosecutor who dared to empanel a grand jury to get at the truth was fired for cause, which put a quick end to the prosecution. In the latter case, the prosecutors pushed for jail terms and public admonishment with the zeal of Oliver Cromwell. There you have it: stealing oil from Native Americans versus trying to bribe your kids into a great university. Where is the greater crime? Admittedly, these actresses and their
|
Managing Information Managing Information highlights the increasing value of information and IT within organizations and shows how organizations use it. It also deals with the crucial relationship between information and personal effectiveness. The use of computer software and communications in a management context is discussed in detail, including how to mould an information system to your needs. The book explains the basics using real-life examples and brings managers up to date with the latest developments in electronic commerce and the Internet. The book is based on the Management Charter Initiative's Occupational Standards for Management NVQs and SVQs at Level 4. It is particularly suitable for managers on the Certificate in Management, or Part I of the Diploma, especially those accredited by the IM and BTEC.
|
Classifying unavoidable Tverberg partitions Let T(d,r) = (r-1)(d+1) + 1 be the parameter in Tverberg's theorem, and call a partition I of {1, 2, ..., T(d,r)} into r parts a Tverberg type. We say that I occurs in an ordered point sequence P if P contains a subsequence P' of T(d,r) points such that the partition of P' that is order-isomorphic to I is a Tverberg partition. We say that I is unavoidable if it occurs in every sufficiently long point sequence. In this paper we study the problem of determining which Tverberg types are unavoidable. We conjecture a complete characterization of the unavoidable Tverberg types, and we prove some cases of our conjecture for d <= 4. Along the way, we study the avoidability of many other geometric predicates. Our techniques also yield a large family of T(d,r)-point sets for which the number of Tverberg partitions is exactly ((r-1)!)^d. This lends further support to Sierksma's conjecture on the number of Tverberg partitions.
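A smallest-case illustration of these definitions, using the r = 2 specialization (Radon's theorem), which may help fix the notation:

```latex
% With r = 2, Tverberg's theorem is Radon's theorem, since
\[
  T(d,2) = (2-1)(d+1) + 1 = d + 2, \qquad T(1,2) = 3 .
\]
% On the line (d = 1), any monotone subsequence x_a < x_b < x_c
% (positions a < b < c) gives the Tverberg (Radon) partition
% order-isomorphic to {{1,3},{2}}, because
\[
  x_b \in \operatorname{conv}\{x_a, x_c\}.
\]
% Since every sufficiently long sequence contains a monotone triple
% (Erdos--Szekeres), the type {{1,3},{2}} is unavoidable for d = 1.
```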
|
On Colorings Avoiding a Rainbow Cycle and a Fixed Monochromatic Subgraph Let H and G be two graphs on a fixed number of vertices. An edge coloring of a complete graph is called (H,G)-good if there is no monochromatic copy of G and no rainbow (totally multicolored) copy of H in this coloring. As shown by Jamison and West, an (H,G)-good coloring of an arbitrarily large complete graph exists unless either G is a star or H is a forest. The largest number of colors in an (H,G)-good coloring of K_n is denoted maxR(n, G, H). For graphs H which cannot be vertex-partitioned into at most two induced forests, maxR(n, G, H) has been determined asymptotically. Determining maxR(n, G, H) is challenging for other graphs H, in particular for bipartite graphs or even for cycles. This manuscript treats the case when H is a cycle. The value of maxR(n, G, C_k) is determined for all graphs G whose edges do not induce a star.
|
Extremal Problems for t-Partite and t-Colorable Hypergraphs Fix integers t >= r >= 2 and an r-uniform hypergraph F. We prove that the maximum number of edges in a t-partite r-uniform hypergraph on n vertices that contains no copy of F is c_{t,F} (n choose r) + o(n^r), where c_{t,F} can be determined by a finite computation. We explicitly define a sequence F_1, F_2, ... of r-uniform hypergraphs, and prove that the maximum number of edges in a t-chromatic r-uniform hypergraph on n vertices containing no copy of F_i is alpha_{t,r,i} (n choose r) + o(n^r), where alpha_{t,r,i} can be determined by a finite computation for each i >= 1. In several cases, alpha_{t,r,i} is irrational. The main tool used in the proofs is the Lagrangian of a hypergraph.
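For readers unfamiliar with the main tool, the hypergraph Lagrangian is the following standard optimization (this is the textbook definition, not anything specific to this paper):

```latex
% For an r-uniform hypergraph F on vertex set [m],
\[
  \lambda(F) \;=\; \max\Big\{ \sum_{e \in E(F)} \prod_{i \in e} x_i
     \;:\; x_1,\dots,x_m \ge 0,\ \sum_{i=1}^{m} x_i = 1 \Big\}.
\]
% Example (graph case, r = 2): for the complete graph K_m the maximum is
% attained at the uniform weighting x_i = 1/m, giving
\[
  \lambda(K_m) = \binom{m}{2} \frac{1}{m^2} = \frac{1}{2}\Big(1 - \frac{1}{m}\Big),
\]
% which is the Motzkin--Straus identity underlying Turan-type densities.
```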
|
Edge Provisioning and Fairness in VPN-DiffServ Networks Customers of Virtual Private Networks (VPNs) over a Differentiated Services (DiffServ) infrastructure are likely to demand not only security but also guaranteed Quality-of-Service (QoS), in pursuit of leased-line-like services. However, they will typically be unable or unwilling to predict the load between VPN endpoints. This paper proposes that customers specify their requirements as a range of quantitative services in the Service Level Agreements (SLAs). To support such services, Internet Service Providers (ISPs) need an automated provisioning system that can logically partition the capacity at the edges among various classes (or groups) of VPN connections and manage them efficiently, allowing resource sharing among the groups in a dynamic and fair manner. While edge provisioning allocates a certain amount of resources to VPN connections based on SLAs (the traffic contract at the edge), the interior nodes of a transit network must also be provisioned to meet the assurances offered at the boundaries of the network. We therefore propose a two-layered model to provision such VPN-DiffServ networks, where the top layer is responsible for edge provisioning and drives the lower layer, which is in charge of interior resource provisioning with the help of a Bandwidth Broker (BB). Various algorithms with examples and analyses are presented to provision and allocate resources dynamically at the edges for VPN connections. We have developed a prototype BB performing the required provisioning and connection admission.
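A minimal sketch of the admission-control idea at the edge: capacity is logically partitioned among connection classes, and a class may borrow capacity the other classes are not currently using. The class names, shares and borrowing policy are illustrative assumptions; the paper's actual algorithms are more elaborate.

```python
# Hedged sketch of a bandwidth-broker admission check at a VPN edge.
class EdgeBroker:
    def __init__(self, capacity_mbps, shares):
        self.capacity = capacity_mbps
        self.quota = {g: capacity_mbps * s for g, s in shares.items()}
        self.used = {g: 0.0 for g in shares}

    def admit(self, group, demand_mbps):
        spare_own = self.quota[group] - self.used[group]
        spare_total = self.capacity - sum(self.used.values())
        # Serve from the group's own partition first; otherwise borrow idle
        # capacity left unused by the other groups (dynamic fair sharing).
        if demand_mbps <= spare_own or demand_mbps <= spare_total:
            self.used[group] += demand_mbps
            return True
        return False

    def release(self, group, demand_mbps):
        self.used[group] = max(0.0, self.used[group] - demand_mbps)

bb = EdgeBroker(1000, {"gold": 0.5, "silver": 0.3, "bronze": 0.2})
print(bb.admit("gold", 400), bb.admit("gold", 200))  # second call borrows
```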
|
BwShare: Efficient Bandwidth Guarantee in Cloud with Transparent Share Adaptation Abstract Bandwidth guarantee (BwG) is a highly desired feature in cloud data centers, as it enables tenants (i.e., users) to achieve predictable performance. However, such a function is currently not commonly available in clouds, since it potentially lowers the utilization efficiency of the network fabric (denoted network efficiency). The reasons lie in two aspects. First, tenants often present time-varying and/or spatially varying bandwidth demands. Second, the current cloud network architecture makes it hard for tenants to reuse unused bandwidth guarantees efficiently. To tackle these challenges, we propose BwShare, a novel bandwidth guarantee scheme for cloud data centers that effectively improves network efficiency at both the guaranteed level and the best-effort level in a synergized manner, while current approaches adopt only one of the two. This design goal is achieved through two components: an adaptive bandwidth guarantee adaptation module (BGAdaptor) and a transparent work conservation module (WCEnabler). BGAdaptor improves network efficiency at the guaranteed level by adjusting bandwidth guarantees among VMs of the same tenant based on traffic monitoring. WCEnabler works at the best-effort level by enabling VMs to reuse idle bandwidth efficiently and freely without disrupting bandwidth guarantees. The two modules also facilitate each other's performance. Extensive experiments show that BwShare can effectively improve network efficiency while providing bandwidth guarantees in cloud data centers.
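A toy sketch of the guarantee-adaptation idea behind BGAdaptor: a tenant's aggregate guarantee is redistributed across its VMs in proportion to recently measured traffic, keeping the tenant-level sum constant. The function name, the floor policy and the parameters are assumptions for illustration, not BwShare's actual algorithm.

```python
def adapt_guarantees(total_guarantee, measured, floor_frac=0.05):
    """Split a tenant's total guarantee across VMs proportionally to
    measured traffic, reserving a small floor so no VM starves."""
    floor = floor_frac * total_guarantee / len(measured)
    budget = total_guarantee - floor * len(measured)
    demand = sum(measured)
    if demand == 0:                        # idle tenant: split evenly
        return [total_guarantee / len(measured)] * len(measured)
    return [floor + budget * m / demand for m in measured]

# Tenant with 1000 Mbps in total; VM 2 is currently the busiest.
print(adapt_guarantees(1000.0, [120.0, 30.0, 450.0]))
```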
|
Molecular Action Mechanisms of Traditional Chinese Medicine in the Treatment of Rheumatoid Arthritis Based on Computer Simulation Rheumatoid arthritis (RA) is a chronic systemic inflammatory disease of unknown etiology; at present, it is recognized as an autoimmune disease. It may be related to endocrine, metabolic, nutritional, geographical, occupational, psychological and social-environmental differences, to bacterial and viral infections, and to genetic causes. Chronic, symmetrical, polyarticular synovitis and extra-articular lesions are the main clinical manifestations, and the disease belongs to the autoimmune inflammatory diseases. Based on computer simulation, the molecular mechanism of traditional Chinese medicine in the treatment of rheumatoid arthritis is studied, which provides an important platform for extending the value of computer simulation technology.
|
Anti-rheumatoid Arthritis Mechanisms of Angelicae Pubescentis Radix Angelicae Pubescentis Radix (APR) has been used in traditional Chinese medicine (TCM) practice for its anti-rheumatoid arthritis (RA) effects, described as "wind-dampness-dispelling and cold-dispersing". In TCM prescriptions and patent medicines, APR has been widely adopted against RA. To date, its anti-rheumatoid mechanisms remain obscure. In this study, bioactive compounds and targeted proteins supported by experimental evidence were collected, and a bioinformatics analysis oriented towards RA was deployed. As a result, five RA-associated biological processes were found to be enriched with proteins targeted/regulated by APR, e.g. ossification, osteoclast differentiation, positive regulation of lymphocyte activation, and regulation of T and B cells. The result was also validated with bioinformatics tools and RA microarray data. In short, the anti-RA effect of APR recognized in TCM may be associated with its regulation of bone formation and inflammation.
|
Structure and expression of the gene coding for the alpha-subunit of DNA-dependent RNA polymerase from the chloroplast genome of Zea mays. The rpoA gene coding for the alpha-subunit of DNA-dependent RNA polymerase, located on the DNA of Zea mays chloroplasts, has been characterized with respect to its position on the chloroplast genome and its nucleotide sequence. The amino acid sequence derived for a 39 kDa polypeptide shows strong homology with sequences derived from the rpoA genes of other chloroplast species and with the amino acid sequence of the alpha-subunit of E. coli RNA polymerase. Transcripts of the rpoA gene were identified by Northern hybridization and characterized by S1 mapping using total RNA isolated from maize chloroplasts. Antibodies raised against a synthetic C-terminal heptapeptide show cross-reactivity with a 39 kDa polypeptide contained in the stroma fraction of maize chloroplasts. It is concluded that the rpoA gene is a functional gene and that, therefore, at least the alpha-subunit of the plastid RNA polymerase is expressed in chloroplasts.
|
Erkundung und Erforschung. Alexander von Humboldts Amerikareise (Exploration and Research: Alexander von Humboldt's American Journey) Summary: Much as Adalbert Stifter's narrator does in the novel "Nachsommer" ("Late Summer"), A. v. Humboldt combined exploration with research, and the joy of travel with the pursuit of knowledge, on his American journey. Humboldt clearly stated his twofold aim: to make the visited countries known, and to collect facts to extend physical geography. The essay is organized into five sections: aims, route, methods, results, evaluation. Abstract: In a similar way as Adalbert Stifter's narrator in the novel "Late Summer", A. v. Humboldt combined exploration with research, and fondness for travelling with striving for findings, during his travel through South America. Humboldt clearly indicated his double aim: to report on the visited countries, and to collect facts in order to improve physical geography. The treatise consists of five sections: object, route, methods, results, evaluation.
|
Productivity Pants With thousands of books on the topic, productivity is a popular subject in the business world. Increase work output, reduce the bottom line, and stay competitive: those are some of the magic bullets that productivity promises. But with so many thoughts, schools, methods and gurus in the world, where do we begin? How do we find a productivity strategy that is right for our professional and personal goals? How do we know which voices to listen to? The short answer? "It depends." In this lightning talk, we will discuss important considerations when we decide to embark on our own productivity journey. We will focus on sharpening our goals, understanding what we value and creating a space for improving the way we work (and live).
|
Human-Robot Interaction Through Fingertip Haptic Devices for Cooperative Manipulation Tasks Teleoperation of multi-robot systems, e.g. dual manipulators, in cooperative manipulation tasks requires haptic feedback of multi-contact interaction forces. Classical haptic devices restrict the workspace of the human operator and provide only one contact point. An alternative solution is to enable the operator to command the robot system via freehand motions, which extends the workspace of the human. In such a setting, multi-contact haptic feedback may be provided to the human through multiple wearable haptic devices, e.g. fingertip devices that display forces on the human fingertips. In this paper we evaluate the benefit of using wearable haptic fingertip devices to interact with a bimanual robot setup in a pick-and-place manipulation task. We show that haptic feedback through wearable devices improves task performance compared to the baseline condition of no haptic feedback. Wearable haptic devices are therefore a promising interface for the guidance of multi-robot manipulation systems.
|
An assisted telemanipulation approach: combining autonomous grasp planning with haptic cues This paper presents an assisted telemanipulation approach with integrated grasp planning. It also studies how human teleoperation performance benefits from the incorporated visual and haptic cues while manipulating objects in cluttered environments. The developed system combines the widely used master-slave teleoperation with our previous model-free and learning-free grasping algorithm by means of a dynamic grasp re-ranking strategy and a semi-autonomous reach-to-grasp trajectory guidance. The proposed re-ranking metric helps in dynamically updating the stable grasps based on the current state of the slave device. The trajectory guidance system assists in maintaining a smooth trajectory by controlling the haptic forces. A virtual pose controller has been integrated with the guidance scheme to automatically correct the end-effector orientation while reaching towards the grasp. Various experiments are conducted to evaluate the proposed method using a six degrees of freedom (DoF) haptic master and a seven DoF slave robot. The results obtained from these tests, along with the results gathered from the human-factor trials, demonstrate the efficiency of our method in terms of objective metrics of task completion, and also subjective metrics of user experience.
|
MinWise Independent Linear Permutations A set of permutations F ⊆ S_n is min-wise independent if for any set X ⊆ [n] and any x ∈ X, when π is chosen at random from F we have P(min π(X) = π(x)) = 1/|X|. This notion was introduced by Broder, Charikar, Frieze and Mitzenmacher and is motivated by an algorithm for filtering near-duplicate web documents. Linear permutations are an important class of permutations. Let p be a (large) prime and let F_p = {p_{a,b} : 1 <= a <= p-1, 0 <= b <= p-1}, where for x in [p] = {0, 1, ..., p-1}, p_{a,b}(x) = ax + b (mod p). For X ⊆ [p] we let F(X) = max_{x in X} P_{a,b}(min p_{a,b}(X) = p_{a,b}(x)), where the probability P_{a,b} is over p_{a,b} chosen uniformly at random from F_p. We show that as k, p → ∞, E_X F(X) = 1/k + O((log k)^3 / k^(3/2)), confirming that a randomly chosen linear permutation will suffice for an average set from the point of view of approximate min-wise independence.
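A small empirical check of the phenomenon the paper quantifies: for a fixed set X, the probability that a random linear permutation maps a given element to the minimum is only approximately 1/|X|. The modulus size and trial count below are chosen for speed, not taken from the paper.

```python
import random

p = 10007                      # prime modulus (illustrative size)
X = random.sample(range(p), 20)
x0 = X[0]

trials, hits = 20000, 0
for _ in range(trials):
    a = random.randrange(1, p)                 # random linear permutation
    b = random.randrange(p)
    perm = [(a * x + b) % p for x in X]
    if min(perm) == (a * x0 + b) % p:          # did x0 land on the minimum?
        hits += 1

# Exact min-wise independence would give probability 1/|X| = 0.05; linear
# permutations achieve this only approximately, as the paper shows.
print(f"P(min at x0) ~= {hits / trials:.4f} (target {1 / len(X):.4f})")
```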
|
On the Probabilistic Degrees of Symmetric Boolean Functions. The probabilistic degree of a Boolean function f: {0,1}^n → {0,1} is defined to be the smallest d such that there is a random polynomial P of degree at most d that agrees with f at each point with high probability. Introduced by Razborov (1987), upper and lower bounds on probabilistic degrees of Boolean functions, specifically symmetric Boolean functions, have been used to prove explicit lower bounds, design pseudorandom generators, and devise algorithms for combinatorial problems. In this paper, we characterize the probabilistic degrees of all symmetric Boolean functions up to polylogarithmic factors over all fields of fixed characteristic (positive or zero).
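To make the notion concrete, here is the classical construction behind it, Razborov's probabilistic polynomial for OR over F_2, run as a Monte Carlo sanity check. The construction is standard; the test harness and parameters are ours.

```python
import random

# Razborov's probabilistic polynomial for OR over F_2: pick t random subsets
# S_1..S_t of the variables; P(x) = 1 - prod_j (1 + sum_{i in S_j} x_i) mod 2
# has degree t, is exact on x = 0, and errs on any fixed x != 0 with
# probability at most 2^(-t).
def random_or_poly(n, t, rng):
    subsets = [[i for i in range(n) if rng.random() < 0.5] for _ in range(t)]
    def P(x):
        prod = 1
        for S in subsets:
            prod &= 1 ^ (sum(x[i] for i in S) & 1)   # arithmetic mod 2
        return 1 ^ prod
    return P

rng = random.Random(1)
n, t = 40, 8
x = [0] * n
x[7] = 1                                   # OR(x) = 1
errs = sum(random_or_poly(n, t, rng)(x) != 1 for _ in range(2000))
print(f"empirical error {errs / 2000:.4f}, bound 2^-{t} = {2**-t:.4f}")
```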
|
Human-Robot Interaction Through Fingertip Haptic Devices for Cooperative Manipulation Tasks Teleoperation of multi-robot systems, e.g. dual manipulators, in cooperative manipulation tasks requires haptic feedback of multi-contact interaction forces. Classical haptic devices restrict the workspace of the human operator and provide only one contact point. An alternative solution is to enable the operator to command the robot system via freehand motions, which extends the workspace of the human. In such a setting, multi-contact haptic feedback may be provided to the human through multiple wearable haptic devices, e.g. fingertip devices that display forces on the human fingertips. In this paper we evaluate the benefit of using wearable haptic fingertip devices to interact with a bimanual robot setup in a pick-and-place manipulation task. We show that haptic feedback through wearable devices improves task performance compared to the baseline condition of no haptic feedback. Therefore, wearable haptic devices are a promising interface for guidance of multi-robot manipulation systems.
|
An assisted telemanipulation approach: combining autonomous grasp planning with haptic cues This paper presents an assisted telemanipulation approach with integrated grasp planning. It also studies how human teleoperation performance benefits from the incorporated visual and haptic cues while manipulating objects in cluttered environments. The developed system combines the widely used master-slave teleoperation with our previous model-free and learning-free grasping algorithm by means of a dynamic grasp re-ranking strategy and a semi-autonomous reach-to-grasp trajectory guidance. The proposed re-ranking metric helps in dynamically updating the stable grasps based on the current state of the slave device. The trajectory guidance system assists in maintaining a smooth trajectory by controlling the haptic forces. A virtual pose controller has been integrated with the guidance scheme to automatically correct the end-effector orientation while reaching towards the grasp. Various experiments are conducted evaluating the proposed method using a six degrees of freedom (DoF) haptic master and a seven DoF slave robot. Results obtained with these tests, along with the results gathered from the human-factor trials, demonstrate the efficiency of our method in terms of objective metrics of task completion and subjective metrics of user experience.
|
On the Utility of the Inverse Gamma Distribution in Modeling Composite Fading Channels We introduce a general approach to characterize composite fading models based on inverse gamma (IG) shadowing. We first determine to what extent the IG distribution is an adequate choice for modeling shadow fading, by means of a comprehensive test with field measurements and other distributions conventionally used for this purpose. Then, we prove that the probability density function and cumulative distribution function of any IG-based composite fading model are directly expressed in terms of a Laplace-domain statistic of the underlying fast fading model and, in some relevant cases, as a mixture of well-known state-of-the-art distributions. We exemplify our approach by presenting a composite IG two-wave with diffuse power fading model, whose statistical characterization is directly attained in a simple form.
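As a quick illustration of IG-based shadowing (a minimal Monte Carlo sketch under our own assumed parameters, not the paper's analytical derivation), one can modulate a Rayleigh multipath envelope by an inverse-gamma-distributed mean power:

import numpy as np
from scipy.stats import invgamma, rayleigh

# Rayleigh multipath whose mean power is shadowed by an inverse gamma variate.
# The shape/scale values below are purely illustrative, not fitted values.
n = 100_000
alpha, beta = 3.0, 2.0                               # assumed IG shape and scale
shadow = invgamma.rvs(alpha, scale=beta, size=n, random_state=1)
envelope = rayleigh.rvs(size=n, random_state=2) * np.sqrt(shadow)
power = envelope ** 2                                # composite received power
print("mean composite power:", power.mean())         # ~ E[shadow] * E[rayleigh^2]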
|
Human-Robot Interaction Through Fingertip Haptic Devices for Cooperative Manipulation Tasks Teleoperation of multi-robot systems, e.g. dual manipulators, in cooperative manipulation tasks requires haptic feedback of multi-contact interaction forces. Classical haptic devices restrict the workspace of the human operator and provide only one contact point. An alternative solution is to enable the operator to command the robot system via freehand motions, which extends the workspace of the human. In such a setting, multi-contact haptic feedback may be provided to the human through multiple wearable haptic devices, e.g. fingertip devices that display forces on the human fingertips. In this paper we evaluate the benefit of using wearable haptic fingertip devices to interact with a bimanual robot setup in a pick-and-place manipulation task. We show that haptic feedback through wearable devices improves task performance compared to the baseline condition of no haptic feedback. Therefore, wearable haptic devices are a promising interface for guidance of multi-robot manipulation systems.
|
An assisted telemanipulation approach: combining autonomous grasp planning with haptic cues This paper presents an assisted telemanipulation approach with integrated grasp planning. It also studies how human teleoperation performance benefits from the incorporated visual and haptic cues while manipulating objects in cluttered environments. The developed system combines the widely used master-slave teleoperation with our previous model-free and learning-free grasping algorithm by means of a dynamic grasp re-ranking strategy and a semi-autonomous reach-to-grasp trajectory guidance. The proposed re-ranking metric helps in dynamically updating the stable grasps based on the current state of the slave device. The trajectory guidance system assists in maintaining a smooth trajectory by controlling the haptic forces. A virtual pose controller has been integrated with the guidance scheme to automatically correct the end-effector orientation while reaching towards the grasp. Various experiments are conducted evaluating the proposed method using a six degrees of freedom (DoF) haptic master and a seven DoF slave robot. Results obtained with these tests, along with the results gathered from the human-factor trials, demonstrate the efficiency of our method in terms of objective metrics of task completion and subjective metrics of user experience.
|
Classifying unavoidable Tverberg partitions Let $T(d,r) = (r-1)(d+1)+1$ be the parameter in Tverberg's theorem, and call a partition $\mathcal{I}$ of $\{1,2,\ldots,T(d,r)\}$ into $r$ parts a Tverberg type. We say that $\mathcal{I}$ occurs in an ordered point sequence $P$ if $P$ contains a subsequence $P'$ of $T(d,r)$ points such that the partition of $P'$ that is order-isomorphic to $\mathcal{I}$ is a Tverberg partition. We say that $\mathcal{I}$ is unavoidable if it occurs in every sufficiently long point sequence. In this paper we study the problem of determining which Tverberg types are unavoidable. We conjecture a complete characterization of the unavoidable Tverberg types, and we prove some cases of our conjecture for $d \le 4$. Along the way, we study the avoidability of many other geometric predicates. Our techniques also yield a large family of $T(d,r)$-point sets for which the number of Tverberg partitions is exactly $(r-1)!^d$. This lends further support to Sierksma's conjecture on the number of Tverberg partitions.
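As a worked instance of the parameter (our example, not taken from the paper): for points in the plane partitioned into three parts, $d = 2$ and $r = 3$ give

\[
T(2,3) = (3-1)(2+1) + 1 = 7,
\]

so every sequence of 7 points in $\mathbb{R}^2$ admits a Tverberg partition into 3 parts whose convex hulls share a common point, and the exact count above specializes to $(r-1)!^d = (2!)^2 = 4$.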
|
Reverse Prevention Sampling for Misinformation Mitigation in Social Networks. In this work, we consider misinformation propagating through a social network and study the problem of its prevention. In this problem, a bad campaign starts propagating from a set of seed nodes in the network, and we use the notion of a limiting (or good) campaign to counteract the effect of misinformation. The goal is to identify a set of $k$ users that need to be convinced to adopt the limiting campaign so as to minimize the number of people that adopt the bad campaign at the end of both propagation processes. This work presents RPS (Reverse Prevention Sampling), an algorithm that provides a scalable solution to the misinformation mitigation problem. Our theoretical analysis shows that RPS runs in $O\left((k+l)(n+m)\frac{1}{1+\gamma}\log n \cdot \epsilon^{-2}\right)$ expected time and returns a $(1 - 1/e - \epsilon)$-approximate solution with probability at least $1 - n^{-l}$ (where $\gamma$ is a typically small network parameter and $l$ is a confidence parameter). The time complexity of RPS substantially improves upon the previously best-known algorithms, which run in time $\Omega(mnk \cdot \mathrm{POLY}(\epsilon^{-1}))$. We experimentally evaluate RPS on large datasets and show that it outperforms the state-of-the-art solution by several orders of magnitude in terms of running time. This demonstrates that misinformation mitigation can be made practical while still offering strong theoretical guarantees.
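To convey the flavor of reverse sampling, here is a heavily simplified, generic reverse-reachable-set sketch with greedy coverage under the independent cascade model; the interaction with the bad campaign is omitted and all names and parameters are our simplifications, not the RPS algorithm itself.

import random

def reverse_reachable_set(graph_rev, prob, root, rng):
    """Sample one RR set: nodes that reach `root` when each incoming
    edge is kept independently with probability `prob` (IC model)."""
    rr, stack = {root}, [root]
    while stack:
        v = stack.pop()
        for u in graph_rev.get(v, ()):  # graph_rev[v] = in-neighbors of v
            if u not in rr and rng.random() < prob:
                rr.add(u)
                stack.append(u)
    return rr

def greedy_seed_selection(graph_rev, nodes, k, prob=0.1, samples=5000, seed=0):
    rng = random.Random(seed)
    rr_sets = [reverse_reachable_set(graph_rev, prob, rng.choice(nodes), rng)
               for _ in range(samples)]
    seeds, covered = set(), set()
    for _ in range(k):
        # Pick the node covering the most not-yet-covered RR sets.
        best = max(nodes, key=lambda v: sum(1 for i, rr in enumerate(rr_sets)
                                            if i not in covered and v in rr))
        seeds.add(best)
        covered |= {i for i, rr in enumerate(rr_sets) if best in rr}
    return seeds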
|
Latency-bounded Target Set Selection in Signed Networks It is well-documented that social networks play a considerable role in information spreading. The dynamic processes governing the diffusion of information have been studied in many fields, including epidemiology, sociology, economics, and computer science. A widely studied problem in the area of viral marketing is target set selection: in order to market a new product, hoping it will be adopted by a large fraction of individuals in the network, which set of individuals should we "target" (for instance, by offering them free samples of the product)? In this paper, we introduce a diffusion model in which some of the neighbors of a node have a negative influence on that node, namely, they induce the node to reject the feature that is supposed to be spread. We study the target set selection problem within this model, first proving a strong inapproximability result that holds even when the diffusion process is required to reach all the nodes in a couple of rounds. Then, we consider a set of restrictions under which the problem is approximable to some extent.
|
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of an over-speed alert, a back camera, rear-obstacle detection, and timely maintenance is a well-known cause of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in different phases among passengers, drivers, and even conductors to ensure a useful and successful result. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
|
Reverse Prevention Sampling for Misinformation Mitigation in Social Networks. In this work, we consider misinformation propagating through a social network and study the problem of its prevention. In this problem, a bad campaign starts propagating from a set of seed nodes in the network, and we use the notion of a limiting (or good) campaign to counteract the effect of misinformation. The goal is to identify a set of $k$ users that need to be convinced to adopt the limiting campaign so as to minimize the number of people that adopt the bad campaign at the end of both propagation processes. This work presents RPS (Reverse Prevention Sampling), an algorithm that provides a scalable solution to the misinformation mitigation problem. Our theoretical analysis shows that RPS runs in $O\left((k+l)(n+m)\frac{1}{1+\gamma}\log n \cdot \epsilon^{-2}\right)$ expected time and returns a $(1 - 1/e - \epsilon)$-approximate solution with probability at least $1 - n^{-l}$ (where $\gamma$ is a typically small network parameter and $l$ is a confidence parameter). The time complexity of RPS substantially improves upon the previously best-known algorithms, which run in time $\Omega(mnk \cdot \mathrm{POLY}(\epsilon^{-1}))$. We experimentally evaluate RPS on large datasets and show that it outperforms the state-of-the-art solution by several orders of magnitude in terms of running time. This demonstrates that misinformation mitigation can be made practical while still offering strong theoretical guarantees.
|
Latency-bounded Target Set Selection in Signed Networks It is well-documented that social networks play a considerable role in information spreading. The dynamic processes governing the diffusion of information have been studied in many fields, including epidemiology, sociology, economics, and computer science. A widely studied problem in the area of viral marketing is target set selection: in order to market a new product, hoping it will be adopted by a large fraction of individuals in the network, which set of individuals should we "target" (for instance, by offering them free samples of the product)? In this paper, we introduce a diffusion model in which some of the neighbors of a node have a negative influence on that node, namely, they induce the node to reject the feature that is supposed to be spread. We study the target set selection problem within this model, first proving a strong inapproximability result that holds even when the diffusion process is required to reach all the nodes in a couple of rounds. Then, we consider a set of restrictions under which the problem is approximable to some extent.
|
Development of a foldable five-finger robotic hand for assisting in laparoscopic surgery This study aims to develop a robotic hand that can be inserted into the body through a small incision and can handle large organs in laparoscopic surgery. We determined the requirements for the proposed hand based on a surgeon's motions in hand-assisted laparoscopic surgery (HALS). We identified four basic motions: "grasp," "pinch," "exclusion," and "spread." The proposed hand has the degrees of freedom (DoFs) necessary for performing these movements, five fingers, as in a human hand, and a palm that can be folded into a bellows when the surgeon inserts the hand into the abdominal cavity. We evaluated the proposed robot hand in a performance test and confirmed that it can be inserted through a 20 mm incision and grasp simulated organs.
|
Reverse Prevention Sampling for Misinformation Mitigation in Social Networks. In this work, we consider misinformation propagating through a social network and study the problem of its prevention. In this problem, a bad campaign starts propagating from a set of seed nodes in the network, and we use the notion of a limiting (or good) campaign to counteract the effect of misinformation. The goal is to identify a set of $k$ users that need to be convinced to adopt the limiting campaign so as to minimize the number of people that adopt the bad campaign at the end of both propagation processes. This work presents RPS (Reverse Prevention Sampling), an algorithm that provides a scalable solution to the misinformation mitigation problem. Our theoretical analysis shows that RPS runs in $O\left((k+l)(n+m)\frac{1}{1+\gamma}\log n \cdot \epsilon^{-2}\right)$ expected time and returns a $(1 - 1/e - \epsilon)$-approximate solution with probability at least $1 - n^{-l}$ (where $\gamma$ is a typically small network parameter and $l$ is a confidence parameter). The time complexity of RPS substantially improves upon the previously best-known algorithms, which run in time $\Omega(mnk \cdot \mathrm{POLY}(\epsilon^{-1}))$. We experimentally evaluate RPS on large datasets and show that it outperforms the state-of-the-art solution by several orders of magnitude in terms of running time. This demonstrates that misinformation mitigation can be made practical while still offering strong theoretical guarantees.
|
Latency-bounded Target Set Selection in Signed Networks It is well-documented that social networks play a considerable role in information spreading. The dynamic processes governing the diffusion of information have been studied in many fields, including epidemiology, sociology, economics, and computer science. A widely studied problem in the area of viral marketing is target set selection: in order to market a new product, hoping it will be adopted by a large fraction of individuals in the network, which set of individuals should we "target" (for instance, by offering them free samples of the product)? In this paper, we introduce a diffusion model in which some of the neighbors of a node have a negative influence on that node, namely, they induce the node to reject the feature that is supposed to be spread. We study the target set selection problem within this model, first proving a strong inapproximability result that holds even when the diffusion process is required to reach all the nodes in a couple of rounds. Then, we consider a set of restrictions under which the problem is approximable to some extent.
|
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation to gain competitive advantage, or position themselves strategically to be the first to access the spawned power-ups.
|
Reverse Prevention Sampling for Misinformation Mitigation in Social Networks. In this work, we consider misinformation propagating through a social network and study the problem of its prevention. In this problem, a bad campaign starts propagating from a set of seed nodes in the network, and we use the notion of a limiting (or good) campaign to counteract the effect of misinformation. The goal is to identify a set of $k$ users that need to be convinced to adopt the limiting campaign so as to minimize the number of people that adopt the bad campaign at the end of both propagation processes. This work presents RPS (Reverse Prevention Sampling), an algorithm that provides a scalable solution to the misinformation mitigation problem. Our theoretical analysis shows that RPS runs in $O\left((k+l)(n+m)\frac{1}{1+\gamma}\log n \cdot \epsilon^{-2}\right)$ expected time and returns a $(1 - 1/e - \epsilon)$-approximate solution with probability at least $1 - n^{-l}$ (where $\gamma$ is a typically small network parameter and $l$ is a confidence parameter). The time complexity of RPS substantially improves upon the previously best-known algorithms, which run in time $\Omega(mnk \cdot \mathrm{POLY}(\epsilon^{-1}))$. We experimentally evaluate RPS on large datasets and show that it outperforms the state-of-the-art solution by several orders of magnitude in terms of running time. This demonstrates that misinformation mitigation can be made practical while still offering strong theoretical guarantees.
|
Latency-bounded Target Set Selection in Signed Networks It is well-documented that social networks play a considerable role in information spreading. The dynamic processes governing the diffusion of information have been studied in many fields, including epidemiology, sociology, economics, and computer science. A widely studied problem in the area of viral marketing is target set selection: in order to market a new product, hoping it will be adopted by a large fraction of individuals in the network, which set of individuals should we "target" (for instance, by offering them free samples of the product)? In this paper, we introduce a diffusion model in which some of the neighbors of a node have a negative influence on that node, namely, they induce the node to reject the feature that is supposed to be spread. We study the target set selection problem within this model, first proving a strong inapproximability result that holds even when the diffusion process is required to reach all the nodes in a couple of rounds. Then, we consider a set of restrictions under which the problem is approximable to some extent.
|
Design and Kinematic Analysis of a Robotic Maxillofacial Surgery System Due to the complex anatomical structure of the skull base and the lateral deep facial region, it is difficult in current surgery to avoid important anatomical structures, and postoperative complications are common. In this paper, a master-slave surgical robot system was designed to address this problem. A slave manipulator with 5 DOF was proposed, which has a universal interface to mount three kinds of puncture surgical end effectors. Finally, a dexterity calculation and a kinematic simulation analysis based on the condition number of the Jacobian matrix were performed, and the results indicate that this robot has stable motion performance and can meet the demands of puncture surgery.
|
Reverse Prevention Sampling for Misinformation Mitigation in Social Networks. In this work, we consider misinformation propagating through a social network and study the problem of its prevention. In this problem, a bad campaign starts propagating from a set of seed nodes in the network, and we use the notion of a limiting (or good) campaign to counteract the effect of misinformation. The goal is to identify a set of $k$ users that need to be convinced to adopt the limiting campaign so as to minimize the number of people that adopt the bad campaign at the end of both propagation processes. This work presents RPS (Reverse Prevention Sampling), an algorithm that provides a scalable solution to the misinformation mitigation problem. Our theoretical analysis shows that RPS runs in $O\left((k+l)(n+m)\frac{1}{1+\gamma}\log n \cdot \epsilon^{-2}\right)$ expected time and returns a $(1 - 1/e - \epsilon)$-approximate solution with probability at least $1 - n^{-l}$ (where $\gamma$ is a typically small network parameter and $l$ is a confidence parameter). The time complexity of RPS substantially improves upon the previously best-known algorithms, which run in time $\Omega(mnk \cdot \mathrm{POLY}(\epsilon^{-1}))$. We experimentally evaluate RPS on large datasets and show that it outperforms the state-of-the-art solution by several orders of magnitude in terms of running time. This demonstrates that misinformation mitigation can be made practical while still offering strong theoretical guarantees.
|
Latency-bounded Target Set Selection in Signed Networks It is well-documented that social networks play a considerable role in information spreading. The dynamic processes governing the diffusion of information have been studied in many fields, including epidemiology, sociology, economics, and computer science. A widely studied problem in the area of viral marketing is target set selection: in order to market a new product, hoping it will be adopted by a large fraction of individuals in the network, which set of individuals should we "target" (for instance, by offering them free samples of the product)? In this paper, we introduce a diffusion model in which some of the neighbors of a node have a negative influence on that node, namely, they induce the node to reject the feature that is supposed to be spread. We study the target set selection problem within this model, first proving a strong inapproximability result that holds even when the diffusion process is required to reach all the nodes in a couple of rounds. Then, we consider a set of restrictions under which the problem is approximable to some extent.
|
Analysis of Charging Continuous Energy System and Stable Current Collection for Pantograph and Catenary of Pure Electric LHD Aiming at the problem of the limited power battery capacity of pure electric Load-Haul-Dump (LHD) machines, a method of charging and supplying sufficient power through a pantograph-catenary current collection system is proposed, which avoids the poor flexibility and mobility of towed-cable electric LHDs. In this paper, we introduce the research and application status of the pantograph and catenary, describe the latest methods and techniques for studying the dynamics of pantograph-catenary systems, elaborate and analyze the various methods and technologies, and outline the important indicators for analyzing and evaluating the stability of current collection in pantograph-catenary systems. We also introduce various control strategies for the pantograph-catenary system. Finally, the application of the pantograph-catenary system in high-speed railways and urban electric buses is discussed to illustrate the advantages of pantograph-catenary charging and energy supply, and these advantages are applied to pure electric LHD charging and energy supply to ensure power adequacy.
|
A Case for Dynamically Programmable Storage Background Tasks Modern storage infrastructures feature long and complicated I/O paths composed of several layers, each employing its own optimizations to serve varied applications with fluctuating requirements. However, as these layers do not have global infrastructure visibility, they are unable to optimally tune their behavior to achieve maximum performance. Background storage tasks, in particular, can rapidly overload shared resources, but are executed either periodically or whenever a certain threshold is hit, regardless of the overall load on the system. In this paper, we argue that to achieve optimal holistic performance, these tasks should be dynamically programmable and handled by a controller with global visibility. To support this argument, we evaluate the performance impact of compaction and checkpointing in the context of HBase and PostgreSQL. We find that these tasks can increase 99th percentile latencies by 955.2% and 61.9%, respectively. We also identify future research directions for achieving programmable background tasks.
|
Automating Distributed Tiered Storage Management in Cluster Computing Data-intensive platforms such as Hadoop and Spark are routinely used to process massive amounts of data residing on distributed file systems like HDFS. Increasing memory sizes and new hardware technologies (e.g., NVRAM, SSDs) have recently led to the introduction of storage tiering in such settings. However, users are now burdened with the additional complexity of managing the multiple storage tiers and the data residing on them while trying to optimize their workloads. In this paper, we develop a general framework for automatically moving data across the available storage tiers in distributed file systems. Moreover, we employ machine learning for tracking and predicting file access patterns, which we use to decide when and which data to move up or down the storage tiers for increasing system performance. Our approach uses incremental learning to dynamically refine the models with new file accesses, allowing them to naturally adjust and adapt to workload changes over time. Our extensive evaluation using realistic workloads derived from Facebook and CMU traces compares our approach with several other policies and showcases significant benefits in terms of both workload performance and cluster efficiency.
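A minimal sketch of the incremental-learning idea (the toy features, threshold, and model choice are our assumptions, not the paper's design): refine an online classifier with each observed access and promote files whose predicted near-term access probability is high.

import numpy as np
from sklearn.linear_model import SGDClassifier

# Online logistic model; partial_fit lets us refine it one observation at a time.
clf = SGDClassifier(loss="log_loss", random_state=0)

def featurize(file_stats):
    # Assumed features: seconds since last access, access count, log file size.
    return np.array([[file_stats["recency_s"],
                      file_stats["access_count"],
                      np.log1p(file_stats["size_bytes"])]])

def observe(file_stats, accessed_soon):
    """Refine the model online with one labeled access observation."""
    clf.partial_fit(featurize(file_stats), [int(accessed_soon)], classes=[0, 1])

def should_promote(file_stats, threshold=0.7):
    """Promote a file to a faster tier if its predicted access probability
    exceeds the threshold (call observe() at least once first)."""
    return clf.predict_proba(featurize(file_stats))[0, 1] >= threshold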
|
Influence Factors of Consumer Satisfaction Based on SEM Structural equation modeling (SEM) can be used to calculate the influence factors of consumer satisfaction. Based on SEM, the authors use a questionnaire survey to identify the main factors that influence consumers when purchasing household appliances. The structural equation model is used to evaluate consumer satisfaction. From the results, we can see that consumers are most concerned about brand image and least concerned about perceived value. Besides quality, production companies should strengthen the cultivation of their brands, put more focus on the R&D process, and may raise prices appropriately when pricing.
|
A Case for Dynamically Programmable Storage Background Tasks Modern storage infrastructures feature long and complicated I/O paths composed of several layers, each employing its own optimizations to serve varied applications with fluctuating requirements. However, as these layers do not have global infrastructure visibility, they are unable to optimally tune their behavior to achieve maximum performance. Background storage tasks, in particular, can rapidly overload shared resources, but are executed either periodically or whenever a certain threshold is hit, regardless of the overall load on the system. In this paper, we argue that to achieve optimal holistic performance, these tasks should be dynamically programmable and handled by a controller with global visibility. To support this argument, we evaluate the performance impact of compaction and checkpointing in the context of HBase and PostgreSQL. We find that these tasks can increase 99th percentile latencies by 955.2% and 61.9%, respectively. We also identify future research directions for achieving programmable background tasks.
|
Automating Distributed Tiered Storage Management in Cluster Computing Data-intensive platforms such as Hadoop and Spark are routinely used to process massive amounts of data residing on distributed file systems like HDFS. Increasing memory sizes and new hardware technologies (e.g., NVRAM, SSDs) have recently led to the introduction of storage tiering in such settings. However, users are now burdened with the additional complexity of managing the multiple storage tiers and the data residing on them while trying to optimize their workloads. In this paper, we develop a general framework for automatically moving data across the available storage tiers in distributed file systems. Moreover, we employ machine learning for tracking and predicting file access patterns, which we use to decide when and which data to move up or down the storage tiers for increasing system performance. Our approach uses incremental learning to dynamically refine the models with new file accesses, allowing them to naturally adjust and adapt to workload changes over time. Our extensive evaluation using realistic workloads derived from Facebook and CMU traces compares our approach with several other policies and showcases significant benefits in terms of both workload performance and cluster efficiency.
|
Erkundung und Erforschung. Alexander von Humboldts Amerikareise (Exploration and Research: Alexander von Humboldt's American Journey) In a similar way as Adalbert Stifter's narrator in the novel "Late Summer" ("Nachsommer"), A. v. Humboldt combined exploration with research, and fondness for travelling with striving for findings, during his travel through South America. Humboldt clearly indicated his double aim: to report on the visited countries and to collect facts in order to improve physical geography. The treatise consists of five sections: object, route, methods, results, evaluation.
|
A Case for Dynamically Programmable Storage Background Tasks Modern storage infrastructures feature long and complicated I/O paths composed of several layers, each employing its own optimizations to serve varied applications with fluctuating requirements. However, as these layers do not have global infrastructure visibility, they are unable to optimally tune their behavior to achieve maximum performance. Background storage tasks, in particular, can rapidly overload shared resources, but are executed either periodically or whenever a certain threshold is hit, regardless of the overall load on the system. In this paper, we argue that to achieve optimal holistic performance, these tasks should be dynamically programmable and handled by a controller with global visibility. To support this argument, we evaluate the performance impact of compaction and checkpointing in the context of HBase and PostgreSQL. We find that these tasks can increase 99th percentile latencies by 955.2% and 61.9%, respectively. We also identify future research directions for achieving programmable background tasks.
|
Automating Distributed Tiered Storage Management in Cluster Computing Data-intensive platforms such as Hadoop and Spark are routinely used to process massive amounts of data residing on distributed file systems like HDFS. Increasing memory sizes and new hardware technologies (e.g., NVRAM, SSDs) have recently led to the introduction of storage tiering in such settings. However, users are now burdened with the additional complexity of managing the multiple storage tiers and the data residing on them while trying to optimize their workloads. In this paper, we develop a general framework for automatically moving data across the available storage tiers in distributed file systems. Moreover, we employ machine learning for tracking and predicting file access patterns, which we use to decide when and which data to move up or down the storage tiers for increasing system performance. Our approach uses incremental learning to dynamically refine the models with new file accesses, allowing them to naturally adjust and adapt to workload changes over time. Our extensive evaluation using realistic workloads derived from Facebook and CMU traces compares our approach with several other policies and showcases significant benefits in terms of both workload performance and cluster efficiency.
|
An Empirical Study of Application of Multimodal Approach to Teaching Reading in EFL in Senior High School English Reading skill is one of the most important skills for senior high school students who learn English as a foreign language (EFL). However, the present EFL teaching method is still teacher-centered, which neglects students' learning interest and their participation in the teaching process. In the 1990s, Western scholars proposed multimodal theory, which suggests that semiotic resources (sound, images, video, animation, motion, color, facial expressions, etc.) can be used to stimulate different senses of students so as to improve their learning efficiency. The present study applies the multimodal approach to EFL reading teaching in senior high school and tries to find out whether multimodal teaching can stimulate students' interest in English reading and improve their reading proficiency. In this study, with students of a high school in Anhui, China as the research subjects, an English reading teaching experiment was carried out. The analysis of data collected from reading tests and questionnaires indicates that the application of the multimodal teaching approach in high school EFL reading teaching can stimulate students' interest in English reading and improve students' English reading proficiency, and that most students take a positive attitude towards the multimodal teaching approach.
|
A Case for Dynamically Programmable Storage Background Tasks Modern storage infrastructures feature long and complicated I/O paths composed of several layers, each employing its own optimizations to serve varied applications with fluctuating requirements. However, as these layers do not have global infrastructure visibility, they are unable to optimally tune their behavior to achieve maximum performance. Background storage tasks, in particular, can rapidly overload shared resources, but are executed either periodically or whenever a certain threshold is hit, regardless of the overall load on the system. In this paper, we argue that to achieve optimal holistic performance, these tasks should be dynamically programmable and handled by a controller with global visibility. To support this argument, we evaluate the performance impact of compaction and checkpointing in the context of HBase and PostgreSQL. We find that these tasks can increase 99th percentile latencies by 955.2% and 61.9%, respectively. We also identify future research directions for achieving programmable background tasks.
|
Automating Distributed Tiered Storage Management in Cluster Computing Data-intensive platforms such as Hadoop and Spark are routinely used to process massive amounts of data residing on distributed file systems like HDFS. Increasing memory sizes and new hardware technologies (e.g., NVRAM, SSDs) have recently led to the introduction of storage tiering in such settings. However, users are now burdened with the additional complexity of managing the multiple storage tiers and the data residing on them while trying to optimize their workloads. In this paper, we develop a general framework for automatically moving data across the available storage tiers in distributed file systems. Moreover, we employ machine learning for tracking and predicting file access patterns, which we use to decide when and which data to move up or down the storage tiers for increasing system performance. Our approach uses incremental learning to dynamically refine the models with new file accesses, allowing them to naturally adjust and adapt to workload changes over time. Our extensive evaluation using realistic workloads derived from Facebook and CMU traces compares our approach with several other policies and showcases significant benefits in terms of both workload performance and cluster efficiency.
|
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of an over-speed alert, a back camera, rear-obstacle detection, and timely maintenance is a well-known cause of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in different phases among passengers, drivers, and even conductors to ensure a useful and successful result. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
|
A Case for Dynamically Programmable Storage Background Tasks Modern storage infrastructures feature long and complicated I/O paths composed of several layers, each employing its own optimizations to serve varied applications with fluctuating requirements. However, as these layers do not have global infrastructure visibility, they are unable to optimally tune their behavior to achieve maximum performance. Background storage tasks, in particular, can rapidly overload shared resources, but are executed either periodically or whenever a certain threshold is hit, regardless of the overall load on the system. In this paper, we argue that to achieve optimal holistic performance, these tasks should be dynamically programmable and handled by a controller with global visibility. To support this argument, we evaluate the performance impact of compaction and checkpointing in the context of HBase and PostgreSQL. We find that these tasks can increase 99th percentile latencies by 955.2% and 61.9%, respectively. We also identify future research directions for achieving programmable background tasks.
|
Automating Distributed Tiered Storage Management in Cluster Computing Data-intensive platforms such as Hadoop and Spark are routinely used to process massive amounts of data residing on distributed file systems like HDFS. Increasing memory sizes and new hardware technologies (e.g., NVRAM, SSDs) have recently led to the introduction of storage tiering in such settings. However, users are now burdened with the additional complexity of managing the multiple storage tiers and the data residing on them while trying to optimize their workloads. In this paper, we develop a general framework for automatically moving data across the available storage tiers in distributed file systems. Moreover, we employ machine learning for tracking and predicting file access patterns, which we use to decide when and which data to move up or down the storage tiers for increasing system performance. Our approach uses incremental learning to dynamically refine the models with new file accesses, allowing them to naturally adjust and adapt to workload changes over time. Our extensive evaluation using realistic workloads derived from Facebook and CMU traces compares our approach with several other policies and showcases significant benefits in terms of both workload performance and cluster efficiency.
|
A Study on Electric Scooters for the Elderly by Applying Fuzzy Theory This research is based on fuzzy comprehensive evaluation and compiles a fuzzy rule table for designers, in order to address ride smoothness in the product design process of electric scooters for the elderly. Step 1: Use a questionnaire survey to understand the factors considered by designers when designing an electric scooter for the elderly. Step 2: Establish a hierarchical analysis and derive the factor weight set for electric scooter design. Step 3: Establish a fuzzy hierarchical analysis and sum up the evaluation result set, based on the designer's experience. Step 4: Comprehensively consider the influence of all factors and obtain the judgment result. Step 5: List the fuzzy rules as an application method to improve the traditional design of electric scooters for the elderly. This study found that travel speed has the greatest influence (24.98%) among the factors affecting smoothness.
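Step 2's hierarchical analysis typically derives the factor weight set from a pairwise comparison matrix via its principal eigenvector; the sketch below illustrates that computation with an arbitrary illustrative matrix (the paper's survey data are not reproduced here).

import numpy as np

# Reciprocal pairwise comparison matrix for three factors (illustrative values).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: the normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()
print({f"factor_{i}": round(x, 4) for i, x in enumerate(w)})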
|
Experimental Study of Visual Corona under Aeronautic Pressure Conditions Using Low-Cost Imaging Sensors Visual corona tests have been broadly applied for identifying the critical corona points of diverse high-voltage devices, although other approaches based on partial discharge or radio interference voltage measurements are also widely applied to detect corona activity. Nevertheless, these two techniques must be applied in screened laboratories, which are scarce and expensive, require sophisticated instrumentation, and typically do not allow location of the discharge points. This paper describes the detection of the visual corona and the location of the critical corona points of a sphere-plane gap configuration under different pressure conditions ranging from 100 to 20 kPa, covering the pressures typically found in aeronautic environments. The corona detection is performed with a low-cost CMOS imaging sensor covering both the visible and ultraviolet (UV) spectra, which allows detection of the discharge points and their locations, thus significantly reducing the complexity and cost of the required instrumentation while preserving the sensitivity and accuracy of the measurements. The approach proposed in this paper can be applied in aerospace applications to prevent the arc tracking phenomenon, which can lead to catastrophic consequences since there is no clear protection solution, due to the low levels of leakage current involved in the pre-arc phenomenon.
|
A SiPM-Based Trinal Spectral Sensor Developed for Detecting Hazardous Discharges in High-Voltage Switchgear Electric discharges seriously threaten the safety of high-voltage switchgear. In this paper, a spectrum-based optical method is proposed for hazardous discharge monitoring. A SiPM-based trinal spectral sensor is developed with good performance in terms of sensitivity, defect resolution, and risk evaluation. Experiments carried out on two types of artificial discharges (i.e., partial discharge and arc discharge) demonstrate that the light intensities coupled into the three spectral bands account for different proportions and that the ratio among the three components generally changes in a regular way as the severity of the discharge increases. Typical spectral ratio values are then acquired for hazard rating of discharges and recognition of discharge types with high confidence.
|
Application of Information Technology to Promote the Practice of Medical Education Teaching Purpose: To ensure that clinical teachers and students are able to participate in teaching activities at any time and in any place. Methods: Clinical medical teaching must attach importance to medical information resources and curriculum structure, and focus on support-system construction, using information technology to transform the traditional classroom teaching mode through the full integration of high-quality medical teaching resources and the construction of a three-dimensional teaching model. Results: The results show that the reasonable use of information technology plays a positive role in clinical medicine teaching; clinical medicine is constantly advancing with the times. Conclusion: Online education supports students' autonomous learning and a variety of learning modes in a networked learning environment. It helps to break through the limitations of traditional education and expand the function of the classroom teaching mode.
|
Experimental Study of Visual Corona under Aeronautic Pressure Conditions Using Low-Cost Imaging Sensors Visual corona tests have been broadly applied for identifying the critical corona points of diverse high-voltage devices, although other approaches based on partial discharge or radio interference voltage measurements are also widely applied to detect corona activity. Nevertheless, these two techniques must be applied in screened laboratories, which are scarce and expensive, require sophisticated instrumentation, and typically do not allow location of the discharge points. This paper describes the detection of the visual corona and the location of the critical corona points of a sphere-plane gap configuration under different pressure conditions ranging from 100 to 20 kPa, covering the pressures typically found in aeronautic environments. The corona detection is performed with a low-cost CMOS imaging sensor covering both the visible and ultraviolet (UV) spectra, which allows detection of the discharge points and their locations, thus significantly reducing the complexity and cost of the required instrumentation while preserving the sensitivity and accuracy of the measurements. The approach proposed in this paper can be applied in aerospace applications to prevent the arc tracking phenomenon, which can lead to catastrophic consequences since there is no clear protection solution, due to the low levels of leakage current involved in the pre-arc phenomenon.
|
A SiPM-Based Trinal Spectral Sensor Developed for Detecting Hazardous Discharges in High-Voltage Switchgear Electric discharges seriously threaten the safety of high-voltage switchgear. In this paper, a spectrum-based optical method is proposed for hazardous discharge monitoring. A SiPM-based trinal spectral sensor is developed with good performance in terms of sensitivity, defect resolution, and risk evaluation. Experiments carried out on two types of artificial discharges (i.e., partial discharge and arc discharge) demonstrate that the light intensities coupled into the three spectral bands account for different proportions and that the ratio among the three components generally changes in a regular way as the severity of the discharge increases. Typical spectral ratio values are then acquired for hazard rating of discharges and recognition of discharge types with high confidence.
|
Design of the Ideological and Political Education Platform for College Students Based on the Mobile App The mobile Internet has brought abundant teaching resources to the ideological and political education of college students. While it enhances the effectiveness of ideological and political education, it also poses challenges to college students' world outlook and mental health. Colleges and universities should make full use of the advantages of the mobile Internet and enhance the pertinence and effectiveness of college students' ideological and political education by building a new platform for it, mastering the characteristics of mobile Internet transmission, strengthening cultural construction, and strengthening the supervision and management of the mobile Internet.
|
Experimental Study of Visual Corona under Aeronautic Pressure Conditions Using Low-Cost Imaging Sensors Visual corona tests have been broadly applied for identifying the critical corona points of diverse high-voltage devices, although other approaches based on partial discharge or radio interference voltage measurements are also widely applied to detect corona activity. Nevertheless, these two techniques must be applied in screened laboratories, which are scarce and expensive, require sophisticated instrumentation, and typically do not allow location of the discharge points. This paper describes the detection of the visual corona and the location of the critical corona points of a sphere-plane gap configuration under different pressure conditions ranging from 100 to 20 kPa, covering the pressures typically found in aeronautic environments. The corona detection is performed with a low-cost CMOS imaging sensor covering both the visible and ultraviolet (UV) spectra, which allows detection of the discharge points and their locations, thus significantly reducing the complexity and cost of the required instrumentation while preserving the sensitivity and accuracy of the measurements. The approach proposed in this paper can be applied in aerospace applications to prevent the arc tracking phenomenon, which can lead to catastrophic consequences since there is no clear protection solution, due to the low levels of leakage current involved in the pre-arc phenomenon.
|
A SiPM-Based Trinal Spectral Sensor Developed for Detecting Hazardous Discharges in High-Voltage Switchgear Electric discharges seriously threaten the safety of high-voltage switchgear. In this paper, a spectrum-based optical method is proposed for hazardous discharge monitoring. A SiPM-based trinal spectral sensor is developed with good performance in terms of sensitivity, defect resolution, and risk evaluation. Experiments carried out on two types of artificial discharges (i.e., partial discharge and arc discharge) demonstrate that the light intensities coupled into the three spectral bands account for different proportions and that the ratio among the three components generally changes in a regular way as the severity of the discharge increases. Typical spectral ratio values are then acquired for hazard rating of discharges and recognition of discharge types with high confidence.
|
Unmanned agricultural product sales system The invention relates to the field of agricultural product sales, provides an unmanned agricultural product sales system, and aims to solve the problem of agricultural product waste caused by the fact that most farmers can currently only prepare goods according to guesswork and experience when selling agricultural products. The unmanned agricultural product sales system comprises an acquisition module for acquiring selection information of customers; a storage module which prestores a vegetable preparation scheme; a matching module which is used for matching a corresponding side dish scheme from the storage module according to the selection information of the client; a pushing module which is used for pushing the matched side dish scheme back to the client, wherein the acquisition module is also used for acquiring confirmation information of the client; an order module which is used for generating order information according to the confirmation information of the client, wherein the pushing module is used for pushing the order information to the client and the seller, and the acquisition module is also used for acquiring the delivery information of the seller; and a logistics tracking module which is used for tracking the delivery information to obtain logistics information, wherein the pushing module is used for pushing the logistics information to the client. The scheme is used for sales in unmanned agricultural product shops.
|
Experimental Study of Visual Corona under Aeronautic Pressure Conditions Using Low-Cost Imaging Sensors Visual corona tests have been broadly applied for identifying the critical corona points of diverse high-voltage devices, although other approaches based on partial discharge or radio interference voltage measurements are also widely applied to detect corona activity. Nevertheless, these two techniques must be applied in screened laboratories, which are scarce and expensive, require sophisticated instrumentation, and typically do not allow location of the discharge points. This paper describes the detection of the visual corona and the location of the critical corona points of a sphere-plane gap configuration under different pressure conditions ranging from 100 to 20 kPa, covering the pressures typically found in aeronautic environments. The corona detection is performed with a low-cost CMOS imaging sensor covering both the visible and ultraviolet (UV) spectra, which allows detection of the discharge points and their locations, thus significantly reducing the complexity and cost of the required instrumentation while preserving the sensitivity and accuracy of the measurements. The approach proposed in this paper can be applied in aerospace applications to prevent the arc tracking phenomenon, which can lead to catastrophic consequences since there is no clear protection solution, due to the low levels of leakage current involved in the pre-arc phenomenon.
|
A SiPM-Based Trinal Spectral Sensor Developed for Detecting Hazardous Discharges in High-Voltage Switchgear Electric discharges seriously threaten the safety of high-voltage switchgear. In this paper, a spectrum-based optical method is proposed for hazardous discharge monitoring. A SiPM-based trinal spectral sensor is developed with good performance in terms of sensitivity, defect resolution, and risk evaluation. Experiments carried out on two types of artificial discharges (i.e., partial discharge and arc discharge) demonstrate that the light intensities coupled into the three spectral bands account for different proportions and that the ratio among the three components generally changes in a regular way as the severity of the discharge increases. Typical spectral ratio values are then acquired for hazard rating of discharges and recognition of discharge types with high confidence.
|
Conjectured Statistics for the Higher $q,t$-Catalan Sequences This article describes conjectured combinatorial interpretations for the higher $q,t$-Catalan sequences introduced by Garsia and Haiman, which arise in the theory of symmetric functions and Macdonald polynomials. We define new combinatorial statistics generalizing those proposed by Haglund and Haiman for the original $q,t$-Catalan sequence. We prove explicit summation formulas, bijections, and recursions involving the new statistics. We show that specializations of the combinatorial sequences obtained by setting $t=1$ or $q=1$ or $t=1/q$ agree with the corresponding specializations of the Garsia-Haiman sequences. A third statistic occurs naturally in the combinatorial setting, leading to the introduction of $q,t,r$-Catalan sequences. Similar combinatorial results are proved for these trivariate sequences.
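For orientation, the Haglund statistics referenced above express the original $q,t$-Catalan sequence as a generating function over Dyck paths; we quote this standard formulation as background, not as the article's new statistics:

\[
C_n(q,t) \;=\; \sum_{\pi \in \mathcal{D}_n} q^{\mathrm{area}(\pi)}\, t^{\mathrm{bounce}(\pi)},
\]

where $\mathcal{D}_n$ is the set of Dyck paths of order $n$ and $\mathrm{area}$ and $\mathrm{bounce}$ are Haglund's path statistics.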
|
Experimental Study of Visual Corona under Aeronautic Pressure Conditions Using Low-Cost Imaging Sensors Visual corona tests have been broadly applied for identifying the critical corona points of diverse high-voltage devices, although other approaches based on partial discharge or radio interference voltage measurements are also widely applied to detect corona activity. Nevertheless, these two techniques must be applied in screened laboratories, which are scarce and expensive, require sophisticated instrumentation, and typically do not allow location of the discharge points. This paper describes the detection of the visual corona and the location of the critical corona points of a sphere-plane gap configuration under different pressure conditions ranging from 100 to 20 kPa, covering the pressures typically found in aeronautic environments. The corona detection is performed with a low-cost CMOS imaging sensor covering both the visible and ultraviolet (UV) spectra, which allows detection of the discharge points and their locations, thus significantly reducing the complexity and cost of the required instrumentation while preserving the sensitivity and accuracy of the measurements. The approach proposed in this paper can be applied in aerospace applications to prevent the arc tracking phenomenon, which can lead to catastrophic consequences since there is no clear protection solution, due to the low levels of leakage current involved in the pre-arc phenomenon.
|
A SiPM-Based Trinal Spectral Sensor Developed for Detecting Hazardous Discharges in High-Voltage Switchgear Electric discharges seriously threaten the safety of high-voltage switchgear. In this paper, a spectrum-based optical method is proposed for hazardous discharge monitoring. A SiPM-based trinal spectral sensor is developed with good performance in terms of sensitivity, defect resolution, and risk evaluation. Experiments carried out on two types of artificial discharges (i.e., partial discharge and arc discharge) demonstrate that the light intensities coupled into the three spectral bands account for different proportions and that the ratio among the three components generally changes in a regular way as the severity of the discharge increases. Typical spectral ratio values are then acquired for hazard rating of discharges and recognition of discharge types with high confidence.
|
The Complete Picture of the Twitter Social Graph In this work, we collected the entire Twitter social graph, which consists of 537 million Twitter accounts connected by 23.95 billion links, and performed a preliminary analysis of the collected data. In order to collect the social graph, we implemented a distributed crawler on the PlanetLab infrastructure that collected all information in 4 months. Our preliminary analysis already revealed some interesting properties. Whereas there are 537 million Twitter accounts, only 268 million have sent at least one tweet and no more than 54 million have been recently active. In addition, 40% of the accounts are not followed by anybody and 25% do not follow anybody. Finally, we found that Twitter policies, but also social conventions (like the follow-back convention), have a huge impact on the structure of the Twitter social graph.
|
Quality Assessment of Stereoscopic 360-degree Images from Multi-viewports Objective quality assessment of stereoscopic panoramic images has become a challenging problem owing to the rapid growth of 360-degree content. Compared with traditional 2D image quality assessment (IQA), more complex aspects are involved in 3D omnidirectional IQA, especially the unlimited field of view (FoV) and the extra depth perception, which makes it difficult to evaluate the quality of experience (QoE) of 3D omnidirectional images. In this paper, we propose a multi-viewport, full-reference stereo 360 IQA model. Because viewports change freely when browsing in a head-mounted display, our approach processes the image inside the FoV rather than a projected one such as the equirectangular projection (ERP). In addition, since the overall QoE depends on both image quality and depth perception, we utilize features estimated from the difference map between the left and right views, which reflects disparity. These depth perception features, along with binocular image qualities, are employed to predict the overall QoE of 3D 360 images. Experimental results on our public Stereoscopic OmnidirectionaL Image quality assessment Database (SOLID) show that the proposed method achieves a significant improvement over some well-known IQA metrics and can accurately reflect the overall QoE of perceived images.
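A minimal sketch of the multi-viewport pipeline follows; it is not the paper's model: PSNR stands in for the binocular quality term, a mean absolute left-right difference stands in for the disparity feature, and the pooling weights are placeholders.

```python
# Sketch only; stand-ins throughout: PSNR for binocular quality, mean absolute
# left-right difference for the disparity feature, fixed pooling weights.
import numpy as np

def psnr(ref: np.ndarray, dist: np.ndarray) -> float:
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    return 99.0 if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def score_stereo_360(viewports_ref, viewports_dist) -> float:
    """Each argument: list of (left, right) uint8 images, one pair per viewport."""
    quality, depth = [], []
    for (lr, rr), (ld, rd) in zip(viewports_ref, viewports_dist):
        quality.append(0.5 * (psnr(lr, ld) + psnr(rr, rd)))
        # The left-right difference map carries disparity structure; measure
        # how well the distorted pair preserves it.
        d_ref = np.abs(lr.astype(float) - rr.astype(float))
        d_dst = np.abs(ld.astype(float) - rd.astype(float))
        depth.append(-float(np.mean(np.abs(d_ref - d_dst))))
    # 0.8/0.2 weighting is an arbitrary placeholder for the learned combination.
    return 0.8 * float(np.mean(quality)) + 0.2 * float(np.mean(depth))
```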
|
Quality Assessment for Omnidirectional Video with Consideration of Temporal Distortion Variations Omnidirectional video, also known as 360-degree video, offers an immersive visual experience by allowing viewers to look in all directions within a scene. Quality assessment for omnidirectional video remains a difficult task compared to 2D video. Since temporal changes of spatial distortions can considerably influence human visual perception, this paper proposes a full-reference objective video quality assessment metric that considers both the spatial characteristics of omnidirectional video and the temporal variation of distortions across frames. First, we construct a spatio-temporal quality assessment unit to evaluate the average distortion in the temporal dimension at the eye-fixation level. The smoothed distortion value is then consolidated with the characteristics of the temporal variations. Afterwards, a global quality score for the whole video sequence is produced by pooling. Finally, our experimental results show that the proposed VQA method improves on the prediction performance of existing VQA methods for omnidirectional video.
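The temporal side of such a metric can be sketched as below (an illustrative reduction, not the paper's method: the smoothing window, the variation penalty, and mean pooling are all assumptions):

```python
# Sketch of the temporal consolidation only: smooth per-frame distortion at an
# assumed fixation scale, penalize its temporal variation, then pool globally.
# Window size and lambda are assumptions.
import numpy as np

def pool_video_quality(frame_distortion: np.ndarray,
                       window: int = 8, lam: float = 0.5) -> float:
    """frame_distortion: 1-D array, one value per frame (higher = worse)."""
    smoothed = np.convolve(frame_distortion, np.ones(window) / window,
                           mode="valid")                   # fixation-level unit
    variation = np.abs(np.diff(smoothed))                  # distortion changes
    return float(np.mean(smoothed[1:] + lam * variation))  # global pooling
```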
|
It's time to rethink DDoS protection When you think of distributed denial of service (DDoS) attacks, chances are you conjure up an image of an overwhelming flood of traffic that incapacitates a network. This kind of cyber attack is all about overt, brute force used to take a target down. Some hackers are a little smarter, using DDoS as a distraction while they simultaneously attempt a more targeted strike, as was the case with a Carphone Warehouse hack in 2015.[1] But in general, DDoS isn't subtle. Retailers are having to rethink how they approach DDoS protection following the rise of a stealthier incarnation of the threat. There has been a significant increase in small-scale DDoS attacks and a corresponding reduction in conventional large-scale events. The hacker's aim is to remain below the conventional 'detect and alert' threshold that could trigger a DDoS mitigation strategy. Roy Reynolds of Vodat International explains the nature of the threat and the steps organisations can take to protect themselves.
|
Quality Assessment of Stereoscopic 360-degree Images from Multi-viewports Objective quality assessment of stereoscopic panoramic images has become a challenging problem owing to the rapid growth of 360-degree content. Compared with traditional 2D image quality assessment (IQA), more complex aspects are involved in 3D omnidirectional IQA, especially the unlimited field of view (FoV) and the extra depth perception, which makes it difficult to evaluate the quality of experience (QoE) of 3D omnidirectional images. In this paper, we propose a multi-viewport, full-reference stereo 360 IQA model. Because viewports change freely when browsing in a head-mounted display, our approach processes the image inside the FoV rather than a projected one such as the equirectangular projection (ERP). In addition, since the overall QoE depends on both image quality and depth perception, we utilize features estimated from the difference map between the left and right views, which reflects disparity. These depth perception features, along with binocular image qualities, are employed to predict the overall QoE of 3D 360 images. Experimental results on our public Stereoscopic OmnidirectionaL Image quality assessment Database (SOLID) show that the proposed method achieves a significant improvement over some well-known IQA metrics and can accurately reflect the overall QoE of perceived images.
|
Quality Assessment for Omnidirectional Video with Consideration of Temporal Distortion Variations Omnidirectional video, also known as 360-degree video, offers an immersive visual experience by allowing viewers to look in all directions within a scene. Quality assessment for omnidirectional video remains a difficult task compared to 2D video. Since temporal changes of spatial distortions can considerably influence human visual perception, this paper proposes a full-reference objective video quality assessment metric that considers both the spatial characteristics of omnidirectional video and the temporal variation of distortions across frames. First, we construct a spatio-temporal quality assessment unit to evaluate the average distortion in the temporal dimension at the eye-fixation level. The smoothed distortion value is then consolidated with the characteristics of the temporal variations. Afterwards, a global quality score for the whole video sequence is produced by pooling. Finally, our experimental results show that the proposed VQA method improves on the prediction performance of existing VQA methods for omnidirectional video.
|
A Critical Look at the 2019 College Admissions Scandal Discusses the 2019 college admissions scandal. Let me begin with a disclaimer: I am making no legal excuses for the participants in the current scandal. I am only offering contextual background that places it in the broader academic, cultural, and political perspective required for understanding. It is only the most recent installment of a well-worn narrative: the controlling elite make their own rules and live by them, if they can get away with it. Unfortunately, some of the participants, who are either serving or facing jail time, didn't know not to go into a gunfight with a sharp stick. Money alone is not enough to avoid prosecution for fraud: you need political clout. The best protection a defendant can have is a prosecutor who fears political reprisal. Compare how the Koch brothers escaped prosecution for stealing millions of oil dollars from Native American tribes[1,2] with the fate of actresses Lori Loughlin and Felicity Huffman, who, at the time of this writing, face jail time for paying bribes to get their children into good universities.[3,4] In the former case, the federal prosecutor who dared to empanel a grand jury to get at the truth was fired for cause, which put a quick end to the prosecution. In the latter case, the prosecutors pushed for jail terms and public admonishment with the zeal of Oliver Cromwell. There you have it: stealing oil from Native Americans versus trying to bribe your kids into a great university. Where is the greater crime? Admittedly, these actresses and their
|
Quality Assessment of Stereoscopic 360-degree Images from Multi-viewports Objective quality assessment of stereoscopic panoramic images has become a challenging problem owing to the rapid growth of 360-degree content. Compared with traditional 2D image quality assessment (IQA), more complex aspects are involved in 3D omnidirectional IQA, especially the unlimited field of view (FoV) and the extra depth perception, which makes it difficult to evaluate the quality of experience (QoE) of 3D omnidirectional images. In this paper, we propose a multi-viewport, full-reference stereo 360 IQA model. Because viewports change freely when browsing in a head-mounted display, our approach processes the image inside the FoV rather than a projected one such as the equirectangular projection (ERP). In addition, since the overall QoE depends on both image quality and depth perception, we utilize features estimated from the difference map between the left and right views, which reflects disparity. These depth perception features, along with binocular image qualities, are employed to predict the overall QoE of 3D 360 images. Experimental results on our public Stereoscopic OmnidirectionaL Image quality assessment Database (SOLID) show that the proposed method achieves a significant improvement over some well-known IQA metrics and can accurately reflect the overall QoE of perceived images.
|
Quality Assessment for Omnidirectional Video with Consideration of Temporal Distortion Variations Omnidirectional video, also known as 360-degree video, offers an immersive visual experience by allowing viewers to look in all directions within a scene. Quality assessment for omnidirectional video remains a difficult task compared to 2D video. Since temporal changes of spatial distortions can considerably influence human visual perception, this paper proposes a full-reference objective video quality assessment metric that considers both the spatial characteristics of omnidirectional video and the temporal variation of distortions across frames. First, we construct a spatio-temporal quality assessment unit to evaluate the average distortion in the temporal dimension at the eye-fixation level. The smoothed distortion value is then consolidated with the characteristics of the temporal variations. Afterwards, a global quality score for the whole video sequence is produced by pooling. Finally, our experimental results show that the proposed VQA method improves on the prediction performance of existing VQA methods for omnidirectional video.
|
Death Ground Death Ground is a competitive musical installation/game for two players. The work is designed to provide a framework in which the player-participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps, and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions, and elevation to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first.
|
Quality Assessment of Stereoscopic 360-degree Images from Multi-viewports Objective quality assessment of stereoscopic panoramic images has become a challenging problem owing to the rapid growth of 360-degree content. Compared with traditional 2D image quality assessment (IQA), more complex aspects are involved in 3D omnidirectional IQA, especially the unlimited field of view (FoV) and the extra depth perception, which makes it difficult to evaluate the quality of experience (QoE) of 3D omnidirectional images. In this paper, we propose a multi-viewport, full-reference stereo 360 IQA model. Because viewports change freely when browsing in a head-mounted display, our approach processes the image inside the FoV rather than a projected one such as the equirectangular projection (ERP). In addition, since the overall QoE depends on both image quality and depth perception, we utilize features estimated from the difference map between the left and right views, which reflects disparity. These depth perception features, along with binocular image qualities, are employed to predict the overall QoE of 3D 360 images. Experimental results on our public Stereoscopic OmnidirectionaL Image quality assessment Database (SOLID) show that the proposed method achieves a significant improvement over some well-known IQA metrics and can accurately reflect the overall QoE of perceived images.
|
Quality Assessment for Omnidirectional Video with Consideration of Temporal Distortion Variations Omnidirectional video, also known as 360-degree video, offers an immersive visual experience by allowing viewers to look in all directions within a scene. Quality assessment for omnidirectional video remains a difficult task compared to 2D video. Since temporal changes of spatial distortions can considerably influence human visual perception, this paper proposes a full-reference objective video quality assessment metric that considers both the spatial characteristics of omnidirectional video and the temporal variation of distortions across frames. First, we construct a spatio-temporal quality assessment unit to evaluate the average distortion in the temporal dimension at the eye-fixation level. The smoothed distortion value is then consolidated with the characteristics of the temporal variations. Afterwards, a global quality score for the whole video sequence is produced by pooling. Finally, our experimental results show that the proposed VQA method improves on the prediction performance of existing VQA methods for omnidirectional video.
|
Improvement of Two-swarm Cooperative Particle Swarm Optimization Using Immune Algorithms and Swarm Clustering Particle Swarm Optimization (PSO) is useful for solving optimization problems with continuous variables because its solution search converges quickly. PSO is an evolutionary computation method in which individuals (particles) with position and velocity information are placed in the search space and act to find an optimal solution while sharing information with other particles. This study constructs a particle swarm optimization method that introduces immune algorithms to improve the search capability of each particle and perform the solution search more efficiently. To verify the usefulness of the proposed method, some numerical experiments are performed.
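A toy sketch of PSO with an immune-inspired diversity step is given below; the specific immune operators (cloning the global best with decaying mutation and replacing the worst particles) and all constants are assumptions, not the paper's design.

```python
# Toy PSO with an immune-inspired step: clones of the best particle, perturbed
# by mutation, replace the worst particles each iteration to keep diversity.
import numpy as np
rng = np.random.default_rng(0)

def sphere(x):            # benchmark objective to minimize
    return float(np.sum(x ** 2))

def pso_immune(dim=5, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([sphere(p) for p in x])
    g = pbest[np.argmin(pval)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = x + v
        val = np.array([sphere(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pval)].copy()
        # Immune step: mutated clones of the global best replace the worst.
        worst = np.argsort(pval)[-3:]
        x[worst] = g + rng.normal(0, 0.5, (3, dim))
    return g, float(np.min(pval))

print(pso_immune())
```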
|
Quality Assessment of Stereoscopic 360-degree Images from Multi-viewports Objective quality assessment of stereoscopic panoramic images has become a challenging problem owing to the rapid growth of 360-degree content. Compared with traditional 2D image quality assessment (IQA), more complex aspects are involved in 3D omnidirectional IQA, especially the unlimited field of view (FoV) and the extra depth perception, which makes it difficult to evaluate the quality of experience (QoE) of 3D omnidirectional images. In this paper, we propose a multi-viewport, full-reference stereo 360 IQA model. Because viewports change freely when browsing in a head-mounted display, our approach processes the image inside the FoV rather than a projected one such as the equirectangular projection (ERP). In addition, since the overall QoE depends on both image quality and depth perception, we utilize features estimated from the difference map between the left and right views, which reflects disparity. These depth perception features, along with binocular image qualities, are employed to predict the overall QoE of 3D 360 images. Experimental results on our public Stereoscopic OmnidirectionaL Image quality assessment Database (SOLID) show that the proposed method achieves a significant improvement over some well-known IQA metrics and can accurately reflect the overall QoE of perceived images.
|
Quality Assessment for Omnidirectional Video with Consideration of Temporal Distortion Variations Omnidirectional video, also known as 360-degree video, offers an immersive visual experience by allowing viewers to look in all directions within a scene. Quality assessment for omnidirectional video remains a difficult task compared to 2D video. Since temporal changes of spatial distortions can considerably influence human visual perception, this paper proposes a full-reference objective video quality assessment metric that considers both the spatial characteristics of omnidirectional video and the temporal variation of distortions across frames. First, we construct a spatio-temporal quality assessment unit to evaluate the average distortion in the temporal dimension at the eye-fixation level. The smoothed distortion value is then consolidated with the characteristics of the temporal variations. Afterwards, a global quality score for the whole video sequence is produced by pooling. Finally, our experimental results show that the proposed VQA method improves on the prediction performance of existing VQA methods for omnidirectional video.
|
Shifted Set Families, Degree Sequences, and Plethysm We study, in three parts, degree sequences of k-families (or k-uniform hypergraphs) and shifted k-families. • The first part collects for the first time in one place various implications such as Threshold ⇒ Uniquely Realizable ⇒ Degree-Maximal ⇒ Shifted, which are equivalent concepts for 2-families (= simple graphs), but strict implications for k-families with k ≥ 3. The implication that uniquely realizable implies degree-maximal seems to be new. • The second part recalls Merris and Roby's reformulation of the characterization, due to Ruch and Gutman, of graphical degree sequences and shifted 2-families. It then introduces two generalizations which characterize shifted k-families. • The third part recalls the connection between degree sequences of k-families of size m and the plethysm of elementary symmetric functions e_m[e_k]. It then uses highest weight theory to explain how shifted k-families provide the top part of these plethysm expansions, along with offering a conjecture about a further relation.
|
Modelling distributed decision-making in Command and Control using stochastic network synchronisation We advance a mathematical representation of Command and Control as distributed decision-makers, using the Kuramoto model of networked phase oscillators. The phase represents a continuous Perception-Action cycle of agents at each network node; the network, the formal and informal communications of human agents and information artefacts; the coupling, the strength of relationships between agents; the native frequencies, the individual decision speeds of agents when isolated; and the stochasticity, temporal noisiness in intrinsic agent behaviour. Skewed heavy-tailed noise captures that agents may randomly 'jump' forward (rather than backwards) in their decision state under time stress; there is considerable evidence from organisational science that experienced decision-makers behave in this way in critical situations. We present a use-case for the model using data for military headquarters staff tasked to drive a twenty-four-hour 'battle rhythm'. This serves to illustrate how such a mathematical model may be used realistically. We draw on a previous case-study where headquarters' networks were mapped for routine business and crisis scenarios to provide advice to a military sponsor. We tune the model using the first data set to match observations that staff performed synchronously under such conditions. Testing the impact of the crisis scenario using the corresponding network and heavy-tailed stochasticity, we find an increased probability of decision incoherence due to the high information demand of some agents in this case. This demonstrates the utility of the model for identifying risks in headquarters design and potential points of change. We compare with qualitative organisational theories to initially validate the model.
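A minimal Euler-Maruyama sketch of the model class described above; the Pareto choice for the one-sided heavy-tailed jumps, the fully connected network, and all parameters are illustrative assumptions:

```python
# Networked Kuramoto model with rare forward-only heavy-tailed jumps.
import numpy as np
rng = np.random.default_rng(1)

def simulate(A, omega, K=1.0, dt=0.01, steps=5000, jump_p=0.001):
    n = len(omega)
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        # d(theta_i) = omega_i + K * sum_j A_ij sin(theta_j - theta_i)
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling)
        jumps = rng.random(n) < jump_p          # rare forward "jumps"
        theta[jumps] += rng.pareto(2.0, jumps.sum()) * 0.1
    # Order parameter r in [0, 1]: 1 = fully synchronized decision cycle.
    return float(np.abs(np.exp(1j * theta).mean()))

A = np.ones((8, 8)) - np.eye(8)                 # fully connected staff network
print(simulate(A, omega=rng.normal(1.0, 0.1, 8)))
```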
|
Plasticity in Collective Decision-Making for Robots: Creating Global Reference Frames, Detecting Dynamic Environments, and Preventing Lock-ins Swarm robots operate as autonomous agents, and a swarm as a whole becomes autonomous through its capability for collective decision-making. Despite intensive research on models of collective decision-making, implementation in multi-robot systems remains challenging. Here, we advance the state of the art by introducing more plasticity into the decision-making process and by increasing the scenario difficulty. Most studies on large-scale multi-robot decision-making are limited to one instance of an iterated exploration-dissemination phase followed by successful and permanent convergence. We investigate a dynamic environment that requires constant collective monitoring of option qualities. Once a significant change in qualities is detected by the swarm, it has to collectively reconsider its previous decision accordingly. This is only possible by preventing lock-ins, a global consensus state of no return (i.e., a dominant majority of robots prevents the swarm from switching to another, possibly better option). In addition, we introduce a scenario of increased difficulty, as the robots must locate themselves to assess the quality of an option. Using local communication, swarm robots propagate hop-count information throughout the swarm to form a global reference frame. We successfully validate our implementation in many swarm robot experiments concerning robustness to disruptions of the reference frame, scalability, and adaptivity to a dynamic environment.
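The hop-count propagation can be sketched as a shortest-hop flood from a seed robot; modelling it as breadth-first search over a neighbour graph is a simplification of the asynchronous message passing the robots actually perform:

```python
# Each robot keeps the minimum hop distance to a seed robot, relayed over
# local communication links; here as synchronous BFS on a neighbour graph.
from collections import deque

def hop_counts(neighbors: dict, seed: int) -> dict:
    """neighbors[r] lists robots within r's local communication range."""
    hops = {seed: 0}
    frontier = deque([seed])
    while frontier:
        r = frontier.popleft()
        for nb in neighbors[r]:
            if nb not in hops:                # first message wins = min hops
                hops[nb] = hops[r] + 1
                frontier.append(nb)
    return hops

# Several seeds would give each robot a coordinate tuple (trilateration-style);
# a single seed yields one axis of the reference frame.
print(hop_counts({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, seed=0))
```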
|
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of an overspeed alert, a rear camera, rear-obstacle detection, and timely maintenance is behind many fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were done in different phases among passengers, drivers, and even conductors to obtain useful results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
|
Modelling distributed decision-making in Command and Control using stochastic network synchronisation We advance a mathematical representation of Command and Control as distributed decision-makers, using the Kuramoto model of networked phase oscillators. The phase represents a continuous Perception-Action cycle of agents at each network node; the network, the formal and informal communications of human agents and information artefacts; the coupling, the strength of relationships between agents; the native frequencies, the individual decision speeds of agents when isolated; and the stochasticity, temporal noisiness in intrinsic agent behaviour. Skewed heavy-tailed noise captures that agents may randomly 'jump' forward (rather than backwards) in their decision state under time stress; there is considerable evidence from organisational science that experienced decision-makers behave in this way in critical situations. We present a use-case for the model using data for military headquarters staff tasked to drive a twenty-four-hour 'battle rhythm'. This serves to illustrate how such a mathematical model may be used realistically. We draw on a previous case-study where headquarters' networks were mapped for routine business and crisis scenarios to provide advice to a military sponsor. We tune the model using the first data set to match observations that staff performed synchronously under such conditions. Testing the impact of the crisis scenario using the corresponding network and heavy-tailed stochasticity, we find an increased probability of decision incoherence due to the high information demand of some agents in this case. This demonstrates the utility of the model for identifying risks in headquarters design and potential points of change. We compare with qualitative organisational theories to initially validate the model.
|
Plasticity in Collective Decision-Making for Robots: Creating Global Reference Frames, Detecting Dynamic Environments, and Preventing Lock-ins Swarm robots operate as autonomous agents, and a swarm as a whole becomes autonomous through its capability for collective decision-making. Despite intensive research on models of collective decision-making, implementation in multi-robot systems remains challenging. Here, we advance the state of the art by introducing more plasticity into the decision-making process and by increasing the scenario difficulty. Most studies on large-scale multi-robot decision-making are limited to one instance of an iterated exploration-dissemination phase followed by successful and permanent convergence. We investigate a dynamic environment that requires constant collective monitoring of option qualities. Once a significant change in qualities is detected by the swarm, it has to collectively reconsider its previous decision accordingly. This is only possible by preventing lock-ins, a global consensus state of no return (i.e., a dominant majority of robots prevents the swarm from switching to another, possibly better option). In addition, we introduce a scenario of increased difficulty, as the robots must locate themselves to assess the quality of an option. Using local communication, swarm robots propagate hop-count information throughout the swarm to form a global reference frame. We successfully validate our implementation in many swarm robot experiments concerning robustness to disruptions of the reference frame, scalability, and adaptivity to a dynamic environment.
|
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well. Gravity divided by an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the non-mediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found.
|
Modelling distributed decision-making in Command and Control using stochastic network synchronisation We advance a mathematical representation of Command and Control as distributed decision-makers, using the Kuramoto model of networked phase oscillators. The phase represents a continuous Perception-Action cycle of agents at each network node; the network, the formal and informal communications of human agents and information artefacts; the coupling, the strength of relationships between agents; the native frequencies, the individual decision speeds of agents when isolated; and the stochasticity, temporal noisiness in intrinsic agent behaviour. Skewed heavy-tailed noise captures that agents may randomly 'jump' forward (rather than backwards) in their decision state under time stress; there is considerable evidence from organisational science that experienced decision-makers behave in this way in critical situations. We present a use-case for the model using data for military headquarters staff tasked to drive a twenty-four-hour 'battle rhythm'. This serves to illustrate how such a mathematical model may be used realistically. We draw on a previous case-study where headquarters' networks were mapped for routine business and crisis scenarios to provide advice to a military sponsor. We tune the model using the first data set to match observations that staff performed synchronously under such conditions. Testing the impact of the crisis scenario using the corresponding network and heavy-tailed stochasticity, we find an increased probability of decision incoherence due to the high information demand of some agents in this case. This demonstrates the utility of the model for identifying risks in headquarters design and potential points of change. We compare with qualitative organisational theories to initially validate the model.
|
Plasticity in Collective Decision-Making for Robots: Creating Global Reference Frames, Detecting Dynamic Environments, and Preventing Lock-ins Swarm robots operate as autonomous agents, and a swarm as a whole becomes autonomous through its capability for collective decision-making. Despite intensive research on models of collective decision-making, implementation in multi-robot systems remains challenging. Here, we advance the state of the art by introducing more plasticity into the decision-making process and by increasing the scenario difficulty. Most studies on large-scale multi-robot decision-making are limited to one instance of an iterated exploration-dissemination phase followed by successful and permanent convergence. We investigate a dynamic environment that requires constant collective monitoring of option qualities. Once a significant change in qualities is detected by the swarm, it has to collectively reconsider its previous decision accordingly. This is only possible by preventing lock-ins, a global consensus state of no return (i.e., a dominant majority of robots prevents the swarm from switching to another, possibly better option). In addition, we introduce a scenario of increased difficulty, as the robots must locate themselves to assess the quality of an option. Using local communication, swarm robots propagate hop-count information throughout the swarm to form a global reference frame. We successfully validate our implementation in many swarm robot experiments concerning robustness to disruptions of the reference frame, scalability, and adaptivity to a dynamic environment.
|
General Data Protection Regulation in Health Clinics The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of the personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals by establishing rules for the processing of personal data. The GDPR considers a special category of personal data, health data, which is regarded as sensitive and subject to special conditions regarding treatment and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its application in Portuguese health clinics. The results of the present study are discussed in light of the literature, and future work is identified.
|
Modelling distributed decision-making in Command and Control using stochastic network synchronisation We advance a mathematical representation of Command and Control as distributed decision-makers, using the Kuramoto model of networked phase oscillators. The phase represents a continuous Perception-Action cycle of agents at each network node; the network, the formal and informal communications of human agents and information artefacts; the coupling, the strength of relationships between agents; the native frequencies, the individual decision speeds of agents when isolated; and the stochasticity, temporal noisiness in intrinsic agent behaviour. Skewed heavy-tailed noise captures that agents may randomly 'jump' forward (rather than backwards) in their decision state under time stress; there is considerable evidence from organisational science that experienced decision-makers behave in this way in critical situations. We present a use-case for the model using data for military headquarters staff tasked to drive a twenty-four-hour 'battle rhythm'. This serves to illustrate how such a mathematical model may be used realistically. We draw on a previous case-study where headquarters' networks were mapped for routine business and crisis scenarios to provide advice to a military sponsor. We tune the model using the first data set to match observations that staff performed synchronously under such conditions. Testing the impact of the crisis scenario using the corresponding network and heavy-tailed stochasticity, we find an increased probability of decision incoherence due to the high information demand of some agents in this case. This demonstrates the utility of the model for identifying risks in headquarters design and potential points of change. We compare with qualitative organisational theories to initially validate the model.
|
Plasticity in Collective Decision-Making for Robots: Creating Global Reference Frames, Detecting Dynamic Environments, and Preventing Lock-ins Swarm robots operate as autonomous agents, and a swarm as a whole becomes autonomous through its capability for collective decision-making. Despite intensive research on models of collective decision-making, implementation in multi-robot systems remains challenging. Here, we advance the state of the art by introducing more plasticity into the decision-making process and by increasing the scenario difficulty. Most studies on large-scale multi-robot decision-making are limited to one instance of an iterated exploration-dissemination phase followed by successful and permanent convergence. We investigate a dynamic environment that requires constant collective monitoring of option qualities. Once a significant change in qualities is detected by the swarm, it has to collectively reconsider its previous decision accordingly. This is only possible by preventing lock-ins, a global consensus state of no return (i.e., a dominant majority of robots prevents the swarm from switching to another, possibly better option). In addition, we introduce a scenario of increased difficulty, as the robots must locate themselves to assess the quality of an option. Using local communication, swarm robots propagate hop-count information throughout the swarm to form a global reference frame. We successfully validate our implementation in many swarm robot experiments concerning robustness to disruptions of the reference frame, scalability, and adaptivity to a dynamic environment.
|
Effects of Brownfield Remediation on Total Gaseous Mercury Concentrations in an Urban Landscape In order to obtain a better perspective on the impacts of brownfields on the land-atmosphere exchange of mercury in urban areas, total gaseous mercury (TGM) was measured at two heights (1.8 m and 42.7 m) before (2011-2012) and after (2015-2016) the remediation of a brownfield and the installation of a parking lot adjacent to the Syracuse Center of Excellence in Syracuse, NY, USA. Prior to brownfield remediation, the annual average TGM concentrations were 1.6 ± 0.6 and 1.4 ± 0.4 ng·m⁻³ at the ground and upper heights, respectively. After brownfield remediation, the annual average TGM concentrations decreased by 32% and 22% at the ground and the upper height, respectively. Mercury soil flux measurements during summer after remediation showed a net TGM deposition of 1.7 ng·m⁻²·day⁻¹, suggesting that the site transitioned from a mercury source to a net mercury sink. Measurements from the Atmospheric Mercury Network (AMNet) indicate that there was no regional decrease in TGM concentrations during the study period. This study demonstrates that evasion from mercury-contaminated soil significantly increased local TGM concentrations, which was subsequently mitigated after soil restoration. Considering the large number of brownfields, they may be an important source of mercury emissions to local urban ecosystems and warrant future study at additional locations.
|
Modelling distributed decision-making in Command and Control using stochastic network synchronisation We advance a mathematical representation of Command and Control as distributed decision-makers, using the Kuramoto model of networked phase oscillators. The phase represents a continuous Perception-Action cycle of agents at each network node; the network, the formal and informal communications of human agents and information artefacts; the coupling, the strength of relationships between agents; the native frequencies, the individual decision speeds of agents when isolated; and the stochasticity, temporal noisiness in intrinsic agent behaviour. Skewed heavy-tailed noise captures that agents may randomly 'jump' forward (rather than backwards) in their decision state under time stress; there is considerable evidence from organisational science that experienced decision-makers behave in this way in critical situations. We present a use-case for the model using data for military headquarters staff tasked to drive a twenty-four-hour 'battle rhythm'. This serves to illustrate how such a mathematical model may be used realistically. We draw on a previous case-study where headquarters' networks were mapped for routine business and crisis scenarios to provide advice to a military sponsor. We tune the model using the first data set to match observations that staff performed synchronously under such conditions. Testing the impact of the crisis scenario using the corresponding network and heavy-tailed stochasticity, we find an increased probability of decision incoherence due to the high information demand of some agents in this case. This demonstrates the utility of the model for identifying risks in headquarters design and potential points of change. We compare with qualitative organisational theories to initially validate the model.
|
Plasticity in Collective Decision-Making for Robots: Creating Global Reference Frames, Detecting Dynamic Environments, and Preventing Lock-ins Swarm robots operate as autonomous agents, and a swarm as a whole becomes autonomous through its capability for collective decision-making. Despite intensive research on models of collective decision-making, implementation in multi-robot systems remains challenging. Here, we advance the state of the art by introducing more plasticity into the decision-making process and by increasing the scenario difficulty. Most studies on large-scale multi-robot decision-making are limited to one instance of an iterated exploration-dissemination phase followed by successful and permanent convergence. We investigate a dynamic environment that requires constant collective monitoring of option qualities. Once a significant change in qualities is detected by the swarm, it has to collectively reconsider its previous decision accordingly. This is only possible by preventing lock-ins, a global consensus state of no return (i.e., a dominant majority of robots prevents the swarm from switching to another, possibly better option). In addition, we introduce a scenario of increased difficulty, as the robots must locate themselves to assess the quality of an option. Using local communication, swarm robots propagate hop-count information throughout the swarm to form a global reference frame. We successfully validate our implementation in many swarm robot experiments concerning robustness to disruptions of the reference frame, scalability, and adaptivity to a dynamic environment.
|
Symmetric Simplicial Pseudoline Arrangements A simplicial arrangement of pseudolines is a collection of topological lines in the projective plane where each region that is formed is triangular. This paper refines and develops David Eppstein's notion of a kaleidoscope construction for symmetric pseudoline arrangements to construct and analyze several infinite families of simplicial pseudoline arrangements with high degrees of geometric symmetry. In particular, all simplicial pseudoline arrangements with the symmetries of a regular k-gon and three symmetry classes of pseudolines, consisting of the mirrors of the k-gon and two other symmetry classes, plus sometimes the line at infinity, are classified, and other interesting families (with more symmetry classes of pseudolines) are discussed.
|
Towards GAN Benchmarks Which Require Generalization. For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic. We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be won by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation and implement an example black-box metric based on these ideas. Through experimental validation we show that it can effectively measure diversity, sample quality, and generalization.
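An illustrative reduction of the NND idea (not the paper's metric: a logistic-regression critic stands in for the neural network, and training accuracy above chance stands in for the divergence value):

```python
# Train a critic to separate real samples from model samples; how well it
# succeeds serves as the divergence estimate.
import numpy as np
rng = np.random.default_rng(2)

def nnd_score(real: np.ndarray, fake: np.ndarray, epochs=300, lr=0.1) -> float:
    X = np.vstack([real, fake])
    y = np.r_[np.ones(len(real)), np.zeros(len(fake))]
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):                      # plain batch gradient descent
        p = 1 / (1 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return float(2 * ((p > 0.5) == y).mean() - 1)  # 0 = indistinguishable

real = rng.normal(0, 1, (500, 16))
print(nnd_score(real, rng.normal(0, 1, (500, 16))))   # same distribution: ~0
print(nnd_score(real, rng.normal(1, 1, (500, 16))))   # shifted model: near 1
```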
|
A Style-Based Generator Architecture for Generative Adversarial Networks We propose an alternative generator architecture for generative adversarial networks, borrowing from the style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
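The core style mechanism can be sketched as adaptive instance normalization driven by a learned latent; the sketch below uses assumed shapes and randomly initialized affine maps, not the paper's trained network:

```python
# A learned style vector w rescales per-channel statistics of a feature map
# (AdaIN), while per-pixel noise adds stochastic variation.
import numpy as np
rng = np.random.default_rng(3)

def adain(x: np.ndarray, ys: np.ndarray, yb: np.ndarray) -> np.ndarray:
    """x: (C, H, W) feature map; ys, yb: per-channel scale/bias from w."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sd = x.std(axis=(1, 2), keepdims=True) + 1e-8
    return ys[:, None, None] * (x - mu) / sd + yb[:, None, None]

C, H, W = 8, 4, 4
w = rng.normal(size=16)                      # intermediate latent from mapping net
A = rng.normal(size=(2 * C, 16)) * 0.1       # learned affine: w -> (scale, bias)
ys, yb = (A @ w)[:C] + 1.0, (A @ w)[C:]
x = rng.normal(size=(C, H, W))
x = x + 0.05 * rng.normal(size=(C, H, W))    # scale-specific noise input
print(adain(x, ys, yb).shape)                # (8, 4, 4)
```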
|
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of an overspeed alert, a rear camera, rear-obstacle detection, and timely maintenance is behind many fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were done in different phases among passengers, drivers, and even conductors to obtain useful results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
|
Towards GAN Benchmarks Which Require Generalization. For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic. We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be won by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation and implement an example black-box metric based on these ideas. Through experimental validation we show that it can effectively measure diversity, sample quality, and generalization.
|
A Style-Based Generator Architecture for Generative Adversarial Networks We propose an alternative generator architecture for generative adversarial networks, borrowing from the style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
|
Development of a foldable five-finger robotic hand for assisting in laparoscopic surgery This study aims to develop a robotic hand that can be inserted into the body through a small incision and can handle large organs in laparoscopic surgery. We determined the requirements for the proposed hand based on a surgeon's motions in hand-assisted laparoscopic surgery (HALS). We identified four basic motions: "grasp," "pinch," "exclusion," and "spread." The proposed hand has the degrees of freedom (DoFs) necessary for performing these movements, five fingers, as in a human hand, and a palm that can be folded into a bellows when the surgeon inserts the hand into the abdominal cavity. We evaluated the proposed robotic hand in a performance test and confirmed that it can be inserted through a 20 mm incision and grasp simulated organs.
|
Towards GAN Benchmarks Which Require Generalization. For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic. We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be won by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation and implement an example black-box metric based on these ideas. Through experimental validation we show that it can effectively measure diversity, sample quality, and generalization.
|
A Style-Based Generator Architecture for Generative Adversarial Networks We propose an alternative generator architecture for generative adversarial networks, borrowing from the style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
|
Managing Information 'Managing Information' highlights the increasing value of information and IT within organizations and shows how organizations use it. It also deals with the crucial relationship between information and personal effectiveness. The use of computer software and communications in a management context is discussed in detail, including how to mould an information system to your needs. The book explains the basics using real-life examples and brings managers up to date with the latest developments in electronic commerce and the Internet. The book is based on the Management Charter Initiative's Occupational Standards for Management NVQs and SVQs at level 4. It is particularly suitable for managers on the Certificate in Management, or Part I of the Diploma, especially those accredited by the IM and BTEC.
|
Towards GAN Benchmarks Which Require Generalization. For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic. We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be won by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation and implement an example black-box metric based on these ideas. Through experimental validation we show that it can effectively measure diversity, sample quality, and generalization.
|
A Style-Based Generator Architecture for Generative Adversarial Networks We propose an alternative generator architecture for generative adversarial networks, borrowing from the style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
|
Development and Flight Experiments of a Bluff-bodied X4-Blimp The body of the X4-blimp with four propellers manufactured in previous research was a structure that arranged four envelopes, among which the buoyancy was equally divided, around a central gondola to which the propellers were attached. However, with this structure, variation in buoyancy arose among the four envelopes, and the body posture became unstable. In this research, we return to the starting point of arranging one envelope at the center of the body and develop a body with a fundamental non-streamlined structure, in which the number of envelopes is kept to a minimum and variation in buoyancy is avoided by attaching a special frame that can carry four propellers around the circumference of the envelope. The validity of the manufactured body is demonstrated through flight experiments.
|
Towards GAN Benchmarks Which Require Generalization. For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic. We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be won by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation and implement an example black-box metric based on these ideas. Through experimental validation we show that it can effectively measure diversity, sample quality, and generalization.
|
A Style-Based Generator Architecture for Generative Adversarial Networks We propose an alternative generator architecture for generative adversarial networks, borrowing from the style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
|
General Data Protection Regulation in Health Clinics The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of the personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals by establishing rules for the processing of personal data. The GDPR considers a special category of personal data, health data, which is regarded as sensitive and subject to special conditions regarding treatment and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its application in Portuguese health clinics. The results of the present study are discussed in light of the literature, and future work is identified.
|
On-the-fly MHP Analysis May-Happen-in-Parallel (MHP) analysis forms the basis for many problems of program analysis and program understanding. MHP analysis can also be used by integrated development environments (IDEs) to help programmers refactor parallel programs, identify racy programs, understand which parts of the program run in parallel, and so on. Since the code keeps changing in the IDE, recomputing the MHP information after every change can be an expensive affair. In this manuscript, we propose a novel scheme to perform on-the-fly MHP analysis of programs written in APGAS-based task-parallel languages like X10, to keep the MHP information up to date in an IDE environment. The key insight of our proposed approach to keeping the MHP information up to date is that we need not rebuild (from scratch) every data structure related to MHP information after each modification (addition or deletion of statements) in the source code. The idea is to reuse the old MHP information as much as possible and recompute the MHP information only for the small set of statements that depend on the statement added or removed. We introduce two new algorithms that deal, on the fly, with the addition and removal of parallel constructs like finish, async, and atomic, and sequential constructs like loop, if, if-else, and other sequential statements. We have implemented these algorithms as part of a plugin for X10DT, a popular IDE (Integrated Development Environment) for developing X10 programs. Our evaluation shows that the proposed on-the-fly techniques run much faster than repeated invocations of the fastest known MHP analysis for X10 programs.
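The reuse idea can be caricatured with a per-scope cache; the sketch below stubs out the actual MHP computation and only shows the invalidate-one-scope-on-edit bookkeeping, which is an assumption about how such a scheme might be organized, not the paper's algorithms:

```python
# Cache MHP results per enclosing finish scope; on an edit, invalidate only the
# scope containing the edited statement instead of rebuilding everything.
class IncrementalMHP:
    def __init__(self, scopes):
        # scopes: {scope_id: list of statement ids under that finish}
        self.scopes = scopes
        self.cache = {}                       # scope_id -> set of MHP pairs

    def _compute_scope(self, sid):
        # Stand-in for the real analysis of one finish scope (here, a coarse
        # over-approximation: every pair of statements in the scope).
        stmts = self.scopes[sid]
        return {(a, b) for a in stmts for b in stmts if a < b}

    def mhp_pairs(self, sid):
        if sid not in self.cache:             # recompute only on demand
            self.cache[sid] = self._compute_scope(sid)
        return self.cache[sid]

    def edit(self, sid, stmt, add=True):
        """Statement added/removed in scope sid: drop only that scope's cache."""
        (self.scopes[sid].append(stmt) if add
         else self.scopes[sid].remove(stmt))
        self.cache.pop(sid, None)

m = IncrementalMHP({"f0": [1, 2, 3], "f1": [4, 5]})
m.mhp_pairs("f0"); m.edit("f0", 6)            # only f0 is recomputed next query
print(m.mhp_pairs("f0"), m.mhp_pairs("f1"))
```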
|
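The incremental insight can be illustrated with a toy model (not the paper's algorithm): cache the MHP relation and, on each edit, touch only the pairs involving the modified statement. The parallelism test here is a deliberate stub standing in for a real MHP analysis.

```python
# Toy sketch of incremental MHP maintenance: keep a cached MHP relation and,
# when a statement is added or removed, recompute only the pairs involving it
# instead of rebuilding from scratch. The "analysis" is a stub: two statements
# may happen in parallel iff they belong to different async tasks.
class IncrementalMHP:
    def __init__(self):
        self.task_of = {}   # statement id -> enclosing async task id
        self.mhp = set()    # cached unordered MHP pairs

    def _may_happen_in_parallel(self, s, t):
        return self.task_of[s] != self.task_of[t]   # stand-in for real MHP

    def add_statement(self, s, task):
        self.task_of[s] = task
        for t in self.task_of:                      # only pairs touching s
            if t != s and self._may_happen_in_parallel(s, t):
                self.mhp.add(frozenset((s, t)))

    def remove_statement(self, s):
        del self.task_of[s]
        self.mhp = {p for p in self.mhp if s not in p}

m = IncrementalMHP()
m.add_statement("S1", task="main")
m.add_statement("S2", task="async1")   # an edit adds one statement...
print(m.mhp)                           # ...and only its pairs are recomputed
```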
Batch Alias Analysis Many program-analysis based tools require precise points-to/alias information only for some program variables. To meet this requirement efficiently, there have been many works on demand-driven analyses that perform only the work necessary to compute the points-to or alias information on the requested variables (queries). However, these demand-driven analyses can be very expensive when applied on large systems where the number of queries can be significant. Such a blowup in analysis time is unacceptable in cases where scalability with real-time constraints is crucial; for example, when program analysis tools are plugged into an IDE (Integrated Development Environment). In this paper, we propose schemes to improve the scalability of demand-driven analyses without compromising on precision. Our work is based on novel ideas for eliminating irrelevant and redundant data-flow paths for the given queries. We introduce the idea of batch analysis, which can answer multiple given queries in batch mode. Batch analysis suits environments with strict time constraints, where the queries come in batches. We present a batch alias analysis framework that can be used to speed up a given demand-driven alias analysis. To show the effectiveness of this framework, we use two demand-driven alias analyses: (1) the existing best-performing demand-driven alias analysis tool for race-detection clients and (2) an optimized version thereof that avoids irrelevant computation. Our evaluations on a simulated data-race client, and on a recent program-understanding tool, show that batch analysis leads to significant performance gains, along with minor gains in precision.
|
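A toy sketch of the batching idea (not the framework above): several demand-driven points-to queries are answered in one pass over a tiny assignment graph, with memoized sub-results shared so overlapping data-flow paths are traversed only once.

```python
# Toy sketch of batch-mode demand-driven analysis: answer many points-to
# queries together, sharing a memo cache so common sub-paths are not redone.
from functools import lru_cache

assign = {            # v -> sources assigned to v (edges of a tiny flow graph)
    "a": ["new1"],
    "b": ["a"],
    "c": ["b", "new2"],
    "d": ["c"],
}

@lru_cache(maxsize=None)            # shared cache across all queries in a batch
def points_to(v):
    if v.startswith("new"):
        return frozenset([v])       # allocation sites point to themselves
    out = frozenset()
    for src in assign.get(v, []):
        out |= points_to(src)
    return out

batch = ["b", "c", "d"]             # queries arriving together
print({v: set(points_to(v)) for v in batch})
# points_to("b"), computed while answering "c" and "d", is reused, not redone.
```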
Trials of 60 GHz Radio for a Future 5G New Radio (NR) Solution for High Capacity CCTV Offload and Multimedia Transfer This paper studies the radio interface performance of a 60 GHz radio in both indoor and outdoor conditions. The target is to assess its suitability for resolving emerging needs in public transport, especially in rail traffic, to transfer large amounts of data from vehicles to the stations and vice versa during a short period of time. 60 GHz could also be an ideal band for the wireless inter-carriage connection between railcars. The related services and requirements are defined in the 5G specification Mobile Communication System for Railways, TS 22.289. The 60 GHz band is also included in the 5G standard as an unlicensed band.
|
On the fly MHP Analysis May-Happen-in-Parallel (MHP) analysis forms the basis for many problems of program analysis and program understanding. MHP analysis can also be used by integrated development environments (IDEs) to help programmers refactor parallel programs, identify racy programs, understand which parts of the program run in parallel, and so on. Since the code keeps changing in the IDE, recomputing the MHP information after every change can be an expensive affair. In this manuscript, we propose a novel scheme to perform on-the-fly MHP analysis of programs written in APGAS-based task-parallel languages like X10, to keep the MHP information up to date in an IDE environment. The key insight of our proposed approach is that we need not rebuild (from scratch) every data structure related to MHP information after each modification (addition or deletion of statements) in the source code. The idea is to reuse the old MHP information as much as possible and recompute the MHP information only for the small set of specific statements that depend on the statement added or removed. We introduce two new algorithms that deal, on the fly, with the addition and removal of parallel constructs like finish, async and atomic, and of sequential constructs like loop, if, if-else and other sequential statements. We have implemented these algorithms as part of a plugin for X10DT, a popular IDE for developing X10 programs. Our evaluation shows that the proposed on-the-fly techniques run much faster than repeated invocations of the fastest known MHP analysis for X10 programs.
|
Batch Alias Analysis Many program-analysis based tools require precise points-to/alias information only for some program variables. To meet this requirement efficiently, there have been many works on demand-driven analyses that perform only the work necessary to compute the points-to or alias information on the requested variables (queries). However, these demand-driven analyses can be very expensive when applied on large systems where the number of queries can be significant. Such a blowup in analysis time is unacceptable in cases where scalability with real-time constraints is crucial; for example, when program analysis tools are plugged into an IDE (Integrated Development Environment). In this paper, we propose schemes to improve the scalability of demand-driven analyses without compromising on precision. Our work is based on novel ideas for eliminating irrelevant and redundant data-flow paths for the given queries. We introduce the idea of batch analysis, which can answer multiple given queries in batch mode. Batch analysis suits environments with strict time constraints, where the queries come in batches. We present a batch alias analysis framework that can be used to speed up a given demand-driven alias analysis. To show the effectiveness of this framework, we use two demand-driven alias analyses: (1) the existing best-performing demand-driven alias analysis tool for race-detection clients and (2) an optimized version thereof that avoids irrelevant computation. Our evaluations on a simulated data-race client, and on a recent program-understanding tool, show that batch analysis leads to significant performance gains, along with minor gains in precision.
|
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of overspeed alerts, rear cameras, rear-obstacle detection and timely maintenance is a cause of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in different phases among passengers, drivers and even conductors to obtain useful and reliable results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
|
On the fly MHP Analysis May-Happen-in-Parallel (MHP) analysis forms the basis for many problems of program analysis and program understanding. MHP analysis can also be used by integrated development environments (IDEs) to help programmers refactor parallel programs, identify racy programs, understand which parts of the program run in parallel, and so on. Since the code keeps changing in the IDE, recomputing the MHP information after every change can be an expensive affair. In this manuscript, we propose a novel scheme to perform on-the-fly MHP analysis of programs written in APGAS-based task-parallel languages like X10, to keep the MHP information up to date in an IDE environment. The key insight of our proposed approach is that we need not rebuild (from scratch) every data structure related to MHP information after each modification (addition or deletion of statements) in the source code. The idea is to reuse the old MHP information as much as possible and recompute the MHP information only for the small set of specific statements that depend on the statement added or removed. We introduce two new algorithms that deal, on the fly, with the addition and removal of parallel constructs like finish, async and atomic, and of sequential constructs like loop, if, if-else and other sequential statements. We have implemented these algorithms as part of a plugin for X10DT, a popular IDE for developing X10 programs. Our evaluation shows that the proposed on-the-fly techniques run much faster than repeated invocations of the fastest known MHP analysis for X10 programs.
|
Batch Alias Analysis Many program-analysis based tools require precise points-to/alias information only for some program variables. To meet this requirement efficiently, there have been many works on demand-driven analyses that perform only the work necessary to compute the points-to or alias information on the requested variables (queries). However, these demand-driven analyses can be very expensive when applied on large systems where the number of queries can be significant. Such a blowup in analysis time is unacceptable in cases where scalability with real-time constraints is crucial; for example, when program analysis tools are plugged into an IDE (Integrated Development Environment). In this paper, we propose schemes to improve the scalability of demand-driven analyses without compromising on precision. Our work is based on novel ideas for eliminating irrelevant and redundant data-flow paths for the given queries. We introduce the idea of batch analysis, which can answer multiple given queries in batch mode. Batch analysis suits environments with strict time constraints, where the queries come in batches. We present a batch alias analysis framework that can be used to speed up a given demand-driven alias analysis. To show the effectiveness of this framework, we use two demand-driven alias analyses: (1) the existing best-performing demand-driven alias analysis tool for race-detection clients and (2) an optimized version thereof that avoids irrelevant computation. Our evaluations on a simulated data-race client, and on a recent program-understanding tool, show that batch analysis leads to significant performance gains, along with minor gains in precision.
|
Development of a foldable five-finger robotic hand for assisting in laparoscopic surgery This study aims to develop a robotic hand that can be inserted into the body through a small incision and can handle large organs in laparoscopic surgery. We determined the requirements for the proposed hand based on a surgeon's motions in hand-assisted laparoscopic surgery (HALS). We identified four basic motions: "grasp," "pinch," "exclusion," and "spread." The proposed hand has the degrees of freedom (DoFs) necessary for performing these movements, five fingers, as in a human hand, and a palm that can be folded into a bellows when the surgeon inserts the hand into the abdominal cavity. We evaluated the proposed robot hand in a performance test, and confirmed that it can be inserted through a 20 mm incision and grasp simulated organs.
|
On the fly MHP Analysis May-Happen-in-Parallel (MHP) analysis forms the basis for many problems of program analysis and program understanding. MHP analysis can also be used by integrated development environments (IDEs) to help programmers refactor parallel programs, identify racy programs, understand which parts of the program run in parallel, and so on. Since the code keeps changing in the IDE, recomputing the MHP information after every change can be an expensive affair. In this manuscript, we propose a novel scheme to perform on-the-fly MHP analysis of programs written in APGAS-based task-parallel languages like X10, to keep the MHP information up to date in an IDE environment. The key insight of our proposed approach is that we need not rebuild (from scratch) every data structure related to MHP information after each modification (addition or deletion of statements) in the source code. The idea is to reuse the old MHP information as much as possible and recompute the MHP information only for the small set of specific statements that depend on the statement added or removed. We introduce two new algorithms that deal, on the fly, with the addition and removal of parallel constructs like finish, async and atomic, and of sequential constructs like loop, if, if-else and other sequential statements. We have implemented these algorithms as part of a plugin for X10DT, a popular IDE for developing X10 programs. Our evaluation shows that the proposed on-the-fly techniques run much faster than repeated invocations of the fastest known MHP analysis for X10 programs.
|
Batch Alias Analysis Many program-analysis based tools require precise points-to/alias information only for some program variables. To meet this requirement efficiently, there have been many works on demand-driven analyses that perform only the work necessary to compute the points-to or alias information on the requested variables (queries). However, these demand-driven analyses can be very expensive when applied on large systems where the number of queries can be significant. Such a blowup in analysis time is unacceptable in cases where scalability with real-time constraints is crucial; for example, when program analysis tools are plugged into an IDE (Integrated Development Environment). In this paper, we propose schemes to improve the scalability of demand-driven analyses without compromising on precision. Our work is based on novel ideas for eliminating irrelevant and redundant data-flow paths for the given queries. We introduce the idea of batch analysis, which can answer multiple given queries in batch mode. Batch analysis suits environments with strict time constraints, where the queries come in batches. We present a batch alias analysis framework that can be used to speed up a given demand-driven alias analysis. To show the effectiveness of this framework, we use two demand-driven alias analyses: (1) the existing best-performing demand-driven alias analysis tool for race-detection clients and (2) an optimized version thereof that avoids irrelevant computation. Our evaluations on a simulated data-race client, and on a recent program-understanding tool, show that batch analysis leads to significant performance gains, along with minor gains in precision.
|
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well. Gravity divided by an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the nonmediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found.
|
On the fly MHP Analysis May-Happen-in-Parallel (MHP) analysis forms the basis for many problems of program analysis and program understanding. MHP analysis can also be used by integrated development environments (IDEs) to help programmers refactor parallel programs, identify racy programs, understand which parts of the program run in parallel, and so on. Since the code keeps changing in the IDE, recomputing the MHP information after every change can be an expensive affair. In this manuscript, we propose a novel scheme to perform on-the-fly MHP analysis of programs written in APGAS-based task-parallel languages like X10, to keep the MHP information up to date in an IDE environment. The key insight of our proposed approach is that we need not rebuild (from scratch) every data structure related to MHP information after each modification (addition or deletion of statements) in the source code. The idea is to reuse the old MHP information as much as possible and recompute the MHP information only for the small set of specific statements that depend on the statement added or removed. We introduce two new algorithms that deal, on the fly, with the addition and removal of parallel constructs like finish, async and atomic, and of sequential constructs like loop, if, if-else and other sequential statements. We have implemented these algorithms as part of a plugin for X10DT, a popular IDE for developing X10 programs. Our evaluation shows that the proposed on-the-fly techniques run much faster than repeated invocations of the fastest known MHP analysis for X10 programs.
|
Batch Alias Analysis Many program-analysis based tools require precise points-to/alias information only for some program variables. To meet this requirement efficiently, there have been many works on demand-driven analyses that perform only the work necessary to compute the points-to or alias information on the requested variables (queries). However, these demand-driven analyses can be very expensive when applied on large systems where the number of queries can be significant. Such a blowup in analysis time is unacceptable in cases where scalability with real-time constraints is crucial; for example, when program analysis tools are plugged into an IDE (Integrated Development Environment). In this paper, we propose schemes to improve the scalability of demand-driven analyses without compromising on precision. Our work is based on novel ideas for eliminating irrelevant and redundant data-flow paths for the given queries. We introduce the idea of batch analysis, which can answer multiple given queries in batch mode. Batch analysis suits environments with strict time constraints, where the queries come in batches. We present a batch alias analysis framework that can be used to speed up a given demand-driven alias analysis. To show the effectiveness of this framework, we use two demand-driven alias analyses: (1) the existing best-performing demand-driven alias analysis tool for race-detection clients and (2) an optimized version thereof that avoids irrelevant computation. Our evaluations on a simulated data-race client, and on a recent program-understanding tool, show that batch analysis leads to significant performance gains, along with minor gains in precision.
|
Nonuniformly-Rotating Ship Refocusing in SAR Imagery Based on the Bilinear Extended Fractional Fourier Transform Refocusing of nonuniformly rotating ships is very significant in marine surveillance with satellite synthetic aperture radar (SAR). The majority of ship imaging algorithms are based on the inverse SAR (ISAR) technique. On the basis of the ISAR technique, several parameter estimation algorithms have been proposed for nonuniformly rotating ships, but these algorithms still have problems with cross-term and noise suppression. In this paper, a refocusing algorithm for nonuniformly rotating ships based on the bilinear extended fractional Fourier transform (BEFRFT) is proposed. The ship signal in a range bin can be modeled as a multicomponent cubic phase signal (CPS) after motion compensation. BEFRFT is a bilinear extension of the fractional Fourier transform (FRFT), which can estimate the chirp rates and quadratic chirp rates of CPSs. Furthermore, BEFRFT has excellent performance in cross-term and noise suppression. The results on simulated data and Gaofen-3 data verify the effectiveness of BEFRFT.
|
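For reference, the multicomponent cubic phase signal model the abstract refers to can be written as below. The symbols (amplitude A_k, centroid frequency f_k, chirp rate mu_k, quadratic chirp rate kappa_k) are our notation, with the latter two being the quantities BEFRFT estimates.

```latex
% Multicomponent CPS model for a range bin after motion compensation,
% with K scatterers; \mu_k and \kappa_k are the rates estimated by BEFRFT.
s(t) = \sum_{k=1}^{K} A_k \exp\!\Big\{ j 2\pi \Big( f_k t
       + \tfrac{1}{2}\,\mu_k t^{2} + \tfrac{1}{6}\,\kappa_k t^{3} \Big) \Big\}
```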
Using Electronic Patient Reported Outcomes to Foster Palliative Cancer Care: The MyPal Approach Palliative care is offered along with primary treatment to improve the quality of life of the patient by relieving the symptoms and stress of a serious illness such as cancer. As per modern definitions, palliative care is appropriate at any age and at any stage of the illness, regardless of the eventual outcome. Patient-reported outcomes (PRO), i.e., health status measurements reported directly by the patients or their proxies, and especially their availability in electronic form (ePROs), are gradually gaining popularity as building blocks of innovative palliative care interventions. This paper presents MyPal, an EC-funded collaborative research project that aims to exploit advanced eHealth technologies to develop and evaluate two novel ePRO-based general palliative care interventions for cancer patients. In particular, the paper presents: (1) a short overview of MyPal; (2) the target populations, i.e., adults suffering from chronic lymphocytic leukemia (CLL) or myelodysplastic syndromes (MDS), and children with solid or hematologic malignancies; (3) the ePRO-based interventions being designed for the target populations; (4) the eHealth platform for delivering the interventions under development; and (5) the international, multicenter clinical studies to be conducted for assessing these interventions, i.e., a randomized controlled trial (RCT) and an observational study for adults and children, respectively.
|
Employing Conversational Agents in Palliative Care: A Feasibility Study and Preliminary Assessment Recording of patient-reported outcomes (PROs) enables direct measurement of the experiences of patients with chronic conditions, including cancer; thus, PROs are a critical element of high-quality, person-centered care for cancer patients. A growing body of literature reports on the feasibility of using electronic tools for the collection of patient-reported outcomes (ePROs), although the usability of available solutions does affect their acceptance and use. In parallel, recent advances in artificial intelligence, machine learning and speech recognition have led to growing interest in conversational agents, i.e. software applications that mimic written or spoken human speech. In the present manuscript we provide a review of current developments regarding the implementation of conversational agents and their application in the domain of palliative care for oncology patients, and also present (i) a methodology for the implementation of a conversational agent able to collect ePRO health data and (ii) initial evaluation results from a relevant feasibility study. Our approach differs from other available systems in that the conversational agent reported in the present work is not based on rules, but rather uses machine learning algorithms, more specifically recurrent neural networks (RNNs), to identify appropriate answers. The user experience evaluation yielded promising results and highlights that users responded positively when interacting with the system. Based on the User Experience Questionnaire, pragmatic quality and overall quality were categorized as excellent and hedonic quality was categorized as good. The results of this research can be used as a reference for the future development and improvement of conversational agents in the healthcare domain.
|
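A minimal sketch of RNN-based answer selection of the kind described: a GRU encodes the tokenized utterance and a linear head scores a fixed candidate-answer set. Vocabulary, dimensions, and the candidate set are illustrative; this is not the system evaluated above.

```python
# Sketch of RNN-based answer selection: encode the user utterance with a GRU
# and classify over a fixed set of candidate responses.
import torch
import torch.nn as nn

class AnswerSelector(nn.Module):
    def __init__(self, vocab_size, n_answers, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_answers)

    def forward(self, token_ids):
        _, h = self.gru(self.embed(token_ids))   # h: (1, batch, hidden)
        return self.head(h.squeeze(0))           # logits over candidate answers

model = AnswerSelector(vocab_size=1000, n_answers=5)
utterance = torch.randint(0, 1000, (1, 12))      # one tokenized utterance
logits = model(utterance)
print(logits.argmax(dim=1))                      # index of the selected answer
```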
Detection of Copy-Move Forgery in Digital Image Based on SIFT Features and Automatic Matching Thresholds. Today's technological age is characterized by the spread of digital images, the most common form of information transmission, whether through the internet or newspapers. This widespread use of imaging technology has been accompanied by an evolution in editing tools which makes modifying an image very simple. This paper proposes an effective and fast method for copy-move forgery detection. The paper adopts the SIFT technique for feature extraction and a wavelet technique to estimate the matching threshold. The low-frequency components are used to compute a dynamic threshold rather than a fixed threshold. Also, a method to remove false positive areas is proposed in order to produce the best possible results. The method can detect the forgery accurately and quickly even after more complex transformations. The experimental results indicate that the proposed method can also detect forgery against post-processing operations and multiple copies.
|
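A sketch of such a pipeline under stated assumptions: SIFT keypoints are matched against the same image, strong matches between spatially distant points are flagged, and a threshold is derived from the low-frequency wavelet subband. The dynamic-threshold rule below is an assumption for illustration, not the paper's formula, and the input file is hypothetical.

```python
# Sketch of SIFT-based copy-move detection: match an image's keypoints against
# themselves and flag strong matches between spatially distant keypoints.
import cv2
import numpy as np
import pywt

img = cv2.imread("suspect.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
sift = cv2.SIFT_create()
kps, desc = sift.detectAndCompute(img, None)

cA, _ = pywt.dwt2(img.astype(float), "haar")           # low-frequency subband
ratio_thresh = min(0.8, 0.5 + cA.std() / 255.0)        # assumed dynamic rule

matches = cv2.BFMatcher().knnMatch(desc, desc, k=3)    # m[0] is the self-match
suspicious = []
for m in matches:
    if len(m) < 3:
        continue
    best, second = m[1], m[2]
    if best.distance < ratio_thresh * second.distance: # ratio test
        p1 = np.array(kps[best.queryIdx].pt)
        p2 = np.array(kps[best.trainIdx].pt)
        if np.linalg.norm(p1 - p2) > 10:               # ignore near-duplicates
            suspicious.append((p1, p2))
print(f"{len(suspicious)} candidate copy-move pairs")
```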
Using Electronic Patient Reported Outcomes to Foster Palliative Cancer Care: The MyPal Approach Palliative care is offered along with primary treatment to improve the quality of life of the patient by relieving the symptoms and stress of a serious illness such as cancer. As per modern definitions, palliative care is appropriate at any age and at any stage of the illness, regardless of the eventual outcome. Patient-reported outcomes (PRO), i.e., health status measurements reported directly by the patients or their proxies, and especially their availability in electronic form (ePROs), are gradually gaining popularity as building blocks of innovative palliative care interventions. This paper presents MyPal, an EC-funded collaborative research project that aims to exploit advanced eHealth technologies to develop and evaluate two novel ePRO-based general palliative care interventions for cancer patients. In particular, the paper presents: (1) a short overview of MyPal; (2) the target populations, i.e., adults suffering from chronic lymphocytic leukemia (CLL) or myelodysplastic syndromes (MDS), and children with solid or hematologic malignancies; (3) the ePRO-based interventions being designed for the target populations; (4) the eHealth platform for delivering the interventions under development; and (5) the international, multicenter clinical studies to be conducted for assessing these interventions, i.e., a randomized controlled trial (RCT) and an observational study for adults and children, respectively.
|
Employing Conversational Agents in Palliative Care: A Feasibility Study and Preliminary Assessment Recording of patient-reported outcomes (PROs) enables direct measurement of the experiences of patients with chronic conditions, including cancer; thus, PROs are a critical element of high-quality, person-centered care for cancer patients. A growing body of literature reports on the feasibility of using electronic tools for the collection of patient-reported outcomes (ePROs), although the usability of available solutions does affect their acceptance and use. In parallel, recent advances in artificial intelligence, machine learning and speech recognition have led to growing interest in conversational agents, i.e. software applications that mimic written or spoken human speech. In the present manuscript we provide a review of current developments regarding the implementation of conversational agents and their application in the domain of palliative care for oncology patients, and also present (i) a methodology for the implementation of a conversational agent able to collect ePRO health data and (ii) initial evaluation results from a relevant feasibility study. Our approach differs from other available systems in that the conversational agent reported in the present work is not based on rules, but rather uses machine learning algorithms, more specifically recurrent neural networks (RNNs), to identify appropriate answers. The user experience evaluation yielded promising results and highlights that users responded positively when interacting with the system. Based on the User Experience Questionnaire, pragmatic quality and overall quality were categorized as excellent and hedonic quality was categorized as good. The results of this research can be used as a reference for the future development and improvement of conversational agents in the healthcare domain.
|
Criterion of quantum phase synchronization in continuous variable systems by local measurement Phase synchronization has been proved to be unbounded at the quantum level, but witnessing phase synchronization is always expensive in terms of the quantum resources and nonlocal measurements involved. Based on the quantum uncertainty relation, we construct two local criteria for phase synchronization in this paper. The local criteria indicate that phase synchronization at the quantum level can be witnessed by local measurements alone, and the deduction has been verified numerically in an optomechanical system. Besides, by analyzing the physical essence of phase synchronization at the quantum level, we show that one can prepare a state which describes two synchronized oscillators with no entanglement between them. Thus, entanglement is not a necessary resource for the occurrence of ideal phase synchronization; the reason for this phenomenon is also discussed.
|
Using Electronic Patient Reported Outcomes to Foster Palliative Cancer Care: The MyPal Approach Palliative care is offered along with primary treatment to improve the quality of life of the patient by relieving the symptoms and stress of a serious illness such as cancer. As per modern definitions, palliative care is appropriate at any age and at any stage of the illness, regardless of the eventual outcome. Patient-reported outcomes (PRO), i.e., health status measurements reported directly by the patients or their proxies, and especially their availability in electronic form (ePROs), are gradually gaining popularity as building blocks of innovative palliative care interventions. This paper presents MyPal, an EC-funded collaborative research project that aims to exploit advanced eHealth technologies to develop and evaluate two novel ePRO-based general palliative care interventions for cancer patients. In particular, the paper presents: (1) a short overview of MyPal; (2) the target populations, i.e., adults suffering from chronic lymphocytic leukemia (CLL) or myelodysplastic syndromes (MDS), and children with solid or hematologic malignancies; (3) the ePRO-based interventions being designed for the target populations; (4) the eHealth platform for delivering the interventions under development; and (5) the international, multicenter clinical studies to be conducted for assessing these interventions, i.e., a randomized controlled trial (RCT) and an observational study for adults and children, respectively.
|
Employing Conversational Agents in Palliative Care: A Feasibility Study and Preliminary Assessment Recording of patient-reported outcomes (PROs) enables direct measurement of the experiences of patients with chronic conditions, including cancer; thus, PROs are a critical element of high-quality, person-centered care for cancer patients. A growing body of literature reports on the feasibility of using electronic tools for the collection of patient-reported outcomes (ePROs), although the usability of available solutions does affect their acceptance and use. In parallel, recent advances in artificial intelligence, machine learning and speech recognition have led to growing interest in conversational agents, i.e. software applications that mimic written or spoken human speech. In the present manuscript we provide a review of current developments regarding the implementation of conversational agents and their application in the domain of palliative care for oncology patients, and also present (i) a methodology for the implementation of a conversational agent able to collect ePRO health data and (ii) initial evaluation results from a relevant feasibility study. Our approach differs from other available systems in that the conversational agent reported in the present work is not based on rules, but rather uses machine learning algorithms, more specifically recurrent neural networks (RNNs), to identify appropriate answers. The user experience evaluation yielded promising results and highlights that users responded positively when interacting with the system. Based on the User Experience Questionnaire, pragmatic quality and overall quality were categorized as excellent and hedonic quality was categorized as good. The results of this research can be used as a reference for the future development and improvement of conversational agents in the healthcare domain.
|
Crossing Number for Graphs with Bounded Pathwidth The crossing number is the smallest number of pairwise edge crossings when drawing a graph in the plane. There are only very few graph classes for which the exact crossing number is known or for which at least constant approximation ratios exist. Furthermore, up to now, general crossing number computations have never been successfully tackled using bounded width of graph decompositions, like treewidth or pathwidth. In this paper, we show that the crossing number is tractable (even in linear time) for maximal graphs of bounded pathwidth 3. The technique also shows that the crossing number and the rectilinear (a.k.a. straight-line) crossing number are identical for this graph class, and that we require only an O(n) × O(n) grid to achieve such a drawing. Our techniques can further be extended to devise a 2-approximation for general graphs with pathwidth 3. One crucial ingredient here is that the crossing number of a graph with a separation pair can be lower-bounded using the crossing numbers of its cut components, a result that may be interesting in its own right. Finally, we give a 4w^3-approximation of the crossing number for maximal graphs of pathwidth w. This is a constant approximation for bounded pathwidth. We complement this with an NP-hardness proof of the weighted crossing number already for pathwidth-3 graphs and bicliques K_{3,n}.
|
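For readers outside the area, the two quantities the result compares can be stated as follows; these are the standard definitions, written in our notation.

```latex
% cr(G) minimizes pairwise edge crossings over all drawings D of G in the
% plane; the rectilinear variant restricts to straight-line drawings.
\mathrm{cr}(G) = \min_{D \text{ drawing of } G} \#\{\text{edge crossings in } D\},
\qquad
\overline{\mathrm{cr}}(G) = \min_{D \text{ straight-line drawing of } G} \#\{\text{edge crossings in } D\}
```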
Using Electronic Patient Reported Outcomes to Foster Palliative Cancer Care: The MyPal Approach Palliative care is offered along with primary treatment to improve the quality of life of the patient by relieving the symptoms and stress of a serious illness such as cancer. As per modern definitions, palliative care is appropriate at any age and at any stage of the illness, regardless of the eventual outcome. Patient-reported outcomes (PRO), i.e., health status measurements reported directly by the patients or their proxies, and especially their availability in electronic form (ePROs), are gradually gaining popularity as building blocks of innovative palliative care interventions. This paper presents MyPal, an EC-funded collaborative research project that aims to exploit advanced eHealth technologies to develop and evaluate two novel ePRO-based general palliative care interventions for cancer patients. In particular, the paper presents: (1) a short overview of MyPal; (2) the target populations, i.e., adults suffering from chronic lymphocytic leukemia (CLL) or myelodysplastic syndromes (MDS), and children with solid or hematologic malignancies; (3) the ePRO-based interventions being designed for the target populations; (4) the eHealth platform for delivering the interventions under development; and (5) the international, multicenter clinical studies to be conducted for assessing these interventions, i.e., a randomized controlled trial (RCT) and an observational study for adults and children, respectively.
|
Employing Conversational Agents in Palliative Care: A Feasibility Study and Preliminary Assessment Recording of patient-reported outcomes (PROs) enables direct measurement of the experiences of patients with chronic conditions, including cancer; thus, PROs are a critical element of high-quality, person-centered care for cancer patients. A growing body of literature reports on the feasibility of using electronic tools for the collection of patient-reported outcomes (ePROs), although the usability of available solutions does affect their acceptance and use. In parallel, recent advances in artificial intelligence, machine learning and speech recognition have led to growing interest in conversational agents, i.e. software applications that mimic written or spoken human speech. In the present manuscript we provide a review of current developments regarding the implementation of conversational agents and their application in the domain of palliative care for oncology patients, and also present (i) a methodology for the implementation of a conversational agent able to collect ePRO health data and (ii) initial evaluation results from a relevant feasibility study. Our approach differs from other available systems in that the conversational agent reported in the present work is not based on rules, but rather uses machine learning algorithms, more specifically recurrent neural networks (RNNs), to identify appropriate answers. The user experience evaluation yielded promising results and highlights that users responded positively when interacting with the system. Based on the User Experience Questionnaire, pragmatic quality and overall quality were categorized as excellent and hedonic quality was categorized as good. The results of this research can be used as a reference for the future development and improvement of conversational agents in the healthcare domain.
|
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness of space; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well. Gravity divided by an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the nonmediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found.
|
Using Electronic Patient Reported Outcomes to Foster Palliative Cancer Care: The MyPal Approach Palliative care is offered along with primary treatment to improve the quality of life of the patient by relieving the symptoms and stress of a serious illness such as cancer. As per modern definitions, palliative care is appropriate at any age and at any stage of the illness, regardless of the eventual outcome. Patient-reported outcomes (PRO), i.e., health status measurements reported directly by the patients or their proxies, and especially their availability in electronic form (ePROs), are gradually gaining popularity as building blocks of innovative palliative care interventions. This paper presents MyPal, an EC-funded collaborative research project that aims to exploit advanced eHealth technologies to develop and evaluate two novel ePRO-based general palliative care interventions for cancer patients. In particular, the paper presents: (1) a short overview of MyPal; (2) the target populations, i.e., adults suffering from chronic lymphocytic leukemia (CLL) or myelodysplastic syndromes (MDS), and children with solid or hematologic malignancies; (3) the ePRO-based interventions being designed for the target populations; (4) the eHealth platform for delivering the interventions under development; and (5) the international, multicenter clinical studies to be conducted for assessing these interventions, i.e., a randomized controlled trial (RCT) and an observational study for adults and children, respectively.
|
Employing Conversational Agents in Palliative Care: A Feasibility Study and Preliminary Assessment Recording of patient-reported outcomes (PROs) enables direct measurement of the experiences of patients with chronic conditions, including cancer; thus, PROs are a critical element of high-quality, person-centered care for cancer patients. A growing body of literature reports on the feasibility of using electronic tools for the collection of patient-reported outcomes (ePROs), although the usability of available solutions does affect their acceptance and use. In parallel, recent advances in artificial intelligence, machine learning and speech recognition have led to growing interest in conversational agents, i.e. software applications that mimic written or spoken human speech. In the present manuscript we provide a review of current developments regarding the implementation of conversational agents and their application in the domain of palliative care for oncology patients, and also present (i) a methodology for the implementation of a conversational agent able to collect ePRO health data and (ii) initial evaluation results from a relevant feasibility study. Our approach differs from other available systems in that the conversational agent reported in the present work is not based on rules, but rather uses machine learning algorithms, more specifically recurrent neural networks (RNNs), to identify appropriate answers. The user experience evaluation yielded promising results and highlights that users responded positively when interacting with the system. Based on the User Experience Questionnaire, pragmatic quality and overall quality were categorized as excellent and hedonic quality was categorized as good. The results of this research can be used as a reference for the future development and improvement of conversational agents in the healthcare domain.
|
Symmetry Group Classification and Conservation Laws of the Nonlinear Fractional Diffusion Equation with the Riesz Potential Symmetry properties of a nonlinear two-dimensional space-fractional diffusion equation with the Riesz potential of order α ∈ (0, 1) are studied. Lie point symmetry group classification of this equation is performed with respect to the diffusivity function. To construct conservation laws for the considered equation, the concept of nonlinear self-adjointness is adapted to a certain class of space-fractional differential equations with the Riesz potential. It is proved that the equation in question is nonlinearly self-adjoint. An extension of Ibragimov's constructive algorithm for finding conservation laws is proposed, and the corresponding Noether operators for fractional differential equations with the Riesz potential are presented in explicit form. To illustrate the proposed approach, conservation laws for the considered nonlinear space-fractional diffusion equation are constructed by using its Lie point symmetries.
|
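For reference, the Riesz potential of order α appearing in such equations is the standard integral operator below, written in n space dimensions with γ_n(α) a normalizing constant; the paper's setting is n = 2.

```latex
% Riesz potential of order \alpha: a nonlocal operator averaging u against
% the kernel |x - y|^{-(n - \alpha)}.
(I^{\alpha} u)(x) = \frac{1}{\gamma_n(\alpha)} \int_{\mathbb{R}^n}
\frac{u(y)}{\lvert x - y \rvert^{\,n-\alpha}} \, dy ,
\qquad 0 < \alpha < 1
```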
Applied ichnology in sedimentary geology: Python scripts as a method to automatize ichnofabric analysis in marine core images Abstract Image analysis has been successfully applied in core research, especially in studies of modern deposits, to enhance the visibility of ichnological features and characterize ichnoassemblages and ichnofabrics. Its application to ichnological research provides useful information for marine core studies, and hence sedimentary geology, but also for hydrocarbon exploration. Here we develop a new methodology, using the Python programming language, which significantly improves ichnological analysis. The method automates the process of obtaining continuous ichnological information, in this case the percentage of bioturbation, a key aspect of the ichnofabric approach. The method affords the possibility of automatically generating continuous percentage and other index records using pixel counts in previously treated images. The resulting data sets are easy to correlate with the information usually obtained from cores (e.g., geochemical and mineralogical data). Such an integration of different proxies is valuable to the field of sedimentary geology, especially in the use of ichnological analysis, making it easier for the researcher, less time consuming, and more likely to be undertaken. The coding and sharing of open software tools allows for great flexibility, giving researchers in ichnology or related fields the option to implement new features, develop more complex tools to improve the package, and share findings with the scientific community.
|
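A minimal sketch of the pixel-count step described above, assuming the core image has already been preprocessed to a binary form where bioturbated pixels are bright; the file name and depth-interval size are illustrative.

```python
# Sketch of an automated ichnofabric index: report percent bioturbation per
# depth interval of a thresholded core photograph via pixel counts.
import numpy as np
from PIL import Image

img = np.array(Image.open("core_binary.png").convert("L"))  # hypothetical input
binary = img > 128                        # True where bioturbated

rows_per_cm = 40                          # assumed image resolution
n_rows = binary.shape[0] - binary.shape[0] % rows_per_cm
blocks = binary[:n_rows].reshape(-1, rows_per_cm, binary.shape[1])
percent = 100.0 * blocks.mean(axis=(1, 2))   # one value per 1 cm of core

for depth_cm, p in enumerate(percent):
    print(f"{depth_cm}-{depth_cm + 1} cm: {p:.1f}% bioturbated")
```

The per-interval percentages print as a continuous depth record, which is the form that correlates directly with geochemical and mineralogical core logs.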
ResIPy, an intuitive open source software for complex geoelectrical inversion/modeling Abstract Electrical resistivity tomography (ERT) and induced polarization (IP) methods are now widely used in many interdisciplinary projects. Although field surveys using these methods are relatively straightforward, ERT and IP data require the application of inverse methods prior to any interpretation. Several established noncommercial inversion codes exist, but they typically require advanced knowledge to use effectively. ResIPy was developed to provide a more intuitive, user-friendly approach to inversion of geoelectrical data, using an open source graphical user interface (GUI) and a Python application programming interface (API). ResIPy utilizes the mature R2 and cR2 inversion codes for ERT and IP, respectively. The ResIPy GUI facilitates data importing, data filtering, error modeling, mesh generation, data inversion and plotting of inverse models. Furthermore, the easy-to-use design of ResIPy and the help provided within it make it an effective educational tool. This paper highlights the rationale and structure behind the interface, before demonstrating its capabilities on a range of environmental problems. Specifically, we demonstrate the ease with which ResIPy deals with topography, advanced data processing, the ability to fix and constrain regions of known geoelectrical properties, time-lapse analysis and the capability for forward modeling and survey design.
|
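A scripted version of the workflow the abstract names (import, error model, mesh, invert, plot) might look as follows with the Python API. Method names follow ResIPy's documented R2 examples, but versions differ, so treat this as a sketch to check against the installed package; the data file is hypothetical.

```python
# Sketch of a scripted ERT inversion with ResIPy's Python API.
from resipy import R2

k = R2()                                          # an ERT project (R2 code)
k.createSurvey("field_data.csv", ftype="Syscal")  # import a Syscal export
k.fitErrorPwl()                                   # fit a power-law error model
k.createMesh(typ="trian")                         # triangular mesh from electrodes
k.invert()                                        # run the R2 inversion
k.showResults()                                   # plot the resistivity section
```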
Unmanned agricultural product sales system The invention relates to the field of agricultural product sales, provides an unmanned agricultural product sales system, and aims to solve the problem of agricultural product waste caused by the fact that most farmers can only prepare goods according to guesswork and experience when selling agricultural products at present. The unmanned agricultural product sales system comprises an acquisition module for acquiring selection information of customers; a storage module which pre-stores vegetable preparation (side dish) schemes; a matching module which is used for matching a corresponding side dish scheme from the storage module according to the selection information of the client; a pushing module which is used for pushing the matched side dish scheme back to the client, wherein the acquisition module is also used for acquiring confirmation information of the client; an order module which is used for generating order information according to the confirmation information of the client, wherein the pushing module is used for pushing the order information to the client and the seller, and the acquisition module is also used for acquiring the delivery information of the seller; and a logistics tracking module which is used for tracking the delivery information to obtain logistics information, wherein the pushing module is used for pushing the logistics information to the client. The scheme is used for sales in unmanned agricultural product shops.
|
Applied ichnology in sedimentary geology: Python scripts as a method to automatize ichnofabric analysis in marine core images Abstract Image analysis has been successfully applied in core research, especially in studies of modern deposits, to enhance the visibility of ichnological features and characterize ichnoassemblages and ichnofabrics. Its application to ichnological research provides useful information for marine core studies, and hence sedimentary geology, but also for hydrocarbon exploration. Here we develop a new methodology, using the Python programming language, which significantly improves ichnological analysis. The method automates the process of obtaining continuous ichnological information, in this case the percentage of bioturbation, a key aspect of the ichnofabric approach. The method affords the possibility of automatically generating continuous percentage and other index records using pixel counts in previously treated images. The resulting data sets are easy to correlate with the information usually obtained from cores (e.g., geochemical and mineralogical data). Such an integration of different proxies is valuable to the field of sedimentary geology, especially in the use of ichnological analysis, making it easier for the researcher, less time consuming, and more likely to be undertaken. The coding and sharing of open software tools allows for great flexibility, giving researchers in ichnology or related fields the option to implement new features, develop more complex tools to improve the package, and share findings with the scientific community.
|
ResIPy, an intuitive open source software for complex geoelectrical inversion/modeling Abstract Electrical resistivity tomography (ERT) and induced polarization (IP) methods are now widely used in many interdisciplinary projects. Although field surveys using these methods are relatively straightforward, ERT and IP data require the application of inverse methods prior to any interpretation. Several established noncommercial inversion codes exist, but they typically require advanced knowledge to use effectively. ResIPy was developed to provide a more intuitive, user-friendly approach to inversion of geoelectrical data, using an open source graphical user interface (GUI) and a Python application programming interface (API). ResIPy utilizes the mature R2 and cR2 inversion codes for ERT and IP, respectively. The ResIPy GUI facilitates data importing, data filtering, error modeling, mesh generation, data inversion and plotting of inverse models. Furthermore, the easy-to-use design of ResIPy and the help provided within it make it an effective educational tool. This paper highlights the rationale and structure behind the interface, before demonstrating its capabilities on a range of environmental problems. Specifically, we demonstrate the ease with which ResIPy deals with topography, advanced data processing, the ability to fix and constrain regions of known geoelectrical properties, time-lapse analysis and the capability for forward modeling and survey design.
|
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of overspeed alerts, rear cameras, rear-obstacle detection and timely maintenance is a cause of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in different phases among passengers, drivers and even conductors to obtain useful and reliable results. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
|
Applied ichnology in sedimentary geology: Python scripts as a method to automatize ichnofabric analysis in marine core images Abstract Image analysis has been successfully applied in core research, especially in studies of modern deposits, to enhance the visibility of ichnological features and characterize ichnoassemblages and ichnofabrics. Its application to ichnological research provides useful information for marine core studies, and hence sedimentary geology, but also for hydrocarbon exploration. Here we develop a new methodology, using the Python programming language, which significantly improves ichnological analysis. The method automates the process of obtaining continuous ichnological information, in this case the percentage of bioturbation, a key aspect of the ichnofabric approach. The method affords the possibility of automatically generating continuous percentage and other index records using pixel counts in previously treated images. The resulting data sets are easy to correlate with the information usually obtained from cores (e.g., geochemical and mineralogical data). Such an integration of different proxies is valuable to the field of sedimentary geology, especially in the use of ichnological analysis, making it easier for the researcher, less time consuming, and more likely to be undertaken. The coding and sharing of open software tools allows for great flexibility, giving researchers in ichnology or related fields the option to implement new features, develop more complex tools to improve the package, and share findings with the scientific community.
|
ResIPy, an intuitive open source software for complex geoelectrical inversion/modeling Abstract Electrical resistivity tomography (ERT) and induced polarization (IP) methods are now widely used in many interdisciplinary projects. Although field surveys using these methods are relatively straightforward, ERT and IP data require the application of inverse methods prior to any interpretation. Several established noncommercial inversion codes exist, but they typically require advanced knowledge to use effectively. ResIPy was developed to provide a more intuitive, user-friendly approach to inversion of geoelectrical data, using an open source graphical user interface (GUI) and a Python application programming interface (API). ResIPy utilizes the mature R2 and cR2 inversion codes for ERT and IP, respectively. The ResIPy GUI facilitates data importing, data filtering, error modeling, mesh generation, data inversion and plotting of inverse models. Furthermore, the easy-to-use design of ResIPy and the help provided within it make it an effective educational tool. This paper highlights the rationale and structure behind the interface, before demonstrating its capabilities on a range of environmental problems. Specifically, we demonstrate the ease with which ResIPy deals with topography, advanced data processing, the ability to fix and constrain regions of known geoelectrical properties, time-lapse analysis and the capability for forward modeling and survey design.
|
Virtually perfect democracy In the 2009 Security Protocols Workshop, the Pretty Good Democracy scheme was presented. This scheme has the appeal of allowing voters to cast votes remotely, e.g. via the Internet, and confirm correct receipt in a single session. The scheme provides a degree of end-to-end verifiability: receipt of the correct acknowledgement code provides assurance that the vote will be accurately included in the final tally. The scheme does not require any trust in a voter client device. It does however have a number of vulnerabilities: privacy and accuracy depend on vote codes being kept secret. It also suffers the usual coercion-style threats common to most remote voting schemes.
|
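A toy model of the code-voting flow behind the scheme, to make the stated vulnerability concrete: whoever holds the code table can both decode votes and forge acknowledgements. In the real scheme the lookup is distributed among trustees; this single-server version is deliberately naive, and all names and code formats are illustrative.

```python
# Toy code-voting flow: each voter's code sheet maps candidates to secret vote
# codes; a server acknowledges a valid code with the sheet's ack code.
import secrets

CANDIDATES = ["Alice", "Bob"]

def make_code_sheet():
    return {"codes": {c: f"{secrets.randbelow(10**4):04d}" for c in CANDIDATES},
            "ack": f"{secrets.randbelow(10**4):04d}"}

sheets = {"voter-1": make_code_sheet()}        # printed and mailed to the voter

def cast(voter_id, vote_code):
    sheet = sheets[voter_id]
    if vote_code in sheet["codes"].values():   # server checks the code...
        return sheet["ack"]                    # ...and returns the ack code
    return None

sheet = sheets["voter-1"]
ack = cast("voter-1", sheet["codes"]["Alice"])
assert ack == sheet["ack"]                     # correct ack => vote was received
# Note: the server sees which code was cast, so secrecy of the code table is
# exactly what privacy and accuracy hinge on, as the abstract observes.
```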
Applied ichnology in sedimentary geology: Python scripts as a method to automatize ichnofabric analysis in marine core images Abstract Image analysis has been successfully applied in core research, especially in studies of modern deposits, to enhance the visibility of ichnological features and characterize ichnoassemblages and ichnofabrics. Its application to ichnological research provides useful information for marine core studies, and hence sedimentary geology, but also for hydrocarbon exploration. Here we develop a new methodology, using the Python programming language, which significantly improves ichnological analysis. The method automates the process of obtaining continuous ichnological information, in this case the percentage of bioturbation, a key aspect of the ichnofabric approach. The method affords the possibility of automatically generating continuous percentage and other index records using pixel counts in previously treated images. The resulting data sets are easy to correlate with the information usually obtained from cores (e.g., geochemical and mineralogical data). Such an integration of different proxies is valuable to the field of sedimentary geology, especially in the use of ichnological analysis, making it easier for the researcher, less time consuming, and more likely to be undertaken. The coding and sharing of open software tools allows for great flexibility, giving researchers in ichnology or related fields the option to implement new features, develop more complex tools to improve the package, and share findings with the scientific community.
|
ResIPy, an intuitive open source software for complex geoelectrical inversion/modeling Abstract Electrical resistivity tomography (ERT) and induced polarization (IP) methods are now widely used in many interdisciplinary projects. Although field surveys using these methods are relatively straightforward, ERT and IP data require the application of inverse methods prior to any interpretation. Several established noncommercial inversion codes exist, but they typically require advanced knowledge to use effectively. ResIPy was developed to provide a more intuitive, user-friendly approach to inversion of geoelectrical data, using an open source graphical user interface (GUI) and a Python application programming interface (API). ResIPy utilizes the mature R2 and cR2 inversion codes for ERT and IP, respectively. The ResIPy GUI facilitates data importing, data filtering, error modeling, mesh generation, data inversion and plotting of inverse models. Furthermore, the easy-to-use design of ResIPy and the help provided within make it an effective educational tool. This paper highlights the rationale and structure behind the interface, before demonstrating its capabilities in a range of environmental problems. Specifically, we demonstrate the ease with which ResIPy deals with topography, advanced data processing, the ability to fix and constrain regions of known geoelectrical properties, time-lapse analysis and the capability for forward modeling and survey design.
|
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation to gain a competitive advantage, or position themselves strategically to be first to reach the spawned power-ups.
|
Applied ichnology in sedimentary geology: Python scripts as a method to automatize ichnofabric analysis in marine core images Abstract Image analysis has been successfully applied in core research, especially in studies of modern deposits, to enhance the visibility of ichnological features and characterize ichnoassemblages and ichnofabrics. Its application to ichnological research provides useful information for marine core studies, hence sedimentary geology, but also for hydrocarbon exploration. Here we develop a new methodology, using the Python programming language, which significantly improves ichnological analysis. The method automates the process of obtaining continuous ichnological information, in this case the percentage of bioturbation as a key aspect of the ichnofabric approach. The method affords the possibility of automatically generating continuous percentage and other index records using pixel counts in previously treated images. The resulting data sets are easy to correlate with the information usually obtained from cores (e.g., geochemical and mineralogical data). Such an integration of different proxies benefits the field of sedimentary geology, especially the use of ichnological analysis, making it easier for the researcher, less time consuming, and more likely to be undertaken. The coding and sharing of open software tools allow for great flexibility, giving researchers in ichnology or related fields the option to implement new features, develop more complex tools to improve the package, and share findings with the scientific community.
|
ResIPy, an intuitive open source software for complex geoelectrical inversion/modeling Abstract Electrical resistivity tomography (ERT) and induced polarization (IP) methods are now widely used in many interdisciplinary projects. Although field surveys using these methods are relatively straightforward, ERT and IP data require the application of inverse methods prior to any interpretation. Several established noncommercial inversion codes exist, but they typically require advanced knowledge to use effectively. ResIPy was developed to provide a more intuitive, user-friendly approach to inversion of geoelectrical data, using an open source graphical user interface (GUI) and a Python application programming interface (API). ResIPy utilizes the mature R2 and cR2 inversion codes for ERT and IP, respectively. The ResIPy GUI facilitates data importing, data filtering, error modeling, mesh generation, data inversion and plotting of inverse models. Furthermore, the easy-to-use design of ResIPy and the help provided within make it an effective educational tool. This paper highlights the rationale and structure behind the interface, before demonstrating its capabilities in a range of environmental problems. Specifically, we demonstrate the ease with which ResIPy deals with topography, advanced data processing, the ability to fix and constrain regions of known geoelectrical properties, time-lapse analysis and the capability for forward modeling and survey design.
|
Robust cluster consensus of general fractionalorder nonlinear multi agent systems via adaptive sliding mode controller Abstract In this paper robust cluster consensus is investigated for general fractionalorder multi agent systems with nonlinear dynamics with dynamic uncertainty and external disturbances via adaptive sliding mode controller. First, robust cluster consensus for general fractionalorder nonlinear multi agent systems is investigated with dynamic uncertainty and external disturbances in which multi agent systems are weakly heterogeneous because they have identical nominal dynamics with different normbounded parameter uncertainties. Then, robust cluster consensus for the fractionalorder nonlinear multi agent systems with general form dynamics is investigated by using adaptive sliding mode controller. Robust cluster consensus for general fractionalorder nonlinear multi agent systems is achieved asymptotically without disturbance. It is shown that the errors between agents can converge to a small region in the presence of disturbances based on the linear matrix inequality (LMI) and MittagLeffler stability theory. Finally, simulation examples are presented for general form multi agent systems, i.e. a singlelink flexible joint manipulator which demonstrates the efficiency of the proposed adaptive controller.
|
Enhanced resource allocation in mobile edge computing using reinforcement learning based MOACO algorithm for IIOT Abstract Mobile networks deploy and offer a multi-aspect approach to various resource allocation paradigms and service-based options in computing, with implications for the Industrial Internet of Things (IIoT) and virtual reality. The mobile edge computing (MEC) paradigm runs virtual resources at the edge, handling communication between data terminals and a core network that executes under high load. Meeting all customer requirements calls for planning the execution with the support of a cognitive agent, and user data together with its behavioral profile is combined to determine the service type for IIoT. Swarm intelligence and reinforcement learning techniques provide neural caching for memory within task execution; prediction supplies the caching strategy and the cache workload that delays execution. The factors affecting this delay are predicted from mobile edge computing resources and used to assess performance at neighboring user equipment. On this basis, a cognitive agent model is built to assess resource allocation, and the communication network is established to enhance quality of service. A reinforcement learning based Multi-Objective Ant Colony Optimization (MOACO) algorithm is applied to achieve accurate resource allocation among end users by creating cost mapping tables and computing optimal allocations in MEC.
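The abstract names an ant colony optimization algorithm working over cost mapping tables. As a generic, single-objective stand-in (not the paper's MOACO), the sketch below assigns tasks to MEC servers from an illustrative cost table, with pheromone reinforcing low-cost mappings.

# Generic ant-colony sketch (not the paper's MOACO): assign tasks to MEC
# servers using a cost table; pheromone reinforces the best mapping found.
import random

costs = [[4, 2, 7], [3, 8, 1], [6, 5, 2]]   # cost[task][server], illustrative
n_tasks, n_servers = len(costs), len(costs[0])
pher = [[1.0] * n_servers for _ in range(n_tasks)]

def build_assignment():
    choice = []
    for t in range(n_tasks):
        weights = [pher[t][s] / (1 + costs[t][s]) for s in range(n_servers)]
        choice.append(random.choices(range(n_servers), weights=weights)[0])
    return choice

best, best_cost = None, float('inf')
for _ in range(100):                        # iterations
    for _ in range(10):                     # ants per iteration
        a = build_assignment()
        c = sum(costs[t][s] for t, s in enumerate(a))
        if c < best_cost:
            best, best_cost = a, c
    for t in range(n_tasks):                # evaporate, then deposit on best
        for s in range(n_servers):
            pher[t][s] *= 0.9
        pher[t][best[t]] += 1.0 / best_cost

print('best mapping:', best, 'cost:', best_cost)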
|
A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing. In order to improve the quality of service for users and reduce the energy consumption of the cloud computing environment, Mobile Edge Computing (MEC) is a promising paradigm that provides computing resources physically close to the end device. Nevertheless, a computation offloading policy that satisfies the requirements of the service provider and the consumer at the same time within a MEC system remains challenging. In this paper, we propose an offloading decision policy with a three-level structure for the MEC system, different from the traditional two-level architecture, and formulate the offloading decision optimization problem as minimizing the total cost of energy consumption and delay time. Because traditional optimization methods cannot solve this dynamic system problem efficiently, Reinforcement Learning (RL) has been used in complex control systems in recent years. We design a deep reinforcement learning (DRL) approach that minimizes the total cost by applying a deep Q-learning algorithm to address the issue of the large system state dimension. The simulation results show that the proposed algorithm achieves nearly optimal performance compared with traditional methods.
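To illustrate the decision problem, here is a tabular Q-learning stand-in for the paper's deep Q-network: an agent picks an execution level (local, edge, or cloud) and is rewarded with the negative weighted sum of energy and delay. All costs, weights, and the two-state load model are illustrative assumptions, not values from the paper.

# Tabular Q-learning stand-in for the deep Q-network described above.
# Action: 0 = local, 1 = edge, 2 = cloud; reward = -(weighted energy + delay).
import random

ENERGY = {0: 5.0, 1: 2.0, 2: 1.0}   # energy cost per action (placeholder)
DELAY = {0: 1.0, 2: 4.0}            # fixed delays; edge delay varies with load

Q = [[0.0, 0.0, 0.0] for _ in range(2)]   # state: edge lightly/heavily loaded
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(5000):
    s = random.randint(0, 1)                       # observe edge load
    if random.random() < eps:
        a = random.randint(0, 2)                   # explore
    else:
        a = max(range(3), key=lambda x: Q[s][x])   # exploit
    delay = DELAY.get(a, 2.0 if s == 0 else 6.0)   # edge delay depends on load
    reward = -(0.5 * ENERGY[a] + 0.5 * delay)      # weighted total cost
    s2 = random.randint(0, 1)                      # next load state
    Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])

print('policy:', [max(range(3), key=lambda a: Q[s][a]) for s in range(2)])

The paper replaces the table with a neural network precisely because, as the abstract notes, the real system state dimension is too large for a table like this.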
|
How to Make a Medical Error Disclosure to Patients This paper aims to investigate the Chinese public's expectations of medical error disclosure, and to develop guidelines for hospitals. A national questionnaire survey was conducted in 2019, collecting 1,008 valid responses. Respondents were asked their views on the severity of error they would like to be disclosed, and what, when, where and by whom they preferred an error disclosure to be made. Results showed that the Chinese public would like any error that reached them to be disclosed, even if it caused no harm. For both moderate and severe outcome errors, they preferred to be told face-to-face, with all the information in as much detail as possible, immediately after the error was recognized, and in a prepared meeting room. Regarding attendance on the patient side, disclosure was expected to be made to the patient and family. For the hospital side, the healthcare provider who committed the error, his or her leader, the patient safety manager and a high-positioned person of the hospital were expected to be present. As for the person making the disclosure, respondents preferred the healthcare provider who committed the error in a moderate outcome case, but the leader or a high-positioned person in a severe case.
|
Enhanced resource allocation in mobile edge computing using reinforcement learning based MOACO algorithm for IIOT Abstract Mobile networks deploy and offer a multi-aspect approach to various resource allocation paradigms and service-based options in computing, with implications for the Industrial Internet of Things (IIoT) and virtual reality. The mobile edge computing (MEC) paradigm runs virtual resources at the edge, handling communication between data terminals and a core network that executes under high load. Meeting all customer requirements calls for planning the execution with the support of a cognitive agent, and user data together with its behavioral profile is combined to determine the service type for IIoT. Swarm intelligence and reinforcement learning techniques provide neural caching for memory within task execution; prediction supplies the caching strategy and the cache workload that delays execution. The factors affecting this delay are predicted from mobile edge computing resources and used to assess performance at neighboring user equipment. On this basis, a cognitive agent model is built to assess resource allocation, and the communication network is established to enhance quality of service. A reinforcement learning based Multi-Objective Ant Colony Optimization (MOACO) algorithm is applied to achieve accurate resource allocation among end users by creating cost mapping tables and computing optimal allocations in MEC.
|
A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing. In order to improve the quality of service for users and reduce the energy consumption of the cloud computing environment, Mobile Edge Computing (MEC) is a promising paradigm that provides computing resources physically close to the end device. Nevertheless, a computation offloading policy that satisfies the requirements of the service provider and the consumer at the same time within a MEC system remains challenging. In this paper, we propose an offloading decision policy with a three-level structure for the MEC system, different from the traditional two-level architecture, and formulate the offloading decision optimization problem as minimizing the total cost of energy consumption and delay time. Because traditional optimization methods cannot solve this dynamic system problem efficiently, Reinforcement Learning (RL) has been used in complex control systems in recent years. We design a deep reinforcement learning (DRL) approach that minimizes the total cost by applying a deep Q-learning algorithm to address the issue of the large system state dimension. The simulation results show that the proposed algorithm achieves nearly optimal performance compared with traditional methods.
|
Structure and expression of the gene coding for the alpha-subunit of DNA-dependent RNA polymerase from the chloroplast genome of Zea mays. The rpoA gene coding for the alpha-subunit of DNA-dependent RNA polymerase located on the DNA of Zea mays chloroplasts has been characterized with respect to its position on the chloroplast genome and its nucleotide sequence. The amino acid sequence derived for a 39 kDa polypeptide shows strong homology with sequences derived from the rpoA genes of other chloroplast species and with the amino acid sequence of the alpha-subunit from E. coli RNA polymerase. Transcripts of the rpoA gene were identified by Northern hybridization and characterized by S1 mapping using total RNA isolated from maize chloroplasts. Antibodies raised against a synthetic C-terminal heptapeptide show cross-reactivity with a 39 kDa polypeptide contained in the stroma fraction of maize chloroplasts. It is concluded that the rpoA gene is a functional gene and that, therefore, at least the alpha-subunit of plastidic RNA polymerase is expressed in chloroplasts.
|
Enhanced resource allocation in mobile edge computing using reinforcement learning based MOACO algorithm for IIOT Abstract Mobile networks deploy and offer a multi-aspect approach to various resource allocation paradigms and service-based options in computing, with implications for the Industrial Internet of Things (IIoT) and virtual reality. The mobile edge computing (MEC) paradigm runs virtual resources at the edge, handling communication between data terminals and a core network that executes under high load. Meeting all customer requirements calls for planning the execution with the support of a cognitive agent, and user data together with its behavioral profile is combined to determine the service type for IIoT. Swarm intelligence and reinforcement learning techniques provide neural caching for memory within task execution; prediction supplies the caching strategy and the cache workload that delays execution. The factors affecting this delay are predicted from mobile edge computing resources and used to assess performance at neighboring user equipment. On this basis, a cognitive agent model is built to assess resource allocation, and the communication network is established to enhance quality of service. A reinforcement learning based Multi-Objective Ant Colony Optimization (MOACO) algorithm is applied to achieve accurate resource allocation among end users by creating cost mapping tables and computing optimal allocations in MEC.
|
A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing. In order to improve the quality of service for users and reduce the energy consumption of the cloud computing environment, Mobile Edge Computing (MEC) is a promising paradigm that provides computing resources physically close to the end device. Nevertheless, a computation offloading policy that satisfies the requirements of the service provider and the consumer at the same time within a MEC system remains challenging. In this paper, we propose an offloading decision policy with a three-level structure for the MEC system, different from the traditional two-level architecture, and formulate the offloading decision optimization problem as minimizing the total cost of energy consumption and delay time. Because traditional optimization methods cannot solve this dynamic system problem efficiently, Reinforcement Learning (RL) has been used in complex control systems in recent years. We design a deep reinforcement learning (DRL) approach that minimizes the total cost by applying a deep Q-learning algorithm to address the issue of the large system state dimension. The simulation results show that the proposed algorithm achieves nearly optimal performance compared with traditional methods.
|
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well. Gravity divided into an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the nonmediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found.
|
Enhanced resource allocation in mobile edge computing using reinforcement learning based MOACO algorithm for IIOT Abstract Mobile networks deploy and offer a multi-aspect approach to various resource allocation paradigms and service-based options in computing, with implications for the Industrial Internet of Things (IIoT) and virtual reality. The mobile edge computing (MEC) paradigm runs virtual resources at the edge, handling communication between data terminals and a core network that executes under high load. Meeting all customer requirements calls for planning the execution with the support of a cognitive agent, and user data together with its behavioral profile is combined to determine the service type for IIoT. Swarm intelligence and reinforcement learning techniques provide neural caching for memory within task execution; prediction supplies the caching strategy and the cache workload that delays execution. The factors affecting this delay are predicted from mobile edge computing resources and used to assess performance at neighboring user equipment. On this basis, a cognitive agent model is built to assess resource allocation, and the communication network is established to enhance quality of service. A reinforcement learning based Multi-Objective Ant Colony Optimization (MOACO) algorithm is applied to achieve accurate resource allocation among end users by creating cost mapping tables and computing optimal allocations in MEC.
|
A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing. In order to improve the quality of service for users and reduce the energy consumption of the cloud computing environment, Mobile Edge Computing (MEC) is a promising paradigm that provides computing resources physically close to the end device. Nevertheless, a computation offloading policy that satisfies the requirements of the service provider and the consumer at the same time within a MEC system remains challenging. In this paper, we propose an offloading decision policy with a three-level structure for the MEC system, different from the traditional two-level architecture, and formulate the offloading decision optimization problem as minimizing the total cost of energy consumption and delay time. Because traditional optimization methods cannot solve this dynamic system problem efficiently, Reinforcement Learning (RL) has been used in complex control systems in recent years. We design a deep reinforcement learning (DRL) approach that minimizes the total cost by applying a deep Q-learning algorithm to address the issue of the large system state dimension. The simulation results show that the proposed algorithm achieves nearly optimal performance compared with traditional methods.
|
General Data Protection Regulation in Health Clinics The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals by establishing rules for the processing of personal data. The GDPR considers a special category of personal data, health data, which is regarded as sensitive and subject to special conditions regarding treatment and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its application in Portuguese health clinics. The results of the present study are discussed in the light of the literature, and future work is identified.
|
Enhanced resource allocation in mobile edge computing using reinforcement learning based MOACO algorithm for IIOT Abstract Mobile networks deploy and offer a multi-aspect approach to various resource allocation paradigms and service-based options in computing, with implications for the Industrial Internet of Things (IIoT) and virtual reality. The mobile edge computing (MEC) paradigm runs virtual resources at the edge, handling communication between data terminals and a core network that executes under high load. Meeting all customer requirements calls for planning the execution with the support of a cognitive agent, and user data together with its behavioral profile is combined to determine the service type for IIoT. Swarm intelligence and reinforcement learning techniques provide neural caching for memory within task execution; prediction supplies the caching strategy and the cache workload that delays execution. The factors affecting this delay are predicted from mobile edge computing resources and used to assess performance at neighboring user equipment. On this basis, a cognitive agent model is built to assess resource allocation, and the communication network is established to enhance quality of service. A reinforcement learning based Multi-Objective Ant Colony Optimization (MOACO) algorithm is applied to achieve accurate resource allocation among end users by creating cost mapping tables and computing optimal allocations in MEC.
|
A Deep Reinforcement Learning Approach Towards Computation Offloading for Mobile Edge Computing. In order to improve the quality of service for users and reduce the energy consumption of the cloud computing environment, Mobile Edge Computing (MEC) is a promising paradigm that provides computing resources physically close to the end device. Nevertheless, a computation offloading policy that satisfies the requirements of the service provider and the consumer at the same time within a MEC system remains challenging. In this paper, we propose an offloading decision policy with a three-level structure for the MEC system, different from the traditional two-level architecture, and formulate the offloading decision optimization problem as minimizing the total cost of energy consumption and delay time. Because traditional optimization methods cannot solve this dynamic system problem efficiently, Reinforcement Learning (RL) has been used in complex control systems in recent years. We design a deep reinforcement learning (DRL) approach that minimizes the total cost by applying a deep Q-learning algorithm to address the issue of the large system state dimension. The simulation results show that the proposed algorithm achieves nearly optimal performance compared with traditional methods.
|
Effects of Brownfield Remediation on Total Gaseous Mercury Concentrations in an Urban Landscape In order to obtain a better perspective on the impacts of brownfields on the land–atmosphere exchange of mercury in urban areas, total gaseous mercury (TGM) was measured at two heights (1.8 m and 42.7 m) before (2011–2012) and after (2015–2016) the remediation of a brownfield and installation of a parking lot adjacent to the Syracuse Center of Excellence in Syracuse, NY, USA. Prior to brownfield remediation, the annual average TGM concentrations were 1.6 ± 0.6 and 1.4 ± 0.4 ng·m⁻³ at the ground and upper heights, respectively. After brownfield remediation, the annual average TGM concentrations decreased by 32% and 22% at the ground and the upper height, respectively. Mercury soil flux measurements during summer after remediation showed a net TGM deposition of 1.7 ng·m⁻²·day⁻¹, suggesting that the site transitioned from a mercury source to a net mercury sink. Measurements from the Atmospheric Mercury Network (AMNet) indicate that there was no regional decrease in TGM concentrations during the study period. This study demonstrates that evasion from mercury-contaminated soil significantly increased local TGM concentrations, which was subsequently mitigated after soil restoration. Considering the large number of brownfields, they may be an important source of mercury emissions to local urban ecosystems and warrant future study at additional locations.
|
Intra Frame Prediction for Video Coding Using a Conditional Autoencoder Approach Intra prediction is a vital component of most modern image and video codecs. State-of-the-art video codecs like High Efficiency Video Coding (HEVC) or the upcoming Versatile Video Coding (VVC) use a high number of directional modes. With the recent advances in deep learning, it is now possible to use artificial neural networks (ANNs) for intra frame prediction. Previously published approaches usually add additional ANN-based modes or replace all modes by training several networks. In our approach, we use a single autoencoder network to first compress the original block, with the help of already transmitted pixels, down to four parameters. We then use these parameters together with the support area to generate a prediction for the block. This way, we are able to replace all angular intra modes with a single ANN. In the experiments we compare our method with the intra prediction method currently used in the VVC Test Model (VTM). Using our method, we are able to gain up to 0.85 dB prediction PSNR with a comparable amount of side information, or reduce the amount of side information by 2 bits per prediction unit at similar PSNR.
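A shape-level sketch of the conditional autoencoder idea follows: the encoder sees the original block plus already transmitted support pixels and emits four parameters, and the decoder reconstructs a prediction from those parameters and the support. Layer sizes and the support-area size are assumptions, and no training loop is shown.

# Shape-level sketch of the conditional-autoencoder idea described above.
# The 4 encoder outputs are the only values that would need to be signaled.
import torch
import torch.nn as nn

B = 8                                    # block size (8x8), illustrative
SUP = 2 * B * B                          # flattened support area (assumed size)

encoder = nn.Sequential(nn.Linear(B * B + SUP, 64), nn.ReLU(), nn.Linear(64, 4))
decoder = nn.Sequential(nn.Linear(4 + SUP, 64), nn.ReLU(), nn.Linear(64, B * B))

block = torch.rand(1, B * B)             # original block (encoder side only)
support = torch.rand(1, SUP)             # already-transmitted neighbor pixels

params = encoder(torch.cat([block, support], dim=1))   # 4 values to signal
pred = decoder(torch.cat([params, support], dim=1))    # decoder-side prediction
print(params.shape, pred.view(B, B).shape)             # (1, 4) and (8, 8)

Because the decoder also receives the support area, the four signaled parameters only have to encode what the neighbors cannot already predict, which is what lets one network stand in for many angular modes.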
|
Dual Learning-based Video Coding with Inception Dense Blocks In this paper, a dual learning-based method for intra coding is introduced for the PCS Grand Challenge. The method is mainly composed of two parts: intra prediction and reconstruction filtering. They use different network structures: the neural-network-based intra prediction uses a fully connected network to predict the block, while the neural-network-based reconstruction filtering utilizes convolutional networks. Different from previous filtering works, we use a network with more powerful feature extraction capabilities in our reconstruction filtering network, and the filtering unit is at the block level so as to achieve more accurate filtering compensation. To the best of our knowledge, among all learning-based methods, this is the first attempt to combine two different networks in one application, and we achieve state-of-the-art performance for the AI configuration on the HEVC test sequences. The experimental results show that our method leads to significant BD-rate savings for the 8 provided sequences compared to the HM16.20 baseline (average 10.24% and 3.57% bitrate reductions for all-intra and random-access coding, respectively). For the HEVC test sequences, our model also achieved a 9.70% BD-rate saving compared to the HM16.20 baseline for the all-intra configuration.
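For comparison with the previous sketch, the dual design described above pairs two networks. The sketch below shows only the shapes involved: a fully connected predictor fed by reconstructed neighbors, and a small convolutional filter applied at the block level; all layer sizes are illustrative assumptions, and the paper's inception dense blocks are not reproduced here.

# Shape-level sketch of the two-part design: FC intra prediction plus a
# CNN reconstruction filter applied as a block-level residual correction.
import torch
import torch.nn as nn

B = 8                                              # block size, illustrative
predictor = nn.Sequential(nn.Linear(2 * B * B, 128), nn.ReLU(),
                          nn.Linear(128, B * B))   # FC intra prediction
filt = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                     nn.Conv2d(16, 1, 3, padding=1))  # CNN filtering

support = torch.rand(1, 2 * B * B)                 # reconstructed neighbors
pred = predictor(support).view(1, 1, B, B)         # predicted block
recon = pred + torch.rand(1, 1, B, B) * 0.1        # stand-in decoded block
filtered = recon + filt(recon)                     # residual filtering pass
print(filtered.shape)                              # (1, 1, 8, 8)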
|
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents in the streets of developing countries like Bangladesh. The lack of overspeed alerts, rear cameras, rear-obstacle detection and timely maintenance is a known cause of fatal accidents. These systems are absent not only in auto rickshaws but also in most public transport. For this system, surveys were conducted in different phases among passengers, drivers and even conductors to obtain a useful and reliable result. Since the system is very cheap, low-income drivers and vehicle owners will be able to afford it easily, making road safety the first and foremost priority.
|
Intra Frame Prediction for Video Coding Using a Conditional Autoencoder Approach Intra prediction is a vital component of most modern image and video codecs. State-of-the-art video codecs like High Efficiency Video Coding (HEVC) or the upcoming Versatile Video Coding (VVC) use a high number of directional modes. With the recent advances in deep learning, it is now possible to use artificial neural networks (ANNs) for intra frame prediction. Previously published approaches usually add additional ANN-based modes or replace all modes by training several networks. In our approach, we use a single autoencoder network to first compress the original block, with the help of already transmitted pixels, down to four parameters. We then use these parameters together with the support area to generate a prediction for the block. This way, we are able to replace all angular intra modes with a single ANN. In the experiments we compare our method with the intra prediction method currently used in the VVC Test Model (VTM). Using our method, we are able to gain up to 0.85 dB prediction PSNR with a comparable amount of side information, or reduce the amount of side information by 2 bits per prediction unit at similar PSNR.
|
Dual Learning-based Video Coding with Inception Dense Blocks In this paper, a dual learning-based method for intra coding is introduced for the PCS Grand Challenge. The method is mainly composed of two parts: intra prediction and reconstruction filtering. They use different network structures: the neural-network-based intra prediction uses a fully connected network to predict the block, while the neural-network-based reconstruction filtering utilizes convolutional networks. Different from previous filtering works, we use a network with more powerful feature extraction capabilities in our reconstruction filtering network, and the filtering unit is at the block level so as to achieve more accurate filtering compensation. To the best of our knowledge, among all learning-based methods, this is the first attempt to combine two different networks in one application, and we achieve state-of-the-art performance for the AI configuration on the HEVC test sequences. The experimental results show that our method leads to significant BD-rate savings for the 8 provided sequences compared to the HM16.20 baseline (average 10.24% and 3.57% bitrate reductions for all-intra and random-access coding, respectively). For the HEVC test sequences, our model also achieved a 9.70% BD-rate saving compared to the HM16.20 baseline for the all-intra configuration.
|
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide a framework in which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation to gain a competitive advantage, or position themselves strategically to be first to reach the spawned power-ups.
|
Intra Frame Prediction for Video Coding Using a Conditional Autoencoder Approach Intra prediction is a vital component of most modern image and video codecs. State-of-the-art video codecs like High Efficiency Video Coding (HEVC) or the upcoming Versatile Video Coding (VVC) use a high number of directional modes. With the recent advances in deep learning, it is now possible to use artificial neural networks (ANNs) for intra frame prediction. Previously published approaches usually add additional ANN-based modes or replace all modes by training several networks. In our approach, we use a single autoencoder network to first compress the original block, with the help of already transmitted pixels, down to four parameters. We then use these parameters together with the support area to generate a prediction for the block. This way, we are able to replace all angular intra modes with a single ANN. In the experiments we compare our method with the intra prediction method currently used in the VVC Test Model (VTM). Using our method, we are able to gain up to 0.85 dB prediction PSNR with a comparable amount of side information, or reduce the amount of side information by 2 bits per prediction unit at similar PSNR.
|
Dual Learning-based Video Coding with Inception Dense Blocks In this paper, a dual learning-based method for intra coding is introduced for the PCS Grand Challenge. The method is mainly composed of two parts: intra prediction and reconstruction filtering. They use different network structures: the neural-network-based intra prediction uses a fully connected network to predict the block, while the neural-network-based reconstruction filtering utilizes convolutional networks. Different from previous filtering works, we use a network with more powerful feature extraction capabilities in our reconstruction filtering network, and the filtering unit is at the block level so as to achieve more accurate filtering compensation. To the best of our knowledge, among all learning-based methods, this is the first attempt to combine two different networks in one application, and we achieve state-of-the-art performance for the AI configuration on the HEVC test sequences. The experimental results show that our method leads to significant BD-rate savings for the 8 provided sequences compared to the HM16.20 baseline (average 10.24% and 3.57% bitrate reductions for all-intra and random-access coding, respectively). For the HEVC test sequences, our model also achieved a 9.70% BD-rate saving compared to the HM16.20 baseline for the all-intra configuration.
|
General Data Protection Regulation in Health Clinics The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals by establishing rules for the processing of personal data. The GDPR considers a special category of personal data, health data, which is regarded as sensitive and subject to special conditions regarding treatment and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its application in Portuguese health clinics. The results of the present study are discussed in the light of the literature, and future work is identified.
|
Intra Frame Prediction for Video Coding Using a Conditional Autoencoder Approach Intra prediction is a vital component of most modern image and video codecs. State-of-the-art video codecs like High Efficiency Video Coding (HEVC) or the upcoming Versatile Video Coding (VVC) use a high number of directional modes. With the recent advances in deep learning, it is now possible to use artificial neural networks (ANNs) for intra frame prediction. Previously published approaches usually add additional ANN-based modes or replace all modes by training several networks. In our approach, we use a single autoencoder network to first compress the original block, with the help of already transmitted pixels, down to four parameters. We then use these parameters together with the support area to generate a prediction for the block. This way, we are able to replace all angular intra modes with a single ANN. In the experiments we compare our method with the intra prediction method currently used in the VVC Test Model (VTM). Using our method, we are able to gain up to 0.85 dB prediction PSNR with a comparable amount of side information, or reduce the amount of side information by 2 bits per prediction unit at similar PSNR.
|
Dual Learning-based Video Coding with Inception Dense Blocks In this paper, a dual learning-based method for intra coding is introduced for the PCS Grand Challenge. The method is mainly composed of two parts: intra prediction and reconstruction filtering. They use different network structures: the neural-network-based intra prediction uses a fully connected network to predict the block, while the neural-network-based reconstruction filtering utilizes convolutional networks. Different from previous filtering works, we use a network with more powerful feature extraction capabilities in our reconstruction filtering network, and the filtering unit is at the block level so as to achieve more accurate filtering compensation. To the best of our knowledge, among all learning-based methods, this is the first attempt to combine two different networks in one application, and we achieve state-of-the-art performance for the AI configuration on the HEVC test sequences. The experimental results show that our method leads to significant BD-rate savings for the 8 provided sequences compared to the HM16.20 baseline (average 10.24% and 3.57% bitrate reductions for all-intra and random-access coding, respectively). For the HEVC test sequences, our model also achieved a 9.70% BD-rate saving compared to the HM16.20 baseline for the all-intra configuration.
|
Symmetric Simplicial Pseudoline Arrangements A simplicial arrangement of pseudolines is a collection of topological lines in the projective plane where each region that is formed is triangular. This paper refines and develops David Eppstein's notion of a kaleidoscope construction for symmetric pseudoline arrangements to construct and analyze several infinite families of simplicial pseudoline arrangements with high degrees of geometric symmetry. In particular, all simplicial pseudoline arrangements with the symmetries of a regular k-gon and three symmetry classes of pseudolines, consisting of the mirrors of the k-gon and two other symmetry classes, plus sometimes the line at infinity, are classified, and other interesting families (with more symmetry classes of pseudolines) are discussed.
|
Intra Frame Prediction for Video Coding Using a Conditional Autoencoder Approach Intra prediction is a vital component of most modern image and video codecs. State-of-the-art video codecs like High Efficiency Video Coding (HEVC) or the upcoming Versatile Video Coding (VVC) use a high number of directional modes. With the recent advances in deep learning, it is now possible to use artificial neural networks (ANNs) for intra frame prediction. Previously published approaches usually add additional ANN-based modes or replace all modes by training several networks. In our approach, we use a single autoencoder network to first compress the original block, with the help of already transmitted pixels, down to four parameters. We then use these parameters together with the support area to generate a prediction for the block. This way, we are able to replace all angular intra modes with a single ANN. In the experiments we compare our method with the intra prediction method currently used in the VVC Test Model (VTM). Using our method, we are able to gain up to 0.85 dB prediction PSNR with a comparable amount of side information, or reduce the amount of side information by 2 bits per prediction unit at similar PSNR.
|
Dual Learning-based Video Coding with Inception Dense Blocks In this paper, a dual learning-based method for intra coding is introduced for the PCS Grand Challenge. The method is mainly composed of two parts: intra prediction and reconstruction filtering. They use different network structures: the neural-network-based intra prediction uses a fully connected network to predict the block, while the neural-network-based reconstruction filtering utilizes convolutional networks. Different from previous filtering works, we use a network with more powerful feature extraction capabilities in our reconstruction filtering network, and the filtering unit is at the block level so as to achieve more accurate filtering compensation. To the best of our knowledge, among all learning-based methods, this is the first attempt to combine two different networks in one application, and we achieve state-of-the-art performance for the AI configuration on the HEVC test sequences. The experimental results show that our method leads to significant BD-rate savings for the 8 provided sequences compared to the HM16.20 baseline (average 10.24% and 3.57% bitrate reductions for all-intra and random-access coding, respectively). For the HEVC test sequences, our model also achieved a 9.70% BD-rate saving compared to the HM16.20 baseline for the all-intra configuration.
|
Using Electronic Patient Reported Outcomes to Foster Palliative Cancer Care: The MyPal Approach Palliative care is offered along with primary treatment to improve the quality of life of the patient by relieving the symptoms and stress of a serious illness such as cancer. As per modern definitions, palliative care is appropriate at any age and at any stage of the illness, regardless of the eventual outcome. Patient-reported outcomes (PROs), i.e., health status measurements reported directly by the patients or their proxies, and especially their availability in electronic form (ePROs), are gradually gaining popularity as building blocks of innovative palliative care interventions. This paper presents MyPal, an EC-funded collaborative research project that aims to exploit advanced eHealth technologies to develop and evaluate two novel ePRO-based general palliative care interventions for cancer patients. In particular, the paper presents: (1) a short overview of MyPal; (2) the target populations, i.e., adults suffering from chronic lymphocytic leukemia (CLL) or myelodysplastic syndromes (MDS), and children with solid or hematologic malignancies; (3) the ePRO-based interventions being designed for the target populations; (4) the eHealth platform for delivering the interventions under development; and (5) the international, multicenter clinical studies to be conducted for assessing these interventions, i.e., a randomized controlled trial (RCT) and an observational study for adults and children, respectively.
|
Trust Degree Calculation Method Based on Trust Blockchain Node Due to the diversity and mobility of blockchain network nodes and the decentralized nature of blockchain networks, traditional trust value evaluation indicators cannot be used directly. In order to obtain trusted nodes, a trustworthiness calculation method based on trust blockchain nodes is proposed. Different from traditional P2P network trust value calculation, the trust blockchain not only acquires the working state of a node but also collects its special behavior information, and it combines the trust value generated by node transactions, the trust value generated by node behavior, and the node's joining time. After the attenuation factor is comprehensively evaluated, trusted nodes are selected, effectively ensuring the security of the blockchain network environment while reducing the average transaction delay and increasing the block rate.
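As a minimal sketch of the trust-degree idea described above (not the paper's exact formula), the function below combines a transaction-derived trust value and a behavior-derived trust value and attenuates the result by the age of the evidence; the weights, decay constant, and selection threshold are illustrative assumptions.

# Minimal sketch of a trust-degree calculation: weighted combination of
# transaction trust and behavior trust, attenuated by evidence age.
# Weights, decay constant, and threshold are illustrative assumptions.
import math

def trust_degree(tx_trust, behavior_trust, age_hours,
                 w_tx=0.6, w_beh=0.4, decay=0.05):
    atten = math.exp(-decay * age_hours)        # older evidence counts less
    return atten * (w_tx * tx_trust + w_beh * behavior_trust)

# (tx_trust, behavior_trust, age_hours) per node, illustrative values
nodes = {'A': (0.9, 0.8, 2.0), 'B': (0.7, 0.9, 48.0)}
scores = {n: trust_degree(*v) for n, v in nodes.items()}
trusted = [n for n, s in scores.items() if s > 0.5]   # selection threshold
print(scores, 'trusted:', trusted)

Note how node B, despite good raw trust values, falls below the threshold once attenuation is applied, which mirrors the role the abstract gives the attenuation factor in filtering out stale evidence.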
|
Blockchain-Based Lightweight Trust Management in Mobile Ad-Hoc Networks As a trending research topic in recent years, researchers have been adopting the blockchain in wireless ad-hoc environments. Owing to its strong characteristics, such as consensus, immutability, finality, and provenance, the blockchain is utilized not only as secure storage for critical data but also as a platform that facilitates the trustless exchange of data between independent parties. However, the main challenges of blockchain application in an ad-hoc network are which kinds of nodes should be involved in the validation process and how to adapt the heavy computational complexity of block validation appropriately while maintaining the genuine characteristics of a blockchain. In this paper, we propose a blockchain-based trust management system with a lightweight consensus algorithm in a mobile ad-hoc network (MANET). The proposed scheme provides a distributed trust framework for routing nodes in MANETs that is tamper-proof via blockchain. The optimized link state routing protocol (OLSR) is exploited as a representative protocol to embed the blockchain concept in MANETs. As a securely distributed and trusted platform, the blockchain solves most of the security issues in OLSR, in which every node performs security operations individually and repetitively. Additionally, using predefined principles, the routing nodes in the proposed scheme can collaborate to defend themselves from attackers in the network. The experimental results show that the proposed consensus algorithm is suitable for use in resource-hungry MANETs, with reduced validation time and less overhead. Meanwhile, the attack detection overhead and time also decrease because the repetitiveness of the process is reduced, while providing scalable and distributed trust among the routing nodes.
|
Structure and expression of the gene coding for the alpha-subunit of DNA-dependent RNA polymerase from the chloroplast genome of Zea mays. The rpoA gene coding for the alpha-subunit of DNA-dependent RNA polymerase located on the DNA of Zea mays chloroplasts has been characterized with respect to its position on the chloroplast genome and its nucleotide sequence. The amino acid sequence derived for a 39 kDa polypeptide shows strong homology with sequences derived from the rpoA genes of other chloroplast species and with the amino acid sequence of the alpha-subunit from E. coli RNA polymerase. Transcripts of the rpoA gene were identified by Northern hybridization and characterized by S1 mapping using total RNA isolated from maize chloroplasts. Antibodies raised against a synthetic C-terminal heptapeptide show cross-reactivity with a 39 kDa polypeptide contained in the stroma fraction of maize chloroplasts. It is concluded that the rpoA gene is a functional gene and that, therefore, at least the alpha-subunit of plastidic RNA polymerase is expressed in chloroplasts.
|
Trust Degree Calculation Method Based on Trust Blockchain Node Due to the diversity and mobility of blockchain network nodes and the decentralized nature of blockchain networks, traditional trust value evaluation indicators cannot be used directly. In order to obtain trusted nodes, a trustworthiness calculation method based on trust blockchain nodes is proposed. Different from traditional P2P network trust value calculation, the trust blockchain not only acquires the working state of a node but also collects its special behavior information, and it combines the trust value generated by node transactions, the trust value generated by node behavior, and the node's joining time. After the attenuation factor is comprehensively evaluated, trusted nodes are selected, effectively ensuring the security of the blockchain network environment while reducing the average transaction delay and increasing the block rate.
|
Blockchain-Based Lightweight Trust Management in Mobile Ad-Hoc Networks As a trending research topic in recent years, researchers have been adopting the blockchain in wireless ad-hoc environments. Owing to its strong characteristics, such as consensus, immutability, finality, and provenance, the blockchain is utilized not only as secure storage for critical data but also as a platform that facilitates the trustless exchange of data between independent parties. However, the main challenges of blockchain application in an ad-hoc network are which kinds of nodes should be involved in the validation process and how to adapt the heavy computational complexity of block validation appropriately while maintaining the genuine characteristics of a blockchain. In this paper, we propose a blockchain-based trust management system with a lightweight consensus algorithm in a mobile ad-hoc network (MANET). The proposed scheme provides a distributed trust framework for routing nodes in MANETs that is tamper-proof via blockchain. The optimized link state routing protocol (OLSR) is exploited as a representative protocol to embed the blockchain concept in MANETs. As a securely distributed and trusted platform, the blockchain solves most of the security issues in OLSR, in which every node performs security operations individually and repetitively. Additionally, using predefined principles, the routing nodes in the proposed scheme can collaborate to defend themselves from attackers in the network. The experimental results show that the proposed consensus algorithm is suitable for use in resource-hungry MANETs, with reduced validation time and less overhead. Meanwhile, the attack detection overhead and time also decrease because the repetitiveness of the process is reduced, while providing scalable and distributed trust among the routing nodes.
|
Reinventing Ourselves: New and Emerging Roles of Academic Librarians in Canadian Research-Intensive Universities The academic library profession is being redefined by the shifting research and scholarly landscape, the transformation in higher education, and advances in technology. A survey of librarians working in Canada's research-intensive universities was conducted to explore new and emerging roles. This study focuses on librarians' activities in: Research Support, Teaching and Learning, Digital Scholarship, User Experience, and Scholarly Communication. It addresses the scope and nature of the new roles, the skills required to provide new services, and the confidence librarians have in their abilities to perform the new roles. It also reports on librarians' job satisfaction and their perceived impact on the academic enterprise.
|
Trust Degree Calculation Method Based on Trust Blockchain Node Due to the diversity and mobility of blockchain network nodes and the decentralized nature of blockchain networks, traditional trust value evaluation indicators cannot be used directly. In order to obtain trusted nodes, a trustworthiness calculation method based on trust blockchain nodes is proposed. Different from traditional P2P network trust value calculation, the trust blockchain not only acquires the working state of a node but also collects its special behavior information, and it combines the trust value generated by node transactions, the trust value generated by node behavior, and the node's joining time. After the attenuation factor is comprehensively evaluated, trusted nodes are selected, effectively ensuring the security of the blockchain network environment while reducing the average transaction delay and increasing the block rate.
|
Blockchain-Based Lightweight Trust Management in Mobile Ad-Hoc Networks As a trending research topic in recent years, researchers have been adopting the blockchain in wireless ad-hoc environments. Owing to its strong characteristics, such as consensus, immutability, finality, and provenance, the blockchain is utilized not only as secure storage for critical data but also as a platform that facilitates the trustless exchange of data between independent parties. However, the main challenges of blockchain application in an ad-hoc network are which kinds of nodes should be involved in the validation process and how to adapt the heavy computational complexity of block validation appropriately while maintaining the genuine characteristics of a blockchain. In this paper, we propose a blockchain-based trust management system with a lightweight consensus algorithm in a mobile ad-hoc network (MANET). The proposed scheme provides a distributed trust framework for routing nodes in MANETs that is tamper-proof via blockchain. The optimized link state routing protocol (OLSR) is exploited as a representative protocol to embed the blockchain concept in MANETs. As a securely distributed and trusted platform, the blockchain solves most of the security issues in OLSR, in which every node performs security operations individually and repetitively. Additionally, using predefined principles, the routing nodes in the proposed scheme can collaborate to defend themselves from attackers in the network. The experimental results show that the proposed consensus algorithm is suitable for use in resource-hungry MANETs, with reduced validation time and less overhead. Meanwhile, the attack detection overhead and time also decrease because the repetitiveness of the process is reduced, while providing scalable and distributed trust among the routing nodes.
|
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. And this would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by the same interconnectedness; and since spatial and relativistic gravities can apparently be produced without the aid of gravitons; massive gravity could also be produced without gravitons as well. Gravity divided into an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, thus resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would therefore always quantize gravity. So the nonmediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; thus making gravitons unproven and unnecessary, and explaining why gravitons have never been found.
|
Trust Degree Calculation Method Based on Trust Blockchain Node Due to the diversity and mobility of blockchain network nodes and the decentralized nature of blockchain networks, traditional trust evaluation indicators cannot be applied directly. To identify trusted nodes, a trustworthiness calculation method based on trust blockchain nodes is proposed. Unlike trust value calculation in traditional P2P networks, the trust blockchain not only captures each node's working state but also collects its specific behavior information, synthesizing the trust value generated by node transactions with the trust value generated by node behavior while accounting for joining time. After this combined value is attenuated and comprehensively evaluated, trusted nodes are selected, which effectively secures the blockchain network environment while reducing average transaction delay and increasing the block generation rate.
|
Blockchain-Based Lightweight Trust Management in Mobile Ad-Hoc Networks As a trending and interesting research topic, in recent years researchers have been adopting the blockchain in the wireless ad-hoc environment. Owing to its strong characteristics, such as consensus, immutability, finality, and provenance, the blockchain is utilized not only as secure storage for critical data but also as a platform that facilitates the trustless exchange of data between independent parties. However, the main challenges of blockchain application in an ad-hoc network are which kinds of nodes should be involved in the validation process and how to accommodate the heavy computational cost of block validation while maintaining the genuine characteristics of a blockchain. In this paper, we propose a blockchain-based trust management system with a lightweight consensus algorithm in a mobile ad-hoc network (MANET). The proposed scheme provides a distributed trust framework for routing nodes in MANETs that is made tamper-proof via the blockchain. The optimized link state routing protocol (OLSR) is exploited as a representative protocol to embed the blockchain concept in MANETs. As a securely distributed and trusted platform, the blockchain solves most of the security issues in the OLSR, in which every node otherwise performs the security operations individually and repetitively. Additionally, using predefined principles, the routing nodes in the proposed scheme can collaborate to defend themselves from attackers in the network. The experimental results show that the proposed consensus algorithm is suitable for the resource-hungry MANET, with reduced validation time and less overhead. Meanwhile, the attack detection overhead and time also decrease because the repetitiveness of the process is reduced, while a scalable and distributed trust is provided among the routing nodes.
|
Beyond Reality In virtual reality (VR), a new language of sound design is emerging. As directors grapple to find solutions to some of the inherent problems of telling a story in VR (for instance, the audience's ability to control the field of view), sound designers are playing a new role in subconsciously guiding the audience's attention and, consequently, are framing the narrative. However, developing a new language of sound design requires time for creative experimentation, and in direct opposition to this, a typical VR workflow often features compressed project timelines, software difficulties, and budgetary constraints. Turning to VR sound research offers little guidance to sound designers, where decades of research has focused on high fidelity and realistic sound representation in the name of presence and uninterrupted immersion (McRoberts, 2018), largely ignoring the potential contribution of cinematic sound design practices that use creative sound to guide an audience's emotion. Angela McArthur, Rebecca Stewart, and Mark Sandler go as far as to argue that unrealistic and creative sound design may be crucial for an audience's emotional engagement in virtual reality (McArthur et al., 2017). To make a contribution towards the new language of sound for VR, and with reference to the literature, this practice-led research explores cinematic sound practices and principles within 360 film through the production of a 5-minute 360 film entitled "Afraid of the Dark". The research is supported by a contextual survey including unpublished interviews with the sound designers of three 360 films that had the budget and time to experiment with cinematic sound practices, namely "Under the Canopy" with sound design by Joel Douek, "My Africa" with sound design by Roland Heap, and the Emmy award-winning "Collisions" with sound design by Oscar-nominated Tom Myers from Skywalker Sound. Additional insights are included from an unpublished interview with an experienced team of 360 film sound designers from "Cutting Edge" in Brisbane, Australia: Mike Lange, Michael Thomas and Heath Plumb. The findings detail the benefits of thinking about sound from the beginning of pre-production, the practical considerations of on-set sound recording, and differing approaches to realistic representation and creative design for documentary in the sound studio. Additionally, the research contributes a low-budget workflow for creating spatial sound for 360 film as well as a template for an ambisonic location sound report.
|
Trust Degree Calculation Method Based on Trust Blockchain Node Due to the diversity and mobility of blockchain network nodes and the decentralized nature of blockchain networks, traditional trust evaluation indicators cannot be applied directly. To identify trusted nodes, a trustworthiness calculation method based on trust blockchain nodes is proposed. Unlike trust value calculation in traditional P2P networks, the trust blockchain not only captures each node's working state but also collects its specific behavior information, synthesizing the trust value generated by node transactions with the trust value generated by node behavior while accounting for joining time. After this combined value is attenuated and comprehensively evaluated, trusted nodes are selected, which effectively secures the blockchain network environment while reducing average transaction delay and increasing the block generation rate.
|
Blockchain-Based Lightweight Trust Management in Mobile Ad-Hoc Networks As a trending and interesting research topic, in recent years researchers have been adopting the blockchain in the wireless ad-hoc environment. Owing to its strong characteristics, such as consensus, immutability, finality, and provenance, the blockchain is utilized not only as secure storage for critical data but also as a platform that facilitates the trustless exchange of data between independent parties. However, the main challenges of blockchain application in an ad-hoc network are which kinds of nodes should be involved in the validation process and how to accommodate the heavy computational cost of block validation while maintaining the genuine characteristics of a blockchain. In this paper, we propose a blockchain-based trust management system with a lightweight consensus algorithm in a mobile ad-hoc network (MANET). The proposed scheme provides a distributed trust framework for routing nodes in MANETs that is made tamper-proof via the blockchain. The optimized link state routing protocol (OLSR) is exploited as a representative protocol to embed the blockchain concept in MANETs. As a securely distributed and trusted platform, the blockchain solves most of the security issues in the OLSR, in which every node otherwise performs the security operations individually and repetitively. Additionally, using predefined principles, the routing nodes in the proposed scheme can collaborate to defend themselves from attackers in the network. The experimental results show that the proposed consensus algorithm is suitable for the resource-hungry MANET, with reduced validation time and less overhead. Meanwhile, the attack detection overhead and time also decrease because the repetitiveness of the process is reduced, while a scalable and distributed trust is provided among the routing nodes.
|
Evaluating Text Entry in Virtual Reality using a Touch-sensitive Physical Keyboard Text entry is a challenge for Virtual Reality (VR) applications. In the context of immersive VR head-mounted displays, text entry has been investigated for standard physical keyboards as well as for various hand representations. Specifically, prior work has indicated that a minimalistic fingertip visualization is an efficient hand representation. However, these representations typically require external tracking systems. Touch-sensitive physical keyboards allow for on-surface interaction, with sensing integrated into the keyboard itself. However, they have not been thoroughly investigated within VR. We close this gap by comparing text entry on a standard physical keyboard and a touch-sensitive physical keyboard in a controlled user study (n = 26). Our results indicate that text entry using touch-sensitive physical keyboards can be as efficient as the fingertip visualization, but that results vary between experienced and inexperienced typists.
|
Implicit Transform Selection based on Cross Color Component Prediction for Future Video Coding In the study of video coding technologies, the Discrete Cosine Transform type II (DCT-II) has been employed for energy compaction. To improve coding efficiency, the Multiple Transform Selection (MTS) scheme was recently proposed, in which the Discrete Sine Transform type VII (DST-VII) and the DCT-VIII are newly introduced with explicit signaling. MTS is not applied to chroma components due to the limitation of computational complexity on the encoder side. This paper proposes an implicit transform selection that applies the DST-VII to chroma blocks based on the intra prediction mode applied to the chroma block. The experimental results show 0.45% and 0.48% BD-rate gains in the all-intra configuration for the Cb and Cr components, respectively, compared to the conventional method, with negligible impact on encoding and decoding complexity.
|
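For readers unfamiliar with mode-dependent transform selection, the sketch below shows the shape of the rule the abstract describes: the chroma block's intra prediction mode alone decides between DST-VII and DCT-II, so no flag is signaled. The mode grouping below is an assumption for illustration, not the paper's actual decision table.

```python
from enum import Enum

class Transform(Enum):
    DCT2 = "DCT-II"
    DST7 = "DST-VII"

# Hypothetical grouping: planar/DC and near-horizontal/vertical angular modes
# map to DST-VII; all other intra modes keep DCT-II.
DST7_MODES = {0, 1} | set(range(18, 23)) | set(range(48, 53))

def implicit_chroma_transform(intra_mode: int) -> Transform:
    """Pick the chroma transform from the intra mode alone (no explicit signaling)."""
    return Transform.DST7 if intra_mode in DST7_MODES else Transform.DCT2

print(implicit_chroma_transform(0))   # Transform.DST7
print(implicit_chroma_transform(34))  # Transform.DCT2
```

Because the decoder can evaluate the same rule, the selection costs no bits, which is the point of making it implicit.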
Adaptive block transforms for hybrid video coding Today's standard video coders employ the hybrid coding scheme on a macroblock basis. In these coders, blocks of 16×16 and 8×8 pixels are used for motion compensation of non-interlaced video. The Discrete Cosine Transform (DCT) is then applied to the prediction error on blocks of size 8×8. The emerging coding standard H.26L employs a set of seven different block sizes for motion compensation. The size of these blocks varies from 4×4 to 16×16. The block sizes smaller than 8×8 imply that the 8×8 DCT cannot be used for transform coding of the prediction error. In the current test model, an integer approximation of the 4×4 DCT matrix is employed. In this paper, the concept of Adaptive Block Transforms is proposed. In this scheme, the transform block size is adapted to the block sizes used for motion compensation. The transform exploits the maximum possible signal length for transform coding without exceeding the compensated block boundaries. The proposed scheme is integrated into the H.26L test model. New integer approximations of the 8×8 and 16×16 DCT matrices are introduced. Like the TML 4×4 transform, the coefficient values of these matrices are restricted to a limited range. The results presented here are based on an entropy estimation. They reveal an increased rate-distortion performance of approximately 1.1 dB for high rates on the employed test sequences.
|
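As a concrete instance of the integer DCT approximations the abstract discusses, the sketch below applies the 4×4 integer transform that the finalized H.264/AVC standard adopted; it is used here as a familiar stand-in for the TML matrices, whose exact coefficients are not reproduced from the paper. Normalization is omitted because real codecs fold it into quantization.

```python
import numpy as np

# 4x4 integer DCT approximation (H.264/AVC core transform); small integer
# coefficients keep the transform exact in fixed-width integer arithmetic.
C = np.array([[1,  1,  1,  1],
              [2,  1, -1, -2],
              [1, -1, -1,  1],
              [1, -2,  2, -1]], dtype=np.int64)

def fwd4x4(block: np.ndarray) -> np.ndarray:
    """Separable 2-D forward transform C @ X @ C.T (scaling left to quantization)."""
    return C @ block @ C.T

residual = np.array([[ 5, -2,  0,  1],
                     [ 3,  4, -1,  0],
                     [ 0,  1,  2, -3],
                     [-1,  0,  1,  2]], dtype=np.int64)
print(fwd4x4(residual))
```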
Rickshaw Buddy RICKSHAW BUDDY is a low-cost automated assistance system for three-wheeler auto rickshaws, intended to reduce the high rate of accidents on the streets of developing countries like Bangladesh. It is a given fact that the lack of an over-speed alert, a back camera, rear obstacle detection, and timely maintenance are causes behind fatal accidents. These systems are absent not only in auto rickshaws but also in most public transports. For this system, surveys were conducted in different phases among passengers, drivers, and even conductors to obtain a useful and successful result. Since the system is very cheap, low-income drivers and owners of vehicles will be able to afford it easily, making road safety the first and foremost priority.
|
Implicit Transform Selection based on Cross Color Component Prediction for Future Video Coding In the study of video coding technologies, the Discrete Cosine Transform type II (DCT-II) has been employed for energy compaction. To improve coding efficiency, the Multiple Transform Selection (MTS) scheme was recently proposed, in which the Discrete Sine Transform type VII (DST-VII) and the DCT-VIII are newly introduced with explicit signaling. MTS is not applied to chroma components due to the limitation of computational complexity on the encoder side. This paper proposes an implicit transform selection that applies the DST-VII to chroma blocks based on the intra prediction mode applied to the chroma block. The experimental results show 0.45% and 0.48% BD-rate gains in the all-intra configuration for the Cb and Cr components, respectively, compared to the conventional method, with negligible impact on encoding and decoding complexity.
|
Adaptive block transforms for hybrid video coding Today's standard video coders employ the hybrid coding scheme on a macroblock basis. In these coders, blocks of 16×16 and 8×8 pixels are used for motion compensation of non-interlaced video. The Discrete Cosine Transform (DCT) is then applied to the prediction error on blocks of size 8×8. The emerging coding standard H.26L employs a set of seven different block sizes for motion compensation. The size of these blocks varies from 4×4 to 16×16. The block sizes smaller than 8×8 imply that the 8×8 DCT cannot be used for transform coding of the prediction error. In the current test model, an integer approximation of the 4×4 DCT matrix is employed. In this paper, the concept of Adaptive Block Transforms is proposed. In this scheme, the transform block size is adapted to the block sizes used for motion compensation. The transform exploits the maximum possible signal length for transform coding without exceeding the compensated block boundaries. The proposed scheme is integrated into the H.26L test model. New integer approximations of the 8×8 and 16×16 DCT matrices are introduced. Like the TML 4×4 transform, the coefficient values of these matrices are restricted to a limited range. The results presented here are based on an entropy estimation. They reveal an increased rate-distortion performance of approximately 1.1 dB for high rates on the employed test sequences.
|
Quantum Gravity. Gravitons should have momentum just as photons do; and since graviton momentum would cause compression rather than elongation of spacetime outside of matter, it does not appear that gravitons are compatible with Schwarzschild's spacetime curvature. Also, since energy is proportional to mass, and mass is proportional to gravity, the energy of matter is proportional to gravity. The energy of matter could thus contract space within matter and, because of the interconnectedness of space, cause the elongation of space outside of matter. This would be compatible with Schwarzschild spacetime curvature. Since gravity could be initiated within matter by the energy of mass, transmitted to space outside of matter by the interconnectedness of space, and also transmitted through space by that same interconnectedness, and since spatial and relativistic gravities can apparently be produced without the aid of gravitons, massive gravity could be produced without gravitons as well. Gravity divided into an infinite number of segments would result in zero expression of gravity, because it could not curve spacetime. So spatial segments must have a minimum size, which is the Planck length, resulting in quantized space. And since gravity is always expressed over some distance in space, quantum space would always quantize gravity. So the non-mediation of gravity by gravitons does not result in unquantized gravity, because quantum space can quantize gravity; this makes gravitons unproven and unnecessary, and explains why gravitons have never been found.
|
Implicit Transform Selection based on Cross Color Component Prediction for Future Video Coding In the study of video coding technologies, the Discrete Cosine Transform type II (DCT-II) has been employed for energy compaction. To improve coding efficiency, the Multiple Transform Selection (MTS) scheme was recently proposed, in which the Discrete Sine Transform type VII (DST-VII) and the DCT-VIII are newly introduced with explicit signaling. MTS is not applied to chroma components due to the limitation of computational complexity on the encoder side. This paper proposes an implicit transform selection that applies the DST-VII to chroma blocks based on the intra prediction mode applied to the chroma block. The experimental results show 0.45% and 0.48% BD-rate gains in the all-intra configuration for the Cb and Cr components, respectively, compared to the conventional method, with negligible impact on encoding and decoding complexity.
|
Adaptive block transforms for hybrid video coding Today's standard video coders employ the hybrid coding scheme on a macroblock basis. In these coders, blocks of 16×16 and 8×8 pixels are used for motion compensation of non-interlaced video. The Discrete Cosine Transform (DCT) is then applied to the prediction error on blocks of size 8×8. The emerging coding standard H.26L employs a set of seven different block sizes for motion compensation. The size of these blocks varies from 4×4 to 16×16. The block sizes smaller than 8×8 imply that the 8×8 DCT cannot be used for transform coding of the prediction error. In the current test model, an integer approximation of the 4×4 DCT matrix is employed. In this paper, the concept of Adaptive Block Transforms is proposed. In this scheme, the transform block size is adapted to the block sizes used for motion compensation. The transform exploits the maximum possible signal length for transform coding without exceeding the compensated block boundaries. The proposed scheme is integrated into the H.26L test model. New integer approximations of the 8×8 and 16×16 DCT matrices are introduced. Like the TML 4×4 transform, the coefficient values of these matrices are restricted to a limited range. The results presented here are based on an entropy estimation. They reveal an increased rate-distortion performance of approximately 1.1 dB for high rates on the employed test sequences.
|
Death Ground Death Ground is a competitive musical installation-game for two players. The work is designed to provide the framework within which the players/participants perform game-mediated musical gestures against each other. The main mechanic involves destroying the other player's avatar by outmaneuvering it and using audio weapons and improvised musical actions against it. These weapons are spawned in an enclosed area during the performance and can be used by whoever collects them first. There is a multitude of such power-ups, all of which have different properties, such as speed boost, additional damage, ground traps and so on. All of these weapons affect the sound and sonic textures that each of the avatars produces. Additionally, the players can use elements of the environment such as platforms, obstructions and elevation to gain a competitive advantage, or position themselves strategically to reach the spawned power-ups first.
|
Implicit Transform Selection based on Cross Color Component Prediction for Future Video Coding In the study of video coding technologies, the Discrete Cosine Transform type II (DCT-II) has been employed for energy compaction. To improve coding efficiency, the Multiple Transform Selection (MTS) scheme was recently proposed, in which the Discrete Sine Transform type VII (DST-VII) and the DCT-VIII are newly introduced with explicit signaling. MTS is not applied to chroma components due to the limitation of computational complexity on the encoder side. This paper proposes an implicit transform selection that applies the DST-VII to chroma blocks based on the intra prediction mode applied to the chroma block. The experimental results show 0.45% and 0.48% BD-rate gains in the all-intra configuration for the Cb and Cr components, respectively, compared to the conventional method, with negligible impact on encoding and decoding complexity.
|
Adaptive block transforms for hybrid video coding Today's standard video coders employ the hybrid coding scheme on a macroblock basis. In these coders, blocks of 16×16 and 8×8 pixels are used for motion compensation of non-interlaced video. The Discrete Cosine Transform (DCT) is then applied to the prediction error on blocks of size 8×8. The emerging coding standard H.26L employs a set of seven different block sizes for motion compensation. The size of these blocks varies from 4×4 to 16×16. The block sizes smaller than 8×8 imply that the 8×8 DCT cannot be used for transform coding of the prediction error. In the current test model, an integer approximation of the 4×4 DCT matrix is employed. In this paper, the concept of Adaptive Block Transforms is proposed. In this scheme, the transform block size is adapted to the block sizes used for motion compensation. The transform exploits the maximum possible signal length for transform coding without exceeding the compensated block boundaries. The proposed scheme is integrated into the H.26L test model. New integer approximations of the 8×8 and 16×16 DCT matrices are introduced. Like the TML 4×4 transform, the coefficient values of these matrices are restricted to a limited range. The results presented here are based on an entropy estimation. They reveal an increased rate-distortion performance of approximately 1.1 dB for high rates on the employed test sequences.
|
General Data Protection Regulation in Health Clinics The focus on personal data has merited the EU's concern and attention, resulting in legislative change regarding privacy and the protection of personal data. The General Data Protection Regulation (GDPR) aims to reform existing measures on the protection of personal data of European Union citizens, with a strong impact on the rights and freedoms of individuals, by establishing rules for the processing of personal data. The GDPR considers a special category of personal data, health data, which is treated as sensitive and subject to special conditions regarding processing and access by third parties. This work presents the evolution of the applicability of Regulation (EU) 2016/679 six months after its application in Portuguese health clinics. The results of the present study are discussed in the light of the literature, and future work is identified.
|
Implicit Transform Selection based on Cross Color Component Prediction for Future Video Coding In the study of video coding technologies, the Discrete Cosine Transform type II (DCT-II) has been employed for energy compaction. To improve coding efficiency, the Multiple Transform Selection (MTS) scheme was recently proposed, in which the Discrete Sine Transform type VII (DST-VII) and the DCT-VIII are newly introduced with explicit signaling. MTS is not applied to chroma components due to the limitation of computational complexity on the encoder side. This paper proposes an implicit transform selection that applies the DST-VII to chroma blocks based on the intra prediction mode applied to the chroma block. The experimental results show 0.45% and 0.48% BD-rate gains in the all-intra configuration for the Cb and Cr components, respectively, compared to the conventional method, with negligible impact on encoding and decoding complexity.
|
Adaptive block transforms for hybrid video coding Today's standard video coders employ the hybrid coding scheme on a macroblock basis. In these coders, blocks of 16×16 and 8×8 pixels are used for motion compensation of non-interlaced video. The Discrete Cosine Transform (DCT) is then applied to the prediction error on blocks of size 8×8. The emerging coding standard H.26L employs a set of seven different block sizes for motion compensation. The size of these blocks varies from 4×4 to 16×16. The block sizes smaller than 8×8 imply that the 8×8 DCT cannot be used for transform coding of the prediction error. In the current test model, an integer approximation of the 4×4 DCT matrix is employed. In this paper, the concept of Adaptive Block Transforms is proposed. In this scheme, the transform block size is adapted to the block sizes used for motion compensation. The transform exploits the maximum possible signal length for transform coding without exceeding the compensated block boundaries. The proposed scheme is integrated into the H.26L test model. New integer approximations of the 8×8 and 16×16 DCT matrices are introduced. Like the TML 4×4 transform, the coefficient values of these matrices are restricted to a limited range. The results presented here are based on an entropy estimation. They reveal an increased rate-distortion performance of approximately 1.1 dB for high rates on the employed test sequences.
|
Using Electronic Patient-Reported Outcomes to Foster Palliative Cancer Care: The MyPal Approach Palliative care is offered along with primary treatment to improve the quality of life of the patient by relieving the symptoms and stress of a serious illness such as cancer. As per modern definitions, palliative care is appropriate at any age and at any stage of the illness, regardless of the eventual outcome. Patient-reported outcomes (PROs), i.e., health status measurements reported directly by patients or their proxies, and especially their availability in electronic form (ePROs), are gradually gaining popularity as building blocks of innovative palliative care interventions. This paper presents MyPal, an EC-funded collaborative research project that aims to exploit advanced eHealth technologies to develop and evaluate two novel ePRO-based general palliative care interventions for cancer patients. In particular, the paper presents: (1) a short overview of MyPal; (2) the target populations, i.e., adults suffering from chronic lymphocytic leukemia (CLL) or myelodysplastic syndromes (MDS), and children with solid or hematologic malignancies; (3) the ePRO-based interventions being designed for the target populations; (4) the eHealth platform for delivering the interventions under development; and (5) the international, multicenter clinical studies to be conducted for assessing these interventions, i.e., a randomized controlled trial (RCT) and an observational study for adults and children, respectively.
|
Fast and Efficient Image Encryption Algorithm Based on Modular Addition and SPD Bit-level and pixel-level methods are two classifications of image encryption, describing the smallest processing elements manipulated in diffusion and permutation, respectively. Most pixel-level permutation methods merely alter the positions of pixels, resulting in similar histograms for the original and permuted images. Bit-level permutation methods, however, have the ability to change the histogram of the image, but are usually not preferred due to their time-consuming nature, which is owed to bit-level computation, unlike that of other permutation techniques. In this paper, we introduce a new image encryption algorithm which uses binary bit-plane scrambling and an SPD diffusion technique on the bit-planes of a plain image, based on a card game trick. Integer values of the hexadecimal SHA-512 key are also used, along with adaptive block-based modular addition of pixels, to encrypt the images. To prove the first-rate encryption performance of our proposed algorithm, security analyses are provided in this paper. Simulations and other results confirmed the robustness of the proposed image encryption algorithm against many well-known attacks; in particular, brute-force attacks, known/chosen-plaintext attacks, occlusion attacks, differential attacks, and gray value difference attacks, among others.
|
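Two primitives named in the abstract above are easy to make concrete: slicing an 8-bit image into bit-planes, and diffusing pixels by modular addition with bytes derived from a SHA-512 key. The sketch below shows only these generic primitives; it is not the paper's SPD scheme or its scrambling order, and the keystream reuse here is a toy simplification, not a secure construction.

```python
import hashlib
import numpy as np

def bit_planes(img: np.ndarray) -> np.ndarray:
    """Split an 8-bit grayscale image into 8 binary planes, LSB first."""
    return np.stack([(img >> b) & 1 for b in range(8)])

def modular_add_diffuse(img: np.ndarray, key: bytes) -> np.ndarray:
    """Diffuse pixels by adding a SHA-512-derived keystream modulo 256 (illustrative)."""
    digest = hashlib.sha512(key).digest()                       # 64 key bytes
    stream = np.resize(np.frombuffer(digest, np.uint8), img.size)
    mixed = (img.astype(np.uint16).ravel() + stream) % 256
    return mixed.astype(np.uint8).reshape(img.shape)

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
planes = bit_planes(img)                         # shape (8, 4, 4)
cipher = modular_add_diffuse(img, b"secret key")
# This toy diffusion inverts by subtracting the same keystream modulo 256.
```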
The fast image encryption algorithm based on lifting scheme and chaos Abstract Image encryption technology is one of the most important means of image information security. Most image encryption algorithms are based on a permutation-diffusion structure. However, some image cryptosystems based on this structure have been proved to be insecure. Thus, a new image encryption structure based on a lifting scheme is proposed in this study. In the proposed algorithm, the plain image is decomposed into low-frequency approximate components and high-frequency detailed components. Pseudorandom sequences generated by chaos are employed to sequentially disturb the two sets of components. Then a lifting scheme is used for image encryption. Compared to the currently popular permutation-diffusion structure, the proposed image cryptography requires fewer pseudorandom numbers, and it has a faster encryption speed and higher security. Simulations, performance analysis, and comparison tests show that the proposed method has the advantages of large key space, fast encryption and decryption speeds, strong system sensitivity, and excellent encryption security. The algorithm can be used in applications such as encryption of medical and cloud images.
|
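Both building blocks named in the lifting-scheme abstract have standard textbook forms: a chaotic pseudorandom sequence and an invertible lifting decomposition. The sketch below pairs a logistic map with one Haar lifting step purely to illustrate the structure; the paper's actual chaotic system, lifting filters, and disturbance rule are not specified here, so every concrete choice below is an assumption.

```python
import numpy as np

def logistic_sequence(x0: float, n: int, r: float = 3.99) -> np.ndarray:
    """Chaotic pseudorandom sequence from the logistic map x <- r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def haar_lifting(signal: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """One Haar lifting step: split into even/odd samples, predict, then update.

    Returns (approximation, detail); the step is invertible, so decryption
    can undo it exactly.
    """
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even            # predict step
    approx = even + detail / 2.0   # update step
    return approx, detail

row = np.array([10, 12, 9, 7, 20, 22, 5, 6])
approx, detail = haar_lifting(row)
keystream = logistic_sequence(x0=0.3141, n=approx.size)
approx = approx + np.floor(keystream * 256)  # illustrative "disturbance" of one band
```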
How to Make a Medical Error Disclosure to Patients This paper aims to investigate the Chinese public's expectations of medical error disclosure, and to develop guidelines for hospitals. A national questionnaire survey was conducted in 2019, collecting 1,008 valid responses. Respondents were asked their views on the severity of error they would like to have disclosed, and what, when, where and by whom they preferred an error disclosure to be made. Results showed that the Chinese public would like any error that reached them to be disclosed, even if it caused no harm. For both moderate and severe outcome errors, they preferred to be told face-to-face, with all the information in as much detail as possible, immediately after the error was recognized, and in a prepared meeting room. Regarding attendance on the patient side, disclosure was expected to be made to the patient and family. For the hospital side, the healthcare provider who committed the error, his/her leader, the patient safety manager and a high-positioned person of the hospital were expected to be present. As for the person to make the disclosure, respondents preferred the healthcare provider who committed the error in a moderate outcome case, and the leader or a high-positioned person in a severe case.
|
Fast and Efficient Image Encryption Algorithm Based on Modular Addition and SPD Bit-level and pixel-level methods are two classifications of image encryption, describing the smallest processing elements manipulated in diffusion and permutation, respectively. Most pixel-level permutation methods merely alter the positions of pixels, resulting in similar histograms for the original and permuted images. Bit-level permutation methods, however, have the ability to change the histogram of the image, but are usually not preferred due to their time-consuming nature, which is owed to bit-level computation, unlike that of other permutation techniques. In this paper, we introduce a new image encryption algorithm which uses binary bit-plane scrambling and an SPD diffusion technique on the bit-planes of a plain image, based on a card game trick. Integer values of the hexadecimal SHA-512 key are also used, along with adaptive block-based modular addition of pixels, to encrypt the images. To prove the first-rate encryption performance of our proposed algorithm, security analyses are provided in this paper. Simulations and other results confirmed the robustness of the proposed image encryption algorithm against many well-known attacks; in particular, brute-force attacks, known/chosen-plaintext attacks, occlusion attacks, differential attacks, and gray value difference attacks, among others.
|
The fast image encryption algorithm based on lifting scheme and chaos Abstract Image encryption technology is one of the most important means of image information security. Most image encryption algorithms are based on a permutation-diffusion structure. However, some image cryptosystems based on this structure have been proved to be insecure. Thus, a new image encryption structure based on a lifting scheme is proposed in this study. In the proposed algorithm, the plain image is decomposed into low-frequency approximate components and high-frequency detailed components. Pseudorandom sequences generated by chaos are employed to sequentially disturb the two sets of components. Then a lifting scheme is used for image encryption. Compared to the currently popular permutation-diffusion structure, the proposed image cryptography requires fewer pseudorandom numbers, and it has a faster encryption speed and higher security. Simulations, performance analysis, and comparison tests show that the proposed method has the advantages of large key space, fast encryption and decryption speeds, strong system sensitivity, and excellent encryption security. The algorithm can be used in applications such as encryption of medical and cloud images.
|
Erkundung und Erforschung. Alexander von Humboldts Amerikareise Summary Much like Adalbert Stifter's narrator in the novel "Nachsommer", A. v. Humboldt combined exploration and research, fondness for travelling and the pursuit of knowledge, on his American journey. Humboldt clearly named his twofold aim: to make the visited countries known, and to collect facts in order to extend physical geography. The essay is divided into five sections: aims, route, methods, results, evaluation. Abstract In a similar way as Adalbert Stifter's narrator in the novel "Late Summer", A. v. Humboldt combined exploration with research, fondness for travelling with striving for findings, during his travel through South America. Humboldt clearly indicated his double aim: to report on the visited countries, and to collect facts in order to improve physical geography. The treatise consists of five sections: object, route, methods, results, evaluation.
|
Fast and Efficient Image Encryption Algorithm Based on Modular Addition and SPD Bit-level and pixel-level methods are two classifications of image encryption, describing the smallest processing elements manipulated in diffusion and permutation, respectively. Most pixel-level permutation methods merely alter the positions of pixels, resulting in similar histograms for the original and permuted images. Bit-level permutation methods, however, have the ability to change the histogram of the image, but are usually not preferred due to their time-consuming nature, which is owed to bit-level computation, unlike that of other permutation techniques. In this paper, we introduce a new image encryption algorithm which uses binary bit-plane scrambling and an SPD diffusion technique on the bit-planes of a plain image, based on a card game trick. Integer values of the hexadecimal SHA-512 key are also used, along with adaptive block-based modular addition of pixels, to encrypt the images. To prove the first-rate encryption performance of our proposed algorithm, security analyses are provided in this paper. Simulations and other results confirmed the robustness of the proposed image encryption algorithm against many well-known attacks; in particular, brute-force attacks, known/chosen-plaintext attacks, occlusion attacks, differential attacks, and gray value difference attacks, among others.
|
The fast image encryption algorithm based on lifting scheme and chaos Abstract Image encryption technology is one of the most important means of image information security. Most image encryption algorithms are based on a permutation-diffusion structure. However, some image cryptosystems based on this structure have been proved to be insecure. Thus, a new image encryption structure based on a lifting scheme is proposed in this study. In the proposed algorithm, the plain image is decomposed into low-frequency approximate components and high-frequency detailed components. Pseudorandom sequences generated by chaos are employed to sequentially disturb the two sets of components. Then a lifting scheme is used for image encryption. Compared to the currently popular permutation-diffusion structure, the proposed image cryptography requires fewer pseudorandom numbers, and it has a faster encryption speed and higher security. Simulations, performance analysis, and comparison tests show that the proposed method has the advantages of large key space, fast encryption and decryption speeds, strong system sensitivity, and excellent encryption security. The algorithm can be used in applications such as encryption of medical and cloud images.
|
Productivity Pants With thousands of books on the topic, productivity is a popular subject in the business world. Increase work output, reduce the bottom line, and stay competitive: those are some of the magic bullets that productivity promises. But with so many thoughts, schools, methods and gurus in the world, where do we begin? How do we find a productivity strategy that is right for our professional and personal goals? How do we know which voices to listen to? The short answer? "It depends." In this lightning talk, we will discuss important considerations when we decide to embark on our own productivity journey. We will focus on sharpening our goals, understanding what we value and creating a space for improving the way we work (and live).
|