Rajesh I S, Madhumitha V, D Sri Lakshmi Priya, Shreyas Sreenivas, BMS Institute of Technology and Management, Bangalore, India
Schizophrenia, a chronic psychotic disorder, is challenging for doctors to diagnose, as diagnosis involves a series of tests and event analyses to understand brain activity. Reports such as MRI scans, PET scans and EEG signal data of the patient are among the most important requirements for analysing brain activity during such tasks. Since this involves huge amounts of data describing relationships between different parts of the brain, technological intervention is needed for data visualization and understanding. This motivates the use of machine learning models, which resolve these complexities through mathematical models, offer a better understanding of the correlations among data features, and assist in predicting the disease. This paper provides an insight into schizophrenia and the various machine learning models built in research conducted around the world for diagnosis of the disease.
Schizophrenia, Machine learning, EEG signals, MRI scans, PET scans
Jonathan Luckett, College of Business, Innovation, Leadership, and Technology, Marymount University, United States of America
Artificial Intelligence (AI) has become an essential part of our lives, from the smartphones we use to the industries that provide us with goods and services. As AI technology advances, its impact on society and the economy is growing rapidly, raising significant concerns about safety, privacy, security, and ethics. This paper examines the AI regulations and challenges that exist today, particularly in the United States, considers the rise of ChatGPT, and concludes with policy considerations and recommendations for federal agencies.
Artificial Intelligence, Generative AI, ChatGPT
Daniel A Lee, University of Tasmania, Australia
An ethnographic study conducted at an Australian university examined the curriculum design and pedagogical practices of Bachelor Degree Contemporary Popular Music (CPM) courses (n=25) delivered by Australian tertiary institutions. The study investigated participants’ perceptions of the potential presence of course design objectives, including eLearning and engaging with eResources via 21st century telecommunications. These objectives were explored for pedagogic value regarding the safeguarding of intangible cultural heritage in the form of an Australian cultural ‘voice’ in local, glocal and global popular music guitar communities and industry. Inductive thematic analysis of three datasets in the form of surveys, interviews and documents generated five themes. This paper presents and discusses the theme ‘Global Spectra’, which demonstrated the presence of a global perspective among course designers and educators. The discussion addresses the role of online communities and social media in 21st century music education. Findings indicate that the presence of World music in the curricula, and engagement with information technologies, influence the performance practices of graduates. The paper concludes by claiming there are unknown cultural, aesthetic, and pedagogic risks taken when embracing global perspectives via the implementation of advances in telecommunications.
Popular Music Education, Higher Music Education, Globalisation, Intangible Cultural Heritage, Online Resources.
Vinodkumar Bhutnal, Aaryan Chaudhari, Shreya Panicker, Sakshi Khatke, Aaryan Chipkar, Department of Computer Engineering, JSPM’s Rajarshi Shahu College of Engineering, Tathawade, Pune, Maharashtra 411033, India
Working on construction and mining sites is highly dangerous for labourers and workers, so an ordinary helmet is made mandatory for construction workers and miners with the objective of safety; however, it does not guarantee their safety. Every day a large number of workers lose their lives in accidents because they do not receive immediate help. This IoT-based smart helmet has many components and features that can be of great help to workers. The goal of the smart helmet is to provide a method and apparatus for detecting and reporting accidents. Sensors, cloud computing infrastructure and a Wi-Fi-enabled processor are used to build the framework. To reduce the casualty rate in endangering circumstances, the intelligent helmet system incorporates advanced, up-to-date automation. The smart helmets are connected with each other and with the server, enabling instant rescue by nearby workers in case of emergencies.
IoT, Proximity sensor, Three-axis accelerometer, Zigbee, BLE module.
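The accident-detection idea described above can be illustrated with a minimal sketch. This is a hypothetical heuristic of my own (the thresholds, the 1 g free-fall/impact logic, and the function name are illustrative assumptions, not the paper's actual design) showing how a three-axis accelerometer reading might be flagged as a fall or impact event before the helmet alerts the server:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def is_fall_event(ax, ay, az, low=0.4 * G, high=2.5 * G):
    """Flag free fall (acceleration magnitude near zero) or a hard
    impact (a large spike) from one three-axis accelerometer sample.

    A helmet at rest reads roughly 1 g; a falling helmet reads close
    to 0 g, and a collision produces a short spike well above 1 g.
    """
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    return magnitude < low or magnitude > high
```

In a real system, a single sample would not be trusted; readings are typically debounced over a short window before an alert is sent over the radio link.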
Mohammad Qazim Bhat, Jayalakshmi D. S., Mallegowda M., Geetha J., Department of Computer Science and Engineering, Ramaiah Institute of Technology Bengaluru, India
The issue of fake reviews on online platforms has become a pressing concern in recent years, with the potential to mislead consumers and negatively impact businesses. In this paper, we present a comprehensive approach to detecting fake reviews using both supervised and unsupervised learning techniques. Our approach includes classic machine learning algorithms, deep learning techniques such as RNN and attention networks, as well as state-of-the-art models like BERT and GPT. We leverage a labeled dataset of restaurant reviews from Yelp.com to train and evaluate our models. We also compare the performance of supervised and unsupervised learning techniques, and identify the most effective and explainable models for detecting fake reviews. Our results show that our approach achieves high accuracy in detecting fake reviews, and the interpretation of our models offers valuable insights into the factors that contribute to the identification of fake reviews. We believe our work contributes to the ongoing effort of combating fake reviews, and provides a practical and effective solution for businesses and consumers to identify trustworthy reviews.
Anomaly detection, fake reviews detection, natural language processing, GPT, interpretable machine learning.
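The supervised-classification framing above can be sketched with a far simpler baseline than the paper's RNN, attention, BERT and GPT models: a bag-of-words Naive Bayes classifier. This is a minimal illustration only, assuming invented toy reviews and hypothetical fake/genuine labels, not the Yelp.com dataset or the authors' models:

```python
import math
from collections import Counter

def train(reviews, labels):
    """Count word occurrences per class (1 = fake, 0 = genuine)."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for text, y in zip(reviews, labels):
        counts[y].update(text.lower().split())
    return counts, priors

def predict(text, counts, priors):
    """Pick the class with the higher log posterior under Naive Bayes."""
    vocab = set(counts[0]) | set(counts[1])
    scores = {}
    for y in (0, 1):
        total = sum(counts[y].values())
        score = math.log(priors[y])
        for w in text.lower().split():
            # Laplace smoothing avoids log(0) for unseen words.
            score += math.log((counts[y][w] + 1) / (total + len(vocab)))
        scores[y] = score
    return max(scores, key=scores.get)

# Hypothetical toy data: exaggerated superlatives marked as fake.
reviews = ["great food amazing amazing best ever",
           "great food friendly staff",
           "good pasta slow service",
           "amazing amazing best place ever ever"]
labels = [1, 0, 0, 1]
model = train(reviews, labels)
```

Interpretability is one reason such simple models remain useful alongside deep ones: the per-word log-probability ratios directly expose which terms push a review toward the "fake" class.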
Kamilla Musina, Peoples’ Friendship University of Russia, Russian Federation
This article analyses for the first time issues of determining the legal status of artificial intelligence in law and legislation. Innovative technologies utilized by judges and possibilities of using electronic Blockchain systems are investigated. The article also analyzes legislation to explore ways of modernizing it to eliminate collisions, establishing general provisions on liability for criminal acts committed by robots due to technical failures of artificial intelligence and drones, without any anthropogenic involvement or intentional human intervention. The presented results of the analysis are of philosophical as well as legal and ontological significance, arising not only from the actual state of development of artificial intelligence but also from the very real prospects of future modifications of artificial intelligence in cybernetic organisms. The article also presents the results of a detailed retrospective and comparative analysis of historical stages in the transformation of the legal regulation of AI in a number of different countries.
Smart machines, artificial intelligence, blockchain, automated data exchange, lack of human factor, IT technologies, emulation, ethical and legal, cybernetic organism, autonomous driving, electronic persons.
Roman Snytsar, AI & Research, Microsoft, Redmond WA 98052, USA
Sliding window sums are widely used for string indexing, hashing and time series analysis. We have developed a family of generic vectorized sliding sum algorithms that provide a speedup of O(P/w) for window size w and number of processors P. For a sum with a commutative operator, the speedup improves to O(P/log(w)). More importantly, our algorithms exhibit efficient memory access patterns. In this paper we study the application of sliding sum algorithms to the training and inference of Deep Neural Networks. We demonstrate how both pooling and convolution primitives can be expressed as sliding sums and evaluated by compute kernels with a shared structure. We show that the sliding sum convolution kernels are more efficient than the commonly used GEMM kernels on CPUs and can even outperform their GPU counterparts.
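The pooling-as-sliding-sum idea can be sketched in a few lines. This is an illustrative scalar Python version, not the paper's vectorized kernels: for an invertible operator like +, every window sum is a difference of two prefix sums, so the whole computation reduces to one scan (and a parallel scan over P processors takes logarithmic depth, which is the intuition behind the O(P/log(w)) figure for commutative operators):

```python
from itertools import accumulate

def sliding_sums(xs, w):
    """All window sums of width w via an exclusive prefix scan.

    prefix[i] holds sum(xs[:i]), so the sum of window xs[i:i+w]
    is simply prefix[i + w] - prefix[i].
    """
    prefix = [0] + list(accumulate(xs))
    return [prefix[i + w] - prefix[i] for i in range(len(xs) - w + 1)]

def avg_pool(xs, w):
    """Average pooling (stride 1) is just a scaled sliding sum."""
    return [s / w for s in sliding_sums(xs, w)]
```

For example, `sliding_sums([1, 2, 3, 4, 5], 3)` yields `[6, 9, 12]`. A convolution adds a per-position weighting inside the window, but the shared scan structure the abstract refers to is the same.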
Sai Javvadi, University of Louisville, Louisville Kentucky, USA
Less intensive preprocessing stages and their contribution to deep learning pipelines are often overlooked. Color normalization (CN) algorithms are among the most prominent methods in this stage; they work by standardizing the staining pattern of a dataset. However, the impact of various color normalization algorithms on the detection of glomeruli in kidney tissue data has not been explored before. A kidney tissue dataset containing glomeruli was normalized with three conventional techniques (Reinhard, Vahadane, Macenko) and fed into a U-NET deep learning model. The Dice similarity coefficient (DSC) was used to compare the results of each run. It was determined that color normalization algorithms significantly impact the segmentation results of deep learning algorithms, with the Reinhard algorithm performing best. This work could contribute to the proliferation of color normalization techniques in preprocessing for deep learning workflows, which would improve general segmentation accuracy.
Deep Learning, Color Normalization, Histopathology.
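The per-channel core of Reinhard-style normalization is a mean/standard-deviation match against a reference image. The full Reinhard method applies this in the lαβ color space after a log transform; the sketch below, on flat lists of pixel intensities with an illustrative function name, shows only the statistical matching step:

```python
from statistics import mean, pstdev

def match_channel_stats(src, ref):
    """Shift and scale one channel so its mean and standard
    deviation match those of a reference channel."""
    mu_s, sd_s = mean(src), pstdev(src)
    mu_r, sd_r = mean(ref), pstdev(ref)
    if sd_s == 0:                      # flat channel: only shift the mean
        return [mu_r for _ in src]
    return [(p - mu_s) / sd_s * sd_r + mu_r for p in src]
```

After this transform every channel of the source image shares the reference's first two moments, which is what standardizes staining appearance across slides before the images reach the segmentation network.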