
Abstract

With increasingly pervasive computing, activity recognition has become a vast and popular field of research. In automated activity recognition, multiple sensors in wearable or portable devices are used to recognize human activities such as standing still, sitting, relaxing, laying, walking, climbing stairs, knee bending, cycling, and jogging. The main purpose of this paper is to discuss the field of activity recognition for patients, elderly persons, or any person in general; the work can also be used for telemedicine purposes. Different machine learning algorithms are applied to achieve reasonably precise activity recognition. Microsoft Azure ML Studio and a benchmark dataset are used to create and evaluate the machine learning models. In addition, a web service for activity recognition is developed using Microsoft Azure ML Studio to help developers and researchers working on activity recognition.

Keywords: Activity recognition, Microsoft Azure, Machine learning, Automation, Neural network, Logistic regression, Decision forest, Web service.

Received: 21 September 2020 / Revised: 6 November 2020 / Accepted: 8 January 2021/ Published: 4 February 2021

Contribution/ Originality

The main purpose of this paper is to discuss the field of activity recognition for patients, elderly persons, or any person in general.


1. INTRODUCTION

In this era of technological growth, portable gadgets such as smartphones and smartwatches are used by almost everyone, and a large number of sensors are installed in them. These sensors are used in human activity recognition [1], and their data can be utilized to recognize different aspects of human life. Recognition of everyday human activities is becoming popular and straightforward. Assisted living facilities provide 24-hour personal and health-related care; activity recognition can be performed by wearable devices [2, 3] in assisted living homes to facilitate elderly [4] and disabled people [5]. Many mobile health applications also monitor patients remotely [6], assist the elderly, and provide telemedicine services.

Triaxial accelerometer [5] and gyroscope sensors built into smartphones are used, collectively or separately, to recognize human activities. In telemedicine services, the Context Aware Remote Monitoring Assistant (CARMA) application monitors patients using the smartphone's accelerometer sensors [5]. Classification and training on smartphone accelerometer data [7], and the efficiency of dynamic features over static features [8], have been evaluated through simulation experiments on a benchmark dataset with different classifiers available in the machine learning domain.
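To illustrate what such features can look like in practice, the following is a minimal sketch in Python (not the exact pipeline of [7] or [8]) that computes a few static (mean) and dynamic (standard deviation, signal magnitude area) features from one window of triaxial accelerometer samples; the window length and feature choice are illustrative assumptions.

    import numpy as np

    def window_features(acc_window):
        """Compute simple features from one window of triaxial accelerometer data.

        acc_window: array of shape (n_samples, 3) holding x, y, z readings.
        Returns a 1-D feature vector: per-axis means ("static"), per-axis
        standard deviations ("dynamic"), and the signal magnitude area.
        """
        means = acc_window.mean(axis=0)
        stds = acc_window.std(axis=0)
        sma = np.abs(acc_window).sum() / len(acc_window)
        return np.concatenate([means, stds, [sma]])

    # Example: a 128-sample window (about 2.56 s at 50 Hz, a common smartphone setting)
    rng = np.random.default_rng(0)
    window = rng.normal(size=(128, 3))
    print(window_features(window))  # 7 feature values for this window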

In this paper, the Microsoft Azure Machine Learning platform is chosen for human activity recognition. Different machine learning algorithms are applied to recognize activities such as walking, standing, sitting, and climbing up and down stairs accurately. A benchmark dataset is used to build and evaluate the machine learning models in Microsoft Azure Machine Learning Studio. A web service for activity recognition is then created, which will help developers and researchers.

The rest of this paper is organized as follows:
Section II presents related work on activity recognition.
Section III describes the methodology, the experiments, and the development of the web service.
Section IV summarizes the conclusion and discussion.

2. RELATED WORK

In the last decade, the aging population has been constantly increasing. Hence, guiding elderly people in their daily activities and improving their well-being and autonomy are key research challenges. One of the early efforts to tackle this problem is Kwon and Sunwoong [9], where the problem of recognizing human activity was addressed using an accelerometer. The main focus of Bao and Stephen [10] is human activity recognition using wearable sensors for remotely monitoring dependent subjects in three steps: sensor placement, data pre-processing, and classification. Recognizing human activity depends not only on the sensor data but also on context awareness, so the work in Attal, et al. [6] proposed context-aware services relying on accelerometer data. The field is diverse and includes remote health care, social networking, and activity recognition; the activity recognition method there distinguishes four different activities (walking, standing, running, and sitting) and is applied to a cardiac patient monitoring scenario. Activity recognition plays a very important role in pervasive healthcare using everyday wearable technology. The researchers in [11] used five small biaxial accelerometers worn on different parts of the body (four limb positions plus the right hip) and constructed a classification model that recognized twenty activities; experimental results based on twenty volunteers showed that the classifier achieved its best performance with 84.0% precision. Smartphones have become an essential part of our daily routines and remain with us everywhere, which makes them a perfect tool for analyzing day-to-day routines; the work in Attal, et al. [6] used the WISDM and UCI datasets, which contain accelerometer data collected from cell phones for activities the user performs, such as walking, jogging, stair climbing, standing, and lying, for training and performance evaluation. Since it can be very difficult to identify activities from sensor data coming from different positions, Damaševičius, et al. [12] used motion sensors for phone positioning to provide the information necessary for physical activity recognition. Our purpose in this paper is to discuss activity recognition for elderly people, patients, and telemedicine; for that, we apply machine learning algorithms to a benchmark dataset for training, using Microsoft Azure Machine Learning Studio.

3. AZURE MACHINE LEARNING STUDIO

Microsoft Azure Machine Learning Studio is a collaborative drag-and-drop tool that requires no programming. It is used to build, test, and deploy predictive analytics solutions on your data, providing a visual interface for connecting modules and datasets to construct a predictive analysis model.

3.1. Multiclass Decision Forest

It is an ensemble model that quickly creates a sequence of decision trees while learning from tagged data; it is a classification algorithm. Decision trees are generally non-parametric models, which means that varied distributions are supported. Multiclass Decision Forest works best on datasets with more than two labels [15].
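Azure ML Studio exposes this as a drag-and-drop module, but as a rough open-source analogue, the sketch below trains an ensemble of decision trees (scikit-learn's RandomForestClassifier) on synthetic labeled multiclass data; the data and hyper-parameters are placeholders, not the Studio module's defaults.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic multiclass data standing in for accelerometer/gyroscope feature vectors
    X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                               n_classes=4, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # An ensemble of decision trees trained on labeled data, analogous in spirit
    # to the Multiclass Decision Forest module
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_tr, y_tr)
    print("test accuracy:", forest.score(X_te, y_te))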

3.2. Multiclass Logistic Regression

Logistic regression is a popular technique in statistics. The algorithm predicts the probability of occurrence of an event by fitting the data to a logistic function. Multiclass logistic regression is used to predict one of several possible outcomes when performing classification. It is a supervised machine learning method, so the dataset used must be labeled [13].
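For reference, the multinomial (softmax) form that generalizes the logistic function to K classes assigns class probabilities as

\[
P(y = k \mid \mathbf{x}) = \frac{\exp(\mathbf{w}_k^{\top}\mathbf{x} + b_k)}{\sum_{j=1}^{K}\exp(\mathbf{w}_j^{\top}\mathbf{x} + b_j)},
\]

where \(\mathbf{x}\) is the feature vector and \(\mathbf{w}_k\), \(b_k\) are the weights and bias learned for class \(k\).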

3.3. Multiclass Neural Network

A neural network is a collection of interconnected layers. The inputs are fed into the first layer, which may be followed by hidden layers connected to the output layer. Classification can be performed using a multiclass neural network when there are multiple classes, i.e. more than two labels [14].
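As a rough open-source analogue to such a module, the sketch below trains a small multi-layer perceptron for multiclass classification; the built-in iris data, hidden-layer size, and iteration count are illustrative assumptions only.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # A small built-in multiclass dataset stands in for the activity feature vectors
    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Input layer -> one hidden layer of 32 units -> softmax output over the classes
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
    mlp.fit(X_tr, y_tr)
    print("test accuracy:", mlp.score(X_te, y_te))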

4. DATASET

The dataset used here is one of the benchmark datasets for activity recognition. It was collected with the help of 30 volunteers, and accelerometer and gyroscope data were gathered from the smartphones they carried; a Samsung Galaxy S II was used in this experiment. 70% of the volunteers were selected for the training set and 30% for the test set [11].
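If the same public archive is used, it can also be loaded outside Azure ML Studio roughly as follows; the file paths assume the standard directory layout of the downloaded "UCI HAR Dataset" folder and should be adjusted to your copy.

    import numpy as np

    # Paths assume the standard layout of the downloaded "UCI HAR Dataset" folder;
    # adjust BASE if your copy lives elsewhere.
    BASE = "UCI HAR Dataset"

    def load_split(split):
        """Load the feature vectors, activity labels, and subject IDs for 'train' or 'test'."""
        X = np.loadtxt(f"{BASE}/{split}/X_{split}.txt")
        y = np.loadtxt(f"{BASE}/{split}/y_{split}.txt", dtype=int)
        subjects = np.loadtxt(f"{BASE}/{split}/subject_{split}.txt", dtype=int)
        return X, y, subjects

    X_train, y_train, subj_train = load_split("train")
    X_test, y_test, subj_test = load_split("test")
    print(X_train.shape, X_test.shape)  # roughly a 70/30 split across the 30 subjects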

5. IMPLEMENTATION

In order to implement automated activity recognition, a benchmark dataset is taken [11] and Microsoft Azure Machine Learning Studio is used to create the machine learning models. Three machine learning algorithms, namely Multiclass Decision Forest, Multiclass Logistic Regression, and Multiclass Neural Network, were applied to the benchmark dataset and their results were evaluated. After evaluating the results, we concluded that multiclass logistic regression was the best among them, so we used it to create the web service for activity recognition with Microsoft Azure Machine Learning Studio. The work is done in two parts: in the first part, the models are developed and evaluated; the second part is the creation of the web service. Part 1 is shown in Figure 1 and Part 2 in Figure 2.

5.1. Develop and Evaluate Model

First of all, the most important part of creating any machine learning model is the data. In this case, we used a benchmark dataset [11] to train and evaluate the models. This dataset has two parts: one for training and one for testing. The training dataset was imported into Azure Machine Learning Studio, and the columns used for training and evaluation were selected; here, all columns were taken except the subject column, which contains the ID of the subject in the experiment. Once the columns had been selected, machine learning algorithms were chosen to train the models on the training data. For this experiment, three algorithms were chosen from Azure Machine Learning Studio: Multiclass Decision Forest, Multiclass Logistic Regression, and Multiclass Neural Network. They were trained using the Train Model module from Azure Machine Learning Studio. After training was complete, the models were evaluated using the Score Model and Evaluate Model modules; for evaluation, the test dataset was imported. Once evaluation was completed, it could be seen which algorithm performed best; in this case, Multiclass Logistic Regression outperformed the others. Up to this point, the machine learning models were created and evaluated, and the best algorithm was identified. After this, the second part of the implementation started.
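Outside Azure ML Studio, the same train-score-evaluate loop can be sketched with scikit-learn stand-ins for the Train Model, Score Model, and Evaluate Model modules; the synthetic data, model choices, and hyper-parameters below are illustrative assumptions, not the Studio defaults.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for the activity feature vectors (subject column already dropped)
    X, y = make_classification(n_samples=2000, n_features=50, n_informative=20,
                               n_classes=6, n_clusters_per_class=1, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Train Model -> Score Model -> Evaluate Model, once per algorithm
    models = {
        "Multiclass Decision Forest": RandomForestClassifier(n_estimators=100, random_state=0),
        "Multiclass Logistic Regression": LogisticRegression(max_iter=1000),
        "Multiclass Neural Network": MLPClassifier(hidden_layer_sizes=(100,), max_iter=500,
                                                   random_state=0),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)            # Train Model
        scored = model.predict(X_test)         # Score Model
        acc = accuracy_score(y_test, scored)   # Evaluate Model (overall accuracy)
        print(f"{name}: overall accuracy = {acc:.4f}")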

5.2. Creating Web Service

From the develop-and-evaluate phase, it is clear that Multiclass Logistic Regression performed the best in this case, so that model is used to create the web service. First the model was created, then it was imported; after that, the web service was created with the help of Azure Machine Learning Studio. The web service takes all the features as input and provides the output, i.e. the activity. Results returned by the web service are shown in the Results section.
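A call to such a web service from Python might look like the sketch below; the endpoint URL, API key, and feature column names are placeholders (the real values come from the web service page in Azure ML Studio (classic)), and the request-body layout follows the usual classic request/response pattern.

    import json
    import urllib.request

    # Placeholders: substitute the endpoint URL and API key shown on the web service
    # page in Azure ML Studio (classic); the feature names and values are dummies.
    URL = "https://REGION.services.azureml.net/workspaces/WORKSPACE_ID/services/SERVICE_ID/execute?api-version=2.0&details=true"
    API_KEY = "YOUR_API_KEY"

    feature_columns = ["feature_1", "feature_2", "feature_3"]  # hypothetical column names
    feature_values = [[0.28, -0.02, 0.11]]                     # one row of dummy inputs

    body = {
        "Inputs": {"input1": {"ColumnNames": feature_columns, "Values": feature_values}},
        "GlobalParameters": {},
    }
    headers = {"Content-Type": "application/json", "Authorization": "Bearer " + API_KEY}

    request = urllib.request.Request(URL, json.dumps(body).encode("utf-8"), headers)
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())

    # The response carries the scored label (the predicted activity) and the
    # per-class probabilities produced by the model.
    print(json.dumps(result, indent=2))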

Figure-1. Development and evaluation of model.

Figure-2. Web service.

6. RESULTS

The main purpose of this paper is to perform automated activity recognition so that it can be helpful for monitoring the activities of patients and elderly persons, and to create a web service for that purpose using the machine learning model. Six evaluation parameters are used for all three models, namely overall accuracy, average accuracy, micro-averaged precision, macro-averaged precision, micro-averaged recall, and macro-averaged recall. The value of each evaluation parameter is shown in Table 1. It can be noted that multiclass logistic regression has the highest overall and average accuracy, 95.65 and 98.55 respectively, while multiclass decision forest has the lowest overall and average accuracy, 90.26 and 96.75. To get an even closer look at the results, confusion matrices are plotted; since there are six labels, each confusion matrix is of order 6 by 6. Figure 3 shows the confusion matrix for logistic regression. Laying was recognized correctly 99.3% of the time and was recognized as standing the remaining 0.7%, presumably because in both situations the subjects were actually still. Sitting was recognized correctly 89.4% of the time; the rest of the time it was recognized as standing or walking. Standing was recognized correctly 96.2% of the time, recognized as laying 3.4% of the time, and recognized as walking for the rest. Walking was recognized correctly 99.4% of the time and was otherwise considered walking upstairs. Walking upstairs was recognized correctly 94.8% of the time and was otherwise recognized as either walking or walking downstairs. Walking downstairs was recognized correctly 94.3% of the time; in all other instances it was recognized as either walking or walking upstairs. From this confusion matrix it can also be seen that walking was the activity recognized with the most accuracy, i.e. 99.4%, while the activity recognized with the least accuracy was sitting, i.e. 89.4%. Figure 4 shows the confusion matrix for the multiclass decision forest; the activity recognized with the most accuracy is laying and the one recognized with the least accuracy is walking downstairs. Figure 6 shows the confusion matrix for the multiclass neural network; the activities recognized with the highest and lowest accuracy were walking and sitting respectively. Figure 5 shows the results from the web service: when the web service is called with Python code, it returns the scored label, i.e. the predicted output, together with the probability for each label. In Figure 5 it can be seen that the service predicted walking and also gave probabilities for the other classes.
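For readers reproducing the evaluation outside Azure ML Studio, the sketch below shows how the same metrics and a 6x6 confusion matrix can be computed with scikit-learn from true and predicted labels; the two label arrays here are dummy placeholders, and the definition used for "average accuracy" (mean of per-class one-vs-rest accuracies) is an assumption about how the Studio metric is computed.

    import numpy as np
    from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

    # Dummy placeholder labels; replace y_true and y_pred with the real test labels
    # and the model's predictions.
    labels = ["LAYING", "SITTING", "STANDING", "WALKING",
              "WALKING_UPSTAIRS", "WALKING_DOWNSTAIRS"]
    rng = np.random.default_rng(0)
    y_true = rng.choice(labels, size=500)
    y_pred = np.where(rng.random(500) < 0.9, y_true, rng.choice(labels, size=500))

    print("Overall accuracy:        ", accuracy_score(y_true, y_pred))
    print("Micro-averaged precision:", precision_score(y_true, y_pred, average="micro"))
    print("Macro-averaged precision:", precision_score(y_true, y_pred, average="macro"))
    print("Micro-averaged recall:   ", recall_score(y_true, y_pred, average="micro"))
    print("Macro-averaged recall:   ", recall_score(y_true, y_pred, average="macro"))

    # 6x6 confusion matrix; after row-normalisation, entry (i, j) is the fraction of
    # class-i instances predicted as class j, as in Figures 3, 4 and 6.
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    print(np.round(cm / cm.sum(axis=1, keepdims=True), 3))

    # "Average accuracy" taken as the mean of per-class one-vs-rest accuracies
    # (an assumption about how the Studio metric is defined).
    n = cm.sum()
    per_class_acc = [(n - (cm[k].sum() - cm[k, k]) - (cm[:, k].sum() - cm[k, k])) / n
                     for k in range(len(labels))]
    print("Average accuracy:        ", np.mean(per_class_acc))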

Figure-3. Confusion Matrix for Logistic Regression.

Figure-4. Confusion Matrix for Multiclass decision forest.

Figure-5. Web service response

Figure-6.  Confusion Matrix for Multiclass Neural Network.

Table-1. Evaluation parameters and their values for the different algorithms.

Algorithm                        | Overall Accuracy | Average Accuracy | Micro-averaged precision | Macro-averaged precision | Micro-averaged recall | Macro-averaged recall
Multiclass Decision Forest       | 90.26            | 96.75            | 90.26                    | 90.49                    | 90.26                 | 89.92
Multiclass Logistic Regression   | 95.65            | 98.55            | 95.65                    | 95.82                    | 95.65                 | 95.55
Multiclass Neural Network        | 94.4             | 98.03            | 94.09                    | 94.65                    | 94.09                 | 94.04

7. CONCLUSION

The motive of this paper is to create a web service for activity recognition using Azure ML Studio and a benchmark dataset. The paper focuses on the use case of activity recognition for patients, elderly people, and people in general who need it. The web service can be used by researchers and developers to create activity recognition software. A lot of work has been done in this area; some previous research is mentioned in the related work section. We used a benchmark dataset for evaluation. Table 1 shows the six evaluation parameters and their values for the three algorithms used; based on these results, Multiclass Logistic Regression outperformed the others with an overall accuracy of 95.6%. We also created confusion matrices for all three algorithms and observed that in the multiclass decision forest, laying was recognized with the most accuracy and walking downstairs with the least; in multiclass logistic regression, laying was detected with the most accuracy and sitting with the least; and in the multiclass neural network, walking was recognized with the most accuracy and sitting with the least.

There could be dozens of applications for this kind of research. The web service developed in this paper could be used by developers and researchers to create mobile and web applications in the future.

Funding: This study received no specific financial support.  

Competing Interests: The authors declare that they have no competing interests.

Acknowledgement: All authors contributed equally to the conception and design of the study.

REFERENCES

[1]          M. B. Del Rosario, S. J. Redmond, and N. H. Lovell, "Tracking the evolution of smartphone sensing for monitoring human movement," Sensors, vol. 15, pp. 18901-18933, 2015.Available at: https://doi.org/10.3390/s150818901.

[2]          P. Sok, T. Xiao, Y. Azeze, A. Jayaraman, and M. V. Albert, "Activity recognition for incomplete spinal cord injury subjects using hidden Markov models," IEEE Sensors Journal, vol. 18, pp. 6369-6374, 2018.Available at: https://doi.org/10.1109/jsen.2018.2845749.

[3]          M. Á. Á. De la Concepción, L. M. S. Morillo, J. A. Á. García, and L. González-Abril, "Mobile activity recognition and fall detection system for elderly people using Ameva algorithm," Pervasive and Mobile Computing, vol. 34, pp. 3-13, 2017.Available at: https://doi.org/10.1016/j.pmcj.2016.05.002.

[4]          M. S. Zainudin, M. N. Sulaiman, N. Mustapha, and T. Perumal, "Activity recognition based on accelerometer sensor using combinational classifiers," presented at the 2015 Ieee Conference on Open Systems (Icos) IEEE, 2015.

[5]          S. L. Lau, I. König, K. David, B. Parandian, C. Carius-Düssel, and M. Schultz, "Supporting patient monitoring using activity recognition with a smartphone," presented at the 7th International Symposium on Wireless Communication Systems. IEEE, 2010.

[6]          F. Attal, S. Mohammed, M. Dedabrishvili, F. Chamroukhi, L. Oukhellou, and Y. Amirat, "Physical human activity recognition using wearable sensors," Sensors, vol. 15, pp. 31314-31338, 2015.Available at: https://doi.org/10.3390/s151229858.

[7]          I. Bisio, F. Lavagetto, M. Marchese, and A. Sciarrone, "Smartphone-based user activity recognition method for health remote monitoring applications," in PECCS, 2012, pp. 200-205.

[8]          D. Coskun, D. I. Ozlem, and O. Atay, "Phone position/placement detection using accelerometer: Impact on activity recognition," presented at the 2015 IEEE Tenth International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP). IEEE, 2015, 2015.

[9]          M.-C. Kwon and C. Sunwoong, "Recognition of daily human activity using an artificial neural network and smartwatch," Wireless Communications and Mobile Computing, vol. 2018, pp. 81-89, 2018.

[10]        L. Bao and S. I. Stephen, "Activity recognition from user-annotated acceleration data," presented at the International Conference on Pervasive Computing. Springer, Berlin, Heidelberg, 2004.

[11]        UCI, "UCI machine learning repository: Human activity recognition using smartphones data set. Retrieved from: https://archive.ics.uci.edu/ml/datasets/human+activity+recognition+using+smartphones," 2012.

[12]        R. Damaševičius, M. Vasiljevas, J. Šalkevičius, and M. Woźniak, "Human activity recognition in AAL environments using random projections," Computational and Mathematical Methods in Medicine, 2016.

[13]        Xiaoharper, "Multiclass logistic regression - ML Studio (Classic) - Azure.” ML Studio (Classic) - Azure | Microsoft docs. Retrieved from: https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/multiclass-logistic-regression," 2019.

[14]        Xiaoharper, "Multiclass neural network - ML Studio (Classic) - Azure.” ML Studio (Classic) - Azure | Microsoft docs. Retrieved from: https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/multiclass-neural-network," 2019.

[15]        Xiaoharper, "Multiclass decision forest - ML Studio (Classic) - Azure.” ML Studio (Classic) - Azure | Microsoft Docs. Retrieved from: https://docs.microsoft.com/en-us/azure/machine-learning/studio-module-reference/multiclass-decision-forest," 2019.

Views and opinions expressed in this article are the views and opinions of the author(s), Review of Computer Engineering Research shall not be responsible or answerable for any loss, damage or liability etc. caused in relation to/arising out of the use of the content.