
Human Activity Recognition Simulink Model for Smartphone Deployment


This example shows how to prepare a Simulink® model that classifies human activity based on smartphone sensor signals for code generation and smartphone deployment. The example provides two Simulink models that are ready for deployment to an Android device and an iOS device. After you install the required support package for a target device, train the classification model and deploy the Simulink model to the device.

Prerequisites

Simulink support packages are required for the Simulink models in this example.

  • Download and Install Simulink Support Package for Android Devices (required for Android deployment)

  • Download and Install Simulink Support Package for Apple iOS Devices (required for iOS deployment)

Load Sample Data Set

Load the humanactivity data set.
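A minimal load command (assuming the data set ships with the product, as in the standard example) might look like:

```matlab
load humanactivity
```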

The humanactivity data set contains 24,075 observations of five different physical human activities: Sitting, Standing, Walking, Running, and Dancing. Each observation has 60 features extracted from acceleration data measured by smartphone accelerometer sensors. The data set contains the following variables:

  • actid — Response vector containing the activity IDs in integers: 1, 2, 3, 4, and 5 representing Sitting, Standing, Walking, Running, and Dancing, respectively

  • actnames — Activity names corresponding to the integer activity IDs

  • feat — Feature matrix of 60 features for 24,075 observations

  • featlabels — Labels of the 60 features

The Sensor HAR (human activity recognition) App [1] was used to create the humanactivity data set. When measuring the raw acceleration data with this app, a person placed a smartphone in a pocket so that the smartphone was upside down and the screen faced toward the person. The software then calibrated the measured raw data accordingly and extracted the 60 features from the calibrated data. For details about the calibration and feature extraction, see [2] and [3], respectively. The Simulink models described later also use the raw acceleration data and include blocks for calibration and feature extraction.

Prepare Data

This example uses 90% of the observations to train a model that classifies the five types of human activities and 10% of the observations to validate the trained model. Use cvpartition to specify a 10% holdout for the test set.
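One way to create the partition follows; the variable name Partition is illustrative, while XTrain, YTrain, XTest, and YTest are used later in this example:

```matlab
rng('default') % for reproducibility of the partition
Partition = cvpartition(actid,'Holdout',0.10);
trainingInds = training(Partition); % indices for the training set
XTrain = feat(trainingInds,:);
YTrain = actid(trainingInds);
testInds = test(Partition);         % indices for the test set
XTest = feat(testInds,:);
YTest = actid(testInds);
```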

Convert the feature matrix XTrain and the response vector YTrain into a table to load the training data set in the Classification Learner app.

Specify the variable name for each column of the table.
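A sketch of the table conversion and variable naming; the assumption here is that featlabels is a column cell array of the 60 feature names, so it is transposed to form a row:

```matlab
tTrain = array2table([XTrain YTrain]);
tTrain.Properties.VariableNames = [featlabels' {'Activity'}];
```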

Train Boosted Tree Ensemble Using Classification Learner App

Train a classification model by using the Classification Learner app. To open the Classification Learner app, enter classificationLearner at the command line. Alternatively, click the Apps tab, and click the arrow at the right of the Apps section to open the gallery. Then, under Machine Learning, click Classification Learner.

On the Classification Learner tab, in the File section, click New Session and select From Workspace.

In the New Session dialog box, click the arrow for Workspace Variable, and then select the table tTrain. Classification Learner detects the predictors and the response from the table.

The default option is 5-fold cross-validation, which protects against overfitting. Click Start Session. Classification Learner loads the data set and plots a scatter plot of the first two features.

On the Classification Learner tab, click the arrow at the right of the Model Type section to open the gallery. Then, under Ensemble Classifiers, click Boosted Trees.

The Current Model pane of the Data Browser displays the default settings of the boosted tree ensemble model.

On the Classification Learner tab, in the Training section, click Train. When the training is complete, the History pane of the Data Browser displays the 5-fold, cross-validated classification accuracy.

On the Classification Learner tab, in the Export section, click Export Model, and then select Export Compact Model. Click OK in the dialog box. The structure trainedModel appears in the MATLAB Workspace. The field ClassificationEnsemble of trainedModel contains the compact model. Extract the trained model from the structure.
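Extracting the compact model from the exported structure is a single assignment:

```matlab
classificationEnsemble = trainedModel.ClassificationEnsemble;
```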

Train Boosted Tree Ensemble at Command Line

Alternatively, you can train the same classification model at the command line.

Perform 5-fold cross-validation for classificationEnsemble and compute the validation accuracy.
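A command-line equivalent might look like the following; the AdaBoostM2 method, 30 learning cycles, and the tree template with 20 maximum splits are assumptions meant to mirror the app's Boosted Trees preset:

```matlab
t = templateTree('MaxNumSplits',20); % assumed app default for boosted trees
classificationEnsemble = fitcensemble(XTrain,YTrain, ...
    'Method','AdaBoostM2','NumLearningCycles',30,'Learners',t);

% 5-fold cross-validation and validation accuracy
partitionedModel = crossval(classificationEnsemble,'KFold',5);
validationAccuracy = 1 - kfoldLoss(partitionedModel)
```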

Evaluate Performance on Test Data

Evaluate performance on the test data set.
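The test accuracy is one minus the misclassification loss on the held-out observations:

```matlab
testaccuracy = 1 - loss(classificationEnsemble,XTest,YTest)
```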

The trained model correctly classifies 97.63% of the human activities on the test data set. This result confirms that the trained model does not overfit to the training data set.

Note that the accuracy values can vary slightly depending on your operating system.

Save Trained Model

For code generation that includes a classification model object, use saveLearnerForCoder and loadLearnerForCoder.

Save the trained model by using saveLearnerForCoder.
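Saving the model produces the EnsembleModel.mat file that the Simulink models load:

```matlab
saveLearnerForCoder(classificationEnsemble,'EnsembleModel');
```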

The function block predictActivity in the Simulink models loads the trained model by using loadLearnerForCoder and uses the trained model to classify new data.

Deploy Simulink Model to Device

Now that you have prepared a classification model, you can open the Simulink model, depending on which type of smartphone you have, and deploy the model to your device. Note that the Simulink model requires the EnsembleModel.mat file and the calibration matrix file slexHARAndroidCalibrationMatrix.mat or slexHARiOSCalibrationMatrix.mat. If you click the button located in the upper-right section of this page and open this example in MATLAB®, then MATLAB® opens the example folder that includes these calibration matrix files.

Type slexHARAndroidExample to open the Simulink model for Android deployment.

Type slexHARiOSExample to open the Simulink model for iOS deployment. You can open this model only on the macOS platform.

The two Simulink models classify human activity based on acceleration data measured by a smartphone sensor. The models include the following blocks:

  • The Accelerometer block receives raw acceleration data from accelerometer sensors on the device.

  • The calibrate block is a MATLAB Function block that calibrates the raw acceleration data. This block uses the calibration matrix in the slexHARAndroidCalibrationMatrix.mat file or the slexHARiOSCalibrationMatrix.mat file.

  • The display blocks Acc X, Acc Y, and Acc Z are connected to the calibrate block and display calibrated data points for each axis on the device.



  • Each of the Buffer blocks, X Buffer, Y Buffer, and Z Buffer, buffers 32 samples of an accelerometer axis with 12 samples of overlap between buffered frames. After collecting 20 samples, each Buffer block joins the 20 samples with 12 samples from the previous frame and passes the total 32 samples to the extractFeatures block. Each Buffer block receives an input sample every 0.1 second and outputs a buffered frame including 32 samples every 2 seconds.

  • The extractFeatures block is a MATLAB Function block that extracts 60 features from a buffered frame of 32 accelerometer samples. This function block uses DSP System Toolbox™ and Signal Processing Toolbox™.

  • The predictActivity block is a MATLAB Function block that loads the trained model from the EnsembleModel.mat file by using loadLearnerForCoder and classifies the user activity using the extracted features. The output is an integer between 1 and 5, corresponding to Sitting, Standing, Walking, Running, and Dancing, respectively.

  • The Predicted Activity block displays the classified user activity values on the device.

  • The Video Output subsystem uses a multiport switch block to choose the corresponding user activity image data to display on the device. The Convert to RGB block decomposes the selected image into separate RGB vectors and passes the image to the Activity Display block.
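The predictActivity block described above can be sketched as a MATLAB Function block body; the function signature and the persistent-variable pattern are illustrative assumptions, not the shipped implementation:

```matlab
function label = predictActivity(features) %#codegen
% Classify one feature vector with the ensemble saved by saveLearnerForCoder.
persistent mdl
if isempty(mdl)
    mdl = loadLearnerForCoder('EnsembleModel'); % load once, then reuse
end
label = predict(mdl, features); % integer 1-5: Sitting ... Dancing
end
```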

To deploy the Simulink model to your device, follow the steps in Run Model on Android Devices or Run Model on iOS Devices. Run the model on your device, place the device in the same way as described earlier for collecting the training data, and try the five activities. The model displays the classified activity accordingly.

To ensure the accuracy of the model, you need to place your device in the same way as described for collecting the training data. If you want to place your device in a different location or orientation, then collect the data in your own way and use your data to train the classification model.

The accuracy of the model can be different from the accuracy of the test data set (testaccuracy), depending on the device. To improve the model, you can consider using additional sensors and updating the calibration matrix. Also, you can add another output block for audio feedback to the output subsystem using Audio Toolbox™. Use a ThingSpeak™ write block to publish classified activities and acceleration data from your device to the Internet of Things. For details, see https://thingspeak.com/.

References

[1] El Helou, A. Sensor HAR recognition App. MathWorks File Exchange https://www.mathworks.com/matlabcentral/fileexchange/54138-sensor-har-recognition-app

[2] STMicroelectronics, AN4508 Application note. “Parameters and calibration of a low-g 3-axis accelerometer.” 2014.

[3] El Helou, A. Sensor Data Analytics. MathWorks File Exchange https://www.mathworks.com/matlabcentral/fileexchange/54139-sensor-data-analytics--french-webinar-code-

See Also

fitcensemble | loadLearnerForCoder | predict | saveLearnerForCoder

Related Topics

G.729 Voice Activity Detection

This example shows how to implement the ITU-T G.729 Voice Activity Detector (VAD).

Introduction

Voice Activity Detection (VAD) is a critical problem in many speech and audio applications, including speech coding, speech recognition, and speech enhancement. For instance, the ITU-T G.729 standard uses VAD modules to reduce the transmission rate during silence periods of speech.

Algorithm

At the first stage, four parametric features are extracted from the input signal. These parameters are the full-band and low-band frame energies, the set of line spectral frequencies (LSF) and the frame zero crossing rate. If the frame number is less than 32, an initialization stage of the long-term averages takes place, and the voice activity decision is forced to 1 if the frame energy from the LPC analysis is above 21 dB. Otherwise, the voice activity decision is forced to 0. If the frame number is equal to 32, an initialization stage for the characteristic energies of the background noise occurs.

At the next stage, a set of difference parameters is calculated. This set is generated as a difference measure between the current frame parameters and running averages of the background noise characteristics. Four difference measures are calculated:

The initial voice activity decision is made at the next stage, using multi-boundary decision regions in the space of the four difference measures. The active voice decision is given as the union of the decision regions and the non-active voice decision is its complementary logical decision. Energy considerations, together with neighboring past frames decisions, are used for decision smoothing. The running averages have to be updated only in the presence of background noise, and not in the presence of speech. An adaptive threshold is tested, and the update takes place only if the threshold criterion is met.

VAD Implementation

vadG729 is the function containing the algorithm's implementation.

Initialization

Set up an audio source. This example uses an audio file reader.
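A sketch of the audio source setup; the file name is an illustrative assumption, and the 80-sample frame corresponds to the 10 ms frames of G.729 at 8 kHz:

```matlab
audioSource = dsp.AudioFileReader('speech_8kHz.wav', ... % file name assumed
    'SamplesPerFrame',80, ...        % 10 ms frames at 8 kHz
    'OutputDataType','single');
```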

Stream Processing Loop
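The loop reads one frame at a time and passes it to the detector; the vadG729 signature shown here is an assumption:

```matlab
while ~isDone(audioSource)
    speech = audioSource();       % read one 10 ms frame
    decision = vadG729(speech);   % 1 = voice active, 0 = silence
end
```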

Cleanup

Close the audio input device and release resources.
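Releasing the System object closes the file and frees its resources:

```matlab
release(audioSource);
```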

Generating and Using the MEX-File

MATLAB Coder can be used to generate C code for the function vadG729. In order to generate a MEX-file, execute the following command.
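A possible codegen invocation follows; the input specification (a single-precision 80-sample frame) is an assumption matching the frame size used above:

```matlab
codegen vadG729 -args {single(zeros(80,1))}
```

This produces vadG729_mex, which can be called with the same arguments as the original function.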

Speed Comparison

Creating MEX-Files often helps achieve faster run-times for simulations. The following lines of code first measure the time taken by the MATLAB function and then measure the time for the run of the corresponding MEX-file. Note that the speedup factor may be different for different machines.
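A timing comparison along those lines might look like this; vadG729_mex is the name codegen assigns by default, and the reset call rewinds the file reader between runs:

```matlab
tic
while ~isDone(audioSource)
    speech = audioSource();
    decision = vadG729(speech);      % MATLAB version
end
tMATLAB = toc;

reset(audioSource)                    % rewind to the start of the file
tic
while ~isDone(audioSource)
    speech = audioSource();
    decision = vadG729_mex(speech);  % generated MEX version
end
tMEX = toc;

fprintf('Speedup factor: %.1f\n', tMATLAB/tMEX);
```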

Reference

ITU-T Recommendation G.729 - Annex B: A silence compression scheme for G.729 optimized for terminals conforming to ITU-T Recommendation V.70