Monday, January 6, 2025

Users’ physical activity patterns are inferred by analyzing data gathered from their smartphones.

Introduction

This post outlines a method for using smartphone accelerometer and gyroscope data to predict which physical activities a person is performing. The data used here come from the University of California, Irvine (UCI): thirty individuals were asked to perform a range of basic activities while carrying a smartphone whose accelerometer and gyroscope recorded their motion.

Before proceeding, let’s load the libraries used throughout this post.


library(keras)
library(tidyverse)
library(knitr, quietly = TRUE)
library(rmarkdown, quietly = TRUE)
library(ggridges)

theme_set(theme_classic())

Activities dataset

The data used in this post come from a data set distributed by the University of California, Irvine (UCI).

When you download the linked data, you will see it comes in two parts: one that has been pre-processed using various feature-extraction techniques such as the Fourier transform, and a RawData section that simply supplies the raw x, y, z signals from the accelerometer and gyroscope, with none of the noise filtering or feature extraction typically applied to accelerometer data. This is the data we will use. Why work with the raw data?

The motivation for working with the raw data here is to make the code and concepts easier to transfer to less well-explored domains of time-series data. While a more accurate model could certainly be built on the cleaned data, the filtering and transformation steps vary greatly from task to task and demand substantial domain expertise. One of the appealing aspects of deep learning is that feature extraction is learned from the data itself rather than supplied by outside knowledge.

Activity labels

The data encode each activity as an integer. While these encodings aren’t important to the model itself, they are useful for us when inspecting the data, so let’s load them first.


activityLabels <- read.table("data/activity_labels.txt", 
                             col.names = c("number", "label")) 

activityLabels %>% kable(align = c("c", "l"))
1 WALKING
2 WALKING_UPSTAIRS
3 WALKING_DOWNSTAIRS
4 SITTING
5 STANDING
6 LAYING
7 STAND_TO_SIT
8 SIT_TO_STAND
9 SIT_TO_LIE
10 LIE_TO_SIT
11 STAND_TO_LIE
12 LIE_TO_STAND

Next, we load the labels key for the RawData. This file lists all of the observations, i.e. individual activity recordings, contained in the data set. The key for the columns is taken from the data’s README.txt.


experiment number ID | user number ID | activity number ID | label start point | label end point 

The start and end points are given in number of signal samples (logged at 50 Hz).

Let’s take a look at the first 50 rows.


labels <- read.table(
  "data/RawData/labels.txt",
  col.names = c("experiment", "userId", "activity", "startPos", "endPos")
)

labels %>% 
  head(50) %>% 
  paged_table()
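Since the start and end points are sample indices at 50 Hz, a quick sketch like the one below (using the labels dataframe we just loaded) converts each window to a duration in seconds, which helps build intuition about how long each recording is.


# convert sample-index windows to durations in seconds (50 Hz sampling rate)
labels %>% 
  mutate(duration_sec = (endPos - startPos) / 50) %>% 
  head()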

File names

Next, let’s look at the actual files of user data provided to us in RawData/.


dataFiles <- list.files("data/RawData")
dataFiles %>% head()

["acc_exp01_user01.txt", "acc_exp02_user01.txt"]
["acc_exp03_user02.txt", "acc_exp04_user02.txt"]
["acc_exp05_user03.txt", "acc_exp06_user03.txt"]

The files follow a three-part naming scheme. The first part is the type of data the file contains: either acc for accelerometer or gyro for gyroscope. Next comes the experiment number, and last the user ID for the recording. Let’s load these into a dataframe for ease of use later.


fileInfo <- data_frame(
  filePath = dataFiles
) %>%
  filter(filePath != "labels.txt") %>% 
  separate(filePath, sep = '_', 
           into = c("type", "experiment", "userId"), 
           remove = FALSE) %>% 
  mutate(
    experiment = str_remove(experiment, "exp"),
    userId = str_remove_all(userId, "user|\\.txt")
  ) %>% 
  spread(type, filePath)

fileInfo %>% head() %>% kable()
experiment   userId   acc                    gyro
01           01       acc_exp01_user01.txt   gyro_exp01_user01.txt
02           01       acc_exp02_user01.txt   gyro_exp02_user01.txt
03           02       acc_exp03_user02.txt   gyro_exp03_user02.txt
04           02       acc_exp04_user02.txt   gyro_exp04_user02.txt
05           03       acc_exp05_user03.txt   gyro_exp05_user03.txt
06           03       acc_exp06_user03.txt   gyro_exp06_user03.txt

Reading and gathering data

Before we can do anything with the data provided, we need to get it into a model-friendly format: a list of observations, each with its class (activity label) and the sensor data corresponding to the recording.

To obtain this, we scan through all of the recording files present in dataFiles, look up which observations each file contains (using the labels dataframe), extract those observations, and return everything in a single dataframe that is easy to model with.

# Read contents of single file to a dataframe with accelerometer and gyro data.
readInData <- function(experiment, userId){
  genFilePath = function(type) {
    paste0("data/RawData/", type, "_exp", experiment, "_user", userId, ".txt")
  }  
  
  bind_cols(
    read.table(genFilePath("acc"), col.names = c("a_x", "a_y", "a_z")),
    read.table(genFilePath("gyro"), col.names = c("g_x", "g_y", "g_z"))
  )
}

# Function to read a given file and get the observations contained along
# with their classes.

loadFileData <- function(curExperiment, curUserId) {
  
  # load sensor data from file into dataframe
  allData <- readInData(curExperiment, curUserId)

  extractObservation <- function(startPos, endPos){
    allData[startPos:endPos,]
  }
  
  # get observation locations in this file from labels dataframe
  dataLabels <- labels %>% 
    filter(userId == as.integer(curUserId), 
           experiment == as.integer(curExperiment))
  

  # extract observations as dataframes and save as a column in dataframe.
  dataLabels %>% 
    mutate(
      data = map2(startPos, endPos, extractObservation)
    ) %>% 
    select(-startPos, -endPos)
}

# scan through all experiment and userId combos and gather data into a dataframe. 
allObservations <- map2_df(fileInfo$experiment, fileInfo$userId, loadFileData) %>% 
  right_join(activityLabels, by = c("activity" = "number")) %>% 
  rename(activityName = label)

# cache work. 
write_rds(allObservations, "allObservations.rds")
allObservations %>% dim()

Exploring the data

Now that we have all the data together, along with the experiment, userId, and activity labels, we can explore the data set.

Length of recordings

Let’s first look at the length of the recordings by activity.


allObservations %>% 
  mutate(recording_length = map_int(data, nrow)) %>% 
  ggplot(aes(x = recording_length, y = activityName)) +
  geom_density_ridges(alpha = 0.8)

The differences in recording length between activity types mean we need to be careful about how we proceed. If we trained the model on every class at once, we would have to pad every observation to the length of the longest, leaving the vast majority of observations dominated by padding zeros. Because of this, we will fit our model to the largest group of similar-length observations: the postural transitions, which comprise STAND_TO_SIT, STAND_TO_LIE, SIT_TO_STAND, SIT_TO_LIE, LIE_TO_STAND, and LIE_TO_SIT.
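As a rough sanity check on that reasoning, a small sketch like the one below (using the data column of allObservations built above) estimates what fraction of a tensor padded to the single longest recording would consist of nothing but padding.


# rough sketch: fraction of padding if every observation were padded
# to the length of the single longest recording
recordingLengths <- map_int(allObservations$data, nrow)
1 - mean(recordingLengths) / max(recordingLengths)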

An avenue for future improvement would be to use an architecture such as a recurrent neural network that can handle variable-length inputs, which would let us train on the entire data set. However, we would then run the risk of the model simply learning that long observations belong to one of the four longest classes, which would generalize poorly to a real-time stream of data where the model has to classify activity as it arrives.
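For reference, a hypothetical sketch of such a recurrent architecture is shown below. It is not used in this post, and feeding it truly variable-length batches would still require a data generator or per-batch padding.


# Hypothetical sketch (not used here): an LSTM that accepts variable-length
# input by masking padded time steps.
rnnModel <- keras_model_sequential() %>%
  layer_masking(mask_value = 0, input_shape = list(NULL, 6)) %>%
  layer_lstm(units = 32) %>%
  layer_dense(units = 6, activation = "softmax")

rnnModel %>% compile(
  loss = "categorical_crossentropy",
  optimizer = "rmsprop",
  metrics = "accuracy"
)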

Filtering activities

Based on the work above, let’s subset the data to just the activities of interest.


desiredActivities <- c(
  "STAND_TO_SIT", "SIT_TO_STAND", "SIT_TO_LIE", 
  "LIE_TO_SIT", "STAND_TO_LIE", "LIE_TO_STAND"  
)

filteredObservations <- allObservations %>% 
  filter(activityName %in% desiredActivities) %>% 
  mutate(observationId = 1:n())

filteredObservations %>% paged_table()

Even after this aggressive pruning, we still have a respectable amount of data left for the model to learn from.

Training/testing split

Before we explore the data any further, and to be as fair as possible with our performance measures, we need to split the data into a training set and a test set. Since each user performed all activities just once (with the exception of one who performed only 10 of the 12), splitting on userId ensures that the model sees entirely new people when we test it.


# get all users
userIds <- allObservations$userId %>% unique()

# randomly choose 24 (80% of the 30 users) for training
set.seed(42) # seed for reproducibility
trainIds <- sample(userIds, size = 24)

# set the rest of the users to the testing set
testIds <- setdiff(userIds, trainIds)

# filter data. 
trainData <- filteredObservations %>% 
  filter(userId %in% trainIds)

testData <- filteredObservations %>% 
  filter(userId %in% testIds)

Visualizing activities

Now that we have trimmed the data by removing activities and splitting off a test set, we can visualize the data for each class to see whether there is any immediately discernible shape that the model might pick up on.

First, let’s unpack the data from its one-row-per-observation dataframe into a tidy, long-form version that is easier to plot.


unpackedObs <- 1:nrow(trainData) %>% 
  map_df(function(rowNum){
    dataRow <- trainData[rowNum, ]
    dataRow$data[[1]] %>% 
      mutate(
        activityName = dataRow$activityName, 
        observationId = dataRow$observationId,
        time = 1:n() )
  }) %>% 
  gather(reading, value, -time, -activityName, -observationId) %>% 
  separate(reading, into = c("type", "direction"), sep = "_") %>% 
  mutate(type = ifelse(type == "a", "acceleration", "gyro"))

Now that we have an unpacked set of observations, let’s visualize them.


unpackedObs %>% 
  ggplot(aes(x = time, y = value, color = direction)) +
  geom_line(alpha = 0.2) +
  geom_smooth(se = FALSE, alpha = 0.7, size = 0.5) +
  facet_grid(type ~ activityName, scales = "free_y") +
  theme_minimal() +
  theme(axis.text.x = element_blank())

Clear patterns definitely emerge, at least in the accelerometer data.

One would expect the model to have trouble distinguishing LIE_TO_SIT from LIE_TO_STAND, since their profiles look very similar. The same goes for SIT_TO_STAND and STAND_TO_SIT.
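To see just how similar those profiles are, a quick sketch like the following overlays the smoothed average acceleration traces of the two lying transitions, using the unpackedObs dataframe from above.


# sketch: compare smoothed acceleration profiles of the two similar-looking classes
unpackedObs %>% 
  filter(activityName %in% c("LIE_TO_SIT", "LIE_TO_STAND"),
         type == "acceleration") %>% 
  ggplot(aes(x = time, y = value, color = activityName)) +
  geom_smooth(se = FALSE) +
  facet_grid(direction ~ .) +
  theme_minimal()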

Preprocessing

Before we can train the neural network, we need to take a couple of steps to preprocess the data.

Padding observations

First, we decide what length to pad (and truncate) our sequences to by finding the 98th percentile of observation lengths. By not using the very longest observation length, we avoid letting a few extra-long outlier recordings inflate the padding.


padSize <- trainData$data %>% 
  map_int(nrow) %>% 
  quantile(p = 0.98) %>% 
  ceiling()
padSize

98% 
334 

Now we simply convert our list of observations to matrices and use Keras’s pad_sequences() to pad every observation and stack the results into a 3D tensor for us.


convertToTensor <- . %>% 
  map(as.matrix) %>% 
  pad_sequences(maxlen = padSize)

trainObs <- trainData$data %>% convertToTensor()
testObs <- testData$data %>% convertToTensor()
  
dim(trainObs)


Our data is now in the format the model expects: a 3D tensor with dimensions (<num obs>, <sequence length>, <channels>).

One-hot encoding

One last step remains before we can train the model: converting the observation classes from integers to one-hot (dummy-encoded) vectors. Luckily, Keras supplies the to_categorical() function to do exactly this.


oneHotClasses <- . %>% 
  {. - 7} %>%        # bring integers down to 0-5 from 7-12
  to_categorical()   # one-hot encode

trainY <- trainData$activity %>% oneHotClasses()
testY <- testData$activity %>% oneHotClasses()

Modeling

Architecture

Since the data is temporally dense, we will make use of 1D convolutional layers for feature extraction. With temporally dense data, a recurrent neural network would have to learn very long dependencies in order to pick up on patterns, whereas a CNN can simply build larger filters by stacking a few convolutional layers. Since we only want to assign a single class of activity to each observation, we can use pooling to summarize the CNN’s view of the data into a dense layer.

To make the model more robust, we will stack two convolutional layers, use batch normalization in the convolutional layers, and apply dropout to both the convolutional layers and the dense ones.


input_shape <- dim(trainObs)[-1]
num_classes <- dim(trainY)[2]

filters <- 24     # number of convolutional filters to learn
kernel_size <- 8  # how many time steps each conv layer sees
dense_size <- 48  # size of our penultimate dense layer 

# Initialize model
model <- keras_model_sequential()
model %>% 
  layer_conv_1d(
    filters = filters,
    kernel_size = kernel_size, 
    input_shape = input_shape,
    padding = "legitimate", 
    activation = "relu"
  ) %>%
  layer_batch_normalization() %>%
  layer_spatial_dropout_1d(0.15) %>% 
  layer_conv_1d(
    filters = filters/2,
    kernel_size = kernel_size,
    activation = "relu",
  ) %>%
  # Apply average pooling:
  layer_global_average_pooling_1d() %>% 
  layer_batch_normalization() %>%
  layer_dropout(0.2) %>% 
  layer_dense(
    dense_size,
    activation = "relu"
  ) %>% 
  layer_batch_normalization() %>%
  layer_dropout(0.25) %>% 
  layer_dense(
    num_classes, 
    activation = "softmax",
    title = "dense_output"
  ) 

summary(model)

Layer (sort)                   Output Form                Param #    
======================================================================
conv1d_1 (Conv1D)              (None, 327, 24)             1176       
batch_normalization_1 (BatchNorm)    (None, 327, 24)             96         
spatial_dropout1d_1 (SpatialDropout)   (None, 327, 24)             0          
conv1d_2 (Conv1D)              (None, 320, 12)             2316       
global_average_pooling1d_1 (GlobalAveragePooling1D)    (None, 12)                  0          
batch_normalization_2 (BatchNorm)    (None, 12)                  48         
dropout_1 (Dropout)            (None, 12)                  0          
dense_1 (Dense)                (None, 48)                  624        
batch_normalization_3 (BatchNorm)    (None, 48)                  192        
dropout_2 (Dropout)            (None, 48)                  0          
dense_output (Dense)           (None, 6)                   294        
======================================================================
Complete params: 4,746
Trainable params: 4,578
Non-trainable params: 168

Training

Now we can train the model using our training and validation data. Note that we use callback_model_checkpoint() to ensure that we save only the best-performing variant of the model, which matters because at some point during training the model may begin to overfit or otherwise stop improving.


# Compile model
model %>% compile(
  loss = "categorical_crossentropy",
  optimizer = "rmsprop",
  metrics = "accuracy"
)

trainHistory <- model %>%
  fit(
    x = trainObs, y = trainY,
    epochs = 350,
    validation_data = list(testObs, testY),
    callbacks = list(
      callback_model_checkpoint("best_model.h5", 
                                save_best_only = TRUE)
    )
  )

The model is learning something! We get a respectable 94.4% accuracy on the validation data, not bad with six possible classes to choose from. Let’s look into the validation performance a little more deeply to see where the model is making mistakes.
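As a quick check (a minimal sketch; the exact numbers will vary from run to run), the checkpointed model can be evaluated directly on the held-out test set.


# evaluate the best checkpointed model on the test set
load_model_hdf5("best_model.h5") %>% 
  evaluate(testObs, testY)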

Analysis

Now that we have a trained model, let’s investigate the errors it made on the test data.

We will load the best model from training (selected on validation accuracy) and then look at each observation, the predicted outcome, the probability the model assigned, and the true activity label.


# dataframe to get labels onto one-hot encoded prediction columns
oneHotToLabel <- activityLabels %>% 
  mutate(number = number - 7) %>% 
  filter(number >= 0) %>% 
  mutate(class = paste0("V", number + 1)) %>% 
  select(-number)

# Load our best model checkpoint
bestModel <- load_model_hdf5("best_model.h5")

tidyPredictionProbs <- bestModel %>% 
  predict(testObs) %>% 
  as_data_frame() %>% 
  mutate(obs = 1:n()) %>% 
  gather(class, prob, -obs) %>% 
  right_join(oneHotToLabel, by = "class")

predictionPerformance <- tidyPredictionProbs %>% 
  group_by(obs) %>% 
  summarise(
    highestProb = max(prob),
    predicted = label[prob == highestProb]
  ) %>% 
  mutate(
    truth = testData$activityName,
    correct = truth == predicted
  ) 

predictionPerformance %>% paged_table()

How confident was the model, split by whether the prediction turned out to be correct?


predictionPerformance %>% 
  mutate(result = ifelse(correct, 'Correct', 'Incorrect')) %>% 
  ggplot(aes(highestProb)) +
  geom_histogram(binwidth = 0.01) +
  geom_rug(alpha = 0.5) +
  facet_grid(result ~ .) +
  ggtitle("Probabilities associated with prediction by correctness")

Reassuringly, the model was, on average, less confident about its incorrect classifications than its correct ones, although the sample size is too small to say anything definitive.
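To put a rough number on that impression, a quick summary of the predictionPerformance dataframe compares the average confidence of correct and incorrect predictions.


# average prediction confidence, split by correctness
predictionPerformance %>% 
  group_by(correct) %>% 
  summarise(
    avg_confidence = mean(highestProb),
    n = n()
  )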

Let’s see which activities the model struggled with most, using a confusion matrix.


predictionPerformance %>% 
  group_by(truth, predicted) %>% 
  summarise(count = n()) %>% 
  mutate(good = truth == predicted) %>% 
  ggplot(aes(x = truth,  y = predicted)) +
  geom_point(aes(size = count, color = good)) +
  geom_text(aes(label = count), 
            hjust = 0, vjust = 0, 
            nudge_x = 0.1, nudge_y = 0.1) + 
  guides(color = FALSE, size = FALSE) +
  theme_minimal()

We see that, as the preliminary visualization suggested, the model had some trouble distinguishing between the LIE_TO_SIT and LIE_TO_STAND classes, along with SIT_TO_LIE and STAND_TO_LIE, which also have similar visual profiles.

Future directions

The most obvious direction for future work would be to make the model more general by training on more of the supplied activity types. Another interesting direction would be to keep the recordings as one continuous stream of data rather than segmenting them into distinct ‘observations’, mirroring a real-world deployment in which the model would have to classify activities and detect transitions as the data arrives.
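As a rough illustration of that streaming scenario, the hypothetical sketch below slides a fixed-size window across a continuous sensor stream and classifies each window; continuousData, windowSize, and stepSize are made-up names and values for illustration, not part of the data set.


# hypothetical sketch: classify a continuous 6-channel sensor stream
# (acc + gyro at 50 Hz) with a sliding window
classifyStream <- function(continuousData, model, windowSize = 250, stepSize = 50) {
  starts <- seq(1, nrow(continuousData) - windowSize + 1, by = stepSize)
  starts %>% 
    map_df(function(start) {
      window <- continuousData[start:(start + windowSize - 1), , drop = FALSE]
      # pad/truncate the window to the length the model was trained on
      batch <- pad_sequences(list(window), maxlen = padSize)
      probs <- predict(model, batch)
      tibble(
        start = start,
        predictedClass = which.max(probs[1, ]),
        prob = max(probs[1, ])
      )
    })
}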

