Search results

1 – 4 of 4
Article
Publication date: 9 January 2023

Omobolanle Ruth Ogunseiju, Nihar Gonsalves, Abiola Abosede Akanmu, Yewande Abraham and Chukwuma Nnaji

Abstract

Purpose

Construction companies are increasingly adopting sensing technologies like laser scanners, making it necessary to upskill the future workforce in this area. However, limited jobsite access hinders experiential learning of laser scanning, necessitating an alternative learning environment. Previously, the authors explored mixed reality (MR) as an alternative learning environment for laser scanning, but to promote seamless learning, such learning environments must be proactive and intelligent. Toward this end, this study investigated the potential of classification models for detecting user difficulties and learning stages in the MR environment.

Design/methodology/approach

The study adopted machine learning classifiers on eye-tracking data and think-aloud data for detecting learning stages and interaction difficulties during the usability study of laser scanning in the MR environment.
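
The paper does not publish its code, so the sketch below is only illustrative: it derives two simple features from hypothetical gaze samples (mean fixation duration and mean gaze displacement, standing in for the eye-tracking measures above) and labels a new window with a nearest-neighbour rule. The feature set, the labels and the training examples are assumptions, not the authors' pipeline.

```python
import math

def window_features(gaze):
    """Summarize a window of (x, y, fixation_ms) gaze samples."""
    durations = [d for _, _, d in gaze]
    mean_fix = sum(durations) / len(durations)
    # crude saccade proxy: mean displacement between consecutive gaze points
    disp = [math.dist(gaze[i][:2], gaze[i - 1][:2]) for i in range(1, len(gaze))]
    mean_disp = sum(disp) / len(disp)
    return (mean_fix, mean_disp)

def nearest_label(x, train):
    """Label a feature vector by its nearest training window."""
    return min(train, key=lambda tf: math.dist(x, tf[0]))[1]

# Hypothetical training windows: long fixations and small gaze movements
# while progressing through a learning stage; short fixations and large
# gaze jumps when the user is having difficulty.
train = [
    (window_features([(0, 0, 400), (1, 1, 380), (2, 1, 420)]), "learning stage"),
    (window_features([(0, 0, 120), (9, 8, 100), (1, 9, 90)]), "difficulty"),
]
probe = window_features([(0, 0, 390), (1, 0, 410), (2, 2, 400)])
```

For this probe, whose long fixations resemble the first training window, `nearest_label(probe, train)` returns `"learning stage"`. A real pipeline would use many more gaze features and a trained classifier, as the study does.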

Findings

The classification models demonstrated high performance, with a neural network classifier achieving the highest accuracy (99.9%) for detecting learning stages and an ensemble classifier achieving the highest accuracy (84.6%) for detecting interaction difficulty during laser scanning.

Research limitations/implications

The findings of this study revealed that eye movement data carry significant information about learning stages and interaction difficulties, providing evidence of the potential of smart MR environments to improve learning experiences in construction education. The research implication further lies in the potential of an intelligent learning environment to provide personalized learning experiences, which often culminate in improved learning outcomes. This study further highlights the potential of such an environment to promote inclusive learning, whereby students with different cognitive capabilities experience learning tailored to their specific needs irrespective of their individual differences.

Originality/value

The classification models will help detect learners requiring additional support to acquire the necessary technical skills for deploying laser scanners in the construction industry and inform the specific training needs of users to enhance seamless interaction with the learning environment.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 24 November 2022

Nihar Gonsalves, Omobolanle Ruth Ogunseiju and Abiola Abosede Akanmu

Abstract

Purpose

Recognizing construction workers' activities is critical for on-site performance and safety management. Thus, this study presents the potential of automatically recognizing construction workers' actions from activations of the erector spinae muscles.

Design/methodology/approach

A lab study was conducted wherein the participants (n = 10) performed rebar task, which involved placing and tying subtasks, with and without a wearable robot (exoskeleton). Trunk muscle activations for both conditions were trained with nine well-established supervised machine learning algorithms. Hold-out validation was carried out, and the performance of the models was evaluated using accuracy, precision, recall and F1 score.
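
The evaluation metrics named above can be made concrete with a small self-contained sketch. The toy labels below stand in for the placing and tying subtasks; they are invented for illustration and are not the study's EMG data.

```python
# Hold-out evaluation metrics computed from a confusion count, treating
# "tying" as the positive class for this toy example.
def confusion(y_true, y_pred, positive):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = len(y_true) - tp - fp - fn
    return tp, fp, fn, tn

def scores(y_true, y_pred, positive="tying"):
    tp, fp, fn, tn = confusion(y_true, y_pred, positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy held-out labels and predictions for the two rebar subtasks.
y_true = ["placing", "placing", "tying", "tying", "tying", "placing"]
y_pred = ["placing", "tying",   "tying", "tying", "placing", "placing"]
acc, prec, rec, f1 = scores(y_true, y_pred)
```

Here accuracy is 4/6 and precision, recall and F1 all equal 2/3; in practice the study averages such scores over the nine classifiers and both exoskeleton conditions.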

Findings

Results indicate that the classification models performed well under both experimental conditions, with the support vector machine achieving the highest accuracy: 83.8% for the “exoskeleton” condition and 74.1% for the “without exoskeleton” condition.

Research limitations/implications

The study paves the way for the development of smart wearable robotic technology which can augment itself based on the tasks performed by the construction workers.

Originality/value

This study contributes to the research on construction workers' action recognition using trunk muscle activity. Most human actions are performed largely with the hands, and advances in ergonomics research have provided evidence of a relationship between trunk muscle activity and hand movements. This relationship has not been explored for action recognition of construction workers, a gap in the literature that this study attempts to address.

Details

Smart and Sustainable Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 15 June 2021

Omobolanle Ruth Ogunseiju, Johnson Olayiwola, Abiola Abosede Akanmu and Chukwuma Nnaji

Abstract

Purpose

The physically demanding and repetitive nature of construction work often exposes workers to work-related musculoskeletal injuries. Real-time information about the ergonomic consequences of workers' postures can enhance their ability to control or self-manage their exposures. This study proposes a digital twin framework to improve self-management of ergonomic exposures through bi-directional mapping between workers' postures and their corresponding virtual replica.

Design/methodology/approach

The viability of the proposed approach was demonstrated by implementing the digital twin framework on a simulated floor-framing task. The proposed framework uses wearable sensors to track the kinematics of workers' body segments and communicates the ergonomic risks via an augmented virtual replica within the worker's field of view. A sequence-to-sequence long short-term memory (LSTM) network is employed to adapt the virtual feedback to workers' performance.
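
As a minimal sketch of the recurrent building block behind the sequence-to-sequence network described above, here is a single LSTM cell update in plain numpy. The weight shapes, random initialization and toy "posture" inputs are assumptions for illustration; the authors' actual architecture and parameters are not published with the abstract.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM update: input x, hidden state h, cell state c."""
    z = W @ x + U @ h + b                 # stacked gate pre-activations
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))     # forget gate
    o = 1 / (1 + np.exp(-z[2 * n:3 * n])) # output gate
    g = np.tanh(z[3 * n:])                # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4                        # e.g. 3 joint angles -> 4 hidden units
W = rng.normal(size=(4 * n_hid, n_in)) * 0.1
U = rng.normal(size=(4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):     # a 10-step posture sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

The final hidden state `h` summarizes the sequence; a sequence-to-sequence model chains such cells in an encoder and a decoder to emit feedback step by step.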

Findings

Results show promise for reducing ergonomic risks of the construction workforce through improved awareness. The experimental study demonstrates the feasibility of the proposed approach for reducing overexertion of the trunk. Performance of the LSTM network improved when trained with augmented data, but at a high computational cost.

Research limitations/implications

Suggested actionable feedback is currently based on actual work postures. The study is experimental and will need to be scaled up prior to field deployment.

Originality/value

This study reveals the potential of digital twins for personalized posture training and sets a precedent for further investigations into the opportunities digital twins offer for improving the health and wellbeing of the construction workforce.

Details

Smart and Sustainable Built Environment, vol. 10 no. 3
Type: Research Article
ISSN: 2046-6099

Article
Publication date: 29 April 2021

Omobolanle Ruth Ogunseiju, Johnson Olayiwola, Abiola Abosede Akanmu and Chukwuma Nnaji

Abstract

Purpose

Construction action recognition is essential to efficiently manage productivity, health and safety risks, which can be achieved by tracking and monitoring construction work. This study aims to examine the performance of a variant of deep convolutional neural networks (CNNs) for recognizing construction workers' actions from signal images of time-series data.

Design/methodology/approach

This paper adopts Inception v1 to classify actions involved in carpentry and painting activities from images of motion data. Augmented time-series data from wearable sensors attached to workers' lower arms are converted to signal images to train an Inception v1 network. The performance of Inception v1 is compared with that of the highest-performing supervised learning classifier, k-nearest neighbor (KNN).
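
The conversion from time-series windows to "signal images" can be sketched as follows: a window of tri-axial acceleration is resampled, tiled into a 2-D array and min-max scaled to grayscale so a CNN can consume it. The window length, image size and channel layout below are assumptions; the paper's exact encoding parameters are not reproduced here.

```python
import numpy as np

def to_signal_image(window, size=32):
    """Map a (T, channels) acceleration window to a (size, size) uint8 image."""
    T, ch = window.shape
    # resample each channel to `size` columns by linear interpolation
    cols = np.stack([
        np.interp(np.linspace(0, T - 1, size), np.arange(T), window[:, c])
        for c in range(ch)
    ])
    # repeat the channel rows vertically to fill the image height
    img = np.repeat(cols, size // ch + 1, axis=0)[:size]
    # min-max scale to the 0-255 grayscale range
    lo, hi = img.min(), img.max()
    return ((img - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)

rng = np.random.default_rng(1)
window = rng.normal(size=(120, 3))        # 120 samples of x/y/z acceleration
img = to_signal_image(window)             # 32 x 32 grayscale image
```

Stacks of such images, one per windowed subtask, would then be fed to the Inception v1 network in place of raw accelerometer traces.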

Findings

Results show that the performance of the Inception v1 network improved when trained with signal images of the augmented data, but at a high computational cost. The Inception v1 network and KNN achieved accuracies of 95.2% and 99.8%, respectively, when trained with the 50-fold augmented carpentry dataset. With the 10-fold augmented painting dataset, the accuracies of Inception v1 and KNN were 95.3% and 97.1%, respectively.

Research limitations/implications

Only acceleration data of the lower arm of the two trades were used for action recognition. Each signal image comprises 20 datasets.

Originality/value

Little has been reported on recognizing construction workers' actions from signal images. This study adds value to the existing literature, in particular by providing insights into the extent to which a deep CNN can classify subtasks from patterns in signal images compared with a traditional, best-performing shallow classifier.
