Search results

Article
Publication date: 5 March 2024

Sana Ramzan and Mark Lokanan

Abstract

Purpose

This study aims to objectively synthesize the volume of accounting literature on financial statement fraud (FSF) using a systematic literature review research method (SLRRM). This paper analyzes the vast FSF literature based on inclusion and exclusion criteria. These criteria filter for articles in the accounting fraud domain that are published in peer-reviewed quality journals, based on the Australian Business Deans Council (ABDC) journal ranking. Lastly, a reverse search, analyzing the articles' abstracts, further narrows the search to 88 peer-reviewed articles. After examining these 88 articles, the results imply that the current literature is shifting from traditional statistical approaches towards computational methods, specifically machine learning (ML), for predicting and detecting FSF. This evolution of the literature is influenced by the impact of micro and macro variables on FSF and the inadequacy of audit procedures to detect red flags of fraud. The findings also show that A* peer-reviewed journals accepted articles that presented a complete picture of the performance measures of computational techniques in their results. Therefore, this paper contributes to the literature by providing researchers with insights into why ML articles on fraud do not make it into top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.

Design/methodology/approach

This paper chronicles the cluster of narratives surrounding the inadequacy of current accounting and auditing practices in preventing and detecting financial statement fraud. The primary objective of this study is to objectively synthesize the volume of accounting literature on the topic. More specifically, this study conducts a systematic literature review (SLR) to examine the evolution of financial statement fraud research and the emergence of new computational techniques to detect fraud in the accounting and finance literature.

Findings

The storyline of this study illustrates how the literature has evolved from conventional fraud detection mechanisms to computational techniques such as artificial intelligence (AI) and machine learning (ML). The findings also show that A* peer-reviewed journals accepted articles that presented a complete picture of the performance measures of computational techniques in their results. Therefore, this paper contributes to the literature by providing researchers with insights into why ML articles on fraud do not make it into top accounting journals and which computational techniques are the best algorithms for predicting and detecting FSF.
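As a purely hypothetical sketch of the "complete picture of performance measures" the review refers to (the labels, predictions and function name below are illustrative, not taken from the paper), a binary fraud classifier is typically reported with accuracy, precision, recall and F1 computed from its confusion matrix:

```python
# Illustrative only: reporting a full set of classification metrics for a
# binary fraud label (1 = financial statement fraud, 0 = non-fraud).

def performance_measures(y_true, y_pred):
    """Accuracy, precision, recall and F1 from a confusion matrix."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Hypothetical hold-out results for a fraud classifier.
y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 0, 0]
print(performance_measures(y_true, y_pred))
```

Reporting all four measures together, rather than accuracy alone, is what gives referees the "complete picture" on imbalanced fraud data, where accuracy by itself can be misleading.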

Originality/value

This paper contributes to the literature by providing researchers with insights into the evolution of the accounting fraud literature from traditional statistical methods to machine learning algorithms for fraud detection and prediction.

Details

Journal of Accounting Literature, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-4607

Article
Publication date: 11 February 2019

Ata Allah Taleizadeh, Mahshid Yadegari and Shib Sankar Sana

Abstract

Purpose

The purpose of this study is to formulate two multi-product single-machine economic production quantity (EPQ) models by considering imperfect products. Two policies are assumed to deal with imperfect products: selling them at discount and applying a reworking process.

Design/methodology/approach

A screening process is used to identify imperfect items during and after production. Selling the imperfect items at a discount is examined in the first model and the reworking policy in the second model. In both models, demand during the production process is satisfied only by perfect items. Data collected from a case company are used to illustrate the performance of the two models. Moreover, a sensitivity analysis is carried out by varying the most important parameters of the models.
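As a rough illustration of the kind of model involved (a textbook-style single-product EPQ adjusted for a screened-out defective fraction, not the paper's exact multi-product constrained formulation, and with purely hypothetical parameter values):

```python
from math import sqrt

def epq_imperfect(D, P, K, h, x):
    """Classical EPQ adjusted for a defective fraction x that is screened
    out (e.g. salvaged at a discount); only perfect items, produced at
    rate P*(1 - x), satisfy demand D.
    K = setup cost per run, h = holding cost per unit per year."""
    rho = D / (P * (1 - x))                  # utilisation of perfect-item output
    Q = sqrt(2 * K * D / (h * (1 - rho)))    # optimal batch size per run
    cost = sqrt(2 * K * D * h * (1 - rho))   # minimal setup + holding cost rate
    return Q, cost

# Hypothetical parameters: demand 1000/yr, production 4000/yr,
# setup cost 100, holding cost 2/unit/yr, 5% imperfect items.
Q, cost = epq_imperfect(D=1000, P=4000, K=100, h=2, x=0.05)
print(round(Q, 1), round(cost, 2))
```

Raising the defective fraction `x` shrinks the effective supply of perfect items, which increases the optimal batch size and the total cost rate; this is the basic mechanism that the salvage and rework policies in the paper are designed to manage.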

Findings

The case study in this research demonstrates the applicability of the proposed models, i.e. the EPQ models with salvaging and with reworking of imperfect items. The models are applied to a high-tech un-plasticized polyvinyl chloride (UPVC) doors and windows manufacturer that produces different types of doors and windows. ROGAWIN Co. is a privately owned company, founded in 2001, with fully automatic production lines. Finally, the results of the different ways of handling the imperfect items are discussed, along with managerial insights.

Originality/value

In real-world production systems, manufacturing imperfect products is unavoidable. That is why it is important to make proper decisions about imperfect products to reduce overall production costs. Recently, applying a reworking strategy has gained the most interest as a way of handling this problem. The principal idea of this research is to maximize the total profit of manufacturing systems by optimizing the period length under capacity constraints. The proposed models were applied to a manufacturer of UPVC doors and windows.

Details

Journal of Modelling in Management, vol. 14 no. 1
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 25 October 2022

Narinder Kumar, Bikram Jit Singh and Pravin Khope

Abstract

Purpose

Inventory models are quantitative tools for achieving low-cost operating systems. These models can be either deterministic or stochastic. A deterministic model treats variable quantities, such as demand and lead time, as certain. However, various studies have revealed that demand and lead time remain uncertain and vary considerably. The main purpose of this study is to reduce such uncertainties in the dynamic environment of Industry 4.0.

Design/methodology/approach

The current study tackles the multiperiod single-item inventory lot-size problem with varying demands. Three lot-sizing policies – Lot for Lot, the Silver–Meal heuristic and the Wagner–Whitin algorithm – are reviewed and analyzed. The suggested machine learning (ML)-based technique establishes criteria for when, and for which of these inventory models (with varying demands and safety stock), production is most economical.
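To make one of the reviewed policies concrete, here is a minimal sketch of the Silver–Meal heuristic (the demand stream and cost parameters are hypothetical, chosen only for illustration): the current lot keeps absorbing future periods' demand while the average cost per period covered is still falling.

```python
def silver_meal(demand, K, h):
    """Silver-Meal heuristic for the multiperiod lot-size problem.
    demand = per-period demands, K = setup cost per order,
    h = holding cost per unit per period. Returns the lot sizes."""
    lots = []
    i, n = 0, len(demand)
    while i < n:
        best_avg = None
        j = i
        holding = 0.0
        while j < n:
            # cost of carrying period j's demand from order period i
            holding += h * (j - i) * demand[j]
            avg = (K + holding) / (j - i + 1)   # avg cost per period covered
            if best_avg is not None and avg > best_avg:
                break                            # stop when the average rises
            best_avg = avg
            j += 1
        lots.append(sum(demand[i:j]))            # one order covers periods i..j-1
        i = j
    return lots

# Hypothetical varying demand over ten periods.
print(silver_meal([20, 50, 10, 50, 50, 10, 20, 40, 20, 30], K=100, h=1))
```

Lot for Lot would instead order each period's demand separately (minimal holding, maximal setups), while Wagner–Whitin solves the same trade-off exactly by dynamic programming; the heuristic sits between them in cost and effort.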

Findings

When demand surpasses its predicted value, demand variance comes into the picture. The current work accounts for this and formulates the proper lot size to handle this dynamic situation. To deduce a sufficient lot size, all three stochastic models are explored individually, as per their respective protocols, and then analyzed collectively through suitable regression analysis. Further, the ML-based Classification And Regression Tree (CART) algorithm is used strategically to predict which model would be most economical (i.e. have the least inventory cost) under continuously varying demand and other inventory attributes.
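The core step CART repeats is a binary split that minimises Gini impurity. The toy sketch below shows that single step on made-up data (one hypothetical feature, demand coefficient of variation, versus which policy was cheapest); it is not the paper's trained tree, which uses real inventory attributes and recursive splitting.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    """One CART step: the threshold on a single numeric feature that
    minimises the weighted Gini impurity of the two child nodes."""
    best = None
    for t in sorted(set(X)):
        left = [yi for xi, yi in zip(X, y) if xi <= t]
        right = [yi for xi, yi in zip(X, y) if xi > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if best is None or score < best[1]:
            best = (t, score)
    return best

# Hypothetical training data: demand coefficient of variation vs. the
# cheapest policy ("LFL" = Lot for Lot, "SM" = Silver-Meal).
cv = [0.05, 0.10, 0.15, 0.40, 0.55, 0.60]
policy = ["SM", "SM", "SM", "LFL", "LFL", "LFL"]
threshold, impurity = best_split(cv, policy)
print(threshold, impurity)
```

Applying this split recursively to the child nodes yields the full tree; in practice one would use a library implementation such as scikit-learn's `DecisionTreeClassifier` rather than hand-rolled code.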

Originality/value

The ML-based CART algorithm has rarely been used to provide logical assistance to inventory practitioners in making wise decisions when selecting inventory control models in dynamic batch-type production systems.
