
Posts Tagged ‘OperationsResearch’:


Assessing the Relationship between Airlines’ Maintenance Outsourcing and Aviation Professionals’ Job Satisfaction

The current economic and security challenges have placed an additional burden on U.S. airlines to provide optimum service at reasonable cost to the flying public. In an effort to stay competitive, U.S. airlines have increased foreign-based outsourcing of aircraft maintenance, repair, and overhaul (MRO), mainly to reduce labor costs and conserve capital. This concentrated focus on outsourcing and restructuring ignored job dissatisfaction among remaining employees, which could reduce or eliminate an airline’s competitiveness. The purpose of this quantitative study was (a) to assess the relationship between increased levels of foreign-based MRO outsourcing and aviation professionals’ job satisfaction (Y1); (b) to assess the influence of increased levels of foreign-based outsourcing on MRO control (Y2), MRO error rate (Y3), and MRO technical punctuality (Y4) as perceived by aviation professionals; and (c) to assess the influence of increased levels of foreign-based MRO outsourcing on technical skills (Y5) and morale (Y6) as perceived by aviation professionals. The survey instrument was based on Paul Spector’s Job Satisfaction Questionnaire together with MRO-specific questions. A random sample of 300 U.S. airline participants was requested via MarketTools to meet the required sample size of 110, as determined through an a priori power analysis. The study yielded 198 usable surveys out of 213 total responses, and correlation, multiple regression, and ANOVA methods were used to test the study hypotheses. Spearman’s rho for (Y1) was statistically significant, p = .010, and the multiple regression was statistically significant, p < .001. A one-way ANOVA indicated participants differed in their opinions of (Y2) through (Y6). Recommendations for future research include contrasting domestic and global MRO providers, and examining global aircraft parts suppliers and aviation technical training.
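The headline correlation result is a Spearman rank correlation. As a minimal sketch of how that statistic is computed (pure Python with average-rank tie handling; the study itself would have used standard statistical software, which also reports the p-value):

```python
def ranks(xs):
    """1-based ranks; tied values share the mean of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean rank of the tied run, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation applied to the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Any strictly monotone relationship yields rho = 1 (or -1), which is why the statistic suits ordinal survey scales such as job satisfaction scores.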



Modeling supply chain dynamics with calibrated simulation using data fusion

In today’s global market, companies face unprecedented levels of uncertainty in supply, in demand, and in the economic environment. To survive increasing competition, companies must monitor the changing business environment and manage disturbances and changes in real time. In this dissertation, an integrated framework is proposed that uses simulation and online calibration methods to enable the adaptive management of large-scale, complex supply chain systems. The design, implementation, and verification of the integrated approach are studied. The research contributions are two-fold. First, this work enriches symbiotic simulation methodology by proposing a framework that combines simulation with advanced data fusion methods to improve simulation accuracy. Data fusion techniques optimally calibrate the simulation state and parameters by considering errors both in the simulation models and in measurements of the real-world system. Three data fusion methods – Kalman Filtering, Extended Kalman Filtering, and Ensemble Kalman Filtering – are examined and discussed under varied conditions of system chaos, data quality, and data availability. Second, the proposed framework is developed, validated, and demonstrated in ‘proof-of-concept’ case studies on representative supply chain problems. In the case study of a simplified supply chain system, Kalman Filtering is applied to fuse simulation data with emulation data, effectively improving the accuracy of abnormality detection. In the case study of the ‘beer game’ supply chain model, the system’s level of chaos is identified as a key factor influencing simulation performance and the choice of data fusion method. Ensemble Kalman Filtering is found to be more robust than Extended Kalman Filtering in a highly chaotic system. With appropriate tuning, the improvement in simulation accuracy is up to 80% in a chaotic system and 60% in a stable system.
In the last study, the integrated framework is applied to adaptive inventory control of a multi-echelon supply chain with non-stationary demand. It is worth pointing out that the framework proposed in this dissertation is not only useful in supply chain management but is also suitable for modeling other complex dynamic systems, such as healthcare delivery systems and energy consumption networks.
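At the core of the Kalman-Filter-based calibration described above is a simple update that blends a simulated state estimate with a noisy real-world measurement, weighted by their respective variances. A minimal scalar sketch (illustrative only; the dissertation applies the multivariate, extended, and ensemble variants):

```python
def kalman_predict(x, p, q):
    """Time update for a random-walk state model:
    the estimate carries over, uncertainty grows by process noise q."""
    return x, p + q

def kalman_update(x_prior, p_prior, z, r):
    """Measurement update: fuse the simulated estimate (x_prior, variance
    p_prior) with a real-world measurement z (variance r)."""
    k = p_prior / (p_prior + r)           # Kalman gain: trust ratio
    x_post = x_prior + k * (z - x_prior)  # calibrated state estimate
    p_post = (1 - k) * p_prior            # posterior variance shrinks
    return x_post, p_post
```

With equal variances the fused estimate lands midway between simulation and measurement, and its variance is halved; this is the mechanism by which the calibration pulls a drifting simulation back toward the observed system.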



Approximate Policy Iteration Algorithms for Continuous, Multidimensional Applications and Convergence Analysis

The purpose of this dissertation is to present parametric and non-parametric policy iteration algorithms that handle Markov decision process problems with high-dimensional, continuous state and action spaces, and to conduct convergence analysis of these algorithms under a variety of technical conditions. An online, on-policy least-squares policy iteration (LSPI) algorithm is proposed, which can be applied to infinite horizon problems in which states and controls are vector-valued and continuous. No special problem structure, such as linear dynamics with additive noise, is assumed, and the expectation is assumed to be uncomputable. The concept of the post-decision state variable is used to eliminate the expectation inside the optimization problem, and a formal convergence analysis of the algorithm is provided under the assumption that value functions are spanned by finitely many known basis functions. Furthermore, the convergence result extends to the more general case of unknown value function form using orthogonal polynomial approximation. Using kernel smoothing techniques, this dissertation presents three different online, on-policy approximate policy iteration algorithms that can be applied to infinite horizon problems with continuous, high-dimensional state and action spaces: kernel-based least-squares approximate policy iteration; approximate policy iteration with kernel smoothing; and policy iteration with finite horizon approximation and kernel estimators. The use of Monte Carlo sampling to estimate the value function around the post-decision state reduces the problem to a sequence of deterministic, nonlinear programming problems that allow the algorithms to handle continuous, vector-valued states and actions. Again, a formal convergence analysis of the algorithms under a variety of technical assumptions is presented.
The algorithms are applied to numerical examples including linear quadratic regulation, wind energy allocation, and battery storage problems to demonstrate their effectiveness and convergence properties.
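The least-squares policy evaluation step at the heart of LSPI can be sketched with LSTD(0): given sampled transitions under a fixed policy and a finite set of basis functions, solve a linear system for the value-function weights. The two-state tabular toy below is purely illustrative; the dissertation's algorithms handle continuous states via post-decision states and kernel estimators:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def lstd(transitions, phi, gamma, d):
    """LSTD(0): least-squares fixed point of the Bellman equation.
    transitions: (state, reward, next_state) samples under the policy.
    phi: feature map, state -> list of d basis-function values.
    Solves A w = b with A = sum phi(s)(phi(s) - gamma*phi(s'))^T,
    b = sum phi(s) * r."""
    A = [[0.0] * d for _ in range(d)]
    b = [0.0] * d
    for s, r, s2 in transitions:
        f, f2 = phi(s), phi(s2)
        for i in range(d):
            b[i] += f[i] * r
            for j in range(d):
                A[i][j] += f[i] * (f[j] - gamma * f2[j])
    return solve(A, b)
```

For a chain 0 -> 1 -> 1 with rewards 0 and 1 and gamma = 0.9, the recovered weights match the true values V(1) = 1/(1 - 0.9) = 10 and V(0) = 0.9 * 10 = 9.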



Evaluating trade-offs for profitable design of network infrastructure using multi-criteria optimization

Infrastructure design is a critical first step towards the long-term success of any business. A well-planned supply chain design, for example, is important for the efficient flow of goods and information between producers and consumers. Similarly, a well-laid-out electricity transmission grid is necessary for the distribution of electric power, and a well-designed road network is important for the distribution of goods on a transportation network. Infrastructure investments are capital intensive and usually made for the long term, so thorough analysis is essential to ensure long-term profitability. Operations research is an interdisciplinary branch of applied mathematics that uses mathematical modeling and optimization methods to arrive at optimal solutions to complex decision-making problems. Applications in the chemical industry include real-time process optimization, product scheduling, and resource planning, to name a few. Although there is an increased focus on the use of mathematical programming techniques in the chemical industry, most commercially available software packages have focused on the design and operation of chemical plants. Other topics, such as supply chain, logistics, and distribution, have not received as much attention from the chemical engineering community. This work focuses on these topics, which are very important for maintaining long-term profitability, tackling them from a systems engineering perspective and using the principles of operations research to aid the decision-making process. The predominant focus of this dissertation is on using facility location models to find the best locations for establishing facilities such as warehouses in a supply chain network and refueling stations on an urban road transportation network.
We also make use of state-of-the-art agent-based modeling and simulation techniques for accurate estimation of the inputs required for optimization.
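For small instances, the flavor of a facility location model can be sketched as an exhaustive p-median search: choose p candidate sites so that total customer-to-nearest-facility distance is minimized. This brute-force sketch is illustrative only; the multi-criteria formulations in the dissertation would be solved with mathematical programming solvers:

```python
from itertools import combinations

def p_median(dist, p):
    """Exhaustive p-median search.
    dist[i][j] = distance from customer i to candidate site j.
    Returns (total_cost, chosen_sites) minimizing the sum over customers
    of the distance to their nearest opened facility."""
    sites = range(len(dist[0]))
    best = (float("inf"), None)
    for chosen in combinations(sites, p):
        cost = sum(min(row[j] for j in chosen) for row in dist)
        best = min(best, (cost, chosen))
    return best
```

Enumeration is exponential in the number of sites, which is exactly why realistic warehouse or refueling-station networks call for integer programming formulations instead.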



Topics in univariate time series analysis with business applications

Recent technological advances in sensor and computer technology allow the observation of business and industrial processes at fairly high frequencies. For example, data used for monitoring critical parameters of industrial furnaces, conveyor belts, or chemical processes may be sampled every minute or second. A high sampling rate is also possible in business-related processes such as mail order distribution, fast food restaurant operations, and electronic commerce. Data obtained from frequently monitored business processes are likely to be autocorrelated time series that may or may not be stationary. If left alone, processes will typically not be stable, and hence will usually not possess a fixed mean, thus exhibiting homogeneous non-stationarity. For monitoring, control, and forecasting of such potentially non-stationary processes, it is often important to develop an understanding of their dynamic properties. However, it is sometimes difficult, if not impossible, to conduct deliberate experiments on full-scale industrial plants or business processes to gain the necessary insight into their dynamic properties. Fortunately, intentional or inadvertent process changes that occur in the course of normal operation sometimes offer an opportunity to identify and estimate aspects of the dynamic behavior. To determine whether a time series is stationary, the standard exploratory data analytic approach is to check that the sample autocorrelation function (ACF) fades out relatively quickly. An alternative, and at times sounder, approach is to use the variogram — a data exploratory tool widely used in spatial (geo)statistics for the investigation of spatial correlation of data. The first objective of this dissertation is to derive the basic properties of the variogram and to review the literature on confidence intervals for the variogram.
We then show how to use the multivariate Delta method to derive asymptotic confidence intervals for the variogram that are both practical and computationally appealing. The second objective of this dissertation is to review the theory of dynamic process modeling based on time series intervention analysis and to show how this theory can be used to assess the dynamic properties of business and industrial processes. This is accompanied by a detailed example: the study of a large-scale ceramic plant that was exposed to an intentional but unplanned structural change (a quasi-experiment). The third objective of this dissertation concerns the analysis of multiple interventions. Multiple interventions occur either as a result of multiple changes made to the same process or because a single change has non-homogeneous effects on the time series. To evaluate undertaken structural changes, it is important to assess and compare the effects, such as gains or losses, of the multiple interventions. A statistical hypothesis test for comparing the effects of multiple interventions on process dynamics is developed. Further, we investigate the statistical power of the suggested test and illustrate the results with examples.
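The sample variogram itself is straightforward to compute: half the mean squared difference between observations a fixed lag apart. A minimal sketch (standard textbook estimator; the dissertation's contribution is the distribution theory and confidence intervals around it):

```python
def variogram(z, max_lag):
    """Sample variogram of a series z:
    G(h) = mean((z[t+h] - z[t])**2) / 2 for lags h = 1..max_lag.
    Unlike the ACF, G(h) stays well defined for homogeneously
    non-stationary series, since differencing removes a drifting mean."""
    return {
        h: sum((z[t + h] - z[t]) ** 2 for t in range(len(z) - h))
           / (2 * (len(z) - h))
        for h in range(1, max_lag + 1)
    }
```

For a stationary series the variogram levels off at the process variance as h grows; a variogram that keeps rising is the exploratory signature of non-stationarity.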



Farmer land allocation for maize, groundnut and cotton production in Chipata District, Eastern Province, Zambia

Small-scale farmers in the Chipata District of Zambia rely on their farm fields to grow maize and groundnuts for food security. Cotton production and surplus food security crops are used to generate income to provide for their families. With increasing population pressure, available land has decreased, and farmers struggle to meet their families’ food requirements and income needs. The purpose of the study was to determine how a farmer can best allocate land among maize, groundnuts, and cotton, when constrained by labor and capital resources, to generate the highest potential food security and financial gain. Data from the 2008-2009 growing season were compiled and analyzed using a linear programming model. The study determined that farmers make the most profit by allocating all additional land and resources to cotton after meeting their minimum food security requirements. The study suggests that growing cotton is a beneficial practice for small-scale subsistence farmers to generate income when restricted by limited resources.
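The structure of the optimal solution the study reports — satisfy the food-security minimums first, then put every remaining hectare into the highest-margin cash crop — can be sketched as a simple allocation rule. This is an illustration of the solution structure only, not the linear program itself, and the crop figures in the usage example are hypothetical, not the study's data:

```python
def allocate_land(total_ha, min_food_ha, margins):
    """Reserve the minimum hectares for each food security crop, then
    assign all spare land to the crop with the highest margin per hectare
    (cotton, in the study's finding).
    min_food_ha: {crop: minimum hectares}; margins: {crop: profit per ha}."""
    alloc = dict(min_food_ha)
    spare = total_ha - sum(min_food_ha.values())
    if spare < 0:
        raise ValueError("not enough land to meet food security minimums")
    cash_crop = max(margins, key=margins.get)
    alloc[cash_crop] = alloc.get(cash_crop, 0.0) + spare
    return alloc
```

In the full LP, labor and capital constraints can make the corner solution shift; the greedy rule only reproduces the reported case where land is the binding resource.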



Properties of simulated path travel times

Simulation has long been adopted in the study of traffic flow and transportation systems. Although simulation is a strong tool for generating and replicating traffic patterns, computing travel times, and providing a better understanding of the relationships between variables of traffic flow systems, its mathematical properties are not well understood due to its rule-based and algorithmic nature. In the author's view, one major obstacle to discussing traffic flow simulation properties is the lack of a language in which such a discussion can occur. This dissertation seeks to fill this void by defining a set of implicit variables and functions that facilitate communication about simulation models. To attain this goal, the simulation of traffic is modeled as a system consisting of elements whose relations are easier to study. This approach is a reversal of sorts in the study of simulation models, considering that simulation models were initially designed to overcome the limitations of analytical methods, and therefore are not readily described by analytical equations. The properties of traffic simulation models studied include shape in topographical view, continuity, monotonicity, and sensitivity of outputs to inputs. The input variable of interest in this study is the path flow vector, and the output of interest is the path travel time vector. These properties are investigated, their implications are discussed, and some suggestions for overcoming some of the undesired properties are provided. This dissertation also provides an algorithmic framework for computing the sensitivity of the path travel time vector to the path flow vector. These sensitivities are necessary for capturing the marginal effects, which are a requirement for improving the convergence of simulation-based dynamic traffic assignment (DTA) and enhancing its application to problems such as system optimal traffic assignment and toll design.
Another large application area of these findings is online predictive traffic control. The techniques can also be applied to enrich the simulation scenario database that will later be used for forming control strategies. Estimating each of the derivatives is equivalent to repeating the simulation, which is computationally very demanding considering the number of input and output variables in the system. This study first gives a general definition of the DTA model concept and of the path travel time and path flow variables. Afterwards, in each section, each property of the travel time function is investigated mathematically, and experiments are performed to show how likely these properties are to occur. Finally, a framework and algorithm for deriving the derivatives of each of the experienced path travel times with respect to each of the path departure rates is provided. In other words, changes in the experienced path travel time vector due to slight changes in the path departure rate vector are captured. The performance of the technique is evaluated by comparing its derivative values with the values obtained from a brute-force method. The algorithm is also implemented for solving the system optimal dynamic traffic assignment (SO-DTA) problem. Each property is investigated for a general discrete-event traffic simulation model that imposes only the weak FIFO assumption on particles at the segment level. In small examples, the Cell Transmission Model is adopted; for algorithm implementation and statistical frequency analysis, the DYNASMART traffic simulation model is adopted. Key words: simulated path travel time, traffic flow simulation model, dynamic traffic assignment models, concavity, monotonicity, continuity, shape in topographical view, sensitivity, perturbation analysis, system optimal, toll design, marginal effects, discrete-event, FIFO, Cell Transmission Model, DYNASMART.
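The brute-force derivative check described above — perturb a flow, rerun, and difference the travel times — can be sketched on an analytical link performance function. The BPR (Bureau of Public Roads) curve used here is an assumption for illustration; the dissertation works with discrete-event simulators (CTM, DYNASMART), where only the finite-difference side is available:

```python
def bpr_time(flow, free_time=1.0, capacity=100.0, alpha=0.15, beta=4):
    """BPR link performance: t(v) = t0 * (1 + alpha * (v / c)**beta)."""
    return free_time * (1 + alpha * (flow / capacity) ** beta)

def numeric_derivative(f, v, eps=1e-4):
    """Central finite difference: the brute-force sensitivity estimate,
    costing two function (simulation) evaluations per input."""
    return (f(v + eps) - f(v - eps)) / (2 * eps)

def analytic_derivative(flow, free_time=1.0, capacity=100.0,
                        alpha=0.15, beta=4):
    """Closed-form dt/dv for the BPR curve, used here as ground truth."""
    return free_time * alpha * beta * flow ** (beta - 1) / capacity ** beta
```

With a simulator in place of `bpr_time`, each derivative entry costs extra runs, which is exactly the scaling problem the dissertation's algorithmic framework is meant to avoid.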



Problems in Supply Chain Location and Inventory under Uncertainty

We study three problems in supply chain location and inventory under uncertainty. In Chapter 2, we study the inventory purchasing and allocation problem in a movie rental chain under demand uncertainty. We formulate it as a newsvendor-like problem with multiple rental opportunities. We study several demand and return forecasting models based on comparable films, using iterative maximum likelihood estimation and Bayesian estimation via Markov chain Monte Carlo simulation. Test results on data from a large movie rental firm reveal systematic under-buying of movies purchased through revenue sharing contracts and over-buying of movies purchased through standard contracts. For the movies considered, the model estimates an increase in the average profit per title for new movies of 15.5% for revenue sharing titles and 2.5% for standard titles. We discuss the implications of revenue sharing for the profitability of both the rental firm and the studio. In Chapter 3, we focus on the effect of travel time uncertainty on the location of facilities that provide service within a given coverage radius on a transportation network. Three models – expected covering, robust covering, and expected p-robust covering – are studied, each appropriate for a different type of facility. Exact and approximate algorithms are developed. The models are used to analyze the location of fire stations in the city of Toronto. Using real traffic data, we show that the current system design is quite far from optimal and provide recommendations for improving its performance. In Chapter 4, we extend the analysis of Chapter 3 to study the trade-off between adding new facilities and relocating existing ones. We consider a multi-objective problem that aims to minimize the number of facility relocations while maximizing expected and worst-case network coverage.
Exact and approximate algorithms are developed to solve three variations of the problem and to find expected and worst-case coverage trade-off curves for any given number of relocations. The models are used to analyze the addition of four new fire stations in the city of Toronto. Our results suggest that the benefit of adding four new stations is achievable, at a lower cost, by relocating four to five existing stations.
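The newsvendor logic behind Chapter 2 can be sketched in its classic single-period form: order the smallest quantity whose cumulative demand probability reaches the critical ratio of underage to total mismatch cost. This is the textbook model only (the chapter's formulation adds multiple rental opportunities and returns), and it assumes unsold units are worthless:

```python
def newsvendor_quantity(demand_pmf, unit_cost, unit_revenue):
    """Classic newsvendor: smallest q with F(q) >= (r - c) / r,
    the critical ratio when leftover units have zero salvage value.
    demand_pmf: {demand_level: probability}."""
    ratio = (unit_revenue - unit_cost) / unit_revenue
    cum = 0.0
    for q in sorted(demand_pmf):
        cum += demand_pmf[q]
        if cum >= ratio:
            return q
    return max(demand_pmf)
```

A revenue sharing contract lowers the effective unit cost, raising the critical ratio and hence the optimal stocking level, which is consistent with the under-buying the study detects under those contracts.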



Time dependent queuing models of the national airspace system

Air transportation in the US has changed dramatically in the past few decades, and the National Airspace System (NAS) has become increasingly congested. A high volume of air traffic demand is one of the major challenges of the NAS. However, air traffic is very difficult to study because of the many uncertainties involved. It is important to understand the relationships among uncertainties due to aviation operations, precision of navigation and control, and traffic flow efficiency. Many queuing models have been studied to better understand and quantify these relationships. In the past decade, most queuing network models have assumed that inter-arrival times and service times are exponentially distributed and stationary, which may not be suitable for all scenarios. These queuing models are time invariant and have several drawbacks. In particular, they do not account for the increases and decreases in demand that are commonly observed in the NAS throughout a day. Previously, the NAS has been studied and analyzed using traditional Markovian queues. However, observations from simulations of real traffic data reveal that the inter-arrival time and service time probability distributions cannot be represented by exponential probability density functions. The Coxian distribution is a phase-type distribution that has gained special importance in research on queuing networks. In this study, several methods of fitting Coxian distributions to data, together with different time-dependent queuing models of the NAS, are developed and discussed. In the first part of this dissertation, we describe different algorithms for fitting Coxian distributions to the service times of major Air Traffic Centers.
Several fitting methods are developed and discussed, and we compare and evaluate them using the mean square error (MSE) and the number of phases in the fitted distribution. In the second part of this dissertation, we discuss a practical approach for modeling the NAS with time-dependent Coxian queues. Time-dependent Cm(t)/Ck/s queuing models of the National Airspace System are developed, in which the inter-arrival distribution is a time-dependent, piecewise-constant Coxian random variable and the service time distribution is a Coxian random variable. We describe an algorithm for calibrating a Cm(t)/Ck/s queuing model from simulated data of an Air Route Traffic Control Center and an algorithmic approach to determine average measures of the queues. Finally, we give future directions for studying such queuing models.
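A Coxian random variable passes through exponential phases in order, continuing to the next phase with some probability and absorbing otherwise; its mean follows directly from the probability of reaching each phase. A minimal sketch of that standard closed form (the dissertation's fitting algorithms estimate the rates and continuation probabilities from data, which is the hard part):

```python
def coxian_mean(rates, cont_probs):
    """Mean of a Coxian phase-type distribution.
    rates: exponential rate of each phase, visited in order.
    cont_probs: cont_probs[i] = probability of continuing past phase i
    (length len(rates) - 1; absorption is certain after the last phase).
    E[T] = sum over phases of P(reach phase i) / rates[i]."""
    reach, total = 1.0, 0.0
    for i, mu in enumerate(rates):
        total += reach / mu
        if i < len(cont_probs):
            reach *= cont_probs[i]
    return total
```

With one phase this reduces to an exponential mean 1/mu; extra phases and continuation probabilities give the flexibility to match non-exponential inter-arrival and service times observed in the NAS data.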



Portfolio management for private and illiquid investments

The goal of this study is to provide a quantitative framework for modeling illiquid investments in private markets and restricted funds. We divide the study into two halves. First, we attempt to mimic private equity behavior with a quasi-replication strategy using a portfolio of publicly traded assets, as illustrated in an application to endowment investing, and we study its behavior and fair valuation with an earnings-based stochastic model (Chapters 1 and 2). Second, we build a stochastic model for portfolios with non-traded assets and deliver numerical results on optimal portfolio choice under lock-up-type illiquidity; we also examine the option value of various fund redemption restrictions via a binomial pricing model (Chapters 3 and 4). We believe there is a relatively strong link between private and public markets, and our study is among the first to exploit this linkage to partially hedge or manage illiquid equity exposures and to apply it to fund-of-funds investing. We also formulate a portfolio optimization model for making optimal consumption and investment decisions while holding locked-up capital. With stochastic risk premia for both the stock index and the non-tradable asset in the portfolio, we show that illiquidity can be beneficial or detrimental depending on risk premium levels as well as the portion of wealth that is illiquid. In particular, an indifference pricing approach illustrates how much unrestricted wealth is the certainty equivalent of a liquidity-constrained portfolio. Finally, we quantify the cost of redemption restrictions on fund investment after a detailed fund survival analysis. Expected fund returns and volatility, the recovery rate in case of default, and lock-up periods are among the parameters that drive changes in the cost of illiquidity.
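The indifference-pricing idea above compares portfolios through their certainty equivalents: the sure wealth an investor would accept in place of a risky position. A minimal static sketch under CRRA utility (an assumed utility family for illustration; the dissertation's dynamic model involves stochastic risk premia and consumption choices):

```python
import math

def certainty_equivalent(outcomes, probs, gamma):
    """Certainty equivalent of a discrete gamble under CRRA utility
    u(w) = w**(1 - gamma) / (1 - gamma), with log utility at gamma == 1.
    The liquidity cost of a locked-up position can be read off as the
    gap between its CE and that of an otherwise identical liquid one."""
    if gamma == 1:
        eu = sum(p * math.log(w) for w, p in zip(outcomes, probs))
        return math.exp(eu)
    eu = sum(p * w ** (1 - gamma) for w, p in zip(outcomes, probs))
    return eu ** (1 / (1 - gamma))
```

For a 50/50 gamble over wealth 4 or 1, a risk-neutral investor (gamma = 0) values it at its mean, 2.5, while a log-utility investor values it at the geometric mean, 2; the gap widens with risk aversion, which is the channel through which lock-ups become costly.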


