
Posts Tagged ‘OperationsResearch’:


Systemic Risk Measures: DistVaR and Other “Too Big To Fail” Risk Measures

In this paper systemic risk measures designed to find risk when firms or markets are in “distress” are introduced, motivated by the attempt to quantify what it means to be “too big to fail.” Specific risk measures of multiple random variables are defined, building on the extensive risk measure literature for a single random variable. This work expands on the CoVaR measure from [1]. DistVaR (“Distressed Value-at-Risk”) and DistES (“Distressed Expected Shortfall”) are introduced, which generalize the notions of VaR and AVaR to the case where a separate financial entity is in distress. In addition, DistES may be seen as a generalization of the coherent allocation of AVaR. In contrast to CoVaR (and CoES), these measures depend less on the “local behavior” of the joint distribution of the random variables. For this reason, quantile regression is a viable tool for calculating CoVaR, but not DistVaR; however, using quantile regression for CoVaR may be compared with simply modeling the underlying random variables as multivariate Gaussian. This notion of dependence on “local behavior” is made more precise by showing that DistVaR and DistES are smoother with respect to their parameters, specifically requiring weaker conditions for continuity. Also, Monte Carlo simulation is an important, practical tool for calculating DistVaR and DistES, but not CoVaR and CoES. All of these risk measures may be seen as special cases of generalized Co- and Dist-style risk measures on Orlicz hearts, as in [8] for one-dimensional risk measures. The final goal is to perform a real-world study wherein these values are calculated. A new reason why multivariate Black-Scholes is not an appropriate model choice is given. Instead, a model which avoids procyclicality is considered: regime-switching lognormals in continuous time. A new approach is devised to find the distribution and then to calculate the risk measures. Several companies are considered, using the market value of their assets, to determine risk levels. As expected, DistES is more conservative, ceteris paribus. In addition, we comment on why DistVaR and DistES are more appropriate risk measures for regulation than the alternatives. *Please refer to the dissertation for footnotes.
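As a rough orientation (the precise definitions are given in the dissertation itself), the contrast between the Co- and Dist-style measures can be sketched as follows. The first line is the standard CoVaR definition from [1]; the second is one natural way to write a “distressed” conditioning, stated here only as an assumed illustration:

```latex
% CoVaR, as in [1]: condition on institution i sitting exactly at its VaR level.
\Pr\!\left( X^{j} \le \operatorname{CoVaR}^{j|i}_{q} \,\middle|\, X^{i} = \operatorname{VaR}^{i}_{q} \right) = q
% Illustrative "distressed" variant: condition on the distress event
% {X^i <= VaR^i_q}, which has positive probability q.
\Pr\!\left( X^{j} \le \operatorname{DistVaR}^{j|i}_{q} \,\middle|\, X^{i} \le \operatorname{VaR}^{i}_{q} \right) = q
```

Viewed this way, the remark about Monte Carlo becomes intuitive: the distress event in the second line has positive probability, so conditional quantiles can be estimated by simply keeping the simulated scenarios in which the event occurs, whereas conditioning on the measure-zero event in the first line cannot be handled by naive scenario selection.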



Convex Optimization with Applications in Sparse Multivariate Statistics

The main focus of this thesis is to build sparse statistical models central to several machine learning tasks. Parsimonious modeling in statistics seeks to balance the tradeoff between solution accuracy and the curse of dimensionality. From an optimization perspective, challenges emanate from the inherent non-convexity in these problems and the computational bottlenecks in traditional algorithms. We first focus on capturing dependence relationships between variables in as sparse a manner as possible. Covariance selection seeks to estimate a covariance matrix by maximum likelihood while restricting the number of nonzero inverse covariance matrix coefficients. A single penalty parameter usually controls the tradeoff between log-likelihood and sparsity in the inverse matrix. We describe an efficient algorithm for computing a full regularization path of solutions to this problem. We next derive a semidefinite relaxation for the problem of computing generalized eigenvalues with a constraint on the cardinality of the corresponding eigenvector. We first use this result to produce a sparse variant of linear discriminant analysis and compare classification performance with greedy and thresholding approaches. We then use this relaxation to produce lower performance bounds on the subset selection problem. Finally, we derive approximation algorithms for the nonnegative matrix factorization problem, i.e., the problem of factorizing a matrix as the product of two matrices with nonnegative coefficients. We form convex approximations of this problem which can be solved efficiently via first-order algorithms.
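For concreteness, covariance selection with a single penalty parameter is commonly written as the following ℓ1-penalized maximum likelihood problem; the notation below is the standard one and is used here only as an illustration, so the thesis's exact formulation may differ:

```latex
% S: sample covariance matrix; X: estimate of the inverse covariance matrix;
% rho > 0: penalty parameter trading off log-likelihood against sparsity.
\max_{X \succ 0} \; \log\det X \;-\; \operatorname{tr}(S X) \;-\; \rho \sum_{i \neq j} |X_{ij}|
```

Sweeping the penalty from large to small traces out the regularization path referred to above: large values of ρ give very sparse inverse covariance estimates, while small values approach the unpenalized maximum likelihood estimate.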



The vanpool assignment problem: Analysis and optimization

This dissertation presents new models and algorithms for the Vanpool Assignment problem. A vanpool is typically a group of nine to fifteen passengers who share their commute to a common target location (typically an office building or corporate campus). Commuters in a vanpool drive from their homes to a park-and-ride location. The driver of the vanpool drives a van to the park-and-ride location, while the other members of the vanpool drive their own vehicles and then board the van at the park-and-ride location. The driver drives the entire group to the target location and then drives the group back to the park-and-ride location at the end of the work day. The Vanpool Assignment problem studied in this dissertation is motivated by a program offered by Gulfstream Aerospace, a large employer in the Dallas/Fort Worth area, Dallas Area Rapid Transit (DART), and Enterprise Rent-A-Car. The objective of the first model presented in this study is to minimize the total cost of a one-way trip to the target location for all employees at a particular location (including those employees who opt out of the program and choose not to join a vanpool). This model, called the Minimum Cost Vanpool Assignment Model (MCVAM), utilizes constraints on the capacity of each van and quality-of-service constraints on the cost and travel time involved in joining a vanpool. The second model, which is based on the first model, maximizes the number of employees who join vanpools. Both of these models allow only one-stop park-and-ride vanpooling. The third model, called the Two-Stop Minimum Cost Vanpool Assignment Model (TSMCVAM), is a variant of the first model that allows up to two park-and-ride locations per vanpool rather than just one. The fourth model is based on the third model and maximizes the number of employees who join vanpools while allowing vanpools to stop at up to two park-and-ride locations. In the computational results presented in this study, the first model and the second model produced competitive vanpool assignments. However, these assignments don't include some of the potential vanpoolers because of park-and-ride limitations and also because of the distances to the park-and-ride locations used. The third and fourth models allowed up to two park-and-ride locations per vanpool and attracted more customers, which in turn increased the overall fill rate of vans. The two-stop approach also reduced the overall vanpooling costs significantly by taking advantage of the expanded customer base. However, the solution times for these models, even on medium-size problem sets, were significantly longer than the solution times for the one-stop models. Solution times for the first and second models on a small-scale problem were fast, and these models were observed to scale to larger problems. However, solution times for the third model were significantly longer even on the smallest problem set. When larger problem sets were introduced to evaluate the scalability of these models, it became apparent that obtaining a near-optimal solution may take days or even weeks. There is therefore a clear need to solve larger problem sets faster in order to take full advantage of the two-stop models. To derive high-quality solutions within reasonable time limits, heuristics are developed to solve the two-stop models for larger-scale problem sets. First, we introduce the Incumbent Solution Heuristic, which uses the MCVAM solution as an incumbent solution for the TSMCVAM.
Secondly, we present the Restricted Allowance Heuristic, which restricts the possible two-stop passenger park-and-ride combinations by removing combinations that are unlikely to be used in an optimal solution. Thirdly, we describe a Linear-Programming-based (LP) heuristic called the Relaxed Restricted Allowance Heuristic. This heuristic utilizes the Integer-Programming-based (IP) Restricted Allowance Heuristic as its starting point. The Relaxed Restricted Allowance Heuristic then uses the LP-based solution to gain information about the IP problem in order to further restrict the solution space and possibly reduce the solution time. Lastly, we introduce the Greedy Cover Heuristic (GCH), which picks a minimal set of two-stop park-and-ride combinations to accommodate as many passengers as possible. On the largest problems (600 potential passengers and 120 park-and-ride locations), the GCH is found to be the best of these heuristics; it produces optimality gaps ranging from 5.38% to 9.45%, with an average optimality gap of 7.84%, in CPU times ranging from 1 minute to 15 minutes and 9 seconds, with an average of 6 minutes and 6 seconds. We claim three main contributions in this dissertation: 1. Introduction of the MCVAM: the literature focuses on carpooling and related shared-vehicle transportation models; to the best of our knowledge, this is the first mathematical programming model proposed for the standard (one-stop) vanpool assignment problem. 2. Introduction of the TSMCVAM: the MCVAM models the current practice in vanpooling of using one park-and-ride location per vanpool, whereas the TSMCVAM can generate significant cost savings compared to the MCVAM, with little or no increase in trip times for most passengers, by allowing vanpools to stop at a second park-and-ride location. 3. Heuristics that can find high-quality solutions to the TSMCVAM within reasonable CPU times.
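To make the flavor of these models concrete, the following is a heavily simplified sketch of a one-stop assignment model in the spirit of the MCVAM, written with the open-source PuLP library. The data, cost structure, and variable names are illustrative assumptions only; the dissertation's actual formulation, including its quality-of-service constraints on cost and travel time, is richer than this.

```python
# Hypothetical, simplified one-stop vanpool assignment sketch (not the MCVAM itself).
import pulp

employees = ["e1", "e2", "e3", "e4"]
lots = ["p1", "p2"]                                   # candidate park-and-ride locations
van_capacity = 3                                      # passengers per van (simplified)
van_cost = 10                                         # per-van, per-trip operating cost (made up)
drive_alone_cost = {"e1": 30, "e2": 25, "e3": 40, "e4": 35}       # cost if an employee opts out
vanpool_cost = {("e1", "p1"): 12, ("e1", "p2"): 20,               # cost if an employee joins a
                ("e2", "p1"): 15, ("e2", "p2"): 11,               # vanpool at a given lot
                ("e3", "p1"): 18, ("e3", "p2"): 14,
                ("e4", "p1"): 22, ("e4", "p2"): 13}

model = pulp.LpProblem("one_stop_vanpool_sketch", pulp.LpMinimize)
x = pulp.LpVariable.dicts("assign", [(e, p) for e in employees for p in lots], cat="Binary")
y = pulp.LpVariable.dicts("optout", employees, cat="Binary")
v = pulp.LpVariable.dicts("vans", lots, lowBound=0, cat="Integer")

# Minimize the total one-way trip cost over all employees, including those who opt out.
model += (pulp.lpSum(vanpool_cost[e, p] * x[e, p] for e in employees for p in lots)
          + pulp.lpSum(drive_alone_cost[e] * y[e] for e in employees)
          + pulp.lpSum(van_cost * v[p] for p in lots))

for e in employees:   # each employee joins exactly one park-and-ride lot or opts out
    model += pulp.lpSum(x[e, p] for p in lots) + y[e] == 1
for p in lots:        # van capacity at each lot; quality-of-service constraints omitted here
    model += pulp.lpSum(x[e, p] for e in employees) <= van_capacity * v[p]

model.solve(pulp.PULP_CBC_CMD(msg=False))
for e in employees:
    chosen = [p for p in lots if x[e, p].value() > 0.5]
    print(e, "->", f"vanpool at {chosen[0]}" if chosen else "drives alone")
```

A two-stop variant in the spirit of the TSMCVAM would replace the single lot index with pairs of lots per vanpool, which is exactly where the combinatorial growth that motivates the heuristics above comes from.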



Strategic outsourcing for manufacturing firms

The decision for a company or institution to outsource either a manufacturing process or a product is intrinsically tied to its tactical and strategic perspective. Tactical outsourcing is typically undertaken for near-term cost savings or to gain additional production capability. Strategic outsourcing is typically initiated for fundamental changes in business strategy or direction, but this does not exclude cost-saving reasons. Companies that are considering outsourcing some part of their supply chain currently lack a publicly available, general-purpose, analytical tool for decision guidance on the risk-adjusted economic impact of an outsourcing decision. No single analytical decision-support tool can readily evaluate both subjective and objective criteria. The purpose of this praxis is to survey the relevant literature and then to develop an optimization tool, based on linear programming, that combines objective financial data and subjective risk-based data. The tool aims to help businesses decide which manufacturing processes should be considered for outsourcing and whether it would make economic sense to outsource those processes or components. This praxis builds a bridge between management theory and practical economic analysis, at least as far as outsourcing decision matrices are concerned. Much has been written in the management literature about outsourcing and “virtual organizations.” Articles in the economics literature, in turn, tend to focus on subcontracting, the “make or buy” decision, general markets, and/or specific case studies of companies. Experts who profess to belong to one or the other of these disciplines write for their designated audience, using vernacular that is largely meaningless to the other group. Economics papers focus on transaction costs, unit costs, and financial measures, while most management papers focus on subjective concepts such as strategy, core competencies, and vision. The more technically oriented operations research articles focus on algorithm development and applications for new methods of computing costs. The union of management theory and economic analysis is the focus of this praxis.
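As a purely illustrative sketch of the kind of program such a tool might contain (an assumption about its general shape, not the praxis's actual model), the outsourcing decision can be written with make-or-buy variables whose objective mixes financial cost with a weighted subjective risk score:

```latex
% x_i = 1 if process i is outsourced, 0 if kept in-house;
% c_i^{out}, c_i^{in}: outsourced and in-house costs; r_i: subjective risk score
% for outsourcing process i; lambda: weight converting risk into cost units;
% h_i: in-house capacity consumed by process i; H: available in-house capacity.
\min_{x \in \{0,1\}^n} \; \sum_{i=1}^{n} \Big[ c_i^{\mathrm{out}} x_i + c_i^{\mathrm{in}} (1 - x_i) + \lambda\, r_i\, x_i \Big]
\quad \text{s.t.} \quad \sum_{i=1}^{n} h_i (1 - x_i) \le H
```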



Cost-savings motivation, new technology adoption, regulatory compliance burden, and adoption of online banking services for Russian credit unions

This dissertation examines the relationship of cost savings motivation, capability to adopt new technology, regulatory compliance burden, and the rate of adoption of online banking services for credit unions in Russia. Information about these credit unions was obtained from the Russian Credit Union League, which is a member of the World Council of Credit Unions, headquartered in Washington, DC. A survey was administered to managers who work for Russian credit unions. Correlational data analysis was performed on items that measure cost savings motivation, capability to adopt new technology, regulatory compliance burden, and the rate of adoption of online banking services. Research Question 1 examined the relationship between cost savings motivation and the rate of adoption of online banking services. The corresponding hypothesis was that credit unions with greater cost savings motivation have a higher rate of adoption of online banking services. The correlational analysis showed that cost savings motivation was positively correlated with the rate of adoption of online banking services. Research Question 2 examined the relationship between capability to adopt new technology and the rate of adoption of online banking services. The corresponding hypothesis was that credit unions with greater capability to adopt new technology have a higher rate of adoption of online banking services. The correlational analysis showed that capability to adopt new technology was positively correlated with the rate of adoption of online banking services. Research Question 3 examined the relationship between regulatory compliance burden and the rate of adoption of online banking services. The corresponding hypothesis was that credit unions that perceive high regulatory compliance burden have a lower rate of adoption of online banking services. The correlational analysis showed that perceived regulatory compliance burden was positively correlated with the rate of adoption of online banking services. In other words, those who believe that online banking has a lower compliance burden have a higher rate of adoption. The research results suggest that if credit unions have greater cost savings motivation and greater capability to adopt new technology, and if they perceive low regulatory compliance burden, then they have a higher rate of adoption of online banking services.



Variable Rate Structure for Efficient, Equitable, and Stable Mileage-based User Fee

The variable rate structure is an engineering framework for determining the appropriate charge per mile traveled by motorized vehicle users. The structure utilizes a bottom-up, microscopic approach to calculate economically efficient rates that are tailored to individual vehicles and specific to their route, time, and distance. In addition, the variable rate structure is designed to produce a stable source of revenue to support much-needed transportation projects and to convey a clear message to road users about the consequences of driving, in the hope of positively influencing their choices. The charge structure consists of four cost models corresponding to four major social cost components: road damage, accident risk, congestion, and environmental pollution. In each cost model, vehicle characteristics are converted into equivalent damaging units. Next, the marginal cost is determined per vehicle damaging unit-mile. Thus, the charge for a journey can be found by multiplying the marginal cost by each vehicle's total damaging units and miles traveled. Since the variable rate structure is the charging mechanism of a mileage-based user fee system, a section of this research is dedicated to justifying the need for and validating the benefits of this pricing instrument. Mileage-based pricing proves to be much more efficient and equitable than other existing instruments, due to its ability to capture the myriad impacts that vehicles have on society and the environment.
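The per-journey arithmetic described above can be sketched in a few lines. The component names follow the four cost models listed in the abstract, while every number below is a made-up placeholder rather than a rate from the dissertation:

```python
# Illustrative sketch of the variable-rate charge computation: the journey charge
# is the sum, over the four social cost components, of (marginal cost per damaging
# unit-mile) x (the vehicle's equivalent damaging units) x (miles traveled).
COST_COMPONENTS = ("road_damage", "accident_risk", "congestion", "pollution")

def journey_charge(marginal_cost, vehicle_edu, miles):
    """marginal_cost: $/damaging unit-mile per component; vehicle_edu: the
    vehicle's equivalent damaging units per component; miles: trip length."""
    return sum(marginal_cost[c] * vehicle_edu[c] * miles for c in COST_COMPONENTS)

# Example: a hypothetical passenger car on a 12-mile peak-hour trip (placeholder rates).
mc  = {"road_damage": 0.002, "accident_risk": 0.015, "congestion": 0.08, "pollution": 0.01}
edu = {"road_damage": 1.0,   "accident_risk": 1.0,   "congestion": 1.2,  "pollution": 0.9}
print(f"charge for a 12-mile trip: ${journey_charge(mc, edu, miles=12):.2f}")
```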



A method to improve the sustainment of systems based on probability and consequences

The FROST Method is presented, which improves the efficiency of long-term sustainment of hardware systems. The FROST Method makes sustainment and scheduling decisions based on minimizing the expected value of current and future costs. This differs from current methods, which tend to base decisions not on the expected value of costs but on the expected inventory demand found through projections that use data which is often inaccurate. Distributions are used to account for randomness and inaccuracy in inputs such as failure rates and vendor-claimed end-of-production dates. A Monte Carlo technique is then used to convert these distributions into a statistically relevant set of possible futures. Finally, these futures are analyzed to determine which combination of actions will result in the system being sustained at least cost. Simulations show that, for a realistic range of system parameters, the FROST Method can be expected to reduce the cost of sparing and sustainment engineering by between 21.1% and 69.1% depending on the situation, with an average of 43.6%. Implementation involves a slightly increased burden over current methods in terms of the amount of data that must be collected and provided as inputs.
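The following is a deliberately small sketch of the expected-cost logic described above: uncertain inputs are sampled from distributions rather than treated as point estimates, many futures are simulated, and the candidate decision with the lowest average cost is selected. The single "spares buy quantity" decision, the gamma/Poisson choices, and all costs are illustrative assumptions, not the FROST Method's actual models.

```python
# Minimal Monte Carlo expected-cost sketch (illustrative, not the FROST Method).
import numpy as np

rng = np.random.default_rng(7)
HORIZON_YEARS = 20
UNIT_COST, SHORTAGE_COST = 1_000.0, 25_000.0   # spare bought now vs. emergency resupply later

def expected_cost(buy_qty, n_futures=20_000):
    # The failure rate is drawn from a distribution rather than a point estimate;
    # other uncertain inputs (e.g. vendor end-of-production dates) would be sampled
    # the same way in a fuller model.
    rates = rng.gamma(shape=4.0, scale=0.25, size=n_futures)   # failures per year
    demand = rng.poisson(rates * HORIZON_YEARS)                # total failures per future
    shortfall = np.maximum(demand - buy_qty, 0)
    return float(np.mean(buy_qty * UNIT_COST + shortfall * SHORTAGE_COST))

# Pick the candidate action (here, a single buy quantity) with lowest expected cost.
best = min(range(0, 60), key=expected_cost)
print("lowest expected-cost spares buy:", best)
```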



Cap-and-Trade Modeling and Analysis for Electric Power Generation Systems

Cap-and-trade is the most discussed CO2 emissions control scheme in the U.S. It is a market-based mechanism that has previously been used to successfully reduce the levels of SO2 and NOx emitted by power generators. Since electricity generators are responsible for about 40% of CO2 emissions in the U.S., the implementation of CO2 cap-and-trade will have a significant impact on electric power generation systems. In particular, cap-and-trade will influence the investment decisions made by power generators. These decisions, in turn, will affect electricity prices and demand. If the allowances (or emission permits) created by a cap-and-trade program are auctioned, the government will collect a significant amount of money that can be redistributed back to electricity market participants to mitigate increases in electricity prices due to cap-and-trade and also to increase the market share of low-emission generators. In this dissertation, we develop two models to analyze the impact of CO2 cap-and-trade on electric power generation systems. The first model is intended to be used by power generators in a restructured market to evaluate investment decisions under different CO2 cap-and-trade programs for a given time horizon and a given forecast of demand growth. The second model is intended to aid policymakers in developing optimal CO2 revenue redistribution policies via subsidies for low-emission generators. Through the development of these two models, our underlying objective is to provide analysis tools for policymakers and market participants so that they can make informed decisions about the design of cap-and-trade programs and about the market actions they can take if such programs are implemented.
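To indicate how an allowance price enters a generator's investment problem, a stylized capacity-expansion sketch is shown below; it is an assumed illustration of the mechanism, not either of the dissertation's two models:

```latex
% x_i: new capacity of technology i; g_{i,t}: generation in period t;
% I_i: annualized investment cost; c_i: operating cost; e_i: emission rate;
% p^{CO_2}: allowance price; D_t: demand; \bar{g}_i: existing capacity.
\min_{x \ge 0,\; g \ge 0} \;\; \sum_i I_i x_i \;+\; \sum_{i,t} \left( c_i + p^{\mathrm{CO_2}}\, e_i \right) g_{i,t}
\quad \text{s.t.} \quad \sum_i g_{i,t} \ge D_t \;\; \forall t, \qquad g_{i,t} \le \bar{g}_i + x_i \;\; \forall i,t
```

Under an auctioned cap, the allowance price is set by the market, and a redistribution policy of the kind studied in the second model can be thought of as adding technology-specific subsidies that lower the effective operating cost of low-emission generators.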



Production Planning Models with Clearing Functions: Dual Behavior and Applications

Linear programming is widely used to model production planning decisions. In addition to the optimal solutions, the dual prices provided by these models are of great interest in situations such as shop-floor dispatching, spare parts inventory management, setup cost estimation, and indirect cost allocation. While these techniques are effective and intuitive in nature, fixed-capacity production planning models are “dual-poor”, i.e., dual prices are zero unless resources are fully utilized. Another issue concerns the objective function that drives the dual prices of the model. While steady-state queuing models do not consider the finished goods inventory that is held due to insufficient capacity, the linear programming models of production planning do not consider the effects of resource utilization on queue lengths within the production system. Clearly both types of costs, those due to congestion as well as those due to finished goods inventory, are relevant; hence a model that integrates both would appear to be desirable. In this dissertation, we examine the dual behavior of two different production planning models: a conventional fixed-capacity linear programming model and a model that captures queuing behavior at resources in an aggregate manner using nonlinear clearing functions. The conventional formulation consistently underestimates the dual price of capacity due to its failure to capture the effects of queuing. The clearing function formulation, in contrast, produces positive dual prices even when utilization is below one; exhibits more realistic behavior, such as holding finished inventory at utilization levels below one; and, in multi-stage models, allows for identification of near-bottlenecks as an alternative for improvement in cases where it is economically or physically not possible to improve or add capacity to the bottleneck.
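The key modeling difference can be seen in the capacity constraint itself. A conventional formulation bounds period output by a fixed capacity, while a clearing-function formulation bounds it by a concave, saturating function of the workload (WIP) at the resource; the specific functional form below is one commonly used in the clearing-function literature and is shown only as an illustration:

```latex
% Fixed-capacity constraint: output X_t in period t bounded by capacity C.
X_t \le C
% Clearing-function constraint: output bounded by a concave, saturating function
% of the workload W_t, e.g. f(W) = K W / (k + W), where K is the maximum
% throughput and k governs how quickly throughput saturates.
X_t \le f(W_t) = \frac{K\, W_t}{k + W_t}
```

Because pushing output closer to the limiting rate K requires disproportionately more workload, and hence more WIP holding cost, capacity carries a positive shadow price even when utilization is below one, which is consistent with the dual behavior described above.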



Coordination Mechanism Design for Sustainable Global Supply Networks

This dissertation studies coordination mechanism design for sustainable supply networks in a globalized environment, with the goal of achieving long-term profitability, environmental friendliness, and social responsibility. We examine three different types of supply networks in detail. The first network consists of one supplier and multiple retailers. The main issue is how to efficiently share a scarce resource, such as capacity for green technology, among all members with private information in a dynamically changing environment. We design a shared-surplus supply agreement among the members which can lead to both efficient private investments and efficient capacity allocation under unpredictable and unverifiable market conditions. The second network is a serial supply chain. The source node provides a critical raw material (like coffee cherries) for the entire chain and is typically located in an underdeveloped economy, while the end node is a retailer serving consumers in a developed economy (like Starbucks Co.). We construct a dynamic supply agreement that takes into account changing market and production conditions to ensure fair compensation, so that the partners have the right incentives to work together to develop a sustainable, quality supply. The third network is a stylized global production network of a multinational company consisting of a home plant and a foreign branch. The branch serves the foreign market but receives a key component from the home plant. The distinctive feature is that both facilities belong to the same company, governed by the headquarters, yet each also has its own autonomy. We analyze the role of the headquarters in designing coordination mechanisms to improve efficiency. We show that the headquarters can delegate the coordination effort to the home plant, as long as it retains veto power.


